Data is everything and everything is data. For engineering systems, data collection is crucial for processes such as condition monitoring and performance analysis. Not only does this tell the operator about the current state of their system, it also underpins predictions of future behaviour and remaining useful lifetime, which are necessary for long-term planning and decision making. Operators can collect data, but actually understanding it is critical for large-scale energy projects (wind farms and associated transmission infrastructure), the transportation sector (commercial airliner engine components), and industrial capital expenditures (semiconductor manufacturing). Knowledge is power, and understanding data can provide a competitive advantage by reducing risk and saving time and money.
While there is already widespread awareness that data has great value – with the mantra of “the more gathered the better” – taking the next steps from raw data collection to actually utilising the data as a valuable asset can be challenging. As more and more sensors are integrated into monitoring systems, acquired datasets are growing in size, which consequently increases the complexity of analysis. In this regard, correct data processing is just as critical as data acquisition in efficiently obtaining useful information and insights. For scientific data processing, correct methodology is vital, as inaccuracies may ultimately lead to incorrect decision-making. Likewise, efficient processing is important as these procedures are typically computationally heavy and time-consuming. Therefore, data processing and analysis should always be conducted by a data scientist or engineer with the expertise required to get the most from your data. It’s often difficult to extract the signal from the noise.
At Xi Engineering, we have extensive experience in refining the value in clients’ scientific data through a variety of processing and analytical techniques. These techniques are applied to measured data, as well as to simulated data, which can be considered a kind of “computational” measurement. Indeed, when developing simulation models, we first validate results against known baselines using measured data from a physical system. This is a key component in the development of Digital Twins. We have engineering and physics knowledge across a range of disciplines, which ensures we can knowledgeably engage with clients and conduct data processing correctly and efficiently. Our methods include:
- Pre-processing (data cleansing) – What contributes to noise in the signal? How are outliers defined?
- Multivariate analysis – Coupled interactions between different parameters can be complex!
- Signal processing – Frequency- and time-domain techniques
- Machine learning – Clustering and identification of non-obvious trends and patterns in data, as well as condition estimation and prediction.
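To make the first two steps concrete, here is a minimal sketch of pre-processing followed by frequency-domain analysis. It uses an entirely synthetic vibration signal (a 50 Hz tone plus noise and injected spikes, all assumed for illustration), cleanses outliers with a median-absolute-deviation rule, and then finds the dominant frequency with an FFT. Real monitoring data and the thresholds chosen would of course differ per project.

```python
import numpy as np

# Synthetic example: a vibration signal sampled at 1 kHz containing a
# 50 Hz component, Gaussian noise, and a few spurious outlier spikes.
rng = np.random.default_rng(0)
fs = 1000                             # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
signal[[100, 400, 700]] += 15         # inject outliers

# Pre-processing (data cleansing): flag points more than 5 median absolute
# deviations from the median, then replace them by linear interpolation.
mad = np.median(np.abs(signal - np.median(signal)))
outliers = np.abs(signal - np.median(signal)) > 5 * mad
clean = signal.copy()
clean[outliers] = np.interp(t[outliers], t[~outliers], signal[~outliers])

# Signal processing: move to the frequency domain to find the dominant tone.
spectrum = np.abs(np.fft.rfft(clean))
freqs = np.fft.rfftfreq(clean.size, 1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"Dominant frequency: {dominant:.1f} Hz")
```

Without the cleansing step, the spikes would leak broadband energy across the spectrum; running the two stages in this order is what keeps the frequency estimate trustworthy.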
We work with clients who have collected large amounts of data and know what information they want to gain from it, but need additional data science expertise to achieve their goals. We also work with clients who are less sure what can be done with their data, and we help them formulate achievable goals.