Machine learning depends heavily on the ingestion of data, and not just any data. We need to look at the complexity that surrounds large industrial operations and the types of data manufacturing plants are likely already collecting.
- It starts with quality
In modern production environments, there are many places to measure the pulse of your operation: from takt time on your production line and the size of your goods-receiving warehouse to scrap and rework rates and the cost of non-quality. All of these have a direct impact on your business’s efficiency, and it is important to understand the dynamic interplay between these variables.
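To make a few of these metrics concrete, here is a minimal sketch that computes scrap rate, rework rate, and cost of non-quality from per-shift production records. All field names and cost figures are hypothetical assumptions for illustration, not a real plant schema.

```python
# Minimal sketch: basic quality KPIs from hypothetical production records.
# Field names and cost figures below are assumptions, not a real schema.

records = [
    {"shift": "A", "units_produced": 1200, "units_scrapped": 18, "units_reworked": 42},
    {"shift": "B", "units_produced": 1150, "units_scrapped": 31, "units_reworked": 55},
]

COST_PER_SCRAPPED_UNIT = 14.50  # assumed material + labour cost per scrapped unit
COST_PER_REWORKED_UNIT = 4.25   # assumed labour cost per reworked unit

for r in records:
    scrap_rate = r["units_scrapped"] / r["units_produced"]
    rework_rate = r["units_reworked"] / r["units_produced"]
    # Cost of non-quality: what defects cost on top of normal production.
    conq = (r["units_scrapped"] * COST_PER_SCRAPPED_UNIT
            + r["units_reworked"] * COST_PER_REWORKED_UNIT)
    print(f"Shift {r['shift']}: scrap {scrap_rate:.1%}, "
          f"rework {rework_rate:.1%}, CoNQ ${conq:,.2f}")
```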
- Following the process: identification and traceability

To gain useful insights into your entire process, each quality result needs to be connected to the conditions that either enabled a good result or led to a poor one.
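As a minimal sketch of what that connection looks like in practice, the snippet below joins quality outcomes to process conditions through a shared traceability key. The `batch_id` key and all column names are assumptions for illustration.

```python
import pandas as pd

# Sketch: link each quality outcome to the process conditions that
# produced it via a shared traceability key (batch_id, assumed here).

process_conditions = pd.DataFrame({
    "batch_id":       ["B001", "B002", "B003"],
    "furnace_temp_c": [1480, 1495, 1470],
    "pour_time_s":    [42.0, 39.5, 44.1],
})

quality_results = pd.DataFrame({
    "batch_id": ["B001", "B002", "B003"],
    "defect":   [False, True, False],
})

# One joined table: every quality result sits alongside the conditions
# that enabled a good result or led to a poor one.
traceable = quality_results.merge(process_conditions, on="batch_id")
print(traceable)
```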
- Process control: set-points and targets

It’s important to understand the complex interplay between parameters: the extent to which a small change in one affects adjacent parameters, and how that change cascades through subsequent processes.
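One way to see that cascade is a finite-difference check on a chained process model: perturb one set-point and measure the change at the end of the line. The step functions below are toy stand-ins chosen only to illustrate the propagation, not a real plant model.

```python
# Toy process chain: furnace -> pour -> yield. Each function is an
# assumed stand-in for a real process step, chosen only to show how a
# set-point change propagates downstream.

def melt(furnace_temp_c: float) -> float:
    """Melt step: temperature drives melt fluidity (arbitrary units)."""
    return 0.8 * furnace_temp_c - 1000.0

def pour(fluidity: float) -> float:
    """Pour step: fluidity drives mould fill completeness (0..1)."""
    return min(1.0, max(0.0, 0.002 * fluidity))

def first_pass_yield(furnace_temp_c: float) -> float:
    """Yield at the end of the chain, as a function of the upstream set-point."""
    return pour(melt(furnace_temp_c))

# Finite-difference sensitivity: the effect of a +1 degree C set-point
# change, carried through every downstream step.
base, bumped = 1480.0, 1481.0
delta = first_pass_yield(bumped) - first_pass_yield(base)
print(f"Change in yield per +1 degree C at the furnace: {delta:+.4f}")
```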
- A holistic representation: combining quality, traceability and control
This is the hardest part of realizing value from Industry 4.0 installations. Combining these three data sources is crucial to creating a representation from which an AI system can learn how process parameters influence quality and process yield.
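A minimal sketch of that combination, assuming pandas and invented keys and columns: the traceability records act as the bridge that maps each finished unit back to the batch-level set-points it was produced under.

```python
import pandas as pd

# Sketch: merge the three sources (quality, traceability, set-points)
# into one table a model can learn from. All keys and columns are assumed.

quality = pd.DataFrame({
    "serial": ["S1", "S2", "S3"],
    "passed": [1, 0, 1],
})
traceability = pd.DataFrame({
    "serial":   ["S1", "S2", "S3"],
    "batch_id": ["B001", "B001", "B002"],
    "line":     ["L1", "L1", "L2"],
})
set_points = pd.DataFrame({
    "batch_id":        ["B001", "B002"],
    "temp_setpoint_c": [1480, 1495],
    "pressure_bar":    [2.1, 2.4],
})

# Traceability bridges unit-level quality and batch-level process control.
training_table = (quality
                  .merge(traceability, on="serial")
                  .merge(set_points, on="batch_id"))
print(training_table)
```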
DataProphet uses machine learning to understand how a plant’s process variables interact and combine. Machine learning is well suited to this problem, and the output is a control plan that enables production teams to fine-tune processes to reduce defects and eliminate scrap.
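DataProphet’s actual method is proprietary, but the general shape of the idea can be sketched: fit a model that maps process parameters to quality outcomes, then read off the parameter regions the model associates with good outcomes as a rudimentary control plan. Everything below, including scikit-learn, the synthetic data, and the parameter names, is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch only, not DataProphet's method: learn the mapping
# from process parameters to defects on synthetic data, then summarise
# the predicted-good region as parameter ranges.

rng = np.random.default_rng(0)
n = 500
temp = rng.normal(1480, 10, n)       # assumed furnace temperature (deg C)
pressure = rng.normal(2.2, 0.2, n)   # assumed pressure (bar)
# Toy ground truth: defects occur away from an ideal operating window.
defect = ((np.abs(temp - 1485) > 12)
          | (np.abs(pressure - 2.3) > 0.25)).astype(int)

X = np.column_stack([temp, pressure])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, defect)

# Rudimentary "control plan": parameter ranges among predicted-good samples.
good = X[model.predict(X) == 0]
for name, col in zip(["temperature (deg C)", "pressure (bar)"], good.T):
    print(f"{name}: keep within [{col.min():.2f}, {col.max():.2f}]")
```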
Read the full paper on maturity self-assessment for Industry 4.0 here.