Digitizing for Autonomous Manufacturing


How much data is required for AI?

The AI system that we’ve built is highly descriptive of the process that sits beneath it. Through the introduction of some really exciting data science methods, we’ve been able to generate sensible results that remain consistent without much data. As a rule of thumb, we look for about 10x the number of parameters that operate in your plant. So with 60 parameters, we need about 600 samples.
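As a back-of-the-envelope illustration of that rule of thumb (the 10x multiplier and the 60-parameter example are just the figures quoted above, not a general guarantee), a minimal sketch:

```python
def minimum_samples(num_parameters: int, multiplier: int = 10) -> int:
    """Rule-of-thumb sample count: roughly `multiplier` samples per process parameter."""
    return num_parameters * multiplier

# The example from the answer above: 60 plant parameters -> about 600 samples.
print(minimum_samples(60))  # 600
```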

How does your data need to be structured to enable an Expert Execution System (EES)?

A lot of folks I chat to are quite concerned that the data they’re recording from their production systems may not be usable for these kinds of opportunities, particularly around Expert Execution Systems and their installation. I don’t think it matters that much, and I’ll tell you why.

It turns out that we’re able to handle most of the extract, transform, and load (ETL) of your data into our Expert Execution System, so the raw format your data is stored in on your production system doesn’t matter.

That is the most important thing: record your data, save it, and make sure that you’re not aggregating it. Don’t worry about the underlying schema; it turns out that isn’t too important.
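A minimal sketch of the kind of ETL step described above, assuming a plant historian that exports raw, un-aggregated records to CSV; the file path, column names, and target schema are purely illustrative, not DataProphet’s actual pipeline:

```python
import pandas as pd

def extract_transform(raw_csv_path: str) -> pd.DataFrame:
    """Load raw, un-aggregated process records and map them onto a common schema.

    The source schema is whatever the plant historian happens to produce; only
    the column mapping below would need to change per site (names are made up).
    """
    raw = pd.read_csv(raw_csv_path)

    # Map site-specific column names onto a shared internal schema.
    column_map = {
        "TS": "timestamp",       # hypothetical historian column names
        "TagName": "parameter",
        "Val": "value",
    }
    df = raw.rename(columns=column_map)[list(column_map.values())]

    # Keep every sample: no resampling or averaging, so downstream modelling
    # still sees the full-resolution signal.
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df.sort_values("timestamp")
```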

Why is real-time data not necessary to support an Expert Execution System (EES)?

When one operates in a reactive paradigm, having data from the result as quickly as possible is super important. Because you’re reactive, you’ve got to act on the change as quickly as possible. So you’ve had some defect occur, and every second that you wait, you’re making more of the defect – so a two-second delay can result in thousands of defective components being made.

If you’re operating in a prescriptive paradigm, you’re actually ahead of real time. You’re making changes in anticipation of future risk. Because the change is made, the risk is never realized, so you never incur the scrap or the cost of non-quality. Your entire process ends up ahead of that wave of quality.

The importance of quality data

Measuring the quality result of your production process is really important: not just recording the fact that a defect has occurred on a specific component, but also what type of defect it is, where it is located on the component, and any other descriptive details that may be appropriate to your domain.

It turns out that diagnosing the root cause – which is what our AI system does – is really dependent on having a good description of the type of defect that you’ve got.
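A minimal sketch of what such a defect record might look like; the field names and example defect types here are purely illustrative, not DataProphet’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DefectRecord:
    """One quality observation, rich enough to support root-cause diagnosis."""
    component_id: str                 # which component the defect was found on
    observed_at: datetime             # when the inspection happened
    defect_type: str                  # e.g. "porosity", "cold shut" (domain-specific)
    location: str                     # where on the component, e.g. "inner flange"
    severity: Optional[float] = None  # measured magnitude, if one is available
    notes: str = ""                   # any other descriptive detail for the domain
```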

How do you know which sensors to place next?

When we’re installing our AI system, we build models that describe your process. In fact, it’s one very large monolithic model that’s able to understand the dynamics from the start of your process to the end.

Sometimes we find large sources of unexplained variance in systems. That is a signal to us that there is a sensor or a measurement in that part of the process that could be added to describe it. This allows us to give you very specific guidance: in this part of the process there’s a large source of unexplained variance, and we need to go and measure what is causing it.

We work with your engineering teams to identify the kinds of dimensions we need to measure, be it some kind of process parameter or some improved logging and reporting.
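A toy sketch of the idea of flagging unexplained variance per process stage; the plain linear model, the stage grouping, and the synthetic data below are only assumptions for illustration, not the monolithic model described above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def unexplained_variance_by_stage(stages):
    """For each stage, return the fraction of output variance a simple model
    cannot explain (1 - R^2). A high value suggests a missing measurement.

    `stages` maps a stage name to (X, y): recorded parameters and that stage's
    quality outcome.
    """
    scores = {}
    for name, (X, y) in stages.items():
        r2 = LinearRegression().fit(X, y).score(X, y)
        scores[name] = 1.0 - r2
    return scores

# Hypothetical usage: "casting" is poorly explained by its recorded parameters,
# so it is a candidate for an additional sensor or better logging.
rng = np.random.default_rng(0)
X_cast = rng.normal(size=(200, 5))
y_cast = rng.normal(size=200)            # mostly noise -> high unexplained variance
X_trim = rng.normal(size=(200, 5))
y_trim = X_trim @ rng.normal(size=5)     # well explained -> low unexplained variance
print(unexplained_variance_by_stage({"casting": (X_cast, y_cast),
                                     "trimming": (X_trim, y_trim)}))
```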
