Why is real-time too late for manufacturing?

[Image: a weld machine in operation]

Root causes of systemic production issues in manufacturing are rarely understood, let alone resolved holistically, unless operators can use AI to leverage data and establish a proactive production paradigm. Irrespective of the vertical, the manufacturer’s level of experience, and even the latest industrial technology, continual process optimization remains one step ahead of teams relying on traditional optimization techniques. The barriers to data-driven smart manufacturing are three-fold.

First, tighter margins for manufacturers in recent years have dovetailed with advances in manufacturing equipment and smart technology, with the number of process parameters, and hence data points, increasing far beyond the reach of human comprehension. Only artificial intelligence’s unsupervised, deep-learning algorithms can leverage such vast quantities of process and quality data.

Second, in a bid to regain control over this growing complexity, much emphasis has been placed on process optimization based on this mass of real-time data. However, because this approach uses real-time data to confirm (or to predict) quality failures, it merely reacts to process anomalies as they occur on the line. The widely dispersed and insidious root causes remain hidden within the abundant complexity of the process in the absence of holistic, pre-emptive guidance. Meanwhile, production realities mean that KPIs are missed and opportunity costs are incurred by the time a corrective action has been identified.

Third, this quest for more real-time production data has lent itself to a self-defeating brand of troubleshooting: one that—paradoxically—renders control plans increasingly fragile and more difficult to execute.

REACTIVE VS PROACTIVE PRODUCTION

The good news for manufacturers is that a non-disruptive data-driven approach is available for continual manufacturing process optimization. With prescriptive analytics, AI-as-a-Service can empower manufacturers to move from a reactive paradigm (each solution serves to wind up the production system’s clock tighter) to a proactive paradigm. A proactive paradigm is pre-emptive: it automates root cause analysis and, in some cases, can loosen a particular non-critical tolerance, compensating for inevitable upstream variance and ensuring holistic production line control.

The prescriptive insights of the deep-learning system are self-consistent (so production, quality, and product teams remain aligned) and optimize the whole system. This means the setpoint adjustments that are applied positively reinforce one another. The result is that manifest production problems are avoided as manufacturing teams act coherently and ahead of real-time. 

In manufacturing environments, the opportunity costs associated with stopping production are unacceptably high. Production represents a huge capital investment—equipment, training, maintenance, salaries, and energy input, amongst other outlays. Keeping the machines moving is imperative to recovering these investments. For this reason, missed KPIs are typically corrected in the context of a “burning platform.” Once a KPI is missed, a failure mode and effects analysis (FMEA) document is usually referenced.

The FMEA is used to line up the observed symptoms of the issue with its understood root cause. If the mode of the failure is known, corrective action is applied. However, while the proverbial fire is put out, another conflagration (i.e., a new type of production failure) may have flared up, which, in turn, requires immediate attention.
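To make the pattern concrete, an FMEA lookup can be sketched, loosely, as a mapping from observed symptoms to known failure modes and their corrective actions; the entries and names below are purely hypothetical:

```python
# Rough illustration of FMEA-style reactive correction: map an observed symptom
# to a known failure mode and its corrective action. All entries are hypothetical.
FMEA = {
    "porosity in weld seam": ("shielding gas contamination", "replace gas line and purge"),
    "inconsistent bead width": ("wire feed slippage", "re-tension the feed rollers"),
}

def correct(symptom: str) -> str:
    mode, action = FMEA.get(symptom, ("unknown", "escalate to root cause analysis"))
    return f"failure mode: {mode}; corrective action: {action}"

print(correct("porosity in weld seam"))
print(correct("spatter on fixture"))  # unknown symptom -> escalation, while production continues
```

The lookup resolves known failures quickly, but anything outside the document's catalogue triggers exactly the slow, expert-driven analysis described below.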

THE ROOT CAUSE OF FRAGILE MANUFACTURING SYSTEMS

[Image: the reactive paradigm in a manufacturing process]

And yet, this kind of reactive, real-time troubleshooting—of production failures that have already incurred waste—has further and more wide-reaching drawbacks for manufacturers. Some of these production failures begin to recur more frequently. If the rate of failure climbs high enough, the production team will conduct a root cause analysis, which in turn prompts the quality team to issue a corrective action request indicating the need for a course correction.

For example, the product engineering team may then be required to generate a change note regarding the product’s design or its production process. This kind of root cause analysis is time-consuming, and while it is being conducted, defective units continue to be produced. 

Furthermore, such analysis relies on narrow human expertise built up over years of front-line experience. Whilst it can effectively reduce the incidence of a discrete production issue, it rarely gets to the heart of the problem. Why is this?

Apart from the fact that the analysis must then be interpreted by other teams with different professional perspectives, the ultimate corrective action is drawn from production data used to confirm the expert’s belief about the root cause of the problem. This means the issue has only ever been understood in broad terms, since it is not usually possible for anyone, no matter how experienced, to disentangle the problem from the complexity of its interdependencies. Thus, in the above cycle, somewhat useful yet incomplete knowledge is enacted and codified as institutional. At the same time, the inevitable emergence of new recurring problems demands ever more experts, who are often scarce and costly.

Moreover, the associated corrective action request is invariably an order to restrict the process limit to a narrower band; thus, over time, a production line winds up with a multitude of tighter tolerances, making process parameter control increasingly difficult. Real-time optimization remains fundamentally reactive and is not conducive to determining robust, flexible operating regions across the system.

THE DOMINO EFFECT OF COMPLEX PRODUCTION FAILURE

To understand the impact of a tightly wound control plan, a good analogy is a highway. Although each lane is designed for one car to travel along, it is, for safety, ideally wide enough to allow a vehicle to drift before veering into the next lane. However, squeezing additional lanes into the same stretch of road narrows every lane, which further increases the chance that vehicular drift will result in a collision. In the same way, the more a control plan is built on narrow tolerances (i.e., is tightly wound), the greater the chance that one of its values going out of control will create a cascade of failures.

In current manufacturing, this represents the fundamental conundrum faced by quality and production teams—to control a system whose process is made to be increasingly intolerant to inevitable upstream variance.
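The arithmetic behind this conundrum is easy to illustrate. Assuming, purely for the sake of example, a single process parameter with normally distributed variation, the share of runs that drift outside a tolerance band grows rapidly as that band is narrowed (all figures below are hypothetical):

```python
# Illustrative only: how narrowing a tolerance band inflates the rate of
# out-of-control events, assuming normally distributed parameter variation.
from statistics import NormalDist

def out_of_tolerance_rate(tolerance: float, sigma: float) -> float:
    """Probability that the parameter drifts outside +/- tolerance of its target."""
    return 2 * (1 - NormalDist(mu=0.0, sigma=sigma).cdf(tolerance))

sigma = 1.0  # natural process variation (hypothetical units)
for tolerance in (3.0, 2.0, 1.5, 1.0):
    rate = out_of_tolerance_rate(tolerance, sigma)
    print(f"tolerance +/- {tolerance:.1f}: {rate:.2%} of runs out of control")
```

Halving the band from three standard deviations to one and a half takes the out-of-control rate from well under one percent to over thirteen percent, without any change in the underlying process.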

It is also important to understand the limitations of tight feedback loops in advanced manufacturing processes, which are inherently complex. A tight feedback loop for a production process comprises input (i.e., setpoint adjustments), action (i.e., a production run), feedback (i.e., throughput versus latent defects/quality), and evaluation (i.e., analysis informing the next setpoint adjustments).
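In code, such a loop might be sketched, very roughly, as follows. The sub-process, setpoints, and numbers are entirely hypothetical; the point is only the structure of the cycle:

```python
# Minimal sketch of one tight feedback loop (input -> action -> feedback ->
# evaluation) around a single sub-process. All names and numbers are hypothetical.
import random

TARGET_DEFECT_RATE = 0.02

def run_production(setpoint: float) -> float:
    """Action: simulate one run; defects worsen as the setpoint drifts from 100."""
    return 0.01 + 0.001 * abs(setpoint - 100.0) + random.uniform(0, 0.005)

def evaluate(defect_rate: float, setpoint: float) -> float:
    """Evaluation: nudge the setpoint back toward nominal if quality slipped."""
    if defect_rate > TARGET_DEFECT_RATE:
        return setpoint + 0.5 * (100.0 - setpoint)
    return setpoint

setpoint = 104.0  # input: initial setpoint (hypothetical units)
for run in range(5):
    defect_rate = run_production(setpoint)       # action + feedback
    setpoint = evaluate(defect_rate, setpoint)   # evaluation -> next input
    print(f"run {run}: defect rate {defect_rate:.3f}, next setpoint {setpoint:.1f}")
```

Each iteration reacts only to what this sub-process just produced; nothing in the loop sees the steps upstream or downstream of it.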

In multistep, complex manufacturing processes, the default is—understandably—to zero in on the feedback loop that applies to each sub-process. Therefore, be it in fabs, foundries, automotive assembly, or general manufacturing plants, feedback loops are typically narrowly constrained to discrete input from many expert fields: for example, the combined results of various experts’ evaluations—thermal, chemical, electrical, and product—are fed as setpoint adjustments into the beginning of the next iteration of the process. In this way, production feedback loops tend to apply to singular sub-processes and are variegated according to discipline, team, and human capacity.

If we consider that such tight feedback loops do not account for the domino effect present in complex manufacturing processes, the opportunity cost of this approach to potential process optimization becomes clear. Common sense tells us that the output of each step serves as an input to the next. It follows, then, that knock-on effects will escape our notice if we react in real-time to decontextualized, even if accurate, feedback. This inherently fractal nature of tight feedback loops means the underlying processes that lead to non-quality remain hidden within acceptable tolerances.

Because manufacturing processes are by nature multivariate, the impact of any univariate deviation in one step is felt not only in the next step but also within the sub-step itself. Thus, the relative impact of process deviance is compounded when the downstream process parameters towards the end of a manufacturing sequence are hypersensitive to (i.e., cannot tolerate) the accumulated deviation from earlier upstream processes.
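A toy simulation makes the compounding effect concrete. Suppose each step simply inherits the deviation handed to it and adds its own drift, every one of which stays within its own tolerance; the accumulated deviation at the end of the line can still fall out of specification a significant share of the time. The step count, tolerances, and distributions below are hypothetical:

```python
# Illustrative sketch of how individually in-tolerance deviations at each step
# accumulate downstream. Step count, tolerances, and distributions are hypothetical.
import random

random.seed(0)
STEP_TOLERANCE = 0.5    # each step's own drift stays within +/- 0.5 (in spec)
FINAL_TOLERANCE = 1.0   # the final characteristic must stay within +/- 1.0

def simulate_line(n_steps: int = 6) -> float:
    """Each step inherits the upstream deviation and adds its own in-spec drift."""
    deviation = 0.0
    for _ in range(n_steps):
        deviation += random.uniform(-STEP_TOLERANCE, STEP_TOLERANCE)
    return deviation

runs = [simulate_line() for _ in range(10_000)]
scrap = sum(abs(d) > FINAL_TOLERANCE for d in runs) / len(runs)
print(f"final deviation out of spec in {scrap:.1%} of runs, "
      f"even though every step stayed within its own tolerance")
```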

The result of a tightly wound, real-time reaction system, then, is this: the cost of control for manufacturers increases, and defects are more likely to occur. This will always be the case when real-time data is used merely to glean and interpret information about actual production failures.

PRESCRIBING AI-AS-A-SERVICE

[Image: prescriptive analytics for manufacturing diagram]

Yet, the skepticism of some manufacturers around AI’s capacity to deliver continual process optimization is understandable. This is because AI deployments typically only predict defective components or products. In such cases, customers are protected from receiving failed products; however, the price of improved quality is increased scrap and reduced throughput.

Classical, purely predictive AI is therefore too static to realize process optimization. For this to happen, a production system needs to be looked at holistically; the AI must be designed to work proactively and deployed as AI-as-a-Service.

The optimization of a complex process is guided not merely by process data about current or imminent failures but by what the data communicate about all the interdependent variables, including those that have historically produced the best results. Furthermore, such a system continually and pre-emptively determines which variables, if adjusted, are most likely to achieve an optimal production run.
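One way to picture this, conceptually, is as a search over candidate setpoint adjustments scored by a model fitted to historical process and quality data. The sketch below is illustrative only: the model choice, features, and parameter names are assumptions for the example, not a description of any particular product's implementation:

```python
# Conceptual sketch: rank candidate setpoint adjustments by predicted quality,
# using a surrogate model fitted to historical process/quality data.
# Model choice, features, and names are illustrative assumptions.
from itertools import product

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical history: two interdependent parameters -> a quality score.
X_hist = rng.uniform([150, 10], [210, 15], size=(500, 2))   # current (A), feed rate (mm/s)
quality = 1.0 - 0.001 * (X_hist[:, 0] - 185) ** 2 - 0.02 * (X_hist[:, 1] - 12) ** 2
quality += rng.normal(0, 0.02, size=500)

model = GradientBoostingRegressor().fit(X_hist, quality)

current = np.array([170.0, 14.0])  # today's setpoints (hypothetical)
candidates = [current + np.array(step)
              for step in product((-5, 0, 5), (-1, 0, 1))]  # small, feasible moves

best = max(candidates, key=lambda c: model.predict(c.reshape(1, -1))[0])
print("prescribed next setpoints:", best)
```

Rather than waiting for a defect to confirm that the current setpoints are wrong, the scoring step asks which feasible adjustment the historical data suggests will pay off most.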

AHEAD OF REAL-TIME: A STABLE TARGET FOR PROCESS OPTIMIZATION

By gaining a holistic view of any complex manufacturing process, DataProphet PRESCRIBE learns the relevant interdependencies between the many production process parameters, including those upstream and downstream of each process. PRESCRIBE then accurately projects the impact of setpoint changes to a plant’s control plan and prescribes the next best, highest-impact step towards the best of best (BOB) region. This AI-as-a-Service approach ensures that suboptimal production outcomes remain mere potentialities: they are solved before they are realized.

Results have consistently shown that establishing a BOB region via the embedded expert guidance of data-driven machine learning provides a stable target, empowering plant controllers to sidestep the pitfalls of real-time and reactive optimization. 
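As a purely conceptual sketch (and not a description of PRESCRIBE's actual method), a best-of-best region can be pictured as the parameter ranges occupied by the historically best-performing runs; all data and names below are hypothetical:

```python
# Conceptual sketch only: derive a "best of best" (BOB) region as the parameter
# ranges of the top-decile historical runs. Data and names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
params = rng.normal(loc=[185.0, 12.0], scale=[10.0, 1.0], size=(1000, 2))
quality = -((params[:, 0] - 190.0) ** 2) / 100.0 - (params[:, 1] - 11.5) ** 2

top = params[quality >= np.quantile(quality, 0.9)]   # best 10% of historical runs
bob_low, bob_high = top.min(axis=0), top.max(axis=0)

for name, lo, hi in zip(("weld_current_a", "feed_rate_mm_s"), bob_low, bob_high):
    print(f"{name}: target region [{lo:.1f}, {hi:.1f}]")
```

The resulting ranges give plant controllers a stable target to steer towards, rather than a moving set of real-time alarms to chase.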

AI-for-Manufacturing is only worth the investment if it proves to be a dependable tool for achieving your Industry 4.0 ambitions. Its ROI needs to be measurable within a reasonable time frame (i.e., one year). Technically, it must learn the complex interconnectivity of a large number of process parameters and then deliver easily applicable recommendations for optimal plant and product metrics ahead of real-time.

The distinctions—between knowing what is happening in real-time, knowing what is likely to happen, and knowing precisely which changes to make to manifest a desired future production outcome (and when to make them)—are crucial. In reactive or predictive paradigms, ‘real-time’ is too late in the pursuit of Smart Manufacturing.

*Article can also be read on U.S. Tech: http://www.us-tech.com/RelId/2681014/issearch/dataprophet/ISvars/default/Get_Ahead_Real_Time_is_Too_Late_for_Manufacturers.htm