13 Oct 2020 | DataProphet | Semiconductors
Semiconductor foundries represent some of the most sophisticated production processes of the modern world: rapid design iterations of increasingly complex packages, combined to form ever more capable Systems on Chips (SoCs). This product sophistication can become a curse when optimization is required to improve yield rates. Some of these challenges are structural: the large number of unit processes performed by different machinery, combined with the high number of different jobs, means process data is often siloed in different data structures along the value chain.
Other challenges relate to organizational thinking: given the focus placed on the relatively frequent quality stations and their accuracy, reactive thinking is a much more comfortable fit for process optimization. Given these challenges, why should foundries implement advanced analytics techniques in the production process?
The motivation for applying any new method always needs a strong business case. When it comes to applying advanced analytics or AI to production processes, it does not matter how novel or exciting the technology is: the new system must deliver a meaningful return if it is ever to leave an innovation budget and pass into production.
Mechanical improvements to a process are often much more easily quantified than improvements to existing machinery through analytics. The cost-benefit of installing a new machine with improved capacity and a larger die size can be quantified readily, with guaranteed throughput and well-understood costs of material and energy. Analytics solutions, by contrast, are more often than not motivated by a sudden loss in yield due to an increase in defects, often from unknown causes.
As a result of this reactive motivation, analytics efforts often take the form of a sequence of one-off projects driven by process experts, whether under DMAIC, PDCA, or another continuous-improvement paradigm. The business case in this scenario rests on the recognition that production cannot afford to operate at the elevated defect rate, rather than on an explicit calculation of the Cost of Non-Quality (CNQ).
Herein lies the first of the two largest challenges that AI faces. I deliberately use the term AI here to mean analytics that drives through to a recommended action, often without the expert in the loop that more traditional analytics depends on. One further distinction: while AI can be used for parameter optimization on a single unit process, as in Model Predictive Control (MPC) or other forms of Advanced Process Control (APC), the AI for process optimization I am referring to optimizes a sequence of unit processes, and often multiple quality metrics, in a holistic manner.
The proactive, continuous approach of AI to process optimization represents a mindset change from the reactive approach of traditional thinking, both in the implementation and in the business case. Where previously the business case was driven by some increase in the defect rate above the ordinary, now it must be motivated by comparing the ordinary or accepted rate of defects to the expected new rate.
You would be surprised how rarely the ordinary or accepted rate of defects across the entire production line is readily available, but this is to be expected: the reactive approach does not need to know the total line defect rate, only whether defects have increased at one of the quality stations. We often have to work with our customers to establish the accepted rate of defects across the entire production line, which in turn helps us understand our customers' business better during implementation.
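To illustrate why the line-wide rate is not available from any single quality station, it can be assembled from per-station defect rates as the complement of the rolled throughput yield. The rates below are purely hypothetical:

```python
# Hypothetical per-station defect rates along a production line.
station_defect_rates = [0.012, 0.008, 0.015, 0.005]

# Rolled throughput yield: the fraction of product that passes
# every quality station in the sequence.
rty = 1.0
for d in station_defect_rates:
    rty *= (1.0 - d)

# Total line defect rate is the complement of that yield.
line_defect_rate = 1.0 - rty
print(f"Total line defect rate: {line_defect_rate:.2%}")
# → Total line defect rate: 3.94%
```

Note that the line-wide rate (3.94%) exceeds every individual station's rate, which is exactly the figure the reactive, per-station view never computes.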
The business case can then be built around the difference between the accepted rate of defects and the expected improved rate (a stronger target than the reactive return to normal). At DataProphet, we quantify the expected reduction in defects (and the corresponding improvement in yield) by contrasting the accepted rate of defects with the customer's best-of-best experience in recent history, building the business case around realizing at least 50% of the difference between those two rates. In our experience, this commonly results in a 30 to 40% reduction in defects, a result we have repeatedly demonstrated with existing customers across a wide range of industries.
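The arithmetic behind this business case can be sketched with illustrative numbers (the two rates below are hypothetical; only the 50% realization factor comes from the approach described above):

```python
# Hypothetical figures for illustration only.
accepted_defect_rate = 0.040   # ordinary, accepted rate across the line
best_of_best_rate = 0.015      # best-of-best recent short-run performance

# Business case: realize at least 50% of the gap between the two rates.
realization = 0.5
expected_rate = accepted_defect_rate - realization * (
    accepted_defect_rate - best_of_best_rate
)

# Relative reduction versus the accepted baseline.
reduction = (accepted_defect_rate - expected_rate) / accepted_defect_rate
print(f"Expected defect rate: {expected_rate:.2%}")  # → 2.75%
print(f"Relative reduction:   {reduction:.0%}")      # → 31%
```

With these illustrative inputs the reduction lands at 31%, inside the 30 to 40% range cited above.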
The second of the two largest challenges AI faces in achieving continuous process optimization for production lines centers on the often siloed process and quality data. This is especially the case in foundries, where the machines and quality stations in a sequence are supplied by different vendors. Historically, data has been siloed because it was originally collected for control and compliance purposes.
Reactive process control improvement paradigms have been able to leverage this data because they are usually applied in an ad hoc, focused manner. More recently, the need to orchestrate process data and attach it to the underlying product has been driven by traceability requirements as well as the promise of AI solutions.
While we do see more and more good data orchestration platforms becoming available to modern manufacturing processes, it is critical that, when implemented, they support an application that will create value from the data, for two reasons. The first reason is that data without a use is not valuable.
We have seen a number of manufacturers buy into the idea that data is valuable and implement a data orchestration platform, only to realize that further investment is needed, either in developing their own data science team or in contracting one. In both cases, it often means that quick value is realized by using the orchestration platform for visualization rather than the larger value from AI that the platform was originally intended to support.
This leads to the second reason for being clear from the outset about which application will use the data: the application dictates the requirements for the data orchestration platform. If the goal is just visibility, then the requirements focus on how near to real time the data is, but not much more. However, if the intent is to implement a good AI system, the requirements also extend to continuously attaching quality data to process data (often described as contextualized data) and to having a clear feedback loop.
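The essence of contextualization is joining process records to quality outcomes through a shared unit identifier. A minimal sketch, with entirely illustrative field names and values:

```python
# Process data and quality data, typically held in separate silos.
# All identifiers and fields below are hypothetical.
process_records = [
    {"unit_id": "W-001", "station": "etch", "chamber_temp_c": 64.2},
    {"unit_id": "W-001", "station": "deposition", "pressure_pa": 101.1},
    {"unit_id": "W-002", "station": "etch", "chamber_temp_c": 66.8},
]
quality_records = [
    {"unit_id": "W-001", "defect": False},
    {"unit_id": "W-002", "defect": True},
]

# Index quality outcomes by unit, then attach each outcome to the
# process records for that unit — the "contextualized" view an AI
# system trains on.
quality_by_unit = {q["unit_id"]: q["defect"] for q in quality_records}
contextualized = [
    {**rec, "defect": quality_by_unit[rec["unit_id"]]}
    for rec in process_records
    if rec["unit_id"] in quality_by_unit
]
print(contextualized[0])
# → {'unit_id': 'W-001', 'station': 'etch', 'chamber_temp_c': 64.2, 'defect': False}
```

In production this join is continuous rather than batch, which is precisely the requirement that distinguishes an AI-ready platform from a visualization-only one.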
In our experience, when integrating with a customer's data infrastructure, we leverage existing data orchestration platforms where they are deployed. In doing so, we often find that the existing infrastructure needs to be augmented to support contextualized data.
With the business case established and the data orchestrated as described above, achieving the first results can take as little as 2 to 4 weeks once installation is complete, with ROI within 3 to 6 months of the system going live in production. Furthermore, properly established, the desired end goal can put the data infrastructure and supporting AI applications on a trajectory toward a much more autonomous manufacturing plant.