Similar Documents
20 similar documents found.
1.
Yield management in semiconductor manufacturing companies requires accurate yield prediction and continual control. However, because many factors are involved in complex ways in the production of semiconductors, manufacturers and engineers have a hard time managing yield precisely. Intelligent tools are needed to analyze the many process variables concerned and to predict production yield effectively. This paper devises a hybrid method that combines machine learning techniques to detect high and low yields in semiconductor manufacturing. The hybrid method is particularly well suited to manufacturing situations in which the control of a variety of process variables is interrelated. In real applications, the hybrid method provides more accurate yield prediction than other methods in use. With this method, a company can achieve a higher yield rate by screening out low-yield lots in advance.
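The abstract does not specify which learners the hybrid combines. As a minimal sketch of the idea, assuming simple threshold rules as stand-in component models (not the paper's actual learners), a majority vote over several models can flag a lot as low-yield:

```python
# Hypothetical sketch: majority-vote ensemble for high/low-yield lots.
# The component "models" are stand-in threshold rules on single process
# variables; the paper's actual learners are not named in the abstract.

def make_threshold_model(idx, cutoff):
    """Flag a lot as low-yield (1) when process variable `idx` exceeds `cutoff`."""
    return lambda x: 1 if x[idx] > cutoff else 0

def hybrid_predict(models, x):
    """Majority vote over component models: 1 = low yield, 0 = high yield."""
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

models = [make_threshold_model(0, 0.8),
          make_threshold_model(1, 0.5),
          make_threshold_model(2, 0.9)]

lot = [0.9, 0.7, 0.4]               # three process variables for one lot
print(hybrid_predict(models, lot))  # 1: two of three rules flag low yield
```

Screening then amounts to holding back any lot the ensemble labels 1 before it proceeds downstream.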

2.
The complexity of semiconductor manufacturing is increasing due to smaller feature sizes, a greater number of layers, and reentrant process flows. As a result, it is difficult to manage and assign responsibility for low yields of specific products. This paper presents a comprehensive data mining method for predicting and classifying product yields in semiconductor manufacturing processes. A genetic programming (GP) approach is presented that constructs a yield prediction system and automatically discovers the significant factors that might cause low yield. The results are then compared with those of a decision tree induction algorithm. Moreover, this research illustrates the robustness and effectiveness of the method using a real data set from a well-known DRAM fab, with a discussion of the results. Received: November 2004 / Accepted: September 2005
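A GP-evolved yield model is essentially an expression tree over process factors. A minimal sketch of how such a candidate model might be represented and evaluated (the factor names and the example tree are illustrative, not from the paper):

```python
import operator

# Operator set for the expression trees (a small, illustrative choice)
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(tree, factors):
    """Recursively evaluate a GP expression tree against factor values.
    Leaves are factor names; internal nodes are (op, left, right)."""
    if isinstance(tree, str):
        return factors[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, factors), evaluate(right, factors))

# One candidate model a GP run might evolve (hypothetical)
model = ("-", "base_yield", ("*", "particle_count", "defect_rate"))
print(round(evaluate(model, {"base_yield": 0.95,
                             "particle_count": 3,
                             "defect_rate": 0.01}), 2))  # 0.92
```

In a full GP system, trees like this would be mutated and recombined, and the factors surviving in the fittest trees are the "significant factors" the abstract mentions.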

3.
Optical inspection techniques are widely used in industry because they are non-destructive. Since defect patterns originate in the manufacturing processes of the semiconductor industry, efficient and effective defect detection and pattern recognition algorithms are in great demand to trace defects back to their causes; eliminating defects by modifying the manufacturing processes improves yield. Defect patterns such as rings, semicircles, scratches, and clusters are the most common in the semiconductor industry. Conventional methods cannot recognize that two defect patterns differing only in scale, shift, or rotation in fact share the same failure cause. To address these problems, this paper proposes a new approach to detecting these defect patterns in noisy images. First, a novel scheme is developed to simulate datasets of the four patterns for training and testing the classifiers. Second, for real optical images, a series of image processing operations is applied in the detection stage. In the identification stage, defects are resized and then identified by a trained support vector machine. An adaptive resonance theory network (ART1) is also implemented for comparison. Classification results on both simulated data and real noisy raw data show the effectiveness of the method.
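The classifier itself is standard; what matters is feeding it features that do not change under scaling, shifting, or rotation. As a stand-in sketch (not the paper's feature set), the ratio of radial standard deviation to mean radius about a pattern's centroid separates a simulated ring from a cluster and is, by construction, invariant to position, size, and rotation:

```python
import math
import random

random.seed(0)

def ring(n=200):
    """Simulate a ring defect: points at roughly unit radius, any angle."""
    pts = []
    for _ in range(n):
        a = random.uniform(0, 2 * math.pi)
        r = random.gauss(1.0, 0.05)
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

def cluster(n=200):
    """Simulate a cluster defect: a 2-D Gaussian blob."""
    return [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(n)]

def radial_spread(pts):
    """Std. dev. of radius / mean radius about the centroid.
    Dividing by the mean radius makes the feature scale-invariant."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    rs = [math.hypot(x - cx, y - cy) for x, y in pts]
    mu = sum(rs) / len(rs)
    var = sum((r - mu) ** 2 for r in rs) / len(rs)
    return math.sqrt(var) / mu

print(radial_spread(ring()) < radial_spread(cluster()))  # True: rings are "thin"
```

An SVM trained on a handful of such invariant features would then label the same physical pattern identically regardless of where on the wafer it occurs or how large it is.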

4.
Yield forecasting is a very important task for a semiconductor manufacturing factory, which is a typical group-decision-making environment in which many experts gather to predict product yields collaboratively. To enhance both the precision and the accuracy of collaborative semiconductor yield forecasting, an online expert system is constructed in this study. The system adopts a client–server architecture, so the experts need not gather in the same place, which is especially valuable in a multiple-factory setting. To demonstrate its applicability, an experimental system was constructed and applied to two random-access-memory products in a real semiconductor manufacturing factory. Both the precision and the accuracy of the yield forecasts for the two products were significantly improved. The system also proved to be a convenient platform on which product engineers and quality control staff from different factories could share opinions about improving the yield of a product manufactured with the same technology in multiple factories.

5.
Wafer bin maps (WBMs) that show specific spatial patterns can provide clues for identifying process failures in semiconductor manufacturing. In practice, most companies rely on experienced engineers to find specific WBM patterns visually. However, as wafer sizes grow and integrated circuit (IC) feature sizes continue to shrink, WBM patterns become complicated by differences in die size, wafer rotation, and the density of failed dies, and human judgments become inconsistent and unreliable. To fill this gap, this study develops a knowledge-based intelligent system for WBM defect diagnosis for yield enhancement in wafer fabrication. The proposed system consists of three parts: a graphical user interface, the WBM clustering solution, and the knowledge database. In particular, the WBM clustering approach integrates a spatial statistics test, a cellular neural network (CNN), an adaptive resonance theory (ART) neural network, and moment invariants (MI) to cluster different patterns effectively. In addition, an interactive conversational interface presents possible root causes in order of similarity and records diagnostic know-how from domain experts in the knowledge database. To validate the proposed WBM clustering solution, twelve different WBM patterns collected in real settings are used to demonstrate its performance in terms of purity, diversity, specificity, and efficiency. The results show the validity and practical viability of the proposed system, which has been implemented in a leading semiconductor manufacturing company in Taiwan. The system can recognize specific failure patterns efficiently and also records the assignable root causes verified by domain experts to support effective troubleshooting.
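Moment invariants are the rotation-robust ingredient in the clustering pipeline. A small sketch of the first Hu invariant computed on a toy binary failed-die map, showing that it is unchanged under a 90° rotation of the map (the real system applies this to clustered failed-die regions on full wafer maps):

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a binary image given as a list of rows."""
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * v for y, row in enumerate(img) for x, v in enumerate(row)) / m00
    ybar = sum(y * v for y, row in enumerate(img) for x, v in enumerate(row)) / m00
    return sum(((x - xbar) ** p) * ((y - ybar) ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu1(img):
    """First Hu invariant: eta20 + eta02 (normalized central moments)."""
    m00 = sum(sum(row) for row in img)
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    return (mu20 + mu02) / m00 ** 2

def rot90(img):
    """Rotate the binary map by 90 degrees."""
    return [list(row) for row in zip(*img[::-1])]

wbm = [[0, 1, 0],
       [1, 1, 1],
       [0, 0, 0]]  # toy failed-die map
print(abs(hu1(wbm) - hu1(rot90(wbm))) < 1e-9)  # True: rotation-invariant
```

Because the invariant is the same for the rotated map, two wafers showing the same failure signature at different orientations fall into the same cluster.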

6.
Accurate planning of produced quantities is a challenging task in the semiconductor industry, where the percentage of good parts (measured by yield) is affected by multiple factors. However, conventional data mining methods, designed and tuned on "well-behaved" data, tend to produce a large number of complex and hardly useful patterns when applied to manufacturing databases. This paper presents a novel perception-based method, called the Automated Perceptions Network (APN), for the automated construction of compact and interpretable models from highly noisy data sets. We evaluate the method on yield data of two semiconductor products and describe possible directions for the future use of automated perceptions in data mining and knowledge discovery.

7.
Semiconductor manufacturing is among the most complicated of production processes, with the challenges of dynamic job arrival, job re-circulation, shifting bottlenecks, and a lengthy fabrication process. Owing to the lengthy wafer fabrication process, work in process (WIP) strongly affects cycle time and throughput in semiconductor fabrication. As semiconductor applications have reached the era of consumer electronics, time to market has played an increasingly critical role in maintaining a competitive advantage for a semiconductor company. Many past studies have explored how to reduce scheduling and dispatching time in the production cycle. Focusing on real settings, this study develops a manufacturing intelligence approach that integrates the Gauss-Newton regression method and a back-propagation neural network as base models to forecast the cycle time of a production line, where WIP, capacity, utilization, average layers, and throughput serve as input factors for identifying effective rules to control the levels of the corresponding factors and reduce cycle time. It also develops an adaptive model for rapid response to changes in production line status. To evaluate the validity of this approach, we conducted an empirical study on demand change and production dynamics in a semiconductor foundry in Hsinchu Science Park. The approach proved successful in improving forecast accuracy and realigning the desired throughput levels of production lines to reduce cycle time.
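For a purely linear model, the Gauss-Newton iteration collapses to ordinary least squares, which makes the regression half of the approach easy to sketch. The WIP and cycle-time numbers below are illustrative only, and the paper's model uses several input factors plus a neural network on top:

```python
# Hedged sketch: one-variable least squares relating WIP to cycle time.
# For a linear model the Gauss-Newton step reduces to exactly this fit.

def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

wip        = [100, 150, 200, 250, 300]       # lots in line (illustrative)
cycle_time = [20.0, 25.0, 30.0, 35.0, 40.0]  # days (illustrative)
a, b = fit_ols(wip, cycle_time)
print(round(a + b * 180, 1))  # 28.0: forecast cycle time at WIP = 180
```

The adaptive part of the paper's approach would periodically refit such a model as the production line's status changes.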

8.
This study models the global and local variations hidden in multichannel functional data (MFD) for the purpose of manufacturing process monitoring. With advances in sensing technology, online measurements of manufacturing process variables can take the shape of multichannel curves. Although MFD contain rich information about process conditions, modeling and interpreting their complex variations for process change detection and faulty condition discrimination is challenging. A new approach is developed in this paper to decompose each channel of functional data into global and local variation components. Based on the extracted patterns, a principal curve regression method is applied to detect and discriminate different process conditions. The method is validated with real data from a forging plant, and a simulation study verifies the approach on MFD with complex patterns.
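A minimal sketch of the global/local split for a single channel, assuming a moving-average smooth as the global component (a stand-in; the paper's decomposition is more sophisticated): the smooth captures the slow trend, and the residual carries the local variation, so a short-lived process event shows up only in the local part.

```python
def decompose(signal, w=5):
    """Split one channel into a global (moving-average) component and a
    local (residual) component; global + local reconstructs the signal."""
    half = w // 2
    global_part = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        global_part.append(sum(window) / len(window))
    local_part = [s - g for s, g in zip(signal, global_part)]
    return global_part, local_part

channel = [1.0, 1.2, 1.1, 1.3, 5.0, 1.4, 1.2, 1.5, 1.3]  # spike = local event
g, l = decompose(channel)
print(max(l) == l[4])  # True: the local component isolates the spike
```

Monitoring would then track the two components separately, since a drift in the global part and a spike in the local part point to different kinds of process change.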

9.
Finding correlated sequential patterns in large sequence databases is an essential task in data mining, since a huge number of sequential patterns are usually mined but few of them are genuinely correlated, and real applications differ in the kinds of analysis they require. In previous mining approaches, sequential patterns with weak affinity are found even with a high minimum support. In this paper, a new framework is suggested for mining weighted support affinity patterns, in which an objective measure, sequential ws-confidence, is developed to detect correlated sequential patterns with weighted support affinity. To prune weak affinity patterns efficiently, the ws-confidence measure is proved to satisfy the anti-monotone and cross weighted support properties, which can be applied to eliminate sequential patterns with dissimilar weighted support levels. Based on this framework, a weighted support affinity pattern mining algorithm (WSMiner) is suggested. The performance study shows that WSMiner is efficient and scalable for mining weighted support affinity patterns.
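The exact ws-confidence definition is given in the paper; as a hedged stand-in, an all-confidence-style measure over weighted supports (the ratio of the smallest to the largest weighted support among a pattern's items) captures the same intuition: items with similar weighted supports score high, and the score can only fall as items are added, which is what makes anti-monotone pruning possible.

```python
# Illustrative affinity measure over weighted supports; NOT the paper's
# exact ws-confidence formula. Item weights and supports are made up.

weighted_support = {"a": 0.60, "b": 0.55, "c": 0.05}

def ws_affinity(pattern):
    """min/max ratio of weighted supports: 1.0 = perfectly balanced."""
    ws = [weighted_support[i] for i in pattern]
    return min(ws) / max(ws)

print(round(ws_affinity(["a", "b"]), 3))  # 0.917: similar supports, keep
print(round(ws_affinity(["a", "c"]), 3))  # 0.083: weak affinity, prune
```

A miner can discard any pattern whose score falls below a threshold and, by anti-monotonicity, skip all of its super-patterns too.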

10.
Since semiconductor manufacturing consists of hundreds of processes, a faulty wafer detection system that allows earlier detection of faulty wafers is required. Statistical process control (SPC) and virtual metrology (VM) have been used to detect faulty wafers, but both have limitations: SPC requires linear, unimodal, single-variable data, and VM underestimates the deviations of predictors. In this paper, seven different machine learning-based novelty detection methods were employed to detect faulty wafers. The models were trained on Fault Detection and Classification (FDC) data to detect wafers with faulty metrology values and were tested on real-world data collected from a semiconductor fab. Since the real-world data have more than 150 input variables, we employed three different dimensionality reduction methods. The experiments showed a high true positive rate (TPR), promising enough to warrant further study.
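The abstract lists seven detectors without naming them; as the simplest possible stand-in (not necessarily one the paper uses), a one-class rule fit only on normal FDC readings flags anything beyond k standard deviations as novel:

```python
import statistics

# Hedged sketch of novelty detection: fit on NORMAL data only, then flag
# departures. The FDC readings below are made-up single-variable values;
# the paper's data have 150+ variables, reduced before detection.

def fit_novelty(normal):
    """Learn location and scale from normal wafers only."""
    return statistics.mean(normal), statistics.pstdev(normal)

def is_faulty(x, mu, sd, k=3.0):
    """Flag a reading more than k standard deviations from normal."""
    return abs(x - mu) > k * sd

normal_fdc = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]
mu, sd = fit_novelty(normal_fdc)
print(is_faulty(1.01, mu, sd))  # False: within normal variation
print(is_faulty(2.5, mu, sd))   # True: novel reading, likely faulty wafer
```

The key property shared with the paper's methods is that no faulty wafers are needed for training, which matters precisely because faulty wafers are rare.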

11.
The International Technology Roadmap for Semiconductors (ITRS) identifies production test data as an essential element in improving design and technology in the manufacturing process feedback loop. One observation from high-volume production test data is that dies that fail due to a systematic cause tend to form unique patterns that manifest as defect clusters at the wafer level. Identifying and categorising such clusters is a crucial step towards manufacturing yield improvement and the implementation of real-time statistical process control. Addressing the semiconductor industry's needs, this research proposes an automatic defect cluster recognition system for semiconductor wafers that achieves up to 95% accuracy, depending on the product type.

12.
A critical aspect of wire bonding is bonding strength, whose quality accounts for the major part of yield loss in the integrated circuit assembly process. This paper applies an integrated approach using neural networks and genetic algorithms to optimize the IC wire bonding process. We first use a back-propagation network to model the nonlinear relationship between factors and the response, based on experimental data from a semiconductor manufacturing company in Taiwan. A genetic algorithm is then applied to obtain the optimal factor settings. A comparison between the proposed approach and the Taguchi method was also conducted. The results demonstrate the superiority of the proposed approach in terms of process capability.
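The pattern is: train a network as a surrogate for the response surface, then let a GA search it for the best factor settings. A minimal sketch, with a hand-written quadratic standing in for the trained network and a made-up optimum at (0.3, 0.7); the factor names and values are hypothetical:

```python
import random

random.seed(1)

def surrogate(x):
    """Stand-in for the trained back-propagation network: bond strength
    peaks at force = 0.3, power = 0.7 (a hypothetical optimum)."""
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

def ga(fitness, n_pop=30, n_gen=60, sigma=0.1):
    """Elitist GA: keep the best half, breed children by averaging two
    parents (crossover) plus Gaussian noise (mutation)."""
    pop = [[random.random(), random.random()] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:n_pop // 2]
        children = []
        for _ in range(n_pop - len(parents)):
            a, b = random.sample(parents, 2)
            children.append([(ai + bi) / 2 + random.gauss(0, sigma)
                             for ai, bi in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

best = ga(surrogate)
print(surrogate(best) > -0.01)  # True: settings land near the optimum
```

In the paper's setup the surrogate would be the trained network itself, so every GA fitness evaluation is just a forward pass rather than a physical bonding experiment.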

13.
Defective wafer detection is essential to avoid yield loss due to process abnormalities in semiconductor manufacturing. For most complex processes, various sensors are installed on equipment to capture process information and equipment conditions, including pressure, gas flow, temperature, and power. Because defective wafers are rare in current practice, supervised learning methods usually perform poorly: there are not enough defective wafers for fault detection (FD). Existing anomaly detection methods often rely on linear excursion detection, such as principal component analysis (PCA), a k-nearest neighbor (kNN) classifier, or manual inspection of equipment sensor data. However, observing equipment sensor readings directly often cannot identify the critical features or statistics for detecting defective wafers. To bridge the gap between research-based knowledge and semiconductor practice, this paper proposes an anomaly detection method that uses a denoising autoencoder (DAE) to learn a representation of normal wafers from equipment sensor readings and to serve as a one-class classification model. Typically, the maximum reconstruction error (MaxRE) is used as the threshold to differentiate between normal and defective wafers. However, a MaxRE threshold usually yields a high false positive rate on normal wafers because of outliers in an imbalanced data set. To resolve this difficulty, the Hampel identifier, a robust outlier detection method, is adopted to determine a new threshold for detecting defective wafers, called MaxRE without outliers (MaxREwoo). The proposed method is illustrated in an empirical study based on real data from a wafer fab. The experimental results show that the proposed DAE is a promising, viable solution for on-line FD in semiconductor manufacturing.
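The Hampel identifier itself is standard and easy to sketch: a robust upper cut at median + k · 1.4826 · MAD of the training reconstruction errors, so that a single outlying "normal" wafer no longer dictates the threshold the way the plain maximum (MaxRE) does. The error values below are made up:

```python
import statistics

def hampel_threshold(errors, k=3.0):
    """Upper threshold via the Hampel identifier: median + k * 1.4826 * MAD.
    1.4826 scales the MAD to be consistent with a Gaussian std. dev."""
    med = statistics.median(errors)
    mad = statistics.median(abs(e - med) for e in errors)
    return med + k * 1.4826 * mad

# Reconstruction errors of normal training wafers, with one outlier that
# would inflate a max-based (MaxRE) threshold.
errors = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.11, 0.95]
thr = hampel_threshold(errors)
print(round(thr, 3))       # 0.154
print(max(errors) > thr)   # True: MaxRE would sit far above the robust cut
```

At test time, any wafer whose DAE reconstruction error exceeds `thr` would be flagged as defective; with MaxRE the threshold would be 0.95, and mild excursions would slip through.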

14.
Intensive competition and rapid technology development in the Twisted-Pair Cable (TPC) industry have left no room for competing manufacturers to harbour system inefficiencies. TPCs are used in various communication and network hardware applications; their manufacturing facilities face many challenges, including varied product configurations with different equipment settings, different product flows, and Work in Process (WIP) space limitations. The quest for internal efficiency and external effectiveness forces companies to align their internal settings and resources with external requirements and orders; in other words, significant factors must be identified and set appropriately before manufacturing begins. Integrated definition models (IDEF0, IDEF3), together with a simulation model and a design of experiments (DOE), are developed to characterize the TPC production system, identify the significant process parameters, and examine various production-setting scenarios with the aim of minimizing product flow time.

15.
Improving manufacturing quality is an important challenge in many industrial settings. Data mining methods mostly approach this challenge by examining the effect of operation settings on product quality; we instead analyze the impact of operational sequences on product quality. For this purpose, we propose a novel method for the visual analysis and classification of operational sequences. The suggested framework is based on an Iterated Function System (IFS) that produces a fractal representation of manufacturing processes. We demonstrate the method with a software application for visual analysis of quality-related data. The method offers production engineers an effective tool for visually detecting operational sequence patterns that influence product quality, and it requires no understanding of mathematical or statistical algorithms. Moreover, it detects faulty operational sequence patterns of any length without predefining the pattern length, and it visually distinguishes between different faulty sequence patterns even when operations recur within a production route. A further significant benefit is the visual detection of rare and missing operational sequences per product quality measure. We demonstrate cases in which previous methods fail to provide these capabilities.
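The IFS (chaos-game) representation is simple to sketch: with four hypothetical operation types mapped to the corners of the unit square, each step contracts the current point halfway toward the corner of the next operation. Because every prefix determines a distinct sub-square, different routes land at different points, which is what makes the fractal plot visually separable:

```python
# Chaos-game sketch of an IFS sequence representation. Operation labels
# A-D and the corner assignment are illustrative, not from the paper.

CORNERS = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}

def ifs_point(sequence):
    """Map an operation sequence to a point in the unit square by
    repeatedly moving halfway toward each operation's corner."""
    x, y = 0.5, 0.5
    for op in sequence:
        cx, cy = CORNERS[op]
        x, y = (x + cx) / 2, (y + cy) / 2
    return x, y

p1 = ifs_point("ABAB")
p2 = ifs_point("ABAD")  # differs only in the last operation
print(p1 != p2)         # True: the representation separates the routes
```

Plotting one such point per produced unit, colored by its quality measure, gives exactly the kind of picture the paper describes: quality-related sequence patterns appear as colored regions, and empty regions reveal rare or missing sequences.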

16.
Data mining is the process of discovering unknown, hidden information in a large amount of data, extracting valuable information, and using that information to make important business decisions. It has developed into an information technology encompassing regression, decision trees, neural networks, fuzzy sets, rough sets, and support vector machines. This paper puts forward a rough set-based multiple criteria linear programming (RS-MCLP) approach for solving classification problems in data mining. First, we describe the basic theory and models of rough sets and multiple criteria linear programming (MCLP) and analyse their characteristics and advantages in practical applications. Second, their respective deficiencies are analysed in detail. Because the two methods are mutually complementary, we propose and build RS-MCLP methods and models that integrate their virtues and overcome their adverse factors simultaneously. In addition, we develop and implement the algorithm and models on SAS and Windows platforms. Finally, extensive experiments show that the RS-MCLP approach outperforms the single MCLP model and other traditional classification methods in data mining, and markedly improves the accuracy of medical diagnosis and prognosis.

17.
Periodic subgraph mining in dynamic networks
In systems of interacting entities such as social networks, interactions that occur regularly typically correspond to significant, yet often infrequent and hard-to-detect, interaction patterns. To identify such regular behavior in streams of dynamic interaction data, we propose a new mining problem: finding a minimal set of periodically recurring subgraphs that captures all periodic behavior in a dynamic network. We analyze the computational complexity of the problem and show that it is polynomial, unlike many related subgraph or itemset mining problems. We propose an efficient and scalable algorithm that mines all periodic subgraphs in a single pass over the data and can also accommodate imperfect periodicity. We demonstrate the applicability of our approach on several real-world networks, extracting interesting and insightful periodic interaction patterns, and show that periodic subgraphs can be an effective way to uncover and characterize the natural periodicities in a system.
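A minimal single-pass sketch of the core idea, restricted to individual edges and perfect periodicity (the paper's algorithm handles whole subgraphs and tolerates imperfect periods); the snapshot data are made up:

```python
from collections import defaultdict

def periodic_edges(snapshots, min_occurrences=3):
    """Find edges recurring with a constant period across graph snapshots.
    `snapshots` maps timestep -> set of edges. One pass collects each
    edge's timestamps; an edge is periodic if its timestamp gaps are all
    equal and it occurs at least `min_occurrences` times."""
    seen = defaultdict(list)
    for t in sorted(snapshots):
        for e in snapshots[t]:
            seen[e].append(t)
    result = {}
    for e, ts in seen.items():
        gaps = {b - a for a, b in zip(ts, ts[1:])}
        if len(ts) >= min_occurrences and len(gaps) == 1:
            result[e] = gaps.pop()
    return result

snaps = {0: {("u", "v"), ("v", "w")},
         1: {("v", "w")},
         2: {("u", "v")},
         4: {("u", "v")},
         6: {("u", "v")}}
print(periodic_edges(snaps))  # {('u', 'v'): 2}
```

Generalizing from edges to subgraphs means intersecting the snapshots at the candidate timesteps, which is where the paper's minimality and imperfect-periodicity machinery comes in.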

18.
Due to the rapid development of information technologies, abundant data have become readily available. Data mining techniques have been used for process optimization in many manufacturing domains, including automotive, LCD, semiconductor, and steel production. However, data sets often contain a large number of missing values arising from several causes (e.g., data discarded due to gross measurement errors, measurement machine breakdown, routine maintenance, sampling inspection, and sensor failure), which frequently complicates the application of data mining. This study proposes a new process optimization procedure, missing-values Patient Rule Induction Method (m-PRIM), which handles the missing-values problem systematically and yields considerable process improvement even when a significant portion of the data set has missing values. A case study of a semiconductor manufacturing process illustrates the proposed procedure.
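PRIM searches for a high-response box by repeatedly "peeling" away a small fraction of the data at one edge of a variable's range. A hedged sketch of a single peeling step that tolerates missing values (None) by computing the peel threshold only from observed values and retaining rows whose value is missing, in the spirit of m-PRIM; the paper's exact missing-value handling may differ:

```python
def peel_once(data, alpha=0.25):
    """One PRIM peel on (x, y) rows: drop the lowest alpha-fraction of
    OBSERVED x values; keep rows with missing x instead of discarding them."""
    observed = sorted(x for x, _ in data if x is not None)
    cut = observed[int(alpha * len(observed))]
    return [(x, y) for x, y in data if x is None or x >= cut]

# (process setting, yield) rows; one row has a missing setting
data = [(1, 0.20), (None, 0.90), (2, 0.40), (3, 0.80), (4, 0.90),
        (5, 0.95), (6, 0.97), (7, 0.99), (8, 0.99)]
kept = peel_once(data)
print(len(kept))  # 7: two low-x rows peeled, the missing-x row kept
```

Repeating such peels (and choosing, at each step, the variable and edge whose removal raises the mean response most) shrinks the box toward a high-yield operating region without throwing away every incomplete record first.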

19.
This paper is concerned with the qualification management problem for parallel machines under high uncertainty in the semiconductor manufacturing industry. Product–machine (or recipe–machine) qualification is a complicated, time-consuming process frequently encountered in semiconductor manufacturing, and the high uncertainty common to semiconductor processes significantly increases its complexity. This paper addresses the resulting scheduling problem with a general two-stage stochastic programming formulation that embeds uncertainty in the qualification management problem. The proposed model considers capacity loss resulting both from traditional random factors, such as tool failures, and from recipe–machine qualification, making it more applicable to real systems. To solve the problem, we propose a Lagrangian-relaxation-based surrogate subgradient approach; numerical experiments indicate that it can optimize the problem in acceptable computation time. In addition, since complete distribution information for the random variables is often unavailable in practice, a simplified approach is also developed to approximate the initial problem.

20.
Hypothesis testing with constrained null models can compute the significance of data mining results given what is already known about the data. We study the novel problem of finding the smallest set of patterns that explains most about the data in terms of a global p value. The resulting set of patterns, such as frequent patterns or clusterings, is the smallest set that statistically explains the data. We show that the newly formulated problem is, in its general form, NP-hard, and that no efficient algorithm with a finite approximation ratio exists. However, in a special case a solution can be computed efficiently with a provable approximation ratio. We find that a greedy algorithm gives good results on real data and that, using our approach, we can formulate and solve many known data mining tasks, which we demonstrate on several of them. We conclude that our framework can identify, in various settings, a small set of patterns that statistically explains the data, and can formulate data mining problems in terms of statistical significance.
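In spirit, the greedy selection behaves like weighted set cover over the records each pattern "explains". A hedged sketch with illustrative pattern names and coverage sets (not from the paper, whose objective is a global p value rather than raw coverage):

```python
def greedy_explain(patterns, universe):
    """Greedily pick the pattern covering the most still-unexplained
    records until everything is explained or no pattern helps."""
    chosen, uncovered = [], set(universe)
    while uncovered:
        best = max(patterns, key=lambda p: len(patterns[p] & uncovered))
        if not patterns[best] & uncovered:
            break  # nothing left explains the remainder
        chosen.append(best)
        uncovered -= patterns[best]
    return chosen

# Which records each candidate pattern explains (illustrative)
patterns = {"p1": {1, 2, 3, 4}, "p2": {3, 4, 5}, "p3": {5, 6}, "p4": {1, 6}}
print(greedy_explain(patterns, {1, 2, 3, 4, 5, 6}))  # ['p1', 'p3']
```

The classical set-cover guarantee (a logarithmic approximation factor for greedy) is the kind of provable ratio the abstract's special case refers to, though the paper's own bound is stated for its statistical objective.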


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号