Similar Literature
1.
Wafer bin maps (WBMs) that show specific spatial patterns can provide clues for identifying process failures in semiconductor manufacturing. In practice, most companies rely on experienced engineers to find specific WBM patterns visually. However, as wafer sizes grow and integrated circuit (IC) feature sizes continue to shrink, WBM patterns become complicated by differences in die size, wafer rotation, and the density of failed dies, so human judgments become inconsistent and unreliable. To fill this gap, this study develops a knowledge-based intelligent system for WBM defect diagnosis to enhance yield in wafer fabrication. The proposed system consists of three parts: a graphical user interface, the WBM clustering solution, and the knowledge database. In particular, the developed WBM clustering approach integrates a spatial statistics test, a cellular neural network (CNN), an adaptive resonance theory (ART) neural network, and moment invariants (MI) to cluster different patterns effectively. In addition, an interactive conversational interface presents the possible root causes in order of similarity and records the diagnosis know-how of domain experts in the knowledge database. To validate the proposed WBM clustering solution, twelve different WBM patterns collected in real settings are used to demonstrate the performance of the proposed method in terms of purity, diversity, specificity, and efficiency. The results show the validity and practical viability of the proposed system; indeed, the developed solution has been implemented in a leading semiconductor manufacturing company in Taiwan. The proposed WBM intelligent system can recognize specific failure patterns efficiently and also records the assignable root causes verified by domain experts to support effective troubleshooting.
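The moment-invariant (MI) ingredient of the clustering approach is easy to illustrate. Below is a minimal sketch, assuming a WBM is given as a binary NumPy array (1 = failed die); the two invariants shown are the standard first Hu moments, not code from the paper.

```python
import numpy as np

def hu_moments(wbm: np.ndarray) -> np.ndarray:
    """First two Hu moment invariants of a binary wafer bin map."""
    ys, xs = np.nonzero(wbm)
    m00 = len(xs)                                  # number of failed dies
    xbar, ybar = xs.mean(), ys.mean()

    def mu(p, q):                                  # central moment
        return ((xs - xbar) ** p * (ys - ybar) ** q).sum()

    def eta(p, q):                                 # normalized: translation/scale invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])                      # rotation-invariant signature

# A vertical failure stripe and its 90-degree rotation share one signature,
# which is what lets rotated WBM patterns fall into the same cluster.
wbm = np.zeros((20, 20)); wbm[:, 9:11] = 1
print(hu_moments(wbm), hu_moments(wbm.T))
```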

2.
Wafer fabrication for semiconductor manufacturing consists of multiple layers, in which the displacements between layers (i.e., overlay errors) should be reduced to enhance yield. Although fixing the exposure machine (i.e., stepper or scanner) can reduce variance between layers, it is not practical to expose a wafer on the same machine from layer to layer throughout the lengthy fabrication process in real settings. Thus, there is a critical need to determine subgroups of similar machines, within which appropriate backups for unexpected machine downtime can also be prioritized. This study develops a novel methodology to fill this gap based on a proposed similarity measurement of systematic overlay errors and residuals. The methodology was validated via an empirical study in a wafer fab, and the results showed the practical viability of this approach. Received: May 2005 / Accepted: December 2005
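A minimal sketch of the grouping idea follows, assuming each machine is summarized by a vector of systematic overlay-model coefficients (e.g., translation, rotation, magnification terms); the coefficient values and the distance threshold are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

machines = ["M1", "M2", "M3", "M4"]
coeffs = np.array([          # rows: machines, columns: systematic error terms
    [0.10, -0.02, 0.01],
    [0.11, -0.01, 0.01],
    [0.40,  0.20, -0.05],
    [0.42,  0.18, -0.04],
])

dist = pdist(coeffs)          # pairwise overlay dissimilarity (Euclidean)
groups = fcluster(linkage(dist, "average"), t=0.1, criterion="distance")
print(dict(zip(machines, groups)))          # similar-machine subgroups

# Backup priority for an unexpected machine down: the nearest machine
# in coefficient space is the first candidate.
d = squareform(dist)
np.fill_diagonal(d, np.inf)
print({m: machines[j] for m, j in zip(machines, d.argmin(axis=1))})
```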

3.
Semiconductor manufacturing data consist of the processes and the machines involved in the production of batches of semiconductor circuit wafers. Wafer quality depends on the manufacturing line status and is measured at the end of the line. We have developed a knowledge discovery system intended to help the yield analysis expert by learning the tentative causes of low-quality wafers from an exhaustive amount of manufacturing data. Using the knowledge discovered, the yield analysis expert decides which corrective actions to perform on the manufacturing process. This paper discusses the transformations carried out on the data, from raw data to discovered knowledge, as well as the two main tasks performed by the system. The features of the inductive algorithm performing those tasks are also described. Yield analysis experts at Lucent Technologies, Bell Labs Innovations in Spain are currently using this knowledge discovery application.

4.
Data plays a vital role as a source of information for organizations, especially in the information age. One often encounters an imperfect database from which data are missing, and results obtained from such a database may be biased or misleading. Therefore, imputing missing data has been regarded as one of the major steps in data mining. The present research used different data mining methods to construct imputation models for different types of missing data. When the missing data are continuous, regression models and neural networks are used to build imputation models. For categorical missing data, the logistic regression model, neural network, C5.0, and CART are employed. The results showed that the regression model provided the best estimates of continuous missing data, while for categorical missing data the C5.0 model proved the best method.
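A minimal sketch of this two-track imputation, assuming a pandas DataFrame with one continuous and one categorical incomplete column; CART (scikit-learn's DecisionTreeClassifier) stands in for C5.0, which has no standard Python port.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "x1":     [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "income": [10.0, 20.0, np.nan, 40.0, 50.0, np.nan],   # continuous, incomplete
    "grade":  ["A", "A", "B", np.nan, "B", "A"],          # categorical, incomplete
})

# Continuous case: regress the incomplete column on complete predictors.
obs = df["income"].notna()
reg = LinearRegression().fit(df.loc[obs, ["x1"]], df.loc[obs, "income"])
df.loc[~obs, "income"] = reg.predict(df.loc[~obs, ["x1"]])

# Categorical case: fit a classification tree on the observed rows.
obs = df["grade"].notna()
tree = DecisionTreeClassifier().fit(df.loc[obs, ["x1", "income"]], df.loc[obs, "grade"])
df.loc[~obs, "grade"] = tree.predict(df.loc[~obs, ["x1", "income"]])
print(df)
```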

5.
To date, Inductive Logic Programming (ILP) systems have largely assumed that all data needed for learning are provided at the onset of model construction. Increasingly, in application areas like telecommunications, astronomy, text processing, financial markets, and biology, machine-generated data arrive continuously and on a vast scale. We see at least four kinds of problems that this presents for ILP: (1) it may not be possible to store all of the data, even in secondary memory; (2) even if the data could be stored, it may be impractical to construct an acceptable model using partitioning techniques that repeatedly perform expensive coverage or subsumption tests on the data; (3) models constructed at some point may become less effective, or even invalid, as more data become available (exemplified by the "drift" problem when identifying concepts); and (4) the representation of the data instances may need to change as more data become available (a kind of "language drift" problem). In this paper, we investigate the adoption of a stream-based on-line learning approach to relational data. Specifically, we examine the representation of relational data in both an infinite-attribute setting and the usual fixed-attribute setting, and develop implementations that use ILP engines in combination with on-line model constructors. The behaviour of each program is investigated using a set of controlled experiments, and performance in practical settings is demonstrated by constructing complete theories for some of the largest biochemical datasets examined by ILP systems to date, including one with a million examples; to the best of our knowledge, this is the first time this has been empirically demonstrated with ILP on a real-world data set.
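The on-line model-construction loop the paper builds on can be shown compactly. Below is a minimal test-then-train (prequential) sketch on a synthetic stream with one concept-drift point; a linear on-line learner stands in for the paper's ILP-engine-plus-model-constructor combination.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
seen, correct = 0, 0

for t in range(10_000):
    x = rng.normal(size=(1, 5))
    drift = 1.0 if t < 5_000 else -1.0            # concept changes at t = 5000
    y = np.array([int(drift * x[0, 0] + x[0, 1] > 0)])
    if seen:                                      # 1) test on the new example...
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=[0, 1])       # 2) ...then train on it and discard it
    seen += 1

print(f"prequential accuracy: {correct / (seen - 1):.3f}")
```

Because each example is seen once and then discarded, storage stays constant, and the model recovers from the drift as post-drift examples arrive.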

6.
Software change prediction is crucial for efficiently planning resource allocation during the testing and maintenance phases of software. Moreover, correct identification of change-prone classes in the early phases of the software development life cycle helps in developing cost-effective, good-quality, and maintainable software. An effective software change prediction model should recognize change-prone and not-change-prone classes with equally high accuracy. However, this is often not the case, as software practitioners frequently deal with imbalanced data sets in which instances of one type of class far outnumber the other. In such a scenario, the minority class is not predicted accurately, leading to strategic losses. This study evaluates a number of techniques for handling imbalanced data sets using various data sampling methods and MetaCost learners on six open-source data sets. The results advocate the use of the resample-with-replacement sampling method for effective imbalanced learning.
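A minimal sketch of the favored method, assuming a binary class label where change-prone classes are the minority: the minority instances are sampled with replacement until the classes balance.

```python
import numpy as np
from sklearn.utils import resample

X = np.random.randn(100, 4)               # illustrative class-level metrics
y = np.array([1] * 10 + [0] * 90)         # 1 = change-prone (minority)

X_min, X_maj = X[y == 1], X[y == 0]
X_up, y_up = resample(X_min, y[y == 1],   # resample minority WITH replacement
                      replace=True, n_samples=len(X_maj), random_state=42)

X_bal = np.vstack([X_maj, X_up])
y_bal = np.concatenate([y[y == 0], y_up])
print(np.bincount(y_bal))                 # balanced: [90 90]
```

Any classifier trained on `X_bal, y_bal` now sees both classes equally often, which is what lifts minority-class accuracy.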

7.
Traditionally, all conventional semiconductor equipment must comply with the Semiconductor Equipment and Materials International (SEMI) semiconductor equipment communication standard/generic equipment model (SECS/GEM) standards to interface with the cell controllers of a legacy manufacturing execution system (MES). In 1998, SEMATECH developed the computer integrated manufacturing (CIM) framework specification to facilitate the creation of an integrated, common, flexible, modular object model leading to an open, multi-supplier CIM system environment in the semiconductor industry. Later, SEMI developed an object-based equipment model (OBEM) standard to take full advantage of the CIM framework. With OBEM, equipment can communicate with the CIM framework directly by method invocation. This work develops OBEM-compliant object-based equipment (OBE) in the framework environment, so that this equipment can communicate directly with the MES by method invocation without using the SECS/GEM protocol. The Unified Modeling Language is adopted as the major tool for analyzing and developing the target system. A SECS/GEM adapter is also added to the OBE so that it can communicate with a legacy factory that uses the SECS/GEM standards as its communication protocol. A die bonder is used as an example for demonstration.

8.
In semiconductor manufacturing processes, sensor data are segmented and summarized in order to reduce storage space. This is conventionally done by segmenting the data based on predefined chamber-step information and calculating statistics within the segments. However, segmentation by chamber steps often does not coincide with the actual change points in the data, which results in suboptimal summarization. This paper proposes a novel framework, using abnormal differences and free-knot splines with knot removal, to detect actual data change points and summarize on them. Preliminary experiments demonstrate that the proposed algorithm handles arbitrarily shaped data robustly and outperforms chamber-step-based segmentation and summarization. An evaluation metric based on linearity and parsimony is also proposed.
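The "abnormal difference" idea can be sketched in a few lines: flag change points where the first difference of a sensor trace is unusually large, then summarize each resulting segment. The three-sigma rule and the synthetic trace are assumptions for illustration; the paper's free-knot-spline refinement is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
trace = np.concatenate([np.full(50, 1.0), np.full(80, 4.0), np.full(40, 2.0)])
trace += rng.normal(0, 0.05, trace.size)        # noisy three-level sensor signal

diff = np.abs(np.diff(trace))
cps = np.where(diff > diff.mean() + 3 * diff.std())[0] + 1   # abnormal differences
bounds = [0, *cps, trace.size]

for lo, hi in zip(bounds[:-1], bounds[1:]):     # summarize within each segment
    seg = trace[lo:hi]
    print(f"[{lo:3d}, {hi:3d})  mean={seg.mean():.2f}  std={seg.std():.2f}")
```

Unlike chamber-step segmentation, the boundaries here come from the data itself, so the per-segment statistics track the actual regimes.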

9.
In recent years, knowledge has received significant attention in manufacturing as a means to build a competitive advantage in the sector. Inducing knowledge from data is an important issue in manufacturing: it helps to find process failures and then to predict and improve future system performance. This research examines the improvement of a manufacturing process via data mining. We not only detect and isolate machine breakdowns in carpet manufacturing, but also propose a C4.5 decision tree model. In addition, we use attribute relevance analysis to select the qualitative attribute variables. Consequently, the manufacturing process is redeveloped.
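Attribute relevance analysis of this kind typically ranks attributes by information gain, the criterion underlying C4.5 splits. A minimal sketch with an invented carpet-loom breakdown table:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain(attr_values, labels):
    gain, n = entropy(labels), len(labels)
    for v in set(attr_values):                  # subtract conditional entropy
        idx = [i for i, a in enumerate(attr_values) if a == v]
        gain -= len(idx) / n * entropy([labels[i] for i in idx])
    return gain

shift = ["day", "day", "night", "night", "day", "night"]
loom  = ["old", "new", "old", "old", "new", "new"]
broke = ["yes", "no", "yes", "yes", "no", "no"]
print("shift:", round(info_gain(shift, broke), 3))   # weakly relevant
print("loom :", round(info_gain(loom, broke), 3))    # strongly relevant
```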

10.
Information & Management, 2006, 43(3), 364-377
For over a decade, empirical studies on the organizational performance of IT investment have been far from conclusive, and one major issue in the ongoing debate is whether inadequate methods are applied in measuring IT value. Traditional measures have primarily been financial: return on investment and return on sales. Researchers have suggested the need for other measures, although there has been little agreement on precisely which to use. Moreover, empirical studies have shown the limitations of using only a single organization-level measure, whereas a more complete assessment could involve measures from several levels. In addition, benefits from IT investments are normally realized over time. This study used an integrative assessment framework with a three-level structure of organizational hierarchy: corporate strategies, manufacturing decisions, and operational activities, along with a time-lag effect. Different levels of performance measures were examined over different time periods. The framework was verified with survey data. Our results indicated that time lag had a positive impact on the performance measures of corporate strategies and that these were significantly correlated with operational activities.

11.
Virtual metrology estimates metrology values using a prediction model instead of metrology equipment, thereby providing an efficient means for wafer-to-wafer quality control. Because wafer characteristics change over time under the influence of several factors in the manufacturing process, the prediction model should be suitably updated in view of recent actual metrology results. This gives rise to a trade-off: more frequent updates yield higher virtual metrology accuracy but incur a heavier actual metrology cost. In this paper, we propose an intelligent virtual metrology system that achieves superior metrology performance at lower cost. By employing an ensemble of artificial neural networks as the prediction model, prediction, reliability estimation, and model updating are integrated into the proposed system. Actual metrology is performed only for those wafers for which the current prediction model cannot make reliable predictions; when it is performed, the prediction model is instantly updated to incorporate the results. Consequently, the actual metrology ratio is automatically adjusted to the circumstances. We demonstrate the effectiveness of the method through experimental validation on actual datasets.
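The core loop can be sketched compactly: an ensemble predicts each wafer, the spread of the ensemble's predictions serves as the reliability estimate, and actual metrology plus a model update happen only when the spread is too large. The disagreement threshold, the placeholder metrology function, and the full-refit update are assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 6))                   # historical process data
y_hist = X_hist @ rng.normal(size=6) + rng.normal(0, 0.1, 200)

ensemble = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=s).fit(X_hist, y_hist) for s in range(5)]

def measure_actual(x):                               # placeholder for real metrology
    return float(x @ np.ones(6))

for x in rng.normal(size=(10, 6)):
    preds = np.array([m.predict(x[None, :])[0] for m in ensemble])
    if preds.std() < 0.5:                            # networks agree: trust virtual value
        print(f"virtual  y={preds.mean():+.2f}")
    else:                                            # networks disagree: measure + update
        y_new = measure_actual(x)
        X_hist = np.vstack([X_hist, x]); y_hist = np.append(y_hist, y_new)
        ensemble = [m.fit(X_hist, y_hist) for m in ensemble]
        print(f"actual   y={y_new:+.2f}  (model updated)")
```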

12.
This paper proposes a new procedure and an improved model for mining association rules about customer value, with Taiwan's online shopping industry as the research area. The method adopts Ward's method to partition the online shopping market into three markets. Customer values are derived from an improved RFMDR model (based on the RFM/RFMD models). A supervised Apriori algorithm is then applied to the customer values to create association rules. These rules are suggested for use in a customized marketing function of a CRM system to raise customers to higher value grades.
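A minimal sketch of the segmentation step, with invented RFM values: Ward's method partitions customers into three markets; the RFMDR refinement and the supervised Apriori stage are omitted here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# columns: Recency (days), Frequency (orders), Monetary (spend)
rfm = np.array([[ 5, 30, 900], [ 8, 25, 800], [90,  2,  50],
                [85,  3,  60], [40, 10, 300], [45, 12, 350]], dtype=float)
rfm_z = (rfm - rfm.mean(axis=0)) / rfm.std(axis=0)   # put features on one scale

markets = fcluster(linkage(rfm_z, method="ward"), t=3, criterion="maxclust")
print(markets)   # three markets, e.g., high-, mid-, and low-value customers
```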

13.
An ACS-based framework for fuzzy data mining
Data mining is often used to find interesting and meaningful patterns in huge databases. It may generate different kinds of knowledge, such as classification rules, clusters, and association rules, among others. Much research has been conducted on data mining, most of it focused on mining binary-valued data. Fuzzy data mining was thus proposed to discover fuzzy knowledge from linguistic or quantitative data. Recently, ant colony systems (ACS) have been successfully applied to optimization problems, but little work has been done on applying ACS to fuzzy data mining. This paper thus proposes an ACS-based framework for fuzzy data mining. In the framework, the membership functions are first encoded as binary bits and then fed into the ACS to search for the optimal set of membership functions. The problem is transformed into a multi-stage graph in which each route represents a candidate set of membership functions. When the termination condition is reached, the best membership function set (the one with the highest fitness value) is used to mine fuzzy association rules from the database. Finally, experiments compare the proposed framework with other approaches and show its performance.
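The encoding step is easy to make concrete. Below is a minimal sketch, under the assumption that each membership function is an isosceles triangle encoded as a (center, half-width) pair of fixed-point binary parameters; the bit width and attribute range are invented.

```python
BITS = 8                    # bits per encoded parameter (assumed)
LO, HI = 0.0, 100.0         # range of the quantitative attribute (assumed)

def decode(bitstring: str, n_funcs: int):
    """Decode 2*n_funcs binary parameters into (center, half_width) pairs."""
    vals = []
    for i in range(2 * n_funcs):
        chunk = bitstring[i * BITS:(i + 1) * BITS]
        vals.append(LO + int(chunk, 2) / (2 ** BITS - 1) * (HI - LO))
    return list(zip(vals[0::2], vals[1::2]))

def membership(x, center, half_width):
    """Triangular membership degree of x."""
    return max(0.0, 1.0 - abs(x - center) / half_width) if half_width else 0.0

mfs = decode("0100000000110011" "1000000000110011" "1100000000110011", 3)
print(mfs)                                               # three decoded triangles
print([round(membership(55, c, w), 2) for c, w in mfs])  # degrees of x = 55
```

In the full framework, each ant's route through the multi-stage graph corresponds to one such bit string, and the fuzzy-rule quality mined under the decoded functions supplies the fitness.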

14.
Automatic defect classification for semiconductor manufacturing
Visual defect inspection and classification are important parts of most manufacturing processes in the semiconductor and electronics industries. Defect classification provides relevant information for correcting process problems, thereby enhancing product yield and quality. This paper describes an automated defect classification (ADC) system that classifies defects on semiconductor chips at various manufacturing steps. The ADC system uses a golden template method for defect re-detection and measures several features of the defect, such as size, shape, location, and color. A rule-based system classifies the defects into predefined categories that are learnt from training samples. The system has been deployed on the IBM Burlington 16 M DRAM manufacturing line for more than a year; it has examined over 100 000 defects and has met the design criteria of over 80% classification rate and 80% classification accuracy. Issues involving system design trade-offs, implementation, performance, and deployment are closely examined.
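A minimal sketch of golden-template re-detection and feature measurement, using synthetic images and an arbitrary difference threshold:

```python
import numpy as np
from scipy import ndimage

golden = np.zeros((64, 64))                     # defect-free reference image
test = golden.copy()
test[20:26, 30:38] = 0.9                        # an injected defect

diff = np.abs(test - golden) > 0.5              # re-detection by differencing
blobs, n = ndimage.label(diff)                  # connected defect regions
sizes = ndimage.sum(diff, blobs, index=range(1, n + 1))
biggest = int(np.argmax(sizes)) + 1
cy, cx = ndimage.center_of_mass(diff, blobs, biggest)

ys, xs = np.nonzero(blobs == biggest)
aspect = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)    # crude shape feature
print(f"size={int(sizes.max())} px  location=({cy:.0f},{cx:.0f})  aspect={aspect:.2f}")
# A rule base would map such features (size, shape, location, color, ...)
# onto the predefined defect categories.
```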

15.
Software Quality Journal - In software engineering predictive modeling, early prediction of software modules or classes that possess high maintainability effort is a challenging task. Many...

16.
Although manufacturing information and data systems (MIDS) form an integral part of a manufacturing organization's infrastructure, the growth of corporate IT expenditure has left many companies questioning the value of their MIDS deployments. The use of traditional appraisal techniques to justify investments in MIDS typically relies on measures based on direct cost savings and incremental future cash flows. These techniques are no longer considered appropriate because of the largely intangible cost and benefit dimensions of many IT projects. This paper presents the results of a research study, based on interviews in eighteen manufacturing companies, that investigated the methods used to assess the value and cost of MIDS investments. After showing that the concepts used to justify value vary significantly between respondents, the research indicates that intuition plays an important role in value prediction and assessment. A comparison between small and large companies indicates that systems implementations are perceived more favourably in smaller companies. The results obtained are then compared with recent research.

17.
Resilience and flexibility in manufacturing and supply chains are widely discussed in the present crisis caused by the COVID-19 pandemic, and the roles of flexibility and complexity in building resilience for firms and their supply chains are a matter of ongoing debate among practitioners. In this research, we examine how flexible business strategies on the demand, supply, and process sides of supply chains contribute to resilience. Five major flexible business strategies were identified, and the single items for constructing them were posited. Based on survey research in electronics manufacturing firms, we observe the correlations among the constructs, followed by a dimensionality reduction of the constructs using factor analysis. The collected data were subjected to several initial tests of validity, reliability, and adequacy using relevant statistical indicators. The measurement model was converted into a structural model, and path coefficients were determined. From the path analysis, the latent variables contributing to flexibility in supply chains were found to be independent estimators of resilience. In addition, the single items measuring the flexibility of supply, process, product, and pricing strategies were found to be strongly correlated. The results are useful to managers making decisions on implementing flexibility to enhance resilience in supply chains.

18.
Data mining is most commonly used in attempts to induce association rules from transaction data. In past work, we used fuzzy and GA concepts to discover both useful fuzzy association rules and suitable membership functions from quantitative values; the evaluation of fitness values was, however, quite time-consuming. Due to dramatic increases in available computing power and concomitant decreases in computing costs over the last decade, learning or mining with parallel processing techniques has become a feasible way to overcome the slow-learning problem. In this paper, we therefore propose a parallel genetic-fuzzy mining algorithm based on the master–slave architecture to extract both association rules and membership functions from quantitative transactions. The master processor uses a single population, as a simple genetic algorithm does, and distributes the task of fitness evaluation to the slave processors; the evolutionary operations, such as crossover, mutation, and reproduction, are performed by the master processor. Running the proposed algorithm on the master–slave architecture is thus natural and efficient. The time complexities of both the sequential and the parallel genetic-fuzzy mining algorithms are analyzed, with results showing the benefit of the proposed approach: when the number of generations is large, the speed-up can be nearly linear, as the experimental results also confirm. Applying the master–slave parallel architecture to speed up genetic-fuzzy data mining is thus a feasible way to overcome the low-speed fitness evaluation problem of the original algorithm.
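The division of labor is straightforward to sketch: the master keeps the single population and performs selection, crossover, and mutation, while fitness evaluation, the expensive database scan in genetic-fuzzy mining, is farmed out to a pool of workers. The fitness function below is a cheap stand-in for that scan.

```python
import random
from multiprocessing import Pool

def fitness(chromosome):                  # slave-side work: evaluate one individual
    return sum(chromosome)                # placeholder for the costly DB-scan fitness

def evolve(pop, scores):                  # master-side work: selection + variation
    ranked = [c for _, c in sorted(zip(scores, pop), reverse=True)]
    parents, children = ranked[: len(pop) // 2], []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                    # one-point crossover
        child[random.randrange(len(child))] ^= 1     # bit-flip mutation
        children.append(child)
    return children

if __name__ == "__main__":
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]
    with Pool(4) as slaves:
        for gen in range(30):
            scores = slaves.map(fitness, pop)        # parallel fitness evaluation
            pop = evolve(pop, scores)
        print("best:", max(map(fitness, pop)))
```

Since only fitness evaluation is parallelized and it dominates the runtime, the near-linear speed-up for large generation counts follows directly.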

19.
Combining data mining and Game Theory in manufacturing strategy analysis
The work presented in this paper is the result of a rapid increase of interest in game-theoretic analysis and a huge growth of game-related databases, from which useful knowledge can likely be extracted. This paper argues that applying data mining algorithms together with Game Theory offers significant potential as a new way to analyze complex engineering systems, such as strategy selection in manufacturing analysis. Recent research shows that the combination of data mining and Game Theory has not yet produced reasonable solutions for representing and structuring the knowledge in a game. To examine the idea, a novel approach fusing these two techniques has been developed in this paper and tested on real-world manufacturing datasets. The results obtained indicate the superiority of the proposed approach. Some fruitful directions for future research are outlined as well.
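One way to make the fusion concrete: payoffs that would be estimated by mining manufacturing data are placed in a bimatrix, and pure-strategy Nash equilibria are enumerated to suggest stable strategy choices. The strategies and payoff numbers below are invented for illustration.

```python
import numpy as np

strategies = ["low-cost", "flexibility", "quality"]
A = np.array([[3, 1, 0], [2, 2, 1], [1, 3, 2]])   # row player's mined payoffs
B = np.array([[2, 3, 1], [1, 2, 3], [0, 1, 2]])   # column player's mined payoffs

equilibria = [
    (strategies[i], strategies[j])
    for i in range(3) for j in range(3)
    if A[i, j] == A[:, j].max()        # row strategy is a best response
    and B[i, j] == B[i, :].max()       # column strategy is a best response
]
print(equilibria)                      # e.g., [('quality', 'quality')]
```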

20.
During the semiconductor manufacturing process, massive and varied interrelated equipment data are automatically collected for fault detection and classification. Unusual wafer measurements may reflect a wafer defect or a change in equipment condition, and early detection of equipment condition changes helps the engineer maintain equipment efficiently. This study develops hierarchical indices for equipment monitoring. For efficiency, only the highest-level index is used for real-time monitoring; once this index decreases, engineers can use the drill-down indices to identify potential root causes. For validation, the proposed approach was tested in a leading semiconductor foundry in Taiwan. The results show that the proposed approach and its associated indices can detect equipment condition changes after preventive maintenance efficiently and effectively.
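A minimal sketch of the hierarchy, with an assumed scoring rule (distance from a sensor's baseline in sigmas, clipped to [0, 1]) and a minimum roll-up; the sensors, baselines, and module grouping are invented.

```python
import numpy as np

baseline = {"pressure": (50.0, 2.0), "temp": (200.0, 5.0), "rf_power": (300.0, 8.0)}
modules = {"chamber": ["pressure", "temp"], "rf_unit": ["rf_power"]}

def sensor_index(name, value):          # 1.0 = on baseline, 0.0 = 3+ sigma away
    mean, sigma = baseline[name]
    return float(np.clip(1 - abs(value - mean) / (3 * sigma), 0, 1))

def monitor(reading):
    mod_idx = {m: min(sensor_index(s, reading[s]) for s in sensors)
               for m, sensors in modules.items()}
    return min(mod_idx.values()), mod_idx   # (top-level index, drill-down level)

# Hypothetical reading after preventive maintenance: temperature has shifted.
top, mod_idx = monitor({"pressure": 50.5, "temp": 214.0, "rf_power": 301.0})
print(f"top-level index = {top:.2f}", mod_idx)   # low -> drill into 'chamber'
```

Only the top-level value is watched in real time; when it drops, the module indices (and, one level further down, the sensor indices) point at the potential root cause.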
