Similar Literature
20 similar documents found.
1.
In surface inspection applications, the main goal is to detect all areas which might contain defects or unacceptable imperfections, and to classify either every single ‘suspicious’ region or the investigated part as a whole. After an image is acquired by the machine vision hardware, all pixels that deviate from a pre-defined ‘ideal’ master image are set to a non-zero value, depending on the magnitude of deviation. This procedure leads to so-called “contrast images”, in which accumulations of bright pixels may appear, representing potentially defective areas. In this paper, various methods are presented for grouping these bright pixels together into meaningful objects, ranging from classical image processing techniques to machine-learning-based clustering approaches. One important issue here is to find reasonable groupings even for non-connected and widespread objects. In general, these objects correspond either to real faults or to pseudo-errors that do not affect the surface quality at all. The impact of the different extraction methods on the accuracy of image classifiers is studied. The classifiers are trained with feature vectors calculated for the extracted objects found in images labeled by the user and showing surfaces of production items. Our investigation considers both artificially created contrast images and real ones recorded on-line at a CD-imprint production line and at an egg inspection system.
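A minimal sketch of the two families of grouping methods discussed above, assuming a synthetic contrast image and illustrative threshold and clustering parameters (not the paper's data or settings): connected-component labelling treats only touching pixels as one object, while DBSCAN can merge non-connected but nearby fragments into a single object.

```python
# Sketch: two ways to group bright pixels of a contrast image into objects.
# The 'contrast' array and all thresholds/parameters are illustrative assumptions.
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
contrast = rng.random((128, 128)) * 0.2          # background noise
contrast[40:44, 50:80] = 0.9                     # an elongated defect
contrast[90:92, 10:12] = 0.8                     # a small defect

bright = contrast > 0.5                          # deviation threshold

# Classical image processing: connected-component labelling.
labels, n_objects = ndimage.label(bright)
print(f"connected components: {n_objects} objects")

# Clustering view: DBSCAN on pixel coordinates can merge
# non-connected but nearby fragments into one object.
coords = np.column_stack(np.nonzero(bright))
db = DBSCAN(eps=5.0, min_samples=3).fit(coords)
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(f"DBSCAN: {n_clusters} objects (noise pixels: {(db.labels_ == -1).sum()})")
```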

2.
Ergonomics, 2012, 55(12): 1863–1876
The visual interfaces of virtual environments such as video games often show scenes where objects are superimposed on a moving background. Three experiments were designed to better understand the impact of the complexity and/or overall motion of two types of visual backgrounds often used in video games on the detection and use of superimposed, stationary items. The impact of background complexity and motion was assessed during two typical video game tasks: a relatively complex visual search task and a classic, less demanding shooting task. Background motion impaired participants' performance only when they performed the shooting game task, and only when the simplest of the two backgrounds was used. In contrast, and independently of background motion, performance on both tasks was impaired when the complexity of the background increased. Eye movement recordings demonstrated that most of the findings reflected the impact of low-level features of the two backgrounds on gaze control.

3.
This paper reviews the techniques available for detection and recognition training, and their application to industrial inspection. Systematic procedures are described for using these techniques as part of an integrated training scheme. Approaches to task analysis for inspector training are also discussed.

4.
On-line automated visual inspection for quality and process control is becoming a very important requirement in automated manufacturing environments. This paper examines the possibility of real-time inspection of standard parts using machine vision.

5.
6.
7.
Recurrent neural network training with feedforward complexity
This paper presents a training method that is of no more than feedforward complexity for fully recurrent networks. The method is not approximate, but rather depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. It turns out that given any unambiguous training data set, such as samples of the state variables and their derivatives, we need only to train this embedded feedforward structure. The necessary recurrent network parameters are then obtained by an inverse transformation that consists only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions within, as well as outside the training set.
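The following sketch illustrates the core idea under an assumed network form, dx/dt = -x + W·tanh(x), which is not necessarily the paper's exact formulation: given samples of the states and their derivatives, solving for the weights reduces to a linear least-squares problem, i.e. only the embedded feedforward structure needs to be trained.

```python
# Sketch under an assumed network form: for a recurrent net
#   dx/dt = -x + W @ tanh(x),
# samples of the states x and their derivatives dx/dt turn training into a
# linear problem for W (feedforward complexity, solved here by least squares).
# The network form and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 3                                  # number of units
W_true = rng.normal(size=(n, n))

X = rng.normal(size=(200, n))          # sampled state vectors
dX = -X + np.tanh(X) @ W_true.T        # their derivatives (training targets)

# Embedded feedforward structure: tanh(x) -> (dx/dt + x) is linear in W.
Phi = np.tanh(X)                       # hidden activations
W_hat = np.linalg.lstsq(Phi, dX + X, rcond=None)[0].T

print("recovery error:", np.abs(W_hat - W_true).max())
```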

8.

This article presents a two-experiment series that strongly supports the hypothesis that user preference for, and performance with, interactive screens are related to screen complexity. The relationship follows an inverted U-shaped curve, with too little or too much complexity depressing preference and performance. The implication for interactive systems designers is that while a clear screen is a necessary condition for user satisfaction, it is not a sufficient one; the appropriate level of screen complexity must also be considered.
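As a minimal illustration of the inverted-U claim, the sketch below fits a quadratic to invented (complexity, performance) data and locates the interior optimum; the numbers are purely hypothetical.

```python
# Minimal sketch of the inverted-U idea: fit a quadratic to hypothetical
# (screen complexity, performance) data and locate the interior optimum.
# The data points are invented for illustration only.
import numpy as np

complexity = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
performance = np.array([52, 64, 75, 80, 74, 63, 50], dtype=float)

a, b, c = np.polyfit(complexity, performance, 2)   # performance ≈ a·x² + b·x + c
x_opt = -b / (2 * a)                               # vertex of the parabola
print(f"a = {a:.2f} (negative => inverted U), optimum near complexity {x_opt:.1f}")
```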

9.
The literature on inspection systems has demonstrated that such systems are error prone, exhibiting type I and type II errors. A type I error is classifying a conforming item as non-conforming, while a type II error is classifying a non-conforming item as conforming. The errors may arise from the measurement system or from the inspectors. It is essential to assess the impact of inspection errors on the optimal parameters and objective function values of process targeting models. The purpose of this paper is to assess the impact of inspection errors on the optimal parameters and objective function values of the multi-objective optimization model for process targeting recently developed by Duffuaa and El-Ga’aly (2013a). To accomplish this, the Duffuaa and El-Ga’aly model is extended by introducing measurement errors in the inspection system and penalties to mitigate the effect of the errors. The results of the extended model are compared with those of the previous model and employed to study the impact of the errors on the objective function values and the optimal process parameters in a multi-objective environment. The results indicate that inspection errors have a significant impact on the profit and uniformity objectives.
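The sketch below is not the Duffuaa–El-Ga'aly model; it is a single-objective toy version showing how type I/II inspection errors enter an expected-profit calculation for process targeting. All costs, probabilities and the conformance rule are assumptions.

```python
# Toy process-targeting sketch with inspection errors (all numbers assumed):
# an item conforms if X >= LSL with X ~ N(mu, sigma^2); the inspector rejects
# a conforming item with prob alpha (type I) and accepts a non-conforming one
# with prob beta (type II). We grid-search the process mean mu.
import numpy as np
from scipy.stats import norm

LSL, sigma = 10.0, 1.0
alpha, beta = 0.05, 0.10           # type I / type II error probabilities
price, penalty = 5.0, 8.0          # revenue per accepted item / cost of an escape
scrap_cost, unit_cost = 1.0, 0.5   # loss per rejected item; material cost per unit of mu

def expected_profit(mu):
    p_conf = 1.0 - norm.cdf(LSL, mu, sigma)         # P(item conforms)
    p_accept_good = p_conf * (1 - alpha)            # accepted and conforming
    p_accept_bad = (1 - p_conf) * beta              # accepted but non-conforming
    p_reject = 1.0 - p_accept_good - p_accept_bad
    return (price * p_accept_good - penalty * p_accept_bad
            - scrap_cost * p_reject - unit_cost * mu)

mus = np.linspace(LSL, LSL + 4 * sigma, 400)
best = mus[np.argmax([expected_profit(m) for m in mus])]
print(f"optimal process mean with errors: {best:.2f}")
```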

10.
A special NDE system built into a low-cost FPGA has been developed for detecting railway wheelflats. The system operates with the train moving at low speed over a measuring rail. Ultrasonic surface-wave pulses are sent at regular intervals, and the echoes are acquired and processed by the system. The variations in the round-trip time-of-flight (RTOF) of the ultrasonic pulse make it possible to detect the flats and quantify their size. The logic design optimizes the storage capabilities by keeping only the rail–wheel contact echo and its immediate environment. For this purpose, a wheel-tracking algorithm has been implemented. It reduces the data volume by controlling the delay time from pulse emission to the acquisition window following the running wheel. Furthermore, since the signals are masked by the rail's structural noise, they are processed before the tracking algorithm is executed. This work presents the architecture and performance of the developed system with experimental data.
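A rough sketch of the RTOF principle with invented parameters (the wave speed, pulse rate and train speed are assumptions, not the paper's values): the contact echo's round-trip time grows linearly as the wheel rolls away from the transducer, and a flat shows up as a deviation from that trend.

```python
# Sketch of the RTOF idea: the rail-wheel contact echo's round-trip
# time-of-flight grows linearly as the wheel rolls; during a wheelflat the
# contact point briefly "sticks", so RTOF deviates from the linear trend.
# All physical parameters below are illustrative assumptions.
import numpy as np

v_surface = 3000.0                  # assumed surface-wave speed, m/s
pulse_interval = 1e-3               # s between pulse emissions
wheel_speed = 2.0                   # m/s along the rail

t = np.arange(200) * pulse_interval
contact = 0.5 + wheel_speed * t     # contact-point distance from transducer, m
contact[80:95] = contact[80]        # a flat: contact point briefly sticks

rtof = 2.0 * contact / v_surface    # round-trip time-of-flight, s

# Detect the flat as deviation from the fitted linear trend (> 5 microseconds).
trend = np.polyval(np.polyfit(t, rtof, 1), t)
dev = rtof - trend
flat_pulses = np.nonzero(np.abs(dev) > 5e-6)[0]
print(f"flat detected around pulses {flat_pulses.min()}–{flat_pulses.max()}")
```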

11.
12.
The paper is a summary of the more common measures that have been used, or suggested for use, in industry. A list of advantages and disadvantages for each measure is given, to indicate its value. Some general criteria for selecting a measure, and methods for obtaining the measure, are considered.

13.
As inspection moves from unaided human skills to human-computer hybrid tasks, there is a need for models of the human and the computer which have common parameters. With appropriate models, functions can be allocated to produce optimal designs, and assistance provided to the human inspector via job aids and training. A model was developed of the human in a two-component compound inspection task consisting of search and decision. Optimizing this model showed that the choice of optimal values of parameters in the two submodels was independent. Ten subjects were tested on a two-component inspection task, using components which had earlier been validated separately. Subjects showed some aspects of optimum behaviour, for example sub-model independence, stopping the search after an integral number of scans, and varying their decision criteria to respond to the probability and cost structure. However, in this more complex task, subjects often reverted to simpler decision rules, for example always stopping the search after one scan or accepting (or rejecting) all potential defects detected. The implication for hybrid automation systems is that humans will need help such as job aids or training if they are to perform optimally when given both search and decision tasks.
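The sketch below illustrates the independence of the two submodels with a toy search-plus-decision setup; the cost figures and the signal-detection formulation are illustrative assumptions, not the validated components used in the experiment.

```python
# Toy two-component inspection model (illustrative numbers): the search and
# decision submodels are optimized independently, mirroring the independence
# result described above.
import numpy as np

# --- Search submodel: each scan finds an existing defect with prob p. ---
p, scan_cost, miss_cost, p_defect = 0.4, 1.0, 50.0, 0.3

def search_cost(n_scans):
    p_miss = (1 - p) ** n_scans
    return scan_cost * n_scans + miss_cost * p_defect * p_miss

n_star = min(range(1, 20), key=search_cost)     # optimal number of scans

# --- Decision submodel: signal detection with optimal likelihood criterion. ---
d_prime = 2.0                      # separation of defect / non-defect evidence
v_cr, c_fa, v_hit, c_miss = 1.0, 5.0, 10.0, 50.0
beta_star = ((1 - p_defect) / p_defect) * (v_cr + c_fa) / (v_hit + c_miss)
criterion = np.log(beta_star) / d_prime + d_prime / 2   # evidence cutoff

print(f"optimal scans: {n_star}, decision cutoff: {criterion:.2f}")
```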

14.
Faults in system requirements can be very harmful. It is therefore often required that an inspection achieve a high fault detection ratio (FDR). To achieve this, a large number of inspectors is required, but large teams are known to be inefficient. The N-fold requirements inspection method therefore divides the inspectors into N small, efficient teams. All teams inspect the same requirements document. Experiments with both information and real-time systems demonstrate that the different teams detect different faults, so that together they achieve a higher FDR. The analysis suggests that the FDR is primarily a function of the level of expertise of the inspectors and of the number of teams. A quite simple probabilistic model that matches the experimental results enables the prediction of the FDR as a function of these two parameters. A diagram based on the model enables a fast estimation of the FDR and of the most cost-effective number of inspection teams. The model may also be employed for measuring the efficiency of requirements inspection methods.
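Assuming, for illustration, that teams detect faults independently with a common per-team probability p (a simplification of the paper's model), the combined FDR and a cost trade-off can be sketched as follows.

```python
# Sketch of the independence version of the N-fold idea: if each small team
# independently detects a given fault with probability p (its "expertise"),
# the combined fault detection ratio grows as FDR(N) = 1 - (1 - p)^N.
# p, team cost, and fault value below are illustrative assumptions.
p = 0.35                 # single-team detection probability
team_cost = 1.0          # cost of fielding one team
fault_value = 10.0       # value of each extra unit of FDR

for n in range(1, 9):
    fdr = 1 - (1 - p) ** n
    net = fault_value * fdr - team_cost * n
    print(f"N={n}: FDR={fdr:.3f}, net value={net:+.2f}")
```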

15.
When choosing a classification rule, it is important to take into account the amount of sample data available. This paper examines the performances of classifiers of differing complexities in relation to the complexity of feature-label distributions in the case of small samples. We define the distributional complexity of a feature-label distribution to be the minimal number of hyperplanes necessary to achieve the Bayes classifier if the Bayes classifier is achievable by a finite number of hyperplanes, and infinity otherwise. Our approach is to choose a model and compare classifier efficiencies for various sample sizes and distributional complexities. Simulation results are obtained by generating data based on the model and the distributional complexities. A linear support vector machine (SVM) is considered, along with several nonlinear classifiers. For the most part, we see that there is little improvement when one uses a complex classifier instead of a linear SVM. For higher levels of distributional complexity, the linear classifier degrades, but so do the more complex classifiers, owing to insufficient training data. Hence, if one were to obtain a good result with a more complex classifier, it is most likely that the distributional complexity is low and there is no gain over using a linear classifier. Thus, under the model, it is generally impossible to claim that use of the nonlinear classifier is beneficial. In essence, the sample sizes are too small to take advantage of the added complexity. An exception to this observation is the behavior of the three-nearest-neighbor (3NN) classifier in the case of two variables (but not three) when there is very little overlap between the label distributions and the sample size is not too small. With a sample size of 60, the 3NN classifier performs close to the Bayes classifier, even for high levels of distributional complexity. Consequently, if one uses the 3NN classifier with two variables and obtains a low error, then the distributional complexity might be large and, if such is the case, there is a significant gain over using a linear classifier.
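A small simulation in the spirit of the study, under an assumed Gaussian-mixture notion of distributional complexity (number of mixture components per class); it is not the paper's exact model or sample design.

```python
# Sketch: compare a linear SVM, an RBF SVM and 3NN on small samples (n = 60)
# as "distributional complexity" (mixture components per class) rises.
# The mixture model and all parameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def make_mixture(n_modes):
    """Two classes in 2-D, each a mixture of n_modes Gaussians."""
    c0 = rng.uniform(-3, 3, (n_modes, 2))
    c1 = rng.uniform(-3, 3, (n_modes, 2))
    def sample(n_per_class):
        n = n_per_class
        X0 = c0[rng.integers(n_modes, size=n)] + rng.normal(0, 0.4, (n, 2))
        X1 = c1[rng.integers(n_modes, size=n)] + rng.normal(0, 0.4, (n, 2))
        return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]
    return sample

for n_modes in (1, 3, 5):               # rising "distributional complexity"
    sample = make_mixture(n_modes)
    Xtr, ytr = sample(30)               # 60 training points in total
    Xte, yte = sample(1000)             # large test set from the same mixture
    for name, clf in [("linear SVM", SVC(kernel="linear")),
                      ("RBF SVM", SVC()),
                      ("3NN", KNeighborsClassifier(3))]:
        err = 1 - clf.fit(Xtr, ytr).score(Xte, yte)
        print(f"modes={n_modes}  {name:10s} error={err:.3f}")
```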

16.
This paper deals with the computational issues of loading a fixed-architecture neural network with a set of positive and negative examples. It gives the first result on the hardness of loading a simple three-node architecture which does not consist of binary-threshold neurons, but rather utilizes a particular continuous activation function commonly used in the neural-network literature. The authors observe that the loading problem is polynomial-time if the input dimension is constant. Otherwise, however, any possible learning algorithm based on particular fixed architectures faces severe computational barriers. Similar theorems had already been proved by Megiddo and by Blum and Rivest, but only for the case of binary-threshold networks. The authors' theoretical results lend further support to the use of incremental (architecture-changing) techniques for training networks rather than fixed architectures. Furthermore, they imply hardness of learnability in the probably-approximately-correct sense as well.

17.
The Object-Oriented (OO) paradigm has become increasingly popular in recent years. Researchers agree that, although maintenance may turn out to be easier for OO systems, it is unlikely that the maintenance burden will completely disappear. One approach to controlling software maintenance costs is the utilization of software metrics during the development phase, to help identify potential problem areas. Many new metrics have been proposed for OO systems, but only a few of them have been validated. The purpose of this research is to empirically explore the validation of three existing OO design complexity metrics and, specifically, to assess their ability to predict maintenance time. This research reports the results of validating three metrics, Interaction Level (IL), Interface Size (IS), and Operation Argument Complexity (OAC). A controlled experiment was conducted to investigate the effect of design complexity (as measured by the above metrics) on maintenance time. Each of the three metrics by itself was found to be useful in the experiment in predicting maintenance performance.
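As a deliberately simplified, hypothetical reading of the Interface Size idea, the sketch below counts a class's public methods and their arguments with Python's standard inspect module; the actual IL/IS/OAC metrics are defined differently (e.g. with argument-type weights), so this is illustration only.

```python
# Sketch: a simplified stand-in for an Interface Size (IS) style metric --
# counting a class's public methods and the arguments they take. This
# unweighted count is an assumption for illustration, not the validated metric.
import inspect

def interface_size(cls):
    size = 0
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue                          # skip non-public methods
        params = inspect.signature(fn).parameters
        size += 1 + max(len(params) - 1, 0)   # the method itself + args (minus self)
    return size

class Account:
    def deposit(self, amount): ...
    def transfer(self, target, amount, memo=None): ...

print(interface_size(Account))   # 2 methods + 4 arguments = 6
```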

18.
GA-fuzzy modeling and classification: complexity and performance
The use of genetic algorithms (GAs) and other evolutionary optimization methods to design fuzzy rules for systems modeling and data classification has received much attention in the recent literature. Authors have focused on various aspects of these randomized techniques, and a whole range of algorithms has been proposed. We comment on some recent work and describe a new and efficient two-step approach that leads to good results for function approximation, dynamic systems modeling and data classification problems. First, fuzzy clustering is applied to obtain a compact initial rule-based model. Then this model is optimized by a real-coded GA subject to constraints that maintain the semantic properties of the rules. We consider four examples from the literature: a synthetic nonlinear dynamic systems model, the iris data classification problem, the wine data classification problem, and the dynamic modeling of a diesel engine turbocharger. The obtained results are compared to other recently proposed methods.
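The sketch below mimics the two-step scheme on a 1-D function-approximation toy: clustering initializes a compact Gaussian rule base, then a tiny real-coded GA (mutation plus elitism, a stand-in for the authors' constrained GA) refines the parameters.

```python
# Sketch of the two-step scheme on a toy problem: (1) clustering initializes
# a compact Gaussian rule base; (2) a small real-coded GA fine-tunes the rule
# parameters. KMeans stands in for fuzzy clustering, and the GA is a minimal
# mutation-plus-elitism loop, not the authors' constrained algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sinc(x).ravel()                       # target function

# Step 1: clustering-based initialization of a compact rule base.
k = 5
centers = np.sort(KMeans(k, n_init=10, random_state=0).fit(x).cluster_centers_.ravel())
widths = np.full(k, 0.8)
heights = np.interp(centers, x.ravel(), y)   # initial rule consequents

def predict(theta, x):
    c, s, h = theta.reshape(3, -1)
    mu = np.exp(-((x - c) ** 2) / (2 * s ** 2))     # rule firing strengths
    return (mu * h).sum(1) / mu.sum(1)              # weighted-average defuzzification

def mse(theta):
    return np.mean((predict(theta, x) - y) ** 2)

# Step 2: a tiny real-coded GA (mutation + elitism) refining all parameters.
pop = np.concatenate([centers, widths, heights]) + rng.normal(0, 0.05, (30, 3 * k))
for _ in range(100):
    pop = pop[np.argsort([mse(t) for t in pop])]    # sort by fitness
    children = pop[:10].repeat(2, axis=0) + rng.normal(0, 0.02, (20, 3 * k))
    pop = np.vstack([pop[:10], children])           # elites survive unchanged

print("final MSE:", mse(pop[0]))
```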

19.
Quantized feedback control has been receiving much attention in the control community in the past few years. Quantization is indeed a natural way to take into account, in the control design, the complexity constraints of the controller as well as the communication constraints in the information exchange between the controller and the plant. In this paper, we analyze the stabilization problem for discrete-time linear systems with multidimensional state and one-dimensional input using quantized feedbacks with a memory structure, focusing on the tradeoff between complexity and performance. A quantized controller with memory is a dynamical system with a state space, a state-updating map and an output map. The quantized controller's complexity is modeled by means of three indexes. The first index, L, coincides with the number of the controller states. The second index is the number M of possible values that the state-updating map of the controller can take at each time. The third index is the number N of possible values that the output map of the controller can take at each time; N thus also corresponds to the number of control values that the controller can choose from at each time. The performance index is chosen to be the time T needed to shrink the state of the plant from a starting set to a target set. Finally, the contraction rate C, namely the ratio between the volumes of the starting and target sets, is introduced. We evaluate the relations between these parameters for various quantized stabilizers, with and without memory, and we make some comparisons. Then, we prove a number of results showing the intrinsic limitations of quantized control. In particular, we show that, in order to obtain a control strategy which yields arbitrarily small values of T/ln C (a requirement which can be interpreted as a weak form of the pole-assignability property), the ratio LN/ln C must be large enough.
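A worked scalar sketch of the T versus ln C tradeoff (the paper treats multidimensional plants; the zooming quantizer and all numbers here are illustrative assumptions): with N quantization levels and pole a, the uncertainty contracts by a factor N/a per step, so T ≈ ln C / ln(N/a).

```python
# Worked sketch for a scalar plant x+ = a*x + u with an N-level "zooming"
# quantizer: each step the controller knows x lies in [-D, D], quantizes it
# into N cells, and cancels the cell centre, so the uncertainty radius
# contracts by N/a per step. Scalar case and numbers are assumptions,
# not the paper's multidimensional construction.
import numpy as np

a, N = 2.0, 8                      # unstable pole; quantizer levels (N > a)
D0, D_target = 10.0, 0.01          # starting and target set radii
C = D0 / D_target                  # contraction rate (volume ratio in 1-D)

x, D, T = 7.3, D0, 0
while D > D_target:
    cell = np.floor((x + D) / (2 * D / N))      # which of the N cells holds x
    centre = -D + (cell + 0.5) * (2 * D / N)
    x = a * x - a * centre                      # apply u = -a * (cell centre)
    D = a * D / N                               # new uncertainty radius
    T += 1

print(f"steps T = {T}, predicted T = ln C / ln(N/a) = "
      f"{np.log(C) / np.log(N / a):.1f}")
```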

20.
Microsystem Technologies - This paper explores how the different layers in an organic light emitting diode (OLED) impact its performance. Here, different layers of the OLED, such as hole/electron...
