Similar Literature
20 similar documents found (search time: 15 ms)
1.
Object recognition using laser range finder and machine learning techniques   (Total citations: 1; self-citations: 0; citations by others: 1)
In recent years, computer vision has been widely used in industrial environments, allowing robots to perform important tasks such as quality control, inspection and recognition. Vision systems are typically used to determine the position and orientation of objects in the workstation, enabling them to be transported and assembled by a robotic cell (e.g. an industrial manipulator). These systems commonly rely on CCD (Charge-Coupled Device) cameras fixed in a particular work area or attached directly to the robotic arm (eye-in-hand vision system). Although this is a valid approach, the performance of these vision systems is directly influenced by the industrial environment lighting. Taking all this into consideration, a new approach is proposed for eye-in-hand systems, in which the cameras are replaced by a 2D Laser Range Finder (LRF). The LRF is attached to a robotic manipulator, which executes a pre-defined path to produce grayscale images of the workstation. With this technique, interference from the environment lighting is minimized, resulting in a more reliable and robust computer vision system. After the grayscale image is created, this work focuses on the recognition and classification of different objects using inherent features (based on the invariant moments of Hu) with three well-known machine learning models: k-Nearest Neighbor (kNN), Neural Networks (NNs) and Support Vector Machines (SVMs). In order to achieve a good performance for each classification model, a wrapper method is used to select a good subset of features, together with a model assessment technique, k-fold cross-validation, to adjust the parameters of the classifiers. The performance of the models is also compared, reaching 83.5% for kNN, 95.5% for the NN and 98.9% for the SVM (generalized accuracy). These high performances are attributed to the feature selection algorithm, based on the simulated annealing heuristic, and to the model assessment (k-fold cross-validation), which make it possible to identify the most important features in the recognition process and to adjust the best parameters for the machine learning models, increasing the classification ratio for the work objects present in the robot's environment.
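As an illustration of the kind of pipeline this entry describes, the sketch below combines Hu-moment features with an SVM tuned by k-fold cross-validation using OpenCV and scikit-learn. It is not the authors' implementation: the images, labels and parameter grid are placeholder assumptions, and the wrapper feature selection with simulated annealing is omitted.

```python
# A minimal sketch, not the authors' pipeline: Hu-moment features + SVM tuned with k-fold CV.
# The random images and labels below are placeholders for the LRF-derived grayscale images.
import cv2
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hu_features(gray_img):
    """Seven Hu invariant moments, log-scaled for numerical stability."""
    hu = cv2.HuMoments(cv2.moments(gray_img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

rng = np.random.default_rng(0)
images = [(rng.random((64, 64)) > 0.5).astype(np.uint8) * 255 for _ in range(60)]  # placeholder scans
labels = rng.integers(0, 3, size=60)                                               # placeholder object ids

X = np.array([hu_features(img) for img in images])

# k-fold cross-validation adjusts the SVM hyper-parameters, as in the abstract above.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1, 0.01]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
grid.fit(X, labels)
print("best cross-validated accuracy:", grid.best_score_)
```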

2.
In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to accurately classify some e-learning students whereas another may succeed, three decision schemes, which combine the results of the three machine learning techniques in different ways, were also tested. The method was evaluated in terms of overall accuracy, sensitivity and precision, and its results were found to be significantly better than those reported in the relevant literature.
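A minimal sketch of one possible decision scheme of the kind this entry mentions, majority voting over three classifiers, is shown below using scikit-learn. Scikit-learn has no fuzzy ARTMAP implementation, so a Gaussian naive Bayes model stands in for that member, and the student data are synthetic placeholders.

```python
# Sketch of a simple "decision scheme": majority voting over three base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, random_state=0)  # stand-in student data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scheme = VotingClassifier(
    estimators=[
        ("nn", MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)),
        ("svm", SVC(random_state=0)),
        ("nb", GaussianNB()),  # placeholder for the fuzzy ARTMAP member
    ],
    voting="hard",  # simple majority vote over the three predictions
)
scheme.fit(X_tr, y_tr)
print("dropout-prediction accuracy (synthetic data):", scheme.score(X_te, y_te))
```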

3.
The growing prevalence of network attacks is a well-known problem which can impact the availability, confidentiality, and integrity of critical information for both individuals and enterprises. In this paper, we propose a real-time intrusion detection approach using a supervised machine learning technique. Our approach is simple and efficient, and can be used with many machine learning techniques. We applied several well-known machine learning techniques to evaluate the performance of our IDS approach. Our experimental results show that the Decision Tree technique can outperform the other techniques. Therefore, we further developed a real-time intrusion detection system (RT-IDS) using the Decision Tree technique to classify on-line network data as normal or attack data. We also identified 12 essential features of network data that are relevant to detecting network attacks, using information gain as our feature selection criterion. Our RT-IDS can distinguish normal network activities from the main attack types (Probe and Denial of Service (DoS)) with a detection rate higher than 98% within 2 s. We also developed a new post-processing procedure to reduce the false-alarm rate as well as increase the reliability and detection accuracy of the intrusion detection system.
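The feature-selection-plus-decision-tree idea in this entry can be sketched as follows. Mutual information is used here as a stand-in for information gain, and the network records are synthetic placeholders rather than the authors' traffic data.

```python
# Sketch: rank features by information gain (approximated with mutual information)
# and classify network records as normal vs. attack with a decision tree.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Keep the 12 highest-scoring features, mirroring the 12 essential features in the abstract.
selector = SelectKBest(mutual_info_classif, k=12).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

tree = DecisionTreeClassifier(random_state=1).fit(X_tr_sel, y_tr)
print("detection accuracy (synthetic data):", tree.score(X_te_sel, y_te))
```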

4.
Neural Computing and Applications - The application of artificial neural networks in mapping the mechanical characteristics of the cement-based materials is underlined in previous investigations....

5.
Given the importance of implicit communication in human interactions, it would be valuable to have this capability in robotic systems, wherein a robot can detect the motivations and emotions of the person it is working with. Recognizing affective states from physiological cues is an effective way of implementing implicit human–robot interaction. Several machine learning techniques have been successfully employed in affect recognition to predict the affective state of an individual given a set of physiological features. However, a systematic comparison of the strengths and weaknesses of these methods has not yet been done. In this paper, we present a comparative study of four machine learning methods, K-Nearest Neighbor (KNN), Regression Tree (RT), Bayesian Network and Support Vector Machine (SVM), as applied to the domain of affect recognition using physiological signals. The results showed that SVM gave the best classification accuracy even though all the methods performed competitively. RT gave the next best classification accuracy and was the most space- and time-efficient.
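A hedged sketch of such a four-way comparison with cross-validation is given below. Scikit-learn has no Bayesian-network classifier, so Gaussian naive Bayes and a classification tree stand in for two of the methods, and the physiological features are simulated.

```python
# Sketch: cross-validated comparison of four classifier families on a feature matrix.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=2)  # stand-in physiological features

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Tree": DecisionTreeClassifier(random_state=2),     # stand-in for the regression tree
    "NaiveBayes": GaussianNB(),                          # crude stand-in for a Bayesian network
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```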

6.
Yield management in semiconductor manufacturing companies requires accurate yield prediction and continual control. However, because many factors are involved in the production of semiconductors in complex ways, manufacturers and engineers have a hard time managing the yield precisely. Intelligent tools are needed to analyze the multiple process variables concerned and to predict the production yield effectively. This paper devises a hybrid method that incorporates machine learning techniques to detect high and low yields in semiconductor manufacturing. The hybrid method has strong practical advantages in manufacturing situations where the control of a variety of process variables is interrelated. In real applications, the hybrid method provides more accurate yield predictions than other methods that have been used. With this method, the company can achieve a higher yield rate by preventing low-yield lots in advance.

7.
This paper describes a synergistic approach that is applicable to a wide variety of system control problems. The approach utilizes a machine learning technique, goal-directed conceptual aggregation (GDCA), to facilitate dynamic decision-making. The application domain employed is Flexible Manufacturing System (FMS) scheduling and control. Simulation serves the dual purpose of providing a realistic depiction of FMSs and acting as an engine for demonstrating the viability of a synergistic system involving incremental learning. The paper briefly describes prior approaches to FMS scheduling and control and to machine learning. It outlines the GDCA approach, provides a generalized architecture for dynamic control problems, and describes the implementation of the system as applied to FMS scheduling and control. The paper concludes with a discussion of the general applicability of this approach.

8.
9.
Consumer credit scoring is often considered a classification task in which clients receive either a good or a bad credit status. Default probabilities provide more detailed information about the creditworthiness of consumers, and they are usually estimated by logistic regression. Here, we present a general framework for estimating individual consumer credit risks using machine learning methods. Since a probability is an expected value, all nonparametric regression approaches that are consistent for the mean are consistent for the probability estimation problem. Among others, random forests (RF), k-nearest neighbors (kNN), and bagged k-nearest neighbors (bNN) belong to this class of consistent nonparametric regression approaches. We apply the machine learning methods and an optimized logistic regression to a large dataset of complete payment histories of short-term installment credits. We demonstrate probability estimation in Random Jungle, an RF package written in C++ with a generalized framework for fast tree growing, probability estimation, and classification. We also describe an algorithm for tuning the terminal node size for probability estimation. We demonstrate that regression RF outperforms the optimized logistic regression model, kNN, and bNN on the test data of the short-term installment credits.
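The core idea, estimating default probabilities with a regression random forest and comparing against logistic regression, can be sketched as follows. This uses scikit-learn rather than Random Jungle, and the credit data are synthetic placeholders rather than the installment-credit histories used in the paper.

```python
# Sketch: nonparametric default-probability estimation with a regression random forest,
# compared against logistic regression via the Brier score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, weights=[0.85], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

# Regression RF on the 0/1 default indicator: averaged leaf values estimate P(default).
# min_samples_leaf plays the role of the tuned terminal node size mentioned in the abstract.
rf = RandomForestRegressor(n_estimators=300, min_samples_leaf=25, random_state=3).fit(X_tr, y_tr)
p_rf = rf.predict(X_te).clip(0, 1)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_logit = logit.predict_proba(X_te)[:, 1]

print("Brier score, regression RF:      ", brier_score_loss(y_te, p_rf))
print("Brier score, logistic regression:", brier_score_loss(y_te, p_logit))
```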

10.
The performance of eight machine learning classifiers was compared on three aphasia-related classification problems. The first problem contained naming data of aphasic and non-aphasic speakers tested with the Philadelphia Naming Test. The second problem included the naming data of Alzheimer and vascular disease patients tested with the Finnish version of the Boston Naming Test. The third problem included aphasia test data of patients suffering from four different aphasic syndromes tested with the Aachen Aphasia Test. The first two data sets were small; therefore, the data used in the tests were artificially generated from the original confrontation naming data of 23 and 22 subjects, respectively. The third set contained aphasia test data of 146 aphasic speakers and was used as such in the experiments. With the first and the third data sets the classifiers could successfully be used for the task, while the results with the second data set were less encouraging. However, based on the results, no single classifier performed exceptionally well with all data sets, suggesting that the selection of the classifier used for classification of aphasic data should be based on experiments performed with the data set at hand.

11.
Recognition of Chinese characters has been an area of major interest for many years, and a large number of research papers and reports have already been published in this area. There are several major problems with Chinese character recognition: Chinese characters are distinct and ideographic, the character set is very large, and many structurally similar characters exist in the character set. Thus, classification criteria are difficult to generate. This paper presents a new technique for the recognition of hand-printed Chinese characters using the C4.5 machine learning system. Conventional methods have relied on hand-constructed dictionaries, which are tedious to construct and difficult to make tolerant to variation in writing styles. The paper discusses Chinese character recognition using the Hough transform for feature extraction and the C4.5 system. The system was tested with 900 characters written by different writers, ranging from poor to acceptable quality (each character has 40 samples), and the recognition rate obtained was 84%.
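A rough sketch of Hough-based stroke features feeding a decision tree is shown below. Scikit-learn's CART tree stands in for C4.5, and the "characters" are synthetic stroke images, so this only illustrates the feature-extraction idea, not the reported system.

```python
# Sketch: Hough-transform stroke features + a decision tree (CART as a stand-in for C4.5).
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hough_features(binary_img, n_bins=8):
    """Histogram of detected stroke orientations plus the stroke count."""
    edges = cv2.Canny(binary_img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=10, minLineLength=5, maxLineGap=3)
    hist = np.zeros(n_bins)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.arctan2(y2 - y1, x2 - x1) % np.pi            # orientation in [0, pi)
            hist[int(angle / np.pi * n_bins) % n_bins] += 1
    return np.append(hist, 0 if lines is None else len(lines))

# Placeholder "characters": images containing either horizontal or diagonal strokes.
rng = np.random.default_rng(4)
imgs, labels = [], []
for k in range(200):
    img = np.zeros((48, 48), dtype=np.uint8)
    cls = k % 2
    for _ in range(3):
        y0 = int(rng.integers(5, 43))
        p2 = (43, y0) if cls == 0 else (43, min(47, y0 + 20))
        cv2.line(img, (5, y0), p2, 255, 1)
    imgs.append(img)
    labels.append(cls)

X = np.array([hough_features(im) for im in imgs])
clf = DecisionTreeClassifier(random_state=4).fit(X, labels)
print("training accuracy on synthetic strokes:", clf.score(X, labels))
```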

12.
13.
Although machine learning algorithms have been successfully used in many problems in the past, their serious practical use is hampered by the fact that they often cannot produce reliable and unbiased assessments of the quality of their predictions. In the last few years, several approaches for estimating the reliability or confidence of individual classifiers have emerged, many of them building upon the algorithmic theory of randomness, such as (in historical order) transduction-based confidence estimation, typicalness-based confidence estimation, and transductive reliability estimation. Unfortunately, they all have weaknesses: either they are tightly bound to particular learning algorithms, or the interpretation of the reliability estimates is not always consistent with statistical confidence levels. In the paper we describe the typicalness and transductive reliability estimation frameworks and propose a joint approach that compensates for the above-mentioned weaknesses by integrating typicalness-based confidence estimation and transductive reliability estimation into a joint confidence machine. The resulting confidence machine produces confidence values in the statistical sense. We perform a series of tests with several different machine learning algorithms in several problem domains. We compare our results with those of a proprietary method as well as with kernel density estimation. We show that the proposed method performs as well as proprietary methods and significantly outperforms density estimation methods. Matjaž Kukar is currently Assistant Professor in the Faculty of Computer and Information Science at the University of Ljubljana. His research interests include machine learning, data mining and intelligent data analysis, ROC analysis, cost-sensitive learning, reliability estimation, and latent structure analysis, as well as applications of data mining to medical and business problems.

14.
Learning non-taxonomic relationships is a sub-field of Ontology Learning that aims at automating the extraction of these relationships from text. Several techniques have been proposed based on Natural Language Processing and Machine Learning. However, just as for other Ontology Learning techniques, evaluating techniques for learning non-taxonomic relationships is an open problem. Three general proposals suggest that the learned ontologies can be evaluated in an executable application, by domain experts, or by comparison with a predefined reference ontology. This article proposes two procedures for evaluating techniques for learning non-taxonomic relationships based on the comparison of the relationships obtained with those of a reference ontology. These procedures are then used in the evaluation of two state-of-the-art techniques that extract relationships from two corpora in the domains of biology and family law.

15.

Obstructive sleep apnea is a syndrome characterized by decreased airflow or respiratory arrest caused by upper respiratory tract obstructions recurring during sleep, often accompanied by a decrease in oxygen saturation. The aim of this study was to determine the connection between these respiratory arrests and the photoplethysmography (PPG) signal in obstructive sleep apnea patients. Establishing this connection is important for proposing a new signal for use in the diagnosis of the disease. Thirty-four time-domain features were extracted from the PPG signal in the study. The relation between these features and respiratory arrests was statistically investigated. The Mann–Whitney U test was applied to reveal whether this relation was incidental or statistically significant, and 32 of the 34 features were found to be statistically significant. After this stage, the features of the PPG signal were classified with the k-nearest neighbors algorithm, a radial basis function neural network, a probabilistic neural network, a multilayer feedforward neural network (MLFFNN) and an ensemble classification method. The classifier output was labeled as apnea or control (normal). When the classifier results were compared, the best performance was obtained with the MLFFNN, with a test accuracy of 97.07% and a kappa value of 0.93. The results lead to the conclusion that respiratory arrests can be recognized from the PPG signal and that the PPG signal can be used for the diagnosis of OSA.

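A minimal sketch of the screening-then-classification idea in this entry is given below: Mann–Whitney U tests select significant features, and an MLP (standing in for the MLFFNN) classifies apnea versus control. The feature matrix is synthetic and merely stands in for the 34 PPG-derived features.

```python
# Sketch: Mann-Whitney U feature screening followed by a feed-forward network classifier.
from scipy.stats import mannwhitneyu
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=34, n_informative=20, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

# Keep only features whose distributions differ significantly between the two groups.
keep = [
    j for j in range(X_tr.shape[1])
    if mannwhitneyu(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue < 0.05
]
print(f"{len(keep)} of {X_tr.shape[1]} features are significant at p < 0.05")

mlffnn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=5),
)
mlffnn.fit(X_tr[:, keep], y_tr)
print("test accuracy (synthetic data):", mlffnn.score(X_te[:, keep], y_te))
```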

16.
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end-user gain more confidence in the prediction and providing a basis for new insight into the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic for generating model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications.

17.
This paper presents a system for monitoring and prognostics of machine conditions using soft computing (SC) techniques. The machine condition is assessed through a suitable ‘monitoring index’ extracted from the vibration signals. The progression of the monitoring index is predicted using an SC technique, namely the adaptive neuro-fuzzy inference system (ANFIS). A comparison with a machine learning method, namely support vector regression (SVR), is also presented. The proposed prediction procedures have been evaluated on benchmark data sets. The prognostic effectiveness of the techniques has been illustrated using previously published data on several types of machine faults. The performance of SVR was found to be better than that of ANFIS for the data sets used. The results are helpful in understanding the relationship between machine conditions, the corresponding indicating features, the level of damage/degradation and their progression.
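A simple sketch of the SVR-based progression prediction mentioned in this entry is shown below, using lagged values of a synthetic monitoring index as inputs. It assumes scikit-learn and does not reproduce the ANFIS model or the benchmark data.

```python
# Sketch: one-step-ahead prediction of a vibration-based monitoring index with SVR,
# trained on lagged values of a synthetic, noisy degradation curve.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(6)
t = np.arange(300)
index = 0.02 * t + 0.5 * np.sin(0.1 * t) + rng.normal(0, 0.1, t.size)  # trend + oscillation + noise

lags = 5
X = np.array([index[i - lags:i] for i in range(lags, len(index))])  # the past 5 values
y = index[lags:]                                                    # the next value

split = 250  # train on the early history, predict the later progression
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print("one-step-ahead RMSE on the held-out portion:", rmse)
```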

18.
Large-area land-cover monitoring scenarios, involving large volumes of data, are becoming more prevalent in remote sensing applications. Thus, there is a pressing need for increased automation in the change mapping process. The objective of this research is to compare the performance of three machine learning algorithms (MLAs): two classification tree software routines (S-plus and C4.5) and an artificial neural network (ARTMAP), in the context of mapping land-cover modifications in northern and southern California study sites between 1990/91 and 1996. Comparisons were based on several criteria: overall accuracy, sensitivity to data set size and variation, and noise. ARTMAP produced the most accurate maps overall ( 84%) for the two study areas in southern and northern California, and was the most resistant to training data deficiencies. The change map generated using ARTMAP has accuracies similar to a human-interpreted map produced by the U.S. Forest Service in the southern study area. ARTMAP appears to be robust and accurate for automated, large-area change monitoring, as it performed equally well across the diverse study areas with minimal human intervention in the classification process.

19.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Generally speaking, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their adoption. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

20.
A reinforcement learning approach based on modular function approximation is presented. Cerebellar Model Articulation Controller (CMAC) networks are incorporated into the Hierarchical Mixtures of Experts (HME) architecture, and the resulting architecture is referred to as HME-CMAC. A computationally efficient on-line learning algorithm based on the Expectation Maximization (EM) algorithm is proposed in order to achieve fast function approximation with the HME-CMAC architecture.

The Compositional Q-Learning (CQ-L) framework establishes the relationship between the Q-values of composite tasks and those of the elemental tasks in their decomposition. This framework is extended here to allow rewards in non-terminal states. An implementation of the extended CQ-L framework using the HME-CMAC architecture is used to perform task decomposition in a realistic simulation of a two-link manipulator with non-linear dynamics. The context-dependent reinforcement learning achieved by adopting this approach has advantages over monolithic approaches in terms of speed of learning, storage requirements and the ability to cope with changing goals.
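To make the CMAC building block concrete, the sketch below implements a minimal tile-coding function approximator with least-mean-squares updates on a toy 1-D target. The HME gating, EM-based training and CQ-L task decomposition from this entry are not reproduced; all parameters are illustrative assumptions.

```python
# Sketch of the CMAC building block only: overlapping tilings with one active tile per
# tiling, trained online with least-mean-squares updates to approximate sin(2*pi*x).
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, tiles_per_tiling=16, lr=0.1, lo=0.0, hi=1.0):
        self.n_tilings, self.tiles, self.lr = n_tilings, tiles_per_tiling, lr
        self.lo, self.width = lo, hi - lo
        self.w = np.zeros((n_tilings, tiles_per_tiling + 1))  # one weight table per tiling

    def _active(self, x):
        """Index of the single active tile in each (offset) tiling."""
        s = (x - self.lo) / self.width * self.tiles
        return [(t, int(s + t / self.n_tilings)) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def update(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t, i] += self.lr * err / self.n_tilings  # spread the correction over tilings

cmac = CMAC()
rng = np.random.default_rng(7)
for _ in range(5000):                                   # online LMS training
    x = rng.random()
    cmac.update(x, np.sin(2 * np.pi * x))

xs = np.linspace(0, 1, 5)
print([round(cmac.predict(x), 2) for x in xs], "vs", [round(np.sin(2 * np.pi * x), 2) for x in xs])
```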

