12 similar documents found; search time: 0 ms
1.
A flexible sequence alignment approach on pattern mining and matching for human activity recognition
Po-Cheng Huang Sz-Shian Lee Yaw-Huang Kuo Kuan-Rong Lee 《Expert systems with applications》2010,37(1):298-306
This paper proposes a flexible sequence alignment approach for pattern mining and matching in the recognition of human activities. During pattern mining, the proposed sequence alignment algorithm is invoked to extract representative patterns that denote specific activities of a person from the training patterns; it offers high performance and robustness to pattern diversity. In addition, the algorithm evaluates the appearance probability of each pattern as a weight and adapts the pattern length to different human activities, both of which improve the accuracy of activity recognition. In pattern matching, the proposed algorithm adopts a dynamic-programming-based strategy to evaluate the degree of correlation between each representative activity pattern and the observed activity sequence. This avoids having to segment the observed sequence and allows recognition results to be obtained continuously. The proposed matching algorithm also supports recognition of concurrent human activities through parallel matching. The experimental results confirm the high accuracy of human activity recognition achieved by the proposed approach.
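The segmentation-free matching step can be pictured with a generic subsequence-DTW sketch in Python. This is not the authors' algorithm (their pattern weighting and parallel matching are omitted), and the pattern and sequence values below are made up.

```python
import numpy as np

def subsequence_dtw_cost(pattern, sequence):
    """Cost of the best match of `pattern` anywhere inside `sequence`.
    The first row of the cost matrix is zero, so a match may start at any
    position of the observed sequence; the minimum of the last row gives
    the best match cost without segmenting the sequence first."""
    n, m = len(pattern), len(sequence)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                       # free start anywhere
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(pattern[i - 1] - sequence[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, 1:].min()               # free end anywhere

# toy usage: which representative pattern best explains the observation?
patterns = {"sit": np.array([0.0, 0.0, 1.0, 1.0]),
            "walk": np.array([0.0, 1.0, 0.0, 1.0])}
observed = np.array([0.1, 0.9, 0.1, 1.0, 0.2, 0.8])
best_activity = min(patterns, key=lambda k: subsequence_dtw_cost(patterns[k], observed))
```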
2.
Summarizing a set of sequences is an old topic that has been revived in the last decade due to the increasing availability of sequential datasets. The definition of a consensus object is at the center of data analysis, since it crystallizes the underlying organization of the data. Dynamic Time Warping (DTW) is currently the most relevant similarity measure between sequences for a large panel of applications, since it makes it possible to capture temporal distortions. In this context, averaging a set of sequences is not a trivial task, since the average sequence has to be consistent with this similarity measure. Steiner theory and several works in computational biology have pointed out the connection between multiple alignments and average sequences. Taking inspiration from these works, we introduce the notion of compact multiple alignment, which allows us to link these theories to the problem of summarizing under time warping. Having defined the link between the multiple alignment and the average sequence, the second part of this article focuses on scanning the space of compact multiple alignments in order to provide an average sequence of a set of sequences. We propose to use a genetic algorithm based on a gene-inspired representation of the genotype, which makes it possible to explore the fitness landscape consistently. Experiments carried out on standard datasets show that the proposed approach outperforms existing methods.
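As a rough illustration of averaging under time warping (not the compact-multiple-alignment genetic algorithm of the paper), the sketch below performs a single DBA-style refinement of a candidate average sequence; the sequence values are illustrative.

```python
import numpy as np

def dtw_path(a, b):
    """Optimal DTW alignment path between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):             # backtrack through the cost matrix
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda p: D[p])
    return path[::-1]

def dtw_average_step(center, sequences):
    """One refinement of a candidate average: re-estimate each point of
    `center` as the mean of all points aligned to it across `sequences`.
    Every index of `center` receives at least one aligned point."""
    buckets = [[] for _ in center]
    for s in sequences:
        for i, j in dtw_path(center, s):
            buckets[i].append(s[j])
    return np.array([np.mean(b) for b in buckets])

# toy usage: refine an initial candidate once
seqs = [np.array([0.0, 1.0, 2.0, 1.0]), np.array([0.0, 0.5, 1.0, 2.0, 1.0])]
center = dtw_average_step(seqs[0].copy(), seqs)
```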
3.
Multiple sequence alignment is of central importance to bioinformatics and computational biology. Although a large number of algorithms for computing a multiple sequence alignment have been designed, the efficient computation of highly accurate and statistically significant multiple alignments remains a challenge. In this paper, we propose an efficient method based on a multi-objective genetic algorithm (MSAGMOGA) to discover optimal alignments with affine gaps in multiple sequence data. The main advantage of our approach is that a large number of tradeoff (i.e., non-dominated) alignments can be obtained in a single run with respect to conflicting objectives: affine gap penalty minimization, and similarity and support maximization. To the best of our knowledge, this is the first effort in this direction with three objectives. The proposed method can be applied to any dataset with a sequential character, and it allows any choice of similarity measure for finding alignments. By analyzing the obtained optimal alignments, the decision maker can understand the tradeoff between the objectives. We compared our method with three well-known multiple sequence alignment methods: MUSCLE, SAGA and MSA-GA; the first is a progressive method, and the other two are based on evolutionary algorithms. Experiments on the BAliBASE 2.0 database confirm that MSAGMOGA obtains results with better accuracy and statistical significance than the three well-known methods when aligning multiple sequences with affine gaps. The proposed method also finds solutions faster than the other evolutionary approaches mentioned above.
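The exact objective formulations are defined in the paper; the sketch below only illustrates, with assumed scoring constants, how an affine gap cost and a sum-of-pairs similarity could be evaluated for one candidate alignment (the third objective, support, is omitted).

```python
def affine_gap_penalty(aligned, gap_open=10.0, gap_extend=0.5):
    """Total affine gap cost of a candidate alignment (rows of equal length,
    '-' marking gaps): each gap run costs gap_open plus gap_extend per gap."""
    total = 0.0
    for row in aligned:
        in_gap = False
        for ch in row:
            if ch == '-':
                total += gap_extend + (gap_open if not in_gap else 0.0)
                in_gap = True
            else:
                in_gap = False
    return total

def sum_of_pairs_similarity(aligned, match=1.0, mismatch=0.0):
    """Column-wise sum-of-pairs similarity over all row pairs (gaps ignored)."""
    score, rows = 0.0, len(aligned)
    for col in zip(*aligned):
        for i in range(rows):
            for j in range(i + 1, rows):
                if col[i] != '-' and col[j] != '-':
                    score += match if col[i] == col[j] else mismatch
    return score

candidate = ["AC--GT", "ACTAGT", "AC-AGT"]
print(affine_gap_penalty(candidate), sum_of_pairs_similarity(candidate))
```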
4.
Azzedine Boukerche Alba Cristina Magalhaes Alves de Melo Mauricio Ayala-Rincón Maria Emilia Machado Telles Walter 《Journal of Parallel and Distributed Computing》2007
Recently, many organisms have had their DNA entirely sequenced. This reality presents the need to compare long DNA sequences, which is a challenging task due to its high demands on computational power and memory. Sequence comparison is a basic operation in DNA sequencing projects, and most sequence comparison methods currently in use are based on heuristics, which are faster but offer no guarantee of producing the best possible alignments. To address this problem, Smith and Waterman proposed an algorithm that obtains the best local alignments, but at the expense of very high computing power and huge memory requirements. In this article, we present and evaluate our experiments involving three strategies for running the Smith–Waterman algorithm on a cluster of workstations using a Distributed Shared Memory System. Our results on an eight-machine cluster show very good speed-up and indicate that impressive improvements can be achieved depending on the strategy used. In addition, we present a number of theoretical remarks concerning how to reduce the amount of memory used.
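For reference, a minimal score-only Smith–Waterman recurrence is sketched below, with a linear gap penalty and arbitrary scoring constants for brevity; the paper's contribution lies in the distributed, memory-conscious execution of this kind of recurrence, which the sketch does not attempt.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score between strings a and b (quadratic space,
    linear gap penalty; the scoring constants are illustrative)."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
            best = max(best, H[i, j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```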
5.
《Expert systems with applications》2014,41(11):5180-5189
Handwriting character recognition from three-dimensional (3D) accelerometer data has emerged as a popular technique for natural human-computer interaction. In this paper, we propose a handwriting recognition system based on 3D gyroscope data, rather than conventional 3D accelerometer data, that uses stepwise lower-bounded dynamic time warping. The results of the experiments conducted indicate that our proposed method is more effective and efficient than conventional methods for user-independent recognition of the 26 lowercase letters of the English alphabet.
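The specific stepwise lower bound is not detailed in the abstract; as a generic example of how a lower bound prunes DTW computations, here is the standard LB_Keogh bound for equal-length sequences (the band radius r is an assumed parameter).

```python
import numpy as np

def lb_keogh(query, candidate, r=3):
    """LB_Keogh lower bound on band-constrained DTW between equal-length
    NumPy sequences. If the bound already exceeds the best distance found
    so far, the full DTW computation for this candidate can be skipped."""
    lb = 0.0
    for i, q in enumerate(query):
        window = candidate[max(0, i - r):min(len(candidate), i + r + 1)]
        upper, lower = window.max(), window.min()
        if q > upper:
            lb += (q - upper) ** 2
        elif q < lower:
            lb += (q - lower) ** 2
    return np.sqrt(lb)

# toy usage: cheap filter before running full DTW
q = np.sin(np.linspace(0, 3, 50))
c = np.cos(np.linspace(0, 3, 50))
bound = lb_keogh(q, c, r=5)
```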
6.
Carlos Vivaracho-Pascual Marcos Faundez-Zanuy 《Pattern recognition》2009,42(1):183-193
This work presents a new proposal for an efficient on-line signature recognition system with very low computational load and storage requirements, suitable for use in resource-limited systems such as smart cards. The novelty of the proposal lies in both the feature extraction and classification stages. It is based on the use of size-normalized signatures, which allows similarity estimation, usually based on dynamic time warping (DTW) or hidden Markov models (HMMs), to be performed by a simple distance calculation between vectors. This distance is computed using a fractional distance rather than the more typical Euclidean one, in order to overcome the concentration phenomenon that appears when data are high dimensional. Verification and identification tasks have been carried out using the MCYT database, achieving an EER (common threshold) of 6.6% and 1.8% with skilled and random forgeries, respectively, in the first task and an error of 3.6% in the second. The proposed system outperforms DTW-based and HMM-based systems, even though these have proved to be very efficient in on-line signature recognition, with storage requirements 9 to 90 times lower and a processing speed 181 to 713 times higher than the DTW-based systems.
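A minimal sketch of the fractional distance idea follows; the exponent p = 0.5 and the vector sizes are assumptions, and the actual feature extraction is described in the paper.

```python
import numpy as np

def fractional_distance(u, v, p=0.5):
    """Minkowski-style distance with fractional exponent p < 1, used to
    mitigate the concentration of distances in high-dimensional spaces."""
    return float(np.sum(np.abs(np.asarray(u) - np.asarray(v)) ** p) ** (1.0 / p))

# toy comparison of a size-normalized test signature against two references
rng = np.random.default_rng(0)
test, ref_a, ref_b = rng.random(100), rng.random(100), rng.random(100)
closer_to_a = fractional_distance(test, ref_a) < fractional_distance(test, ref_b)
```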
7.
There are two common methodologies for verifying signatures: the functional approach and the parametric approach. In this paper, we propose a new warping technique for the functional approach to signature verification. The commonly used warping technique is dynamic time warping (DTW). It was originally used in speech recognition and has been applied to signature verification with some success for about two decades. The new warping technique we propose is named extreme points warping (EPW). It proves to be more adaptive for signature verification than DTW, given the presence of forgeries. Instead of warping the whole signal as DTW does, EPW warps a set of selected important points. With the use of EPW, the equal error rate is improved by a factor of 1.3 and the computation time is reduced by a factor of 11.
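A very simplified sketch of the EPW idea is given below: detect extreme points, pair them up, and linearly warp the segments in between. The one-to-one pairing used here is a placeholder; the actual matching of extreme points in the paper is more elaborate.

```python
import numpy as np

def extreme_points(x):
    """Indices of local maxima/minima of a 1-D signal (endpoints included)."""
    idx = [0]
    for i in range(1, len(x) - 1):
        if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0:
            idx.append(i)
    idx.append(len(x) - 1)
    return idx

def epw_align(a, b):
    """Toy EPW-style alignment: pair extreme points one-to-one (truncating
    to the shorter list) and linearly warp each segment of `b` onto the
    corresponding segment of `a`."""
    ea, eb = extreme_points(a), extreme_points(b)
    k = min(len(ea), len(eb))
    warped = []
    for s in range(k - 1):
        seg = b[eb[s]:eb[s + 1] + 1]
        target_len = ea[s + 1] - ea[s] + 1
        # linear interpolation of the segment to the target length
        warped_seg = np.interp(np.linspace(0, len(seg) - 1, target_len),
                               np.arange(len(seg)), seg)
        warped.extend(warped_seg[:-1] if s < k - 2 else warped_seg)
    return np.array(warped)
```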
8.
We develop an autonomous system to detect and evaluate physical therapy exercises using wearable motion sensors. We propose the multi-template multi-match dynamic time warping (MTMM-DTW) algorithm as a natural extension of DTW to detect multiple occurrences of more than one exercise type in the recording of a physical therapy session. While allowing some distortion (warping) in time, the algorithm provides a quantitative measure of similarity between an exercise execution and previously recorded templates, based on DTW distance. It can detect and classify the exercise types, and count and evaluate the exercises as correctly/incorrectly performed, identifying the error type, if any. To evaluate the algorithm's performance, we record a data set consisting of one reference template and 10 test executions of three execution types of eight exercises performed by five subjects. We thus record a total of 120 and 1200 exercise executions in the reference and test sets, respectively. The test sequences also contain idle time intervals. The accuracy of the proposed algorithm is 93.46% for exercise classification only and 88.65% for simultaneous exercise and execution type classification. The algorithm misses 8.58% of the exercise executions and demonstrates a false alarm rate of 4.91%, caused by some idle time intervals being incorrectly recognized as exercise executions. To test the robustness of the system to unknown exercises, we employ leave-one-exercise-out cross validation. This results in a false alarm rate lower than 1%, demonstrating the robustness of the system to unknown movements. The proposed system can be used for assessing the effectiveness of a physical therapy session and for providing feedback to the patient.
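The core matching step can be pictured with a plain DTW nearest-template classifier and a rejection rule. This is a simplification: the actual MTMM-DTW searches the continuous recording for multiple matches rather than classifying pre-cut segments, and `reject_threshold` below is a hypothetical parameter.

```python
import numpy as np

def dtw_distance(a, b):
    """Standard DTW distance between two 1-D sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def classify_segment(segment, templates, reject_threshold):
    """Match one candidate segment against every (exercise, execution-type)
    template; return the best label, or None if even the best match is too
    far (e.g. an idle interval)."""
    label, dist = min(((lbl, dtw_distance(segment, tpl)) for lbl, tpl in templates.items()),
                      key=lambda t: t[1])
    return label if dist < reject_threshold else None
```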
9.
10.
This paper deals with the problem of scheduling a flow shop operating in a sequence-dependent setup time environment. The objective is to determine the sequence that minimises the makespan. Two efficient neighbourhood search-based heuristics have been developed and tested using 960 problems, and the results obtained reveal their usefulness. The algorithms make use of two existing constructive heuristics. A neighbourhood search known as variable neighbourhood descent is used to improve the two constructive heuristics. Experimentation is carried out on 96 groups of problems with 10 problem instances in each group. Performance analysis is carried out using the relative performance improvement of each heuristic. The analysis shows a consistently better performance of the neighbourhood-based improvement heuristics. A paired comparison test is used to validate the superiority of the proposed heuristics. The statistical analysis reveals that the performance of the neighbourhood-based heuristics is very much dependent on the initial constructive heuristics used.
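A sketch of the underlying model and of a plain swap-based descent (a stripped-down relative of variable neighbourhood descent, not the paper's heuristics) is shown below; it assumes no setup is needed before the first job, and the toy data are made up.

```python
import itertools

def makespan(seq, proc, setup):
    """Makespan of a permutation flow shop with sequence-dependent setups.
    proc[m][j]: processing time of job j on machine m;
    setup[m][i][j]: setup on machine m when job j follows job i."""
    n_machines = len(proc)
    C = [0.0] * n_machines               # completion time of the last scheduled job per machine
    prev = None
    for job in seq:
        for m in range(n_machines):
            s = setup[m][prev][job] if prev is not None else 0.0
            ready_machine = C[m] + s     # machine free and set up
            ready_job = C[m - 1] if m > 0 else 0.0   # job done on previous machine
            C[m] = max(ready_machine, ready_job) + proc[m][job]
        prev = job
    return C[-1]

def swap_descent(seq, proc, setup):
    """Simple descent on pairwise-swap neighbours of the job sequence."""
    best, best_cost = list(seq), makespan(seq, proc, setup)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = best[:]
            cand[i], cand[j] = cand[j], cand[i]
            cost = makespan(cand, proc, setup)
            if cost < best_cost:
                best, best_cost, improved = cand, cost, True
    return best, best_cost

# toy usage: 2 machines, 3 jobs
proc = [[3, 2, 4], [2, 5, 1]]
setup = [[[0, 1, 2], [1, 0, 1], [2, 1, 0]]] * 2
print(swap_descent([0, 1, 2], proc, setup))
```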
11.
Kerem Altun Orkun Tunçel 《Pattern recognition》2010,43(10):3605-3620
This paper provides a comparative study of different techniques for classifying human activities using body-worn miniature inertial and magnetic sensors. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, the least-squares method (LSM), the k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). Human activities are classified using five sensor units worn on the chest, the arms, and the legs. Each sensor unit comprises a tri-axial gyroscope, a tri-axial accelerometer, and a tri-axial magnetometer. A feature set extracted from the raw sensor data using principal component analysis (PCA) is used in the classification process. A performance comparison of the classification techniques is provided in terms of their correct differentiation rates, confusion matrices, and computational cost, as well as their pre-processing, training, and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that, in general, BDM achieves the highest correct classification rate with relatively small computational cost.
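As an illustration of the PCA-plus-classifier pipeline, the sketch below pairs a NumPy PCA with a plain k-NN vote. It shows only one of the seven compared classifiers (BDM, which performed best, is not shown), and k = 7 is an arbitrary choice.

```python
import numpy as np

def pca_fit(X, n_components):
    """Principal components of feature matrix X (rows = flattened windows
    of sensor data); returns the mean and the top components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

def knn_predict(train_X, train_y, test_X, k=7):
    """Plain k-nearest-neighbour vote in the PCA-reduced feature space."""
    train_y = np.asarray(train_y)
    preds = []
    for x in test_X:
        idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        labels, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)
```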
12.
V. Vuori J. Laaksonen E. Oja J. Kangas 《International Journal on Document Analysis and Recognition》2001,3(3):150-159
This paper describes an adaptive recognition system for isolated handwritten characters and the experiments carried out with it. The characters used in our experiments are alphanumeric, including both the upper- and lower-case versions of the Latin alphabet and three Scandinavian diacriticals. The writers are allowed to use their own natural style of writing. The recognition system is based on the k-nearest neighbor rule. The six character similarity measures applied by the system are all based on dynamic time warping. The aim of the first experiments is to choose the best combination of simple preprocessing and normalization operations and dissimilarity measure for a multi-writer system. However, the main focus of the work is on online adaptation. The purpose of the adaptation is to turn a writer-independent system into a writer-dependent one and to increase recognition performance. The adaptation is carried out by modifying the prototype set of the classifier according to its recognition performance and the user's writing style. The ways of adaptation include: (1) adding new prototypes; (2) inactivating confusing prototypes; and (3) reshaping existing prototypes. The reshaping algorithm is based on Learning Vector Quantization. Four different adaptation strategies, according to which the modifications of the prototype set are performed, have been studied both offline and online. Adaptation is carried out in a self-supervised fashion during normal use and thus remains unnoticed by the user.
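The prototype reshaping step can be pictured with an LVQ1-style update, simplified here to fixed-length vectors; the system itself reshapes DTW-aligned handwriting prototypes, and the learning rate below is an assumed value.

```python
import numpy as np

def lvq1_update(prototype, sample, same_class, lr=0.05):
    """LVQ1-style reshaping of the nearest prototype: pull it toward the
    sample if the classification was correct, push it away otherwise."""
    direction = 1.0 if same_class else -1.0
    return prototype + direction * lr * (sample - prototype)

# self-supervised adaptation sketch: after each recognized character,
# reshape the winning prototype according to whether it matched the user's intent
proto = np.array([0.0, 0.5, 1.0])
sample = np.array([0.1, 0.6, 0.9])
proto = lvq1_update(proto, sample, same_class=True)
```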