Similar Literature
20 similar records found.
1.
Human action recognition is an active research topic in both the computer vision and machine learning communities, with broad applications including surveillance, biometrics, and human-computer interaction. Although several well-known action datasets have been released over the past decades, they still have limitations, including limited action categories and samples, few camera views, and little variety of scenarios. Moreover, most of them are designed for only a subset of the relevant learning problems, such as the single-view, cross-view, and multi-task learning problems. In this paper, we introduce a multi-view, multi-modality benchmark dataset for human action recognition (abbreviated to MMA). MMA consists of 7080 action samples from 25 action categories, including 15 single-subject actions and 10 double-subject interactive actions, captured from three views in two different scenarios. Further, we systematically benchmark state-of-the-art approaches on MMA with respect to all three learning problems using different temporal-spatial feature representations. Experimental results demonstrate that MMA is challenging for all three learning problems due to significant intra-class variations, occlusion issues, view and scene variations, and multiple similar action categories. Meanwhile, we provide a baseline for the evaluation of existing state-of-the-art algorithms.

2.
Human identification by gait analysis has attracted a great deal of interest in the computer vision and forensics communities as an unobtrusive technique capable of recognizing humans at a distance. In recent years, significant progress has been made, and a number of approaches capable of this task have been proposed and developed. Among them, approaches based on single-source features are the most popular. However, the recognition rate of these methods is often unsatisfactory due to the limited information contained in a single feature source. Consequently, in this paper, a hierarchical, multi-feature fusion approach is proposed for effective gait recognition. In practice, using more features for fusion does not necessarily yield a better recognition rate; features should in fact be carefully selected so that they complement each other. Here, complementary features are extracted in three groups: Dynamic Region Area features; Extension and Space features; and 2D Stick Figure Model features. To balance the proportion of features used in fusion, a hierarchical feature-level fusion method is proposed. Comprehensive results of applying the proposed techniques to three well-known datasets demonstrate that our fusion-based approach improves the overall recognition rate compared to a benchmark algorithm.

3.
In this paper, we present a study of speaker discrimination using multi-classifier fusion, with a focus on the effects of feature reduction. Speaker discrimination consists in automatically distinguishing between two speakers using the vocal characteristics of their speech. Features are extracted using Mel Frequency Spectral Coefficients and then reduced using Relative Speaker Characteristic (RSC) together with Principal Component Analysis (PCA). Several classification methods are implemented to perform the discrimination task. Since different classifiers are employed, two fusion algorithms at the decision level, referred to as Weighted Fusion and Fuzzy Fusion, are proposed to boost classification performance. These algorithms are based on weighting the outputs of the different classifiers. Furthermore, the effects of speaker gender and feature reduction on the speaker discrimination task are also examined. Our approaches were evaluated on a subset of Hub-4 Broadcast-News. The experimental results show that speaker discrimination accuracy is improved by 5–15% using the RSC–PCA feature reduction. In addition, the proposed fusion methods record an improvement of about 10% over the individual scores of the classifiers. Finally, we observe that gender has an important impact on discrimination performance.
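As a rough illustration of the decision-level weighted-fusion idea, here is a minimal Python sketch; the score convention and the accuracy-based weights are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def weighted_fusion(scores, weights):
    """Fuse per-classifier decision scores with a weighted sum.

    scores  : (n_classifiers, n_samples) array of scores in [0, 1],
              where > 0.5 means "same speaker" (hypothetical convention).
    weights : (n_classifiers,) array, e.g. validation accuracies.
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                      # normalize so the weights sum to 1
    fused = w @ np.asarray(scores)    # weighted sum over classifiers
    return (fused > 0.5).astype(int)  # final binary decision

# Toy usage: three classifiers voting on four test samples.
scores = [[0.9, 0.2, 0.6, 0.4],
          [0.8, 0.3, 0.4, 0.6],
          [0.7, 0.1, 0.7, 0.3]]
weights = [0.85, 0.80, 0.75]          # e.g. per-classifier validation accuracy
print(weighted_fusion(scores, weights))
```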

4.
Efficiently answering reachability queries, which check whether one vertex can reach another in a directed graph, has been studied extensively in recent years. However, the graphs people face and generate nowadays are growing so rapidly that simple algorithms, such as BFS and DFS, are no longer feasible. Although Refined Online Search algorithms can scale to large graphs, they all suffer from the false-positive problem. In this paper, we analyze the cause of false positives and propose an efficient high-dimensional coordinate generating method based on Graph Dominance Drawing (HD-GDD) to answer reachability queries in linear or even constant time. We conduct experiments on different graph structures and graph sizes to fully evaluate the performance and behavior of our proposal. Empirical results demonstrate that our method outperforms state-of-the-art algorithms and can handle very large graphs.
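To see why dominance-based coordinates make reachability checks cheap, here is a minimal sketch under the usual dominance-drawing convention; the hand-assigned coordinates and the helper below are illustrative, and HD-GDD's actual coordinate construction is not reproduced.

```python
def can_reach(coord_u, coord_v):
    """Reachability test under a (hypothetical) dominance drawing:
    u can reach v iff v's coordinate vector dominates u's in every
    dimension, turning the query into a constant-time comparison.
    Computing coordinates that avoid false positives is what the
    paper's HD-GDD method addresses."""
    return all(cu <= cv for cu, cv in zip(coord_u, coord_v))

# Toy coordinates for a 4-vertex DAG a -> b -> d, a -> c -> d
# (hand-assigned for illustration only).
coords = {"a": (0, 0), "b": (1, 0), "c": (0, 1), "d": (1, 1)}
print(can_reach(coords["a"], coords["d"]))  # True : a reaches d
print(can_reach(coords["b"], coords["c"]))  # False: b cannot reach c
```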

5.
The heterogeneity gap between different modalities is one of the critical issues in multimedia retrieval. Unlike traditional unimodal cases, where raw features are extracted and directly compared, the heterogeneous nature of crossmodal tasks requires intrinsic semantic representations to be compared in a unified framework. Based on a flexible "feature up-lifting and down-projecting" mechanism, this paper studies the learning of crossmodal semantic features that can be retrieved across different modalities. Two effective methods are proposed to mine semantic correlations: one for traditional handcrafted features, and the other based on a deep neural network. We treat them as the normal and deep versions, respectively, of our proposed shared discriminative semantic representation learning (SDSRL) framework. We evaluate both methods on two public multimodal datasets for crossmodal and unimodal retrieval tasks. The experimental results demonstrate that our proposed methods outperform the compared baselines and achieve state-of-the-art performance in most scenarios.

6.
To address the issue of malware detection over large sets of applications, researchers have recently started to investigate the capabilities of machine-learning techniques. So far, several promising results have been recorded in the literature, with many approaches assessed in what we call "in-the-lab" validation scenarios. This paper revisits the purpose of malware detection to discuss whether such in-the-lab validation scenarios provide reliable indications of the performance of malware detectors in real-world settings, i.e., "in the wild". To this end, we have devised several machine-learning classifiers that rely on a set of features built from applications' control-flow graphs (CFGs). We use a sizeable dataset of over 50,000 Android applications collected from the sources where state-of-the-art approaches have selected their data. We show that, in the lab, our approach outperforms existing machine-learning-based approaches. However, this high performance does not translate into high performance in the wild. The performance gap we observed, with F-measures dropping from over 0.9 in the lab to below 0.1 in the wild, raises one important question: how do state-of-the-art approaches perform in the wild?

7.
This paper addresses the important problem of discerning hateful content in social media. We propose a detection scheme that is an ensemble of Recurrent Neural Network (RNN) classifiers and that incorporates various features associated with user-related information, such as a user's tendency towards racism or sexism. These data are fed to the classifiers along with word frequency vectors derived from the textual content. We evaluate our approach on a publicly available corpus of 16k tweets, and the results demonstrate its effectiveness in comparison to existing state-of-the-art solutions. More specifically, our scheme can successfully distinguish racist and sexist messages from normal text, achieving higher classification quality than current state-of-the-art algorithms.
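A minimal sketch of one such ensemble member, combining a recurrent encoding of per-step word-frequency vectors with a small user-feature vector; the layer sizes, the three-class label set, and the PyTorch framing are illustrative guesses, not the authors' setup.

```python
import torch
import torch.nn as nn

class HateSpeechRNN(nn.Module):
    """One ensemble member: a GRU over word-frequency inputs whose
    final state is concatenated with a user-feature vector (e.g. the
    user's past tendency toward racism/sexism) before classification."""

    def __init__(self, vocab_size=1000, hidden=32, user_dim=2, n_classes=3):
        super().__init__()
        self.rnn = nn.GRU(vocab_size, hidden, batch_first=True)
        self.out = nn.Linear(hidden + user_dim, n_classes)  # racism/sexism/normal

    def forward(self, word_vecs, user_feats):
        _, h = self.rnn(word_vecs)              # h: (1, batch, hidden)
        joint = torch.cat([h[-1], user_feats], dim=1)
        return self.out(joint)

# The ensemble averages the members' class probabilities (toy forward pass).
models = [HateSpeechRNN() for _ in range(3)]
words = torch.rand(4, 10, 1000)                 # batch of 4 tweets, 10 steps
users = torch.rand(4, 2)
probs = torch.stack([m(words, users).softmax(dim=1) for m in models]).mean(0)
```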

8.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms capable of solving large-scale image processing and computer vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms, such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. This is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or PRF algorithm to solve this problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer vision problems that require solving the minimum s-t cut problem.
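For readers who want to experiment, a toy minimum s-t cut can be solved with an off-the-shelf library such as networkx; this uses networkx's default max-flow routine, not the HPF implementation discussed in the paper, and the tiny graph stands in for a vision graph with terminal links.

```python
import networkx as nx

# A toy capacitated graph standing in for a vision problem
# (e.g. pixels with terminal links to source s and sink t).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=1.0)
G.add_edge("a", "b", capacity=2.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print(cut_value)                 # 4.0 : capacity of the minimum s-t cut
print(source_side, sink_side)    # the two-label partition of the vertices
```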

9.
This work introduces a new scheme for action unit (AU) detection in 3D facial videos. Sets of features that capture action unit activation in a robust manner are proposed. These features are computed from eight facial landmarks detected on each facial mesh and involve angles, areas, and distances. Support vector machine classifiers are then trained on these features to perform action unit detection. The proposed AU detection scheme is used in a dynamic 3D facial expression retrieval and recognition pipeline, highlighting the AUs that are most informative about facial expression while, at the same time, achieving better performance than state-of-the-art methodologies.
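A minimal sketch of angle, area, and distance features computed from detected 3D landmarks; the landmark indices and the particular feature list below are illustrative assumptions, not the authors' exact set.

```python
import numpy as np

def geometric_features(landmarks):
    """Toy angle / area / distance features from 3D facial landmarks.

    landmarks : (8, 3) array of 3D landmark coordinates.
    """
    p = np.asarray(landmarks, dtype=float)

    def distance(i, j):
        return np.linalg.norm(p[i] - p[j])

    def angle(i, j, k):                      # angle at vertex j
        u, v = p[i] - p[j], p[k] - p[j]
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def triangle_area(i, j, k):              # half the cross-product norm
        return 0.5 * np.linalg.norm(np.cross(p[j] - p[i], p[k] - p[i]))

    return np.array([distance(0, 1), distance(2, 3),
                     angle(0, 4, 1), angle(2, 5, 3),
                     triangle_area(0, 1, 6), triangle_area(2, 3, 7)])

features = geometric_features(np.random.rand(8, 3))  # feed these to an SVM
```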

10.
Why-not and why questions can be posed by database users to seek clarification on unexpected query results. Specifically, why-not questions aim to explain why certain expected tuples are absent from the query results, while why questions try to clarify why certain unexpected tuples are present. This paper systematically explores why-not and why questions on reverse top-k queries, owing to their importance in multi-criteria decision making. We first formalize why-not questions on reverse top-k queries, which try to include missing objects in the reverse top-k query results, and then propose a unified framework called WQRTQ to answer them. Our framework offers three solutions to cater for different application scenarios. Furthermore, we study why questions on reverse top-k queries, which aim to exclude undesirable objects from the reverse top-k query results, and extend WQRTQ to answer these questions efficiently, demonstrating the flexibility of our proposed algorithms. Extensive experimental evaluation with both real and synthetic data sets verifies the effectiveness and efficiency of the presented algorithms under various experimental settings.
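As background, here is a brute-force sketch of the reverse top-k primitive itself, under the common linear-scoring formulation in which each user's top-k is the k highest-scoring objects; the strict-inequality tie handling is a simplifying assumption, and the paper's why-not/why machinery builds on top of such queries.

```python
import numpy as np

def reverse_topk(q, points, weight_vectors, k):
    """Return the indices of weight vectors (users) for which the
    query object q appears among the k highest-scoring objects
    under the linear score w . p (brute-force sketch)."""
    points = np.asarray(points, dtype=float)
    result = []
    for i, w in enumerate(np.asarray(weight_vectors, dtype=float)):
        q_score = w @ np.asarray(q, dtype=float)
        better = np.sum(points @ w > q_score)   # objects scoring above q
        if better < k:                          # q makes this user's top-k
            result.append(i)
    return result

points = [[1, 9], [4, 5], [8, 2], [6, 6]]
users  = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
print(reverse_topk([5, 5], points, users, k=2))  # -> [1]
```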

11.
Representative skyline computation is a fundamental problem in the database area that has attracted much attention in recent years. A notable definition of the representative skyline is the distance-based representative skyline (DBRS). Given an integer k, a DBRS consists of k representative skyline points chosen to minimize the maximal distance between a non-representative skyline point and its nearest representative. In 2D space, the state-of-the-art algorithm for computing the DBRS is based on dynamic programming (DP) and takes O(km²) time, where m is the number of skyline points. Clearly, such a DP-based algorithm cannot handle large-scale datasets due to its quadratic time cost. To overcome this problem, we propose a new approximate algorithm called ARS, and a new exact algorithm named PSRS, based on a carefully designed parametric search technique. We show that the ARS algorithm can guarantee a solution that is at most ε larger than the optimal solution. The proposed ARS and PSRS algorithms run in O(k log² m log(T/ε)) and O(k² log³ m) time respectively, where T is no more than the maximal distance between any two skyline points. We also propose an improved exact algorithm, called PSRS+, based on an effective lower- and upper-bounding technique. We conduct extensive experimental studies over both synthetic and real-world datasets, and the results demonstrate the efficiency and effectiveness of the proposed algorithms.
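For intuition about the DBRS objective, here is the classic greedy farthest-point heuristic for the underlying k-center-style problem; note that this is only a well-known 2-approximation sketch and not the paper's ARS/PSRS parametric-search algorithms.

```python
import numpy as np

def greedy_representatives(skyline, k):
    """Pick k skyline points so that the maximum distance from any
    skyline point to its nearest representative stays small
    (greedy farthest-point heuristic, a 2-approximation)."""
    pts = np.asarray(skyline, dtype=float)
    reps = [0]                                   # seed with an arbitrary point
    dist = np.linalg.norm(pts - pts[0], axis=1)  # distance to nearest rep
    for _ in range(k - 1):
        far = int(np.argmax(dist))               # farthest point becomes a rep
        reps.append(far)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[far], axis=1))
    return pts[reps], dist.max()                 # representatives + radius

skyline = np.array([[0, 9], [1, 7], [3, 5], [5, 3], [7, 2], [9, 0]])
reps, radius = greedy_representatives(skyline, k=3)
```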

12.
Time series classification arises in many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, including for multivariate time series classification. Among classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. Its deficiency is that, as the data set grows large, the time consumption of 1-NN with DTW becomes very expensive. In contrast, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from each individual univariate time series in its own channel, then combines information from all channels into a feature representation at the final layer. The learnt features are then fed into a multilayer perceptron (MLP) for classification. Extensive experiments on real-world data sets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for time series classification.
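A compact PyTorch sketch of the described architecture, with one convolutional branch per channel and concatenation before the MLP; the layer sizes and kernel widths are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class MCDCNN(nn.Module):
    """Multi-channels deep convolutional network sketch: a small 1-D
    conv stack per univariate channel, features concatenated at the
    final layer and fed to an MLP classifier."""

    def __init__(self, n_channels, series_len, n_classes):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(8, 4, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
                nn.Flatten(),
            )
            for _ in range(n_channels)
        ])
        feat = 4 * (((series_len - 4) // 2 - 4) // 2)  # per-branch feature size
        self.mlp = nn.Sequential(
            nn.Linear(n_channels * feat, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                      # x: (batch, channels, length)
        feats = [branch(x[:, i:i + 1, :])      # each branch sees one channel
                 for i, branch in enumerate(self.branches)]
        return self.mlp(torch.cat(feats, dim=1))

model = MCDCNN(n_channels=3, series_len=64, n_classes=5)
logits = model(torch.randn(2, 3, 64))          # -> shape (2, 5)
```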

13.
Efficient and effective processing of the distance-based join query (DJQ) is of great importance in spatial databases due to the wide range of applications that pose such queries (mapping, urban planning, transportation planning, resource management, etc.). The most representative and well-studied DJQs are the K Closest Pairs Query (KCPQ) and the ε Distance Join Query (εDJQ). These spatial queries involve two spatial data sets and a distance function measuring the degree of closeness, along with either a given number of pairs in the final result (K) or a distance threshold (ε). In this paper, we propose four new plane-sweep-based algorithms for KCPQs and their extensions for εDJQs in the context of spatial databases, without using an index on either of the two disk-resident data sets (since building and using indexes does not always benefit processing performance). They combine plane-sweep algorithms and space partitioning techniques to join the data sets. Finally, we present the results of an extensive experimental study comparing the efficiency and effectiveness of the proposed algorithms for KCPQs and εDJQs. This performance study, conducted on medium and big spatial data sets (real and synthetic), validates that the proposed plane-sweep-based algorithms are very promising in terms of both efficiency and effectiveness when neither input is indexed. Moreover, the best of the new algorithms is experimentally compared to the best algorithm based on the R-tree (a widely accepted access method) for KCPQs and εDJQs, using the same data sets. This comparison shows that the new algorithms outperform the R-tree-based algorithms in most cases.
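A simplified, index-free plane-sweep sketch for the KCPQ, using a textbook-style sweep with x-distance pruning; this is illustrative only, not one of the paper's four optimized variants.

```python
import heapq
import math

def kcpq_plane_sweep(P, Q, k):
    """Return the k closest pairs (by Euclidean distance) between
    2-D point sets P and Q, sweeping both sets by x-coordinate and
    pruning candidates by their x-distance alone."""
    P, Q = sorted(P), sorted(Q)        # sweep order: ascending x
    heap = []                          # max-heap via negated distances
    for p in P:
        for q in Q:
            dx = q[0] - p[0]
            if len(heap) == k:
                kth = -heap[0][0]      # current k-th smallest distance
                if dx > kth:
                    break              # later q's are even farther in x
                if -dx > kth:
                    continue           # q is too far to the left of p
            d = math.hypot(dx, q[1] - p[1])
            if len(heap) < k:
                heapq.heappush(heap, (-d, p, q))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, p, q))
    return sorted((-nd, p, q) for nd, p, q in heap)

pairs = kcpq_plane_sweep([(0, 0), (2, 3), (5, 1)], [(1, 1), (4, 4)], k=2)
```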

14.
Image forgery detection is a field that has attracted the attention of a significant number of researchers in recent years. The widespread popularity of imagery applications and the advent of powerful and inexpensive cameras are among the many reasons that have contributed to this upward spike in the reach of image manipulation. A considerable number of features, including numerous texture features, have been proposed by various researchers for identifying image forgery. However, detecting forgery in images using texture-based features has not been explored to its full potential; in particular, a thorough evaluation of texture features has not been conducted. In this paper, features based on image textures are extracted and combined in a specific way to detect the presence of image forgery. First, the input image is converted to the YCbCr color space to extract the chroma channels. Gabor Wavelets and Local Phase Quantization are subsequently applied to these channels to extract texture features at different scales and orientations. These features are then optimized using Non-negative Matrix Factorization (NMF) and fed to a Support Vector Machine (SVM) classifier. This method classifies images with accuracies of 99.33%, 96.3%, 97.6%, 85%, and 96.36% on the CASIA v2.0, CASIA v1.0, CUISDE, IFS-TC, and Unisa TIDE datasets respectively, showcasing its ability to identify image forgeries under varying conditions. On CASIA v2.0, the detection accuracy outperforms recent state-of-the-art methods, and on the other datasets it gives comparable performance with much-reduced feature dimensions.
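A rough sketch of the front half of such a pipeline, with Gabor responses on the chroma channels followed by NMF and an SVM; Local Phase Quantization is omitted, and the filter bank, component counts, and toy data below are illustrative assumptions.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.filters import gabor
from sklearn.decomposition import NMF
from sklearn.svm import SVC

def chroma_gabor_features(rgb_image, frequencies=(0.1, 0.25),
                          thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Gabor texture statistics from the Cb/Cr chroma channels
    (mean and std of the response magnitude per filter)."""
    ycbcr = rgb2ycbcr(rgb_image)
    feats = []
    for channel in (ycbcr[..., 1], ycbcr[..., 2]):     # Cb, Cr
        for f in frequencies:
            for t in thetas:
                real, imag = gabor(channel, frequency=f, theta=t)
                mag = np.hypot(real, imag)
                feats += [mag.mean(), mag.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
imgs = rng.random((12, 32, 32, 3))                 # toy "images"
X = np.array([chroma_gabor_features(im) for im in imgs])
y = rng.integers(0, 2, size=12)                    # 0 = authentic, 1 = forged
# The stats are non-negative, as NMF requires.
X_reduced = NMF(n_components=4, init="nndsvda", max_iter=500).fit_transform(X)
clf = SVC(kernel="rbf").fit(X_reduced, y)
```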

15.
Outlier detection algorithms are often computationally intensive because of their need to score each point in the data; even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms, such as subspace methods, are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm that can be implemented in a few lines of code, whose time complexity is linear in the size of the data set, and whose space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. We provide a visual representation of this interpretability in terms of outlier sensitivity histograms. Furthermore, the approach can be easily generalized to data streams, where it provides an efficient way to discover outliers in real time. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.
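In the spirit of the described method, here is a simplified randomized subspace-hashing scorer; the grid width, subspace size, and scoring details are assumptions for the sketch, and the paper's exact scheme may differ.

```python
import numpy as np
from collections import Counter

def subspace_hash_scores(X, n_trials=100, subspace_dim=2,
                         bin_width=1.0, seed=0):
    """Score points by randomized subspace hashing: repeatedly pick a
    random subspace, hash each point into a coarse grid cell, and
    average the log of the cell populations. Points that keep landing
    in sparse cells (low scores) are flagged as outliers."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_trials):
        dims = rng.choice(d, size=min(subspace_dim, d), replace=False)
        shift = rng.random(len(dims))               # random grid offset
        cells = np.floor(X[:, dims] / bin_width + shift).astype(int)
        keys = [tuple(row) for row in cells]
        counts = Counter(keys)
        scores += np.log2([counts[key] for key in keys])
    return scores / n_trials                        # low score => outlier

X = np.vstack([np.random.randn(200, 5), [[6] * 5]])  # one obvious outlier
print(subspace_hash_scores(X).argmin())              # likely 200
```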

16.
In this paper, we propose a content-based image retrieval scheme in the discrete cosine transform compressed domain with the help of a genetic algorithm (GA). A combination of three image features, i.e., color histogram, color moments, and edge histogram, is extracted directly from the compressed domain and used for similarity matching with the Euclidean distance. However, these image features are not all equally important for image retrieval. Before similarity matching, the GA is used to assign optimal weight factors (importance) to the image features to improve system performance. Extensive experiments are carried out on three publicly available databases, and the comparison results demonstrate that the proposed method outperforms state-of-the-art techniques. Moreover, the use of the GA improves precision (P) by 5.83%, recall (R) by 6.36%, and F-score by 5.84% over non-GA-based techniques.
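A stripped-down sketch of GA-optimized feature weighting for retrieval, with toy features, a deliberately simple fitness function, and basic GA operators; this is not the paper's GA configuration or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_distance(x, y, w):
    """Euclidean distance with per-feature weights."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def fitness(w, queries, database, relevant):
    """Fraction of queries whose nearest neighbor is relevant."""
    hits = 0
    for qi, q in enumerate(queries):
        d = [weighted_distance(q, item, w) for item in database]
        hits += int(np.argmin(d) in relevant[qi])
    return hits / len(queries)

def evolve_weights(queries, database, relevant, n_weights=3,
                   pop=20, generations=30, mutation=0.1):
    """Minimal GA: truncation selection, blend crossover,
    Gaussian mutation; returns normalized feature weights."""
    P = rng.random((pop, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(w, queries, database, relevant) for w in P])
        elite = P[np.argsort(scores)[-pop // 2:]]        # keep the best half
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        P = parents.mean(axis=1)                         # blend crossover
        P += rng.normal(0, mutation, P.shape)            # mutate
        P = np.clip(P, 1e-6, None)                       # keep weights positive
    best = P[np.argmax([fitness(w, queries, database, relevant) for w in P])]
    return best / best.sum()

db = rng.random((50, 3))                                 # toy feature vectors
qs = db[:5] + rng.normal(0, 0.01, (5, 3))                # noisy copies as queries
rel = {i: {i} for i in range(5)}                         # ground-truth matches
w = evolve_weights(qs, db, rel)
```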

17.
18.
In this paper, we propose a novel method for fast face recognition called L1/2-regularized sparse representation using hierarchical feature selection. By employing hierarchical feature selection, we can compress the scale and dimension of the global dictionary, which directly reduces the computational cost of the sparse representation that our approach is strongly rooted in. The hierarchy consists of Gabor wavelets and an extreme learning machine auto-encoder (ELM-AE). In the Gabor wavelet part, local features can be extracted at multiple scales and orientations to form a Gabor-feature-based image, which in turn improves the recognition rate. Moreover, in the presence of occluded face images, the scale of the Gabor-feature-based global dictionary can be compressed accordingly, because redundancies exist in the Gabor-feature-based occlusion dictionary. In the ELM-AE part, the dimension of the Gabor-feature-based global dictionary can be compressed, because high-dimensional face images can be rapidly represented by low-dimensional features. By introducing L1/2 regularization, our approach produces a sparser and more robust representation than L1-regularized sparse representation-based classification (SRC), which further reduces the computational cost of the sparse representation. In comparison with related work such as SRC and Gabor-feature-based SRC, experimental results on a variety of face databases demonstrate the great advantage of our method in computational cost. Moreover, we also achieve an approximately equal or even better recognition rate.

19.
Scaling skyline queries over high-dimensional datasets remains challenging because most existing algorithms assume dimensional independence when establishing their worst-case complexity, discarding the correlation distribution of the data. In this paper, we present HashSkyline, a systematic and correlation-aware approach for scaling skyline queries over high-dimensional datasets with three novel features. First, it offers a fast hash-based method that prunes non-skyline points by exploiting the correlation characteristics of the data, speeding up overall skyline evaluation for correlated datasets. Second, we develop HashSkyline-GPU, which dramatically reduces the response time for anti-correlated and independent datasets by capitalizing on the parallel processing power of GPUs. Third, HashSkyline uses a pivot-cell-based mechanism combined with a correlation threshold to determine the correlation distribution characteristics of a given dataset, enabling adaptive configuration of skyline query evaluation by auto-switching between HashSkyline-CPU and HashSkyline-GPU. We evaluate the validity of HashSkyline using both synthetic and real datasets. Our experiments show that HashSkyline incurs significantly less pre-processing cost and achieves significantly higher overall query performance than existing state-of-the-art algorithms.

20.
With the emergence of biometric-based authentication systems in real-world applications, template protection in biometrics has become a significant issue in recent years. This paper presents two feature set computation algorithms, namely the nearest neighbor feature set (NNFS) and the Delaunay triangle feature set (DTFS), for a fingerprint sample. Further, the match scores obtained from these algorithms are fused using the weighted sum rule and T-operators (T-norms and T-conorms). The experimental evaluation on the FVC 2002 databases confirms the effectiveness of the fusion method compared to each individual algorithm being fused. The EER obtained for the proposed method is 0%, 0.059%, and 3.93% for the FVC 2002 DB1, DB2, and DB3 databases, respectively. This paper also demonstrates the effectiveness of applying T-operators for score-level fusion in fingerprint template protection.
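A small sketch of score-level fusion with the weighted sum rule and a few common T-norms/T-conorms; the specific operators and toy scores below are illustrative, while the paper evaluates a broader family of T-operators.

```python
def fuse_scores(s1, s2, w=0.5):
    """Fuse two matchers' normalized match scores in [0, 1] using the
    weighted sum rule and classic T-norms / T-conorms."""
    return {
        "weighted_sum":     w * s1 + (1 - w) * s2,
        "t_norm_product":   s1 * s2,             # T-norms: conjunctive
        "t_norm_min":       min(s1, s2),
        "t_conorm_probsum": s1 + s2 - s1 * s2,   # T-conorms: disjunctive
        "t_conorm_max":     max(s1, s2),
    }

# Scores from the NNFS and DTFS matchers for one comparison (toy values).
print(fuse_scores(0.82, 0.64))
```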
