Similar Documents
20 similar documents found (search time: 20 ms)
1.
This paper proposes a novel framework of writer adaptation based on deeply learned features for online handwritten Chinese character recognition. Our motivation is to further boost the state-of-the-art deep learning-based recognizer by using writer adaptation techniques. First, to perform effective and flexible writer adaptation, we propose a tandem architecture for feature extraction and classification. Specifically, a deep neural network (DNN) or convolutional neural network (CNN) extracts the deeply learned features, which are used to build a discriminatively trained prototype-based classifier initialized by Linde–Buzo–Gray clustering. In this way, the feature extractor can fully utilize the information captured by the DNN or CNN, while the prototype-based classifier can be made compact and efficient as a practical solution. Second, writer adaptation is performed via a linear transformation of the deeply learned features, optimized with a sample-separation-margin-based minimum classification error criterion. Furthermore, we improve the generalization capability of the previously proposed discriminative linear regression approach for writer adaptation by linearly interpolating two transformations and perturbing the adaptation data. Experiments on both the CASIA-OLHWDB benchmark and an in-house corpus with a vocabulary of 20,936 characters demonstrate the effectiveness of the proposed approach.
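The adaptation step above maps each writer's deep features through a linear transformation interpolated between two learned matrices. The following is a minimal sketch of that interpolation idea only; the feature values, matrix shapes, and the `adapt` helper are illustrative assumptions (the paper trains the transforms with an MCE criterion, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deeply learned features for one writer: 5 samples, 8 dims.
feats = rng.normal(size=(5, 8))

# Two adaptation transforms: A_writer estimated from this writer's data,
# A_global from pooled data (identity-perturbed here purely for illustration).
A_writer = np.eye(8) + 0.1 * rng.normal(size=(8, 8))
A_global = np.eye(8)

def adapt(features, lam=0.5):
    """Linearly interpolate the two transforms, then map the features.

    lam near 1 trusts the writer-specific transform; lam near 0 falls back
    to the global one, which helps when adaptation data is scarce.
    """
    A = lam * A_writer + (1.0 - lam) * A_global
    return features @ A.T

adapted = adapt(feats, lam=0.7)
```

With `lam=0.0` the features pass through the (identity) global transform unchanged, which is the intended fallback behavior when no writer data is available.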

2.
Identifying discriminative features can effectively improve aerial scene classification. Deep convolutional neural networks (DCNN) are widely used in aerial scene classification because of their ability to learn discriminative features, which can be strengthened further by optimizing the training loss function and by transfer learning. To enhance the discriminative power of DCNN features, improved loss functions of the pretrained models combine a softmax loss with a centre loss. To further improve performance, in this article we propose hybrid DCNN features for aerial scene classification. First, we train DCNN models with joint loss functions, transferring from pretrained deep DCNN models. Second, dense DCNN features are extracted and the discriminative hybrid features are formed by linear concatenation. Finally, an ensemble extreme learning machine (EELM) classifier is adopted for classification owing to its generally strong performance and low computational cost. Experimental results on three public benchmark data sets demonstrate that the hybrid features obtained with the proposed approach and classified by the EELM classifier achieve remarkable performance.
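The extreme learning machine underlying the EELM classifier trains only the output layer, in closed form, on top of a fixed random hidden layer. A minimal single-ELM sketch follows; the synthetic 16-dimensional features standing in for the concatenated hybrid DCNN features, and all sizes, are assumptions for illustration (an ensemble would repeat this with several random hidden layers and vote).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the hybrid DCNN features: 40 samples, 16 dims, 3 classes.
X = rng.normal(size=(40, 16))
y = rng.integers(0, 3, size=40)
Y = np.eye(3)[y]                      # one-hot targets

# Extreme learning machine: random hidden layer, closed-form output weights.
W_in = rng.normal(size=(16, 32))      # fixed random input weights
b = rng.normal(size=32)               # fixed random biases
H = np.tanh(X @ W_in + b)             # hidden activations
beta = np.linalg.pinv(H) @ Y          # least-squares output weights

pred = np.argmax(H @ beta, axis=1)
train_acc = float(np.mean(pred == y))
```

The only "training" is the pseudo-inverse solve for `beta`, which is what gives the ELM its low computational cost.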

3.
In epileptic EEG signal classification, traditional machine learning methods perform poorly, while deep learning models, despite their strength in feature learning, operate as uninterpretable "black boxes" and are therefore ill-suited to clinical decision support. Moreover, existing multi-view deep TSK fuzzy systems struggle to represent the correlations among features from different views. To address these problems, we propose a view-to-rule deep Takagi-Sugeno-Kang fuzzy classifier (VR-TSK-FC) and apply it to multichannel epileptic EEG detection. The algorithm builds its antecedent rules on the raw data to preserve interpretability, and uses a one-dimensional convolutional neural network (1D-CNN) to extract deep features of the multichannel EEG signals from multiple views. The consequent part of each fuzzy rule takes the deep EEG features of one view as its consequent variables; this view-to-rule learning scheme improves the representation capability of VR-TSK-FC. On the Bonn and CHB-MIT datasets, VR-TSK-FC achieves strong classification performance while its fuzzy inference process remains interpretable.
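The view-to-rule idea pairs each fuzzy rule's consequent with one view's deep features. The sketch below shows generic first-order TSK inference with that pairing; the Gaussian memberships, dimensions, and random stand-ins for the 1D-CNN features are all illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=4)                               # antecedent input (raw data)
deep_feats = [rng.normal(size=6) for _ in range(3)]  # one deep-feature view per rule

centers = rng.normal(size=(3, 4))    # Gaussian membership centers, one per rule
sigma = 1.0
cons_w = rng.normal(size=(3, 6))     # consequent weights per rule

# Rule firing strengths from Gaussian memberships on the antecedent input.
fire = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * sigma ** 2))
fire = fire / fire.sum()             # normalized firing strengths

# Each rule's consequent is a linear function of "its" view's deep feature.
rule_out = np.array([cons_w[r] @ deep_feats[r] for r in range(3)])
output = float(fire @ rule_out)      # defuzzified (weighted-average) output
```

Keeping the antecedents on the raw input is what preserves the interpretable fuzzy reasoning, while the consequents carry the deep representation.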

4.
5.
Objective: Diabetic retinopathy (DR) is currently one of the leading causes of blindness, so automatic classification of diabetic retinopathy images has significant clinical value. Manual classification of retinal images suffers from the difficulty of extracting discriminative features, poor classification performance, high labor cost, and a lack of objective, consistent diagnoses. We therefore propose an automatic retinopathy image classification system based on a convolutional neural network and a deep classifier. Method: First, the images are preprocessed with denoising, data augmentation, and normalization, tailored to the characteristics of retinal images. Second, starting from AlexNet, a batch normalization layer is inserted before every convolutional and fully connected layer, yielding a deeper network called BNnet. BNnet serves as the feature extraction network: following a transfer learning strategy, it is pretrained on the ILSVRC2012 dataset and then fine-tuned on retinal images to extract deep features for retinal classification. Finally, the extracted features are fed into a deep classifier composed of fully connected layers, which sorts retinal images into five classes such as normal, mild, and moderate retinopathy. Results: Experiments show that the method reaches a classification accuracy of 0.93, outperforming direct training, with good robustness and generalization. Conclusion: The proposed framework avoids the limitations of hand-crafted feature extraction and conventional image classification, and also mitigates the overfitting caused by insufficient training data.
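The batch normalization layers inserted into BNnet apply a standard operation: normalize each feature over the batch, then apply a learnable scale and shift. A minimal training-mode sketch (the input values here are synthetic; `gamma` and `beta` would be learned in the real network):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis (training-mode statistics).

    gamma and beta are the learnable scale and shift; eps guards against
    division by zero for near-constant features.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(3)
acts = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # shifted, spread activations
normed = batch_norm(acts)
```

After normalization each feature has roughly zero mean and unit variance across the batch, which stabilizes the fine-tuning of the deeper BNnet.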

6.
Owing to the inherent lack of training data in visual tracking, recent work on deep learning-based trackers has focused on learning a generic representation offline from large-scale training data and transferring the pre-trained representation to the tracking task. Offline pre-training is time-consuming, and the learned generic representation may be either insufficiently discriminative for tracking specific objects or overfitted to typical tracking datasets. In this paper, we propose an online discriminative tracking method based on robust feature learning without large-scale pre-training. Specifically, we first design a PCA filter bank-based convolutional neural network (CNN) architecture to learn robust features online from a few positive and negative samples in the high-dimensional feature space. Then, we use a simple soft-thresholding method to produce sparse features that are more robust to target appearance variations. Moreover, we increase the reliability of our tracker using edge information generated from edge box proposals during visual tracking. Finally, effective tracking results are achieved by systematically combining the tracking information and edge box-based scores in a particle filtering framework. Extensive results on the widely used online tracking benchmark OTB-50, with 50 videos, validate the robustness and effectiveness of the proposed tracker without large-scale pre-training.
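The two building blocks named above, PCA filters learned from image patches and soft thresholding for sparsity, can be sketched compactly. The random patches below stand in for patches sampled from tracking frames, and the filter count and threshold are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for 7x7 patches sampled from frames, flattened to 49-dim vectors.
patches = rng.normal(size=(200, 49))
patches -= patches.mean(axis=0)                  # center before PCA

# PCA filter bank: leading eigenvectors of the patch covariance act as
# the first-layer convolution filters.
cov = patches.T @ patches / len(patches)
eigval, eigvec = np.linalg.eigh(cov)             # eigenvalues ascending
filters = eigvec[:, ::-1][:, :8]                 # top-8 principal filters

def soft_threshold(z, t=0.5):
    """Shrink small responses to zero -> sparse, appearance-robust features."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

responses = patches @ filters                    # filter responses per patch
sparse_feats = soft_threshold(responses)
```

Soft thresholding zeroes weak responses, so the surviving coefficients concentrate on the dominant patch structure rather than noise.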

7.
Breakthrough performances have been achieved in computer vision by deep neural networks. In this paper we propose to use a random forest to classify image representations obtained by concatenating multiple layers of learned features from deep convolutional neural networks for scene classification. Specifically, we first use deep convolutional neural networks pre-trained on the large-scale image database Places to extract features from scene images. Then, we concatenate multiple layers of features of the deep networks as image representations. After that, we use a random forest as the classifier for scene classification. Moreover, to reduce feature redundancy in the image representations, we derive a novel feature selection method for selecting features suited to random forest classification. Extensive experiments on two benchmark datasets, MIT-Indoor and UIUC-Sports, demonstrate the effectiveness of the proposed method. The contributions of the paper are as follows. First, by extracting multiple layers of deep neural networks, we can exploit more information about image content for determining categories. Second, we propose a novel feature selection method that reduces redundancy in deep features for random forest classification. Finally, since deep learning can augment expert systems by letting them essentially train themselves, and since the proposed framework is general and easily extended to other intelligent systems that use deep learning, the proposed method offers a potential way to improve the performance of other expert and intelligent systems.

8.
Named entity disambiguation (NED) is the task of linking mentions of ambiguous entities to their referenced entities in a knowledge base such as Wikipedia. We propose an approach that effectively disentangles the discriminative features by collaboratively exploiting collective wisdom (via human-labeled crowd labels) and deep learning (via human-generated data) for the NED task. In particular, we devise a crowd model to elicit the underlying features (crowd features) from crowd labels that indicate a matching candidate for each mention, and then use the crowd features to fine-tune a dynamic convolutional neural network (DCNN). The learned DCNN is employed to obtain deep crowd features that enhance traditional hand-crafted features for the NED task. The proposed method benefits substantially from incorporating crowd knowledge (via crowd labels) into generic deep learning for the NED task. Experimental analysis demonstrates that the proposed approach is superior to traditional hand-crafted features when enough crowd labels are gathered.

9.
Convolutional kernels strongly affect feature learning in convolutional neural networks (CNN). However, determining an appropriate kernel width remains challenging, and some features learned by convolutional layers are still redundant and noisy. Adaptive selection of kernel width and feature selection over feature maps are therefore key to improving the feature learning performance of CNNs. In this paper, a new deep neural network (DNN) model, the adaptive kernel sparse network (AKSNet), is proposed to extract multi-scale fault features from one-dimensional (1-D) vibration signals. First, an adaptive kernel selection method is developed in which multiple branches with different kernels extract multi-scale features from the vibration signals, and channel-wise attention fuses the features generated by these kernels across the informative scales. Second, spatial attention provides a dynamic receptive field that focuses on the salient regions of the feature maps. Third, a sparse regularization layer is embedded in the deep network to further filter noise and highlight the impact of the feature maps. Finally, two cases verify the effectiveness of AKSNet-based feature learning for bearing fault diagnosis. Experimental results show that AKSNet effectively extracts features from multi-channel vibration signals and thereby significantly improves the fault diagnosis performance of the classifier, with better recognition performance than shallow neural networks and other typical DNNs.
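The channel-wise attention that fuses the multi-kernel branches can be illustrated with a squeeze-and-excitation style gate, a common realization of channel attention; the exact form in AKSNet may differ, and the feature maps and weight matrices below are random stand-ins:

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation style channel weighting.

    feature_maps: (channels, length) 1-D feature maps from the kernel branches.
    w1, w2: bottleneck weights (would be learned in the real network).
    """
    squeeze = feature_maps.mean(axis=1)               # global average pool per channel
    hidden = np.maximum(w1 @ squeeze, 0.0)            # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate per channel
    return feature_maps * gates[:, None], gates

rng = np.random.default_rng(5)
maps = rng.normal(size=(6, 32))   # e.g. 6 channels from 3 kernel widths
w1 = rng.normal(size=(3, 6))      # reduce 6 channels to a 3-unit bottleneck
w2 = rng.normal(size=(6, 3))
fused, gates = channel_attention(maps, w1, w2)
```

Each branch's channels are rescaled by a learned gate in (0, 1), so informative kernel widths dominate the fused representation.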

10.
Chinese calligraphy draws much attention for its beauty and elegance, and the various styles of calligraphic characters make it even more charming. But it is not always easy to recognize the calligraphic style correctly, especially for beginners. In this paper, an automatic character style representation method for recognition is proposed. Three kinds of features are extracted to represent the calligraphic characters. Two of them are typical hand-designed features: the global GIST feature and the local scale-invariant feature transform (SIFT). The third is a deep feature extracted by a deep convolutional neural network (CNN). The state-of-the-art modified quadratic discriminant function (MQDF) classifier is employed for recognition. We evaluate our method on two calligraphic character datasets: the unconstrained real-world calligraphic character dataset (CCD) and SCL (the standard calligraphic character library). We also compare MQDF with two other classifiers, a support vector machine and a neural network. In our experiments, all three kinds of features are evaluated with all three classifiers, and the deep feature proves best for calligraphic style recognition. We also fine-tune the deep CNN (AlexNet) of Krizhevsky et al. (Advances in Neural Information Processing Systems, pp. 1097–1105, 2012) to perform calligraphic style recognition; our method achieves about the same accuracy as the fine-tuned AlexNet but with much less training time. Furthermore, a style discrimination evaluation is developed to assess style discriminability quantitatively.

11.
In some image classification tasks, similarities among categories vary, and samples are often misclassified as highly similar categories. To distinguish highly similar categories, more specific features are required so that the classifier can improve its performance. In this paper, we propose a simple and effective two-level hierarchical feature learning framework based on a deep convolutional neural network (CNN). First, deep feature extractors at different levels are trained by transfer learning, fine-tuning a pre-trained deep CNN model toward the new target dataset. Second, the general feature extracted from all categories and the specific feature extracted from highly similar categories are fused into a single feature vector, and the final representation is fed into a linear classifier. Experiments on the Caltech-256, Oxford Flower-102, and Tasmania Coral Point Count (CPC) datasets demonstrate that the deep features produced by two-level hierarchical feature learning are highly expressive, and our method effectively increases classification accuracy compared with flat multi-class classification methods.

12.
13.
This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs, each representing one character class, while an NN-based classifier calculates the posterior probabilities of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of the NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. In the training procedure, label refinement using forced alignment and sequence training yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to the DNN in the HMM framework. Moreover, to distinguish the highly confusable classes arising from the large vocabulary of Chinese characters, the NN-based classifier outputs 19,900 HMM states as classification units via high-resolution modeling within each character. On the ICDAR 2013 competition task of the CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by trading off computational complexity against recognition accuracy. To the best of our knowledge, DCNN-HMM achieves the best published CER of 3.53%.
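The character error rate (CER) reported above is the standard edit-distance metric: Levenshtein distance between recognized and reference character sequences, divided by the reference length. A minimal stdlib implementation (ordinary strings stand in for sequences of Chinese characters):

```python
def character_error_rate(ref, hyp):
    """CER = Levenshtein distance(ref, hyp) / len(ref)."""
    m, n = len(ref), len(hyp)
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[m][n] / m

cer = character_error_rate("handwriting", "handwritten")  # 3 substitutions / 11 chars
```

A CER of 5.24% therefore means roughly 5 edit operations per 100 reference characters.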

14.
In this paper, a visual object tracking method is proposed based on sparse 2-dimensional discrete cosine transform (2D DCT) coefficients as discriminative features. To select the discriminative DCT coefficients, we give two propositions, which select features based on the estimated mean of the feature distributions in each frame. Intermediate tracking instances are obtained by (a) computing feature similarity with a kernel, (b) finding the maximum classifier score of a ratio classifier, and (c) combining both. Another intermediate tracking instance is obtained by incremental subspace learning. The final tracked instance is selected from the intermediate instances by a discriminative linear classifier learned in each frame; the linear classifier is updated in each frame using some of the intermediate tracked instances. The proposed method outperforms state-of-the-art video trackers on a dataset of 50 challenging video sequences.
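Sparse 2D DCT features of the kind described can be sketched by transforming a patch and keeping only the largest-magnitude coefficients. Note the keep-top-k rule below is a simplified stand-in for the paper's mean-based selection propositions; the patch and `keep` value are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def sparse_2d_dct(patch, keep=16):
    """2-D DCT of a square patch, keeping the `keep` largest-magnitude
    coefficients as a sparse feature (others zeroed)."""
    n = patch.shape[0]
    D = dct_matrix(n)
    coeffs = D @ patch @ D.T          # separable 2-D DCT
    flat = coeffs.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:keep]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(coeffs.shape)

rng = np.random.default_rng(6)
patch = rng.normal(size=(8, 8))
feat = sparse_2d_dct(patch, keep=16)
```

Because the DCT basis is orthonormal, the kept coefficients capture the patch energy most compactly, which is what makes the sparse representation discriminative.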

15.
Choosing visual features is a critical step in object tracking. Most existing tracking approaches adopt handcrafted features, which depend heavily on prior knowledge and easily become invalid in other conditions where the scene structure differs. By contrast, we learn informative and discriminative features from image data of the tracking scenes themselves. Local receptive fields and weight sharing make the convolutional restricted Boltzmann machine (CRBM) well suited to natural images. The CRBM is applied to model the distribution of image patches sampled from the first frame, which shares the same properties as subsequent frames. Each hidden variable, corresponding to one local filter, can be viewed as a feature detector, and local connections to the hidden variables together with a max-pooling strategy make the extracted features invariant to shifts and distortions. A simple naive Bayes classifier separates the object from the background in the feature space. We demonstrate the effectiveness and robustness of our tracking method on several challenging video sequences; experimental results show that the features automatically learned by the CRBM are effective for object tracking.
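The final object/background decision above is a naive Bayes test in the learned feature space. A minimal Gaussian naive Bayes sketch follows; the synthetic 4-dimensional features standing in for CRBM activations (object near +1, background near -1) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for CRBM features: object vs. background samples.
obj = rng.normal(loc=1.0, scale=0.5, size=(50, 4))
bg = rng.normal(loc=-1.0, scale=0.5, size=(50, 4))

def fit_gaussian(x):
    """Per-dimension mean and variance (the 'naive' independence assumption)."""
    return x.mean(axis=0), x.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

mu_o, var_o = fit_gaussian(obj)
mu_b, var_b = fit_gaussian(bg)

def is_object(x):
    """Decide object vs. background by comparing class log-likelihoods."""
    return log_likelihood(x, mu_o, var_o) > log_likelihood(x, mu_b, var_b)
```

With equal priors the decision reduces to this likelihood comparison, which is cheap enough to run on every candidate patch per frame.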

16.
Pedestrian Detection Based on Deep Convolutional Neural Networks
Pedestrian detection has long been a hot topic in object detection research and applications. Current approaches typically design effective hand-crafted feature descriptors for pedestrians and then apply a binary classifier. Convolutional neural networks, a key component of deep learning, have been applied successfully in image, speech, and other domains. Since hand-designed features struggle to represent pedestrians in complex environments, we propose a multi-layer deep convolutional neural network for pedestrian detection. We systematically analyze how the number of layers, the convolution kernel size, and the feature dimensionality affect recognition, and optimize the network parameters accordingly. Experimental results show that the method achieves a high detection rate and outperforms traditional approaches.

17.
Video semantic analysis (VSA) has received significant attention in machine learning for some time, particularly in video surveillance applications using sparse representation (SR) and dictionary learning. Studies have shown that this combination significantly improves the classification performance of video detection analysis. In VSA, the locality structure of video semantic data, which carries the more discriminative information, is essential for classification; however, current SR-based approaches only modestly exploit this discriminative information, and they fail to produce similar codes for video features of the same category. To handle these issues, we first propose an improved deep learning algorithm, the locality deep convolutional neural network (LDCNN), to better extract salient features and obtain local information from semantic video. Second, we propose a novel dictionary learning method, deep locality-sensitive discriminative dictionary learning (DLSDDL), for VSA. In DLSDDL, a discriminant loss function for the video category, based on the sparse coefficients, is introduced into the locality-sensitive dictionary learning (LSDL) framework. After solving for the optimized dictionary, the sparse coefficients of the test video feature samples are obtained, and the video semantic classification result is produced by minimizing the error between the original and recreated samples. Experiments show that the proposed DLSDDL technique considerably increases the efficiency of video semantic detection compared with competing methods.
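Classification by reconstruction error, as in the last step above, assigns a sample to the class whose dictionary recreates it best. The sketch below uses a least-squares code in place of true sparse coding and random per-class dictionaries, both simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two hypothetical per-class dictionaries (atoms as columns) and a test
# sample generated from class 0's dictionary plus small noise.
D0 = rng.normal(size=(20, 5))
D1 = rng.normal(size=(20, 5))
x = D0 @ rng.normal(size=5) + 0.01 * rng.normal(size=20)

def residual(D, x):
    """Code the sample against a dictionary (least squares in place of
    sparse coding), then return the reconstruction error."""
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ code)

pred = 0 if residual(D0, x) < residual(D1, x) else 1
```

The discriminative training in DLSDDL pushes the dictionaries apart so that this residual gap widens for same-category samples.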

18.

Deep learning models have attained great success across an extensive range of computer vision applications, including image and video classification. However, the complex architecture of the most recently developed networks imposes memory and computational limitations, especially for human action recognition applications. Unsupervised deep convolutional neural networks such as PCANet can alleviate these limitations and hence significantly reduce the computational complexity of the whole recognition system. In this work, instead of using a 3D convolutional neural network architecture to learn temporal features of video actions, the unsupervised convolutional PCANet model is extended into PCANet-TOP, which learns spatiotemporal features from Three Orthogonal Planes (TOP). For each video sequence, spatial frames (XY) and temporal planes (XT and YT) are used to train three different PCANet models. The learned features are then fused, after dimensionality reduction with whitening PCA, into a spatiotemporal representation of the action video. Finally, a Support Vector Machine (SVM) classifier performs action classification. The proposed method is evaluated on four well-known benchmark datasets: Weizmann, KTH, UCF Sports, and YouTube action. The recognition results show that PCANet-TOP provides discriminative and complementary features from the three orthogonal planes and achieves promising results comparable with state-of-the-art methods.


19.
Hou Jianhua, Zhang Guoshuai, Xiang Jun. Acta Automatica Sinica, 2020, 46(12): 2690-2700
In recent years deep learning has driven breakthroughs in computer vision, yet deep learning-based video multiple object tracking (MOT) remains relatively under-explored, and a robust association model is the core of detection-based MOT methods. This paper proposes an association model based on deep neural networks and metric learning. The target appearance model combines convolutional neural networks (CNNs) with the metric learning techniques widely used in person re-identification (Re-ID): a triplet loss trains a three-branch convolutional network to extract more discriminative appearance features, from which an appearance similarity between targets is computed; this is combined with a motion model to obtain association probabilities between tracklets. For the association strategy, the Hungarian algorithm first links detections frame by frame into short, reliable tracklets, which are then associated hierarchically through an adaptive temporal sliding window to output the final trajectory of each target. Experiments on the public 2DMOT2015 and MOT16 datasets demonstrate the effectiveness of the method, which achieves tracking performance comparable or superior to several mainstream algorithms.

20.
Nowadays deep network architectures are widely used in machine learning. Deep belief networks (DBNs) use deep architectures to build a powerful generative model from training data, and can be used for classification and feature learning. A DBN can be trained unsupervised, after which the learned features suit a simple classifier (such as a linear classifier) with only a few labeled data. Moreover, studies have shown that introducing sparsity into DBNs yields useful low-level feature representations for unlabeled data: in a sparse representation, the learned features are interpretable, i.e., they correspond to meaningful aspects of the input and capture factors of variation in the data. Different methods have been proposed to build sparse DBNs. In this paper, we propose a new method whose behavior depends on the deviation of the hidden units' activations from a low fixed value; in addition, our regularization term has a variance parameter that controls the degree of sparseness. Our method achieves the best recognition accuracy on the test sets of datasets spanning different applications (image, speech, and text), and it performs strongly across different numbers of training samples, especially when only a few samples are available for training.
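A sparsity regularizer of the general kind described, penalizing the deviation of each hidden unit's mean activation from a low target, with a variance parameter controlling the penalty's strength, can be sketched as follows. This is a loose quadratic illustration, not the paper's exact regularization term:

```python
import numpy as np

def sparsity_penalty(hidden_probs, target=0.05, variance=0.01):
    """Penalize deviation of per-unit mean activation from a low target.

    hidden_probs: (samples, units) activation probabilities of hidden units.
    Smaller `variance` punishes deviations from `target` more sharply.
    """
    mean_act = hidden_probs.mean(axis=0)               # per-unit mean activation
    return float(np.sum((mean_act - target) ** 2) / (2 * variance))

rng = np.random.default_rng(9)
dense = rng.uniform(0.4, 0.6, size=(100, 20))    # units active about half the time
sparse = rng.uniform(0.0, 0.1, size=(100, 20))   # rarely active units

p_dense = sparsity_penalty(dense)
p_sparse = sparsity_penalty(sparse)
```

Adding such a term to the DBN's training objective drives hidden units toward rare activation, which is what produces the interpretable sparse features.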
