Similar Documents
20 similar documents found
1.
The human visual system can often learn to recognize difficult object categories from only a single view, whereas automatic object recognition with few training examples remains a challenging task. This is mainly due to the human ability to transfer knowledge from related classes. Therefore, an extension to Randomized Decision Trees is introduced for learning with very few examples by exploiting interclass relationships. The approach consists of a maximum a posteriori estimation of classifier parameters using a prior distribution learned from similar object categories. Experiments on binary and multiclass classification tasks show significant performance gains.
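A minimal sketch of the MAP estimate described above, assuming a Dirichlet prior whose pseudo-counts come from related categories; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def map_leaf_distribution(leaf_counts, prior_counts, alpha=1.0):
    """MAP estimate of a leaf's class distribution in a decision tree.

    leaf_counts  : observed training counts per class at this leaf (few examples)
    prior_counts : pseudo-counts per class, derived from related categories
    alpha        : strength of the transferred prior (an assumed hyperparameter)
    """
    posterior = leaf_counts + alpha * prior_counts  # Dirichlet-multinomial conjugacy
    return posterior / posterior.sum()

# Toy example: one training example of class 0, prior learned from similar classes.
counts = np.array([1.0, 0.0, 0.0])
prior = np.array([0.5, 0.3, 0.2])
print(map_leaf_distribution(counts, prior))
```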

2.
Images are naturally characterized by the objects that constitute them. In this paper, we propose to represent images by the objects appearing in them. We introduce the novel concept of the object bank (OB), a high-level image representation encoding object appearance and spatial location information. OB represents an image by its responses to a large number of pre-trained object detectors, or 'object filters', blind to the test dataset and visual recognition task. The OB representation demonstrates promising potential in high-level image recognition tasks: it significantly outperforms traditional low-level image representations in image classification on various benchmark datasets using simple, off-the-shelf classification algorithms such as linear SVM and logistic regression. We analyze OB in detail, explaining the design choices that achieve its best potential on different types of datasets. We demonstrate that the object bank is a high-level representation from which semantic information about unknown images can easily be discovered, and we provide guidelines for effectively applying OB to high-level image recognition tasks, where it can be compressed for efficient computation in practice and is robust across classifiers.
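To make the representation concrete, here is a hedged sketch of an object-bank-style feature: detector response maps are max-pooled over a spatial pyramid and fed to a linear SVM. The random "responses" stand in for real object-filter outputs, and all names are illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

def object_bank_feature(response_maps, levels=(1, 2)):
    """Concatenate max-pooled detector responses over a spatial pyramid.

    response_maps: array (n_detectors, H, W) of object-filter responses.
    """
    feats = []
    for rmap in response_maps:
        H, W = rmap.shape
        for L in levels:                          # split the map into an L x L grid
            for i in range(L):
                for j in range(L):
                    cell = rmap[i*H//L:(i+1)*H//L, j*W//L:(j+1)*W//L]
                    feats.append(cell.max())      # strongest response in each cell
    return np.array(feats)

# Toy usage with random "responses" for 5 detectors on 20 images, 2 classes.
rng = np.random.default_rng(0)
X = np.stack([object_bank_feature(rng.random((5, 32, 32))) for _ in range(20)])
y = rng.integers(0, 2, 20)
print(LinearSVC().fit(X, y).score(X, y))
```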

3.
The explosion of the Internet provides us with a tremendous resource of images shared online. It also confronts vision researchers with the problem of finding effective methods to navigate this vast amount of visual information. Semantic image understanding plays a vital role in solving this problem. One important task in image understanding is object recognition, in particular generic object categorization. Critical to this problem are the issues of learning and datasets: abundant data helps to train a robust recognition system, while a good object classifier can help to collect a large number of images. This paper presents a novel object recognition algorithm, OPTIMOL, that performs automatic dataset collection and incremental model learning simultaneously. The goal of this work is to use the tremendous resources of the web to learn robust object category models for detecting and searching for objects in real-world cluttered scenes. Humans continuously update their knowledge of objects when new examples are observed; our framework emulates this learning process by iteratively accumulating model knowledge and image examples. We adapt a non-parametric latent topic model and propose an incremental learning framework. Our algorithm automatically collects much larger object category datasets for 22 randomly selected classes from the Caltech 101 dataset, and it offers not only more images in each object category but also a robust object category model and meaningful image annotation. Our experiments show that OPTIMOL collects image datasets that are superior to the well-known manually collected object datasets Caltech 101 and LabelMe.
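A hedged sketch of the incremental collect-and-learn loop: a logistic regression stands in for the paper's non-parametric latent topic model, and the acceptance threshold and round count are assumed parameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def incremental_collect(seed_X, seed_y, pool_X, rounds=5, accept_thresh=0.95):
    """Iteratively grow a dataset: train, label the web pool, keep confident hits."""
    X, y = seed_X.copy(), seed_y.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        if len(pool_X) == 0:
            break
        probs = clf.predict_proba(pool_X)[:, 1]     # P(object | image features)
        keep = probs >= accept_thresh               # accept only confident positives
        X = np.vstack([X, pool_X[keep]])
        y = np.concatenate([y, np.ones(keep.sum(), dtype=int)])
        pool_X = pool_X[~keep]                      # remaining unlabeled pool
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf, X, y

# Toy usage: 20 seed examples, a web pool drawn mostly from the positive class.
rng = np.random.default_rng(1)
seed_X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
seed_y = np.array([0] * 10 + [1] * 10)
clf, X, y = incremental_collect(seed_X, seed_y, rng.normal(3, 1, (50, 5)))
print(len(y), "examples collected")
```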

4.
This paper addresses the problem of image-based event recognition by transferring deep representations learned from object and scene datasets. First, we empirically investigate the correlation of the concepts of object, scene, and event, motivating our representation transfer methods. Based on this empirical study, we propose an iterative selection method to identify the subset of object and scene classes most relevant for representation transfer. We then develop three transfer techniques: (1) initialization-based transfer, (2) knowledge-based transfer, and (3) data-based transfer. These transfer techniques exploit multitask learning frameworks to incorporate extra knowledge from other networks or additional datasets into the fine-tuning procedure of event CNNs, and they prove effective in reducing over-fitting and improving the generalization ability of the learned CNNs. We perform experiments on four event recognition benchmarks: the ChaLearn LAP Cultural Event Recognition dataset, the Web Image Dataset for Event Recognition, the UIUC Sports Event dataset, and the Photo Event Collection dataset. The experimental results show that the proposed algorithm successfully transfers object and scene representations to the event datasets and achieves state-of-the-art performance on all considered benchmarks.
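As an illustration of initialization-based, multitask transfer (one of the three techniques), the sketch below fine-tunes an ImageNet-initialized backbone with an auxiliary scene head. The class counts and loss weight `lam` are assumptions, and this is not the authors' exact architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

# Shared backbone initialized from an ImageNet-pretrained network (downloads
# weights on first use); two task heads let an auxiliary scene task regularize
# the event head in a multitask fine-tuning setup.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()                  # expose the 512-d features

event_head = nn.Linear(feat_dim, 50)         # e.g. 50 event classes (illustrative)
aux_head = nn.Linear(feat_dim, 365)          # e.g. 365 scene classes (illustrative)

def multitask_loss(images, event_labels, scene_labels, lam=0.3):
    feats = backbone(images)
    loss_event = nn.functional.cross_entropy(event_head(feats), event_labels)
    loss_aux = nn.functional.cross_entropy(aux_head(feats), scene_labels)
    return loss_event + lam * loss_aux       # lam trades off the auxiliary knowledge
```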

5.
Objective: With the development of 3D scanning and virtual reality technology, 3D recognition of real objects has become a research hotspot. To address the long training times and unsatisfactory recognition of existing deep-learning-based methods, a 3D object recognition method combining a perceptron residual network with an extreme learning machine (ELM) is proposed. Method: Within the ELM framework, a multilayer-perceptron residual network learns multi-view projection features of 3D objects, and the extracted features together with the known labels are used to train ELM, k-nearest-neighbor (KNN), and support vector machine (SVM) classification layers for 3D object recognition. The network replaces conventional convolutional layers with convolutional layers augmented by multilayer perceptrons; the convolutional part consists of improved residual units containing multiple parallel residual channels with a constant number of kernels, which fit residual functions of different mathematical forms. Half of the convolution kernel and perceptron parameters are drawn randomly from a Gaussian distribution, and the rest are obtained by training. Results: The proposed method achieves 94.18% accuracy on the Princeton 3D model dataset and 97.46% on the 2D NORB dataset, the best results reported on both international benchmarks, while the ELM framework reduces training time by three orders of magnitude compared with deep-learning-based methods. Conclusion: Experiments show that the proposed multi-view 3D object recognition method attains higher recognition rates and stronger robustness to interference than existing ELM and recent deep learning methods, with few tunable parameters and fast convergence.
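A minimal extreme learning machine illustrating why training is fast: the hidden layer is random and only the output weights are solved in closed form. This omits the paper's residual multi-view feature extractor, and all names are illustrative:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, closed-form output."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot, reg=1e-3):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # hidden activations
        # Ridge-regularized least squares for the output weights (the only training)
        self.beta = np.linalg.solve(H.T @ H + reg * np.eye(self.n_hidden),
                                    H.T @ y_onehot)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```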

6.
Objective: Satellite images often contain complex targets and backgrounds as well as noise, which makes classification with hand-crafted features very difficult. A new satellite image classification scheme based on convolutional neural networks (CNNs) is proposed: CNNs extract high-level features of satellite images and thereby improve classification accuracy. Method: First, a new six-class satellite image dataset is proposed to address the shortage of labeled training samples for CNNs. Then one directly trained CNN model and three pre-trained CNN models are used for satellite image classification. The directly trained model is trained on the proposed dataset; the pre-trained models are first trained on the ILSVRC-2012 (ImageNet Large Scale Visual Recognition Challenge) dataset and then fine-tuned on the proposed satellite dataset, after which the fine-tuned models are used for classification. Results: The fine-tuned deep pre-trained CNN achieves the highest classification accuracy: 99.50% on the proposed dataset and 96.44% on the UC Merced Land Use dataset. Conclusion: The proposed dataset is general and representative, and the deep CNN used is an end-to-end classifier with strong feature extraction and classification ability that requires no additional stacked models or classifiers, yielding more convincing results than the compared models on high-resolution satellite image classification.
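A hedged sketch of the fine-tuning procedure in PyTorch/torchvision (an assumed toolchain; the paper does not specify one): load ILSVRC-2012 weights, replace the head with a 6-way classifier for the proposed satellite classes, and train only the new layer:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet (ILSVRC-2012) weights, replace the classifier head with a
# 6-way layer, and optimize only the new head; unfreezing more layers would
# give full fine-tuning.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 6)     # new head, trained from scratch
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```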

7.

Visible-spectrum face recognition systems are subject to failure when recognizing faces in unconstrained scenarios, so recognizing faces under variable and low illumination is important, since most security breaches happen at night. The near-infrared (NIR) spectrum enables the acquisition of high-quality images even without an external light source and is therefore a good way to address the illumination problem. Gender classification (a soft biometric trait) and facial expression recognition (a form of non-verbal communication) have also been addressed in the NIR spectrum. In this paper, a method is proposed for face recognition along with gender classification and facial expression recognition in the NIR spectrum. The proposed method is based on transfer learning and consists of three core components: i) training with small-scale NIR images, ii) matching NIR-NIR (homogeneous) images, and iii) classification. Training on NIR images produces features via transfer learning from a network pre-trained on large-scale VIS face images. Matching is then performed between the NIR images of training and testing faces, followed by classification with three separate SVM classifiers: one for face recognition, one for gender classification, and one for facial expression recognition. The method gives state-of-the-art accuracy on the publicly available, challenging benchmark datasets CASIA NIR-VIS 2.0, Oulu-CASIA NIR-VIS, PolyU, CBSR, IIT Kh, and HITSZ for face recognition. For gender classification, the Oulu-CASIA NIR-VIS, PolyU, and IIT Kh datasets were analyzed, and for facial expression recognition, the Oulu-CASIA NIR-VIS dataset.
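A sketch of the final classification stage, assuming one feature vector per NIR image from the transferred network; the linear kernel and function names are assumptions:

```python
from sklearn.svm import SVC

def train_three_svms(features, id_labels, gender_labels, expr_labels):
    """Three independent SVMs over the same transferred NIR features."""
    svm_id = SVC(kernel="linear").fit(features, id_labels)          # face identity
    svm_gender = SVC(kernel="linear").fit(features, gender_labels)  # gender
    svm_expr = SVC(kernel="linear").fit(features, expr_labels)      # expression
    return svm_id, svm_gender, svm_expr
```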


8.
A U-Net-based semantic segmentation method for high-resolution remote sensing images
Image segmentation is a fundamental step in remote sensing interpretation. High-resolution remote sensing images contain complex ground-object information that severely limits traditional segmentation methods, whereas segmentation methods based on deep convolutional neural networks have achieved breakthroughs in many fields. For high-resolution remote sensing image segmentation, an improved deep convolutional neural network based on U-Net is proposed that performs end-to-end pixel-level semantic segmentation. The original dataset is augmented, a binary classification model is trained for each class of ground object, and the predicted sub-maps are then combined into the final semantic segmentation image. An ensemble learning strategy is adopted to improve segmentation accuracy, achieving 94% training accuracy and 90% test accuracy on the dataset of the "CCF satellite imagery AI classification and recognition competition". The experimental results show that the network attains high segmentation accuracy together with good generalization ability and can be used in practical engineering.
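One possible reading of the step that combines the per-class binary models, assuming each binary U-Net outputs a foreground probability map; the threshold and tie-breaking rule are assumptions:

```python
import numpy as np

def combine_binary_masks(prob_maps, threshold=0.5):
    """Merge per-class binary segmentation outputs into one label map.

    prob_maps: array (n_classes, H, W) of per-class foreground probabilities,
               one map from each binary U-Net. Pixels below threshold for every
               class are labeled 0 (background); otherwise the most confident
               class (1-indexed) wins.
    """
    best = prob_maps.argmax(axis=0) + 1           # most confident class per pixel
    confident = prob_maps.max(axis=0) >= threshold
    return np.where(confident, best, 0)
```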

9.
The main objective of this paper is to investigate the use of the Quality Threshold ARTMAP (QTAM) neural network in classifying feature vectors generated by moment invariants for the insect recognition task. Six types of moment invariant technique are adopted to extract shape features from insect images: Geometric Moment Invariants (GMI), United Moment Invariants (UMI), Zernike Moment Invariants (ZMI), Legendre Moment Invariants (LMI), Tchebichef Moment Invariants (TMI), and Krawtchouk Moment Invariants (KMI). All moment techniques are analyzed via intraclass and interclass analysis. In the intraclass analysis, several computation methods examine the invariance properties of each moment technique on the same insect object; in the interclass analysis, neural network classification accuracy measures the effectiveness of each moment technique in extracting the shape features of insect images. Other neural networks are also utilized in this work, including a novel enhancement technique based on Gaussian and Mahalanobis functions designed to increase prediction accuracy; all other networks used to classify the feature vectors are based on the Fuzzy ARTMAP (FAM) neural network. The experimental results indicate that the Krawtchouk Moment Invariant technique yields the highest classification accuracy for most networks and the smallest error in the intraclass analysis. With a suitable normalization technique, the Quality Threshold ARTMAP with Mahalanobis distance function (QTAM-m) network gives the highest insect recognition results compared to the other networks.
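For concreteness, a self-contained sketch of the simplest family above, geometric (Hu-style) moment invariants; the other moment types follow the same pattern with different polynomial bases, and the function name is illustrative:

```python
import numpy as np

def hu_like_invariants(img):
    """First two Hu geometric moment invariants of a grayscale image array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                                  # central moment
        return ((x - xc)**p * (y - yc)**q * img).sum()
    def eta(p, q):                                 # scale-normalized moment
        return mu(p, q) / m00**(1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)                   # rotation-invariant combinations
    phi2 = (eta(2, 0) - eta(0, 2))**2 + 4 * eta(1, 1)**2
    return phi1, phi2
```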

10.
In this paper we investigate the fine-grained object categorization problem of determining fish species in low-quality visual data (images and videos) recorded in real-life settings. We first describe a new annotated dataset of about 35,000 fish images (the MA-35K dataset), derived from the Fish4Knowledge project and covering 10 fish species from the Eastern Indo-Pacific bio-geographic zone. We then resort to a label propagation method that transfers the labels from MA-35K to a set of 20 million fish images in order to achieve variability in fish appearance. The resulting annotated dataset, containing over one million annotations (AA-1M), was then manually checked by removing false positives as well as images with occlusions between fish or showing fish only partially. Finally, we randomly picked more than 30,000 fish images distributed among ten fish species, extracted from about 400 ten-minute videos, and used this data (both images and videos) for the fish task of the LifeCLEF 2014 contest. Together with the fine-grained visual dataset release, we also present two approaches for fish species classification in, respectively, still images and videos. Both approaches showed high performance in object classification (for some fish species the precision and recall were close to one) and outperformed state-of-the-art methods. In addition, despite the dataset being unbalanced in the number of images per species, both methods (especially the one operating on still images) appear to be rather robust against the long-tail curse of data, showing the best performance on the less populated object classes.

11.
Building information modeling (BIM) has a semantic scope that encompasses all building systems, e.g. architectural, structural, mechanical, electrical, and plumbing. Automated, comprehensive digital modeling of buildings will require methods for semantic segmentation of images and 3D reconstructions capable of recognizing all building component classes. However, prior building component recognition methods have had limited semantic coverage and are not easily combined or scaled. Here we show that a deep neural network can semantically segment RGB-D (i.e. color and depth) images into 13 building component classes simultaneously despite the use of a small training dataset with only 1490 object instances. For this task, the method achieves an average intersection over union (IoU) of 0.5. The dataset was designed using a common building taxonomy to ensure comprehensive semantic coverage and was collected from a diversity of buildings to ensure intra-class diversity. As a consequence of its semantic scope, it was necessary to perform pre-segmentation and 3D to 2D projection as leverage for dataset annotation. In creating our deep learning pipeline, we found that transfer learning, class balancing, and prevention of overfitting effectively overcame the dataset’s borderline adequate class representation. Our results demonstrate how the semantic coverage of a building component recognition method can be scaled to include a larger diversity of building systems. We anticipate our method to be a starting point for broadening the scope of the semantic segmentation methods involved in digital modeling of buildings.
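A minimal sketch of the reported evaluation metric, average intersection over union across the 13 component classes (the skip rule for absent classes is a common convention, assumed here):

```python
import numpy as np

def mean_iou(pred, target, n_classes=13):
    """Average intersection-over-union between two integer label maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```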

12.
The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to the present, the challenge and its associated dataset have become accepted as the benchmark for object detection.

13.
Accurate Object Recognition with Shape Masks
In this paper we propose an object recognition approach based on shape masks, generalizations of segmentation masks. As shape masks carry information about the extent (outline) of objects, they provide a convenient tool for exploiting object geometry. We apply our ideas to two common object class recognition tasks: classification and localization. For classification, we extend the orderless bag-of-features image representation; in the proposed setup, shape masks act as weak geometric constraints over bag-of-features, which can reduce background clutter and help recognition. For localization, we propose a new recognition scheme based on high-dimensional hypothesis clustering. Shape masks allow us to go beyond bounding boxes and determine the outline (approximate segmentation) of an object during localization. Furthermore, the method easily learns and detects possible object viewpoints and articulations, which are often well characterized by the object outline. Our experiments reveal that shape masks can improve the recognition accuracy of state-of-the-art methods while returning richer recognition answers at the same time. We evaluate the proposed approach on the challenging natural-scene Graz-02 object classes dataset.
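A hedged sketch of shape masks as weak geometric constraints over bag-of-features: each local feature's vote is weighted by the mask value at its location. The soft-weighting choice and all names are illustrative:

```python
import numpy as np

def masked_bag_of_features(keypoints, assignments, mask, n_words=100):
    """Bag-of-features histogram weighted by a shape mask.

    keypoints  : (N, 2) integer (row, col) positions of local features
    assignments: (N,) visual-word index of each descriptor
    mask       : (H, W) shape mask in [0, 1]; background features get low weight
    """
    hist = np.zeros(n_words)
    for (r, c), w in zip(keypoints, assignments):
        hist[w] += mask[r, c]               # soft geometric constraint
    s = hist.sum()
    return hist / s if s > 0 else hist
```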

14.

Automated identification of insects is a tough task in which challenges such as limited data, imbalanced class counts, and background noise need to be overcome for good performance. This paper describes an image dataset consisting of a limited, imbalanced number of images of six genera of the subfamily Cicindelinae (tiger beetles) of the order Coleoptera. The image collection is highly diverse, as the images were taken from different sources, at different angles, and at different scales, so the salient regions of the images vary widely. One of the main intentions was therefore to characterize the dataset by comparing the unique patterns and features in the images. The dataset was evaluated on different classification algorithms, including deep learning models based on different approaches, to provide a benchmark. The dynamic nature of the dataset poses a challenge to image classification algorithms; however, transfer learning models with a softmax classifier performed well. Tiger beetle classification can be challenging even to a trained human eye, so this dataset opens a new avenue for classification algorithms to identify features that human eyes have not.
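One common remedy for the imbalanced genus counts described above is to weight the softmax cross-entropy inversely to class frequency; this is an assumption about a plausible setup, not necessarily the authors' recipe (PyTorch, toy counts):

```python
import numpy as np
import torch
import torch.nn as nn

# Toy per-genus counts illustrating the imbalance (not the dataset's real counts).
labels = np.array([0]*300 + [1]*120 + [2]*80 + [3]*40 + [4]*30 + [5]*10)
counts = np.bincount(labels, minlength=6)
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
```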


15.
This article presents a system for texture-based probabilistic classification and localisation of three-dimensional objects in two-dimensional digital images and discusses selected applications. In contrast to shape-based approaches, our texture-based method does not rely on object features extracted using image segmentation techniques. Rather, the objects are described by local feature vectors computed directly from image pixel values using the wavelet transform. Both gray level and colour images can be processed. In the training phase, object features are statistically modelled as normal density functions. In the recognition phase, the system classifies and localises objects in scenes with real heterogeneous backgrounds. Feature vectors are calculated and a maximisation algorithm compares the learned density functions with the extracted feature vectors and yields the classes and poses of objects found in the scene. Experiments carried out on a real dataset of over 40,000 images demonstrate the robustness of the system in terms of classification and localisation accuracy. Finally, two important real application scenarios are discussed, namely recognising museum exhibits from visitors’ own photographs and classification of metallography images.
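A hedged sketch of the pipeline: block-level wavelet statistics as local features (via PyWavelets), a normal density fitted per class in training, and log-likelihood scoring at recognition time. Block pooling is a simplification of the paper's pixel-level features, and all names are illustrative:

```python
import numpy as np
import pywt  # PyWavelets

def local_wavelet_features(img, block=8):
    """Per-block statistics of the four Haar subbands as local feature vectors."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    feats = []
    for band in (cA, cH, cV, cD):
        h, w = band.shape
        blocks = band[:h//block*block, :w//block*block] \
                     .reshape(h//block, block, w//block, block)
        feats.append(blocks.mean(axis=(1, 3)).ravel())  # mean response per block
    return np.stack(feats, axis=1)                      # (n_blocks, 4)

def fit_normal(feats):
    """Training phase: statistically model features as one normal density."""
    return feats.mean(axis=0), np.cov(feats.T) + 1e-6 * np.eye(feats.shape[1])

def log_likelihood(feats, mean, cov):
    """Recognition phase: score features under a learned density (maximise over classes)."""
    d = feats - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * ((d @ inv * d).sum(axis=1)
                   + logdet + len(mean) * np.log(2 * np.pi)).sum()
```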

16.
Objective: Fine-grained vehicle model recognition aims to identify the manufacturer, brand model, and year of a vehicle from appearance images taken at arbitrary angles and in arbitrary scenes, which is important in intelligent transportation, security, and related fields. Mainstream methods have moved from hand-crafted feature extraction to deep learning represented by convolutional neural networks, but shortcomings remain: the vehicle's exact location must be specified at recognition time, and the methods cannot fully exploit the fact that the visual differences among fine-grained categories concentrate in a few key local parts. To address these issues, a fine-grained recognition method based on region proposal networks is proposed and successfully applied to vehicle model recognition. Method: A region proposal network is a fully convolutional network. The method first extracts deep convolutional features with a CNN, then slides a window over the feature map to generate region candidates, whose features pass through classification and regression layers to obtain objectness probabilities and locations. The candidates are then fed to an object detection network to obtain their specific categories and precise locations, and a non-maximum suppression algorithm yields the final recognition result. Results: The method achieves 76.38% accuracy on the Stanford BMW-10 dataset and 91.48% on the Stanford Cars-196 dataset, far ahead of traditional hand-crafted-feature methods and on par with the current best methods, and it also performs well in real natural scenes. Conclusion: The region proposal network provides not only object locations for detection but also discriminative local regions, offering a new approach to fine-grained recognition. The method removes the dependence of traditional recognition on given object locations and supports fine-grained vehicle recognition in complex scenes such as multiple vehicles in one image, with better robustness and practicality.
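The final non-maximum suppression step is standard; a self-contained version for axis-aligned boxes (function name and threshold are illustrative):

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over region candidates.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) objectness/class scores.
    """
    order = scores.argsort()[::-1]                   # best-scoring candidate first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]         # drop candidates overlapping the winner
    return keep
```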

17.
Zero-shot learning is an important research topic in machine learning and image recognition. Zero-shot methods typically exploit class-level semantic information shared between unseen and seen classes to transfer knowledge learned from seen-class samples to unseen classes, enabling the classification of unseen-class samples. This paper proposes a zero-shot learning method based on compositional construction of visual features: a large number of unseen-class exemplar features are constructed by composing features, which turns the zero-shot problem into a standard supervised classification problem. The method mimics the human process of associative cognition and comprises four steps: feature-attribute relation extraction, exemplar construction, exemplar filtering, and feature domain adaptation. Correspondences between class attributes and feature dimensions are extracted from seen-class samples; unseen-class exemplars are generated by composing visual features according to these feature-attribute relations; dissimilarity representations are introduced to filter out implausible exemplars; and semi-supervised and unsupervised feature domain adaptation are proposed to linearly transform the exemplars and produce more effective ones. Experiments on three benchmark datasets (AwA, AwA2, and SUN) show superior performance, including a state-of-the-art top-1 accuracy of 82.6% on AwA, demonstrating the effectiveness and advancement of the method.
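A loose, heavily simplified reading of the exemplar construction step: each feature dimension of an unseen-class exemplar is borrowed from seen classes whose attribute profiles match the unseen class. This is an interpretation for illustration only, not the paper's algorithm:

```python
import numpy as np

def compose_unseen_exemplars(seen_feats, seen_attrs, unseen_attr, n_samples=100,
                             seed=0):
    """Construct unseen-class exemplars by recombining seen-class features.

    seen_feats : dict class -> (n_i, d) feature matrix
    seen_attrs : dict class -> (a,) binary attribute vector
    unseen_attr: (a,) attribute vector of the unseen class
    """
    rng = np.random.default_rng(seed)
    classes = list(seen_feats)
    # Prefer donor classes whose attributes best match the unseen class.
    sims = np.array([(seen_attrs[c] == unseen_attr).mean() for c in classes])
    probs = (sims + 1e-9) / (sims + 1e-9).sum()
    d = next(iter(seen_feats.values())).shape[1]
    out = np.empty((n_samples, d))
    for s in range(n_samples):
        for j in range(d):                       # compose dimension-by-dimension
            c = classes[rng.choice(len(classes), p=probs)]
            out[s, j] = rng.choice(seen_feats[c][:, j])
    return out
```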

18.
Objective: Current large datasets such as ImageNet, together with mainstream network models such as ResNet, can be applied directly and efficiently to classification in normal scenes, but they suffer large accuracy losses in foggy scenes. Foggy scenes are complex and diverse, and annotating large amounts of foggy data is costly, so under current conditions it is crucial to efficiently exploit the abundant labeled data and network models available for existing scenes to accomplish classification in foggy scenes. Method: A low-cost data augmentation method is used that effectively reduces the pixel-domain differences between images. Based on the ideas of feature diversity and feature adversarial learning, a multi-scale feature multi-adversarial network is proposed: it extracts multi-scale features to make the feature-domain distribution more representative, and applies an adversarial mechanism over multiple features to reduce distribution differences in the feature domain. Shrinking the distribution differences in both the pixel and feature domains further reduces the domain shift and improves foggy-scene classification accuracy. Results: In ablation experiments on real, diverse foggy-scene data, the pixel-domain augmentation makes labeled clear images stylistically closer to foggy images and raises overall classification accuracy by 8.2%, at least 6.3% more than other augmentation methods; the multi-scale feature multi-adversarial network improves accuracy by at least 8.0% over other networks. Conclusion: Combining pixel-domain data augmentation with the multi-scale feature multi-adversarial network accounts for domain distribution differences in both the pixel and feature domains, exploits rich multi-scale feature information, and uses multiple adversarial branches to shrink the domain shift of foggy data, yielding better image classification on real, diverse foggy datasets.
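The multi-adversarial design attaches one adversarial branch per feature scale; the sketch below shows a single DANN-style branch built on a gradient reversal layer, a standard building block assumed here rather than confirmed by the abstract:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in the
    backward pass, so the feature extractor learns to confuse the domain
    discriminator (clear vs. foggy)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage inside a training step, for features from one scale of the extractor:
#   domain_logits = domain_discriminator(grad_reverse(features, lam))
#   loss = task_loss + domain_loss(domain_logits, domain_labels)
# The multi-adversarial variant repeats this branch at each feature scale.
```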

19.
Objective: In high-resolution remote sensing scene recognition, classical supervised machine learning algorithms mostly require ample labeled samples for training, while annotating remote sensing images is laborious and time-consuming. To address the lack of labeled samples in remote sensing scene recognition and the inability to share labeled samples across datasets, a transfer learning network combining adversarial learning with a variational auto-encoder is proposed. Method: A variational auto-encoder (VAE) is trained on the source dataset to obtain encoder and classifier network parameters, and the source encoder parameters initialize the target-domain encoder. Following the idea of adversarial learning, a discriminator network is introduced, and the target encoder and discriminator parameters are trained and updated alternately so that the features extracted by the target encoder become as similar as possible to those of the source encoder, achieving feature transfer from the source to the target domain for remote sensing images. Results: Experiments on two remote sensing scene recognition datasets validate the effectiveness of the feature transfer algorithm; transfer between the SUN397 natural-scene dataset and remote sensing scenes is also attempted, with correlation alignment and balanced distribution adaptation as comparison transfer learning methods. Between the two remote sensing scene datasets, the transferred network improves scene recognition accuracy by about 10% over a network trained only on source-domain samples, with more pronounced gains when a few labeled target-domain samples are used. Compared with the baselines, the proposed method improves recognition accuracy by more than 3% when a few labeled target samples are available and by 10%-40% when only source labels are used; with the natural-scene dataset, the method still improves scene recognition accuracy to some extent. Conclusion: The proposed adversarial transfer learning network can fully exploit sample information from other datasets when target-domain samples are scarce, achieving feature transfer and scene recognition across scene image datasets and effectively improving remote sensing scene recognition accuracy.
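A minimal VAE encoder with the reparameterization trick, of the kind trained on the source domain before adversarial alignment; the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Minimal VAE encoder: maps an input to a latent code and a KL penalty."""

    def __init__(self, in_dim=1024, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients through mu, logvar.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        return z, kl
```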

20.
Face datasets are considered a primary tool for evaluating the efficacy of face recognition methods. Here we show that in many commonly used face datasets, face images can be recognized at a rate significantly higher than chance even when no face, hair, or clothing features appear in the image. The experiments were done by cutting a small background area from each face image, so that each face dataset provided a new image dataset containing only seemingly blank images; an image classification method was then used to measure classification accuracy. Experimental results show that classification accuracy ranged from 13.5% (color FERET) to 99% (YaleB). These results indicate that the performance of face recognition methods measured on face image datasets may be biased. Compilable source code used for this experiment is freely available for download via the Internet.
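The bias test is easy to reproduce in outline: crop a small, presumably blank corner from each image and check whether subjects are still identifiable. A 1-NN classifier stands in for the paper's classifier, and the patch size and corner choice are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def corner_background_patch(img, size=10):
    """Cut a small background area (top-left corner) from a face image; the
    patch should carry no face, hair, or clothing information."""
    return img[:size, :size].ravel()

def background_bias_accuracy(train_imgs, train_ids, test_imgs, test_ids):
    """Above-chance accuracy here suggests background-correlated dataset bias."""
    Xtr = np.stack([corner_background_patch(im) for im in train_imgs])
    Xte = np.stack([corner_background_patch(im) for im in test_imgs])
    clf = KNeighborsClassifier(n_neighbors=1).fit(Xtr, train_ids)
    return clf.score(Xte, test_ids)
```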

