Similar Literature
20 similar documents retrieved.
1.
Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to simultaneously learn the names and appearances of the objects. Only a small fraction of local features within any given image are associated with a particular caption word, and captions may contain irrelevant words not associated with any image object. We propose a novel algorithm that uses the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to learn meaningful feature configurations (representing named objects). We also introduce a graph-based appearance model that captures some of the structure of an object by encoding the spatial relationships among the local visual features. In an iterative procedure, we use language (the words) to drive a perceptual grouping process that assembles an appearance model for a named object. Results of applying our method to three data sets in a variety of conditions demonstrate that, from complex, cluttered, real-world scenes with noisy captions, we can learn both the names and appearances of objects, resulting in a set of models invariant to translation, scale, orientation, occlusion, and minor changes in viewpoint or articulation. These named models, in turn, are used to automatically annotate new, uncaptioned images, thereby facilitating keyword-based image retrieval.
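A minimal Python sketch of the word-feature correspondence idea, under simplifying assumptions: local features are already quantized into cluster labels per image, and the correspondence score is a plain co-occurrence ratio rather than the authors' actual measure or their graph-based appearance model.

```python
# Hypothetical sketch: score how strongly each caption word corresponds to each
# visual-feature cluster via co-occurrence over training images.
from collections import defaultdict

def correspondence_scores(images):
    """images: list of dicts {'words': set of caption words,
                              'clusters': set of feature-cluster ids}."""
    word_count = defaultdict(int)
    pair_count = defaultdict(int)
    for img in images:
        for w in img['words']:
            word_count[w] += 1
            for c in img['clusters']:
                pair_count[(w, c)] += 1
    # P(cluster present | word present): a high value suggests the cluster
    # belongs to the object named by the word.
    return {pair: n / word_count[pair[0]] for pair, n in pair_count.items()}

if __name__ == '__main__':
    data = [{'words': {'car', 'street'}, 'clusters': {3, 7}},
            {'words': {'car'}, 'clusters': {3, 9}},
            {'words': {'dog'}, 'clusters': {5}}]
    print(correspondence_scores(data))
```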

2.
Multi-view object class recognition can be achieved using existing approaches for single-view object class recognition, by treating different views as entirely independent classes. This strategy requires a large amount of training data for many viewpoints, which can be costly to obtain. We describe a method for constructing a weak three-dimensional model from as few as two views of an object of the target class, and using that model to transform images of objects from one view to several other views, effectively multiplying their value for class recognition. Our approach can be coupled with any 2D image-based recognition system. We show that automatically transformed images dramatically decrease the data requirements for multi-view object class recognition.

3.
In this paper, an online adaptive model-free tracker is proposed to track single objects in video sequences and to deal with real-world tracking challenges such as low resolution, object deformation, occlusion and motion blur. The novelty lies in the construction of a strong appearance model that captures features from the initialized bounding box and assembles them into anchor point features. These features memorize the global pattern of the object and have an internal star-graph-like structure; they are unique and flexible, and allow generic and deformable objects to be tracked without being restricted to specific object categories. In addition, the relevance of each feature is evaluated online using short-term and long-term consistency, and these parameters are adapted to retain consistent features that vote for the object location and to handle outliers in long-term tracking scenarios. Voting in a Gaussian manner helps to tackle the inherent noise of the tracking system and to localize the object accurately. Furthermore, the proposed tracker uses a pairwise distance measure to cope with scale variations and combines pixel-level binary features with global weighted color features for the model update. Finally, experimental results on a visual tracking benchmark dataset demonstrate the effectiveness and competitiveness of the proposed tracker.
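A toy sketch of the Gaussian voting step, assuming each anchor-point feature stores an offset to the object centre and a reliability weight; the single isotropic sigma and the variable names are illustrative choices, not the tracker's actual parameters.

```python
import numpy as np

def gaussian_vote(features, frame_shape, sigma=5.0):
    """features: iterable of (x, y, dx, dy, weight), where (dx, dy) is the
    learned offset from the feature to the object centre (all hypothetical)."""
    h, w = frame_shape
    acc = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for x, y, dx, dy, wgt in features:
        cx, cy = x + dx, y + dy  # centre predicted by this feature
        acc += wgt * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy  # voted object centre

if __name__ == '__main__':
    feats = [(40, 30, 10, 5, 1.0), (60, 42, -10, -7, 0.8), (55, 20, -5, 15, 0.6)]
    print(gaussian_vote(feats, (80, 100)))
```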

4.
3D object recognition is a difficult yet important problem in computer vision. A 3D object recognition system has two major components: an object modeller and a matcher that compares stored representations with those derived from the sensed image. Systems that construct object models by training from one or more images of the objects have not performed very satisfactorily. Although objects used in a robotic workcell or in assembly processes have been designed using a CAD system, the vision systems used to recognise these objects are independent of the CAD database. This paper proposes a scheme for interfacing the CAD database of objects with the computer vision processes used for recognising these objects. CAD models of objects are processed to generate vision-oriented features that appear in the different views of an object, and the same features are extracted from images of the object to identify the object and its pose.

5.
Distinctive Image Features from Scale-Invariant Keypoints
This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
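A minimal OpenCV sketch of the SIFT matching pipeline described above: descriptor extraction, nearest-neighbour matching with Lowe's ratio test, and geometric verification. RANSAC homography fitting is used here as a convenient stand-in for the paper's Hough-transform clustering and least-squares pose solution.

```python
import cv2
import numpy as np

def match_sift(img_query, img_model, ratio=0.75):
    """Match SIFT features between two grayscale images (numpy arrays)."""
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(img_query, None)
    km, dm = sift.detectAndCompute(img_model, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(dq, dm, k=2)
    # Lowe's ratio test: keep a match only if it is clearly better than the
    # second-nearest neighbour.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 4:
        return None, good
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([km[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Geometric verification; RANSAC homography stands in for the paper's
    # Hough clustering followed by a least-squares pose solution.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None, good
    return H, [g for g, ok in zip(good, mask.ravel()) if ok]
```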

6.
7.
8.
In this article we introduce and compare two approaches towards automatic classification of 3D objects in 2D images. The first one is based on statistical modeling of wavelet features. It estimates probability density functions for all possible object classes considered in a particular recognition task. The second one uses sparse local features. For training, SURF features are extracted from the training images. During the recognition phase, features from the image are matched geometrically, providing the best fitting object for the query image. Experiments were performed for different training sets using more than 40,000 images with different backgrounds. Results show very good classification rates for both systems and point out special characteristics for each approach, which make them more suitable for different applications.
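A rough sketch of the first (statistical) approach under stated assumptions: each grayscale image is summarized by simple wavelet-subband statistics and the class-conditional densities are modelled with Gaussian naive Bayes; the paper's actual wavelet features and density model are not specified here. PyWavelets and scikit-learn are assumed to be available.

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(img, wavelet='db2', level=3):
    """Summarize each detail subband by the mean absolute value and the
    standard deviation of its coefficients (a crude stand-in feature)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:          # (horizontal, vertical, diagonal) per level
        for band in detail:
            feats += [np.abs(band).mean(), band.std()]
    return np.array(feats)

def train_classifier(images, labels):
    """images: list of 2D grayscale arrays; labels: object class per image."""
    X = np.stack([wavelet_features(im) for im in images])
    return GaussianNB().fit(X, labels)
```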

9.
Adaptive color segmentation: a comparison of neural and statistical methods
With the availability of more powerful computers it is now possible to perform pixel-based operations on real camera images even in the full color space. New adaptive classification tools such as neural networks make it possible to develop special-purpose object detectors that can segment arbitrary objects with complex feature-space distributions in real images, after training with one or several previously labeled images. The paper focuses on a detailed comparison of a neural approach based on local linear maps (LLMs) with a classifier based on normal distributions. The proposed adaptive segmentation method uses local color information to estimate the probability that a pixel belongs to the object class or to the background class. The method is applied to the recognition and localization of human hands in color camera images of complex laboratory scenes.
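A compact sketch of the normal-distribution classifier: one multivariate Gaussian is fitted to labelled object pixels and one to background pixels, and each pixel of a new image is assigned to the class with the larger prior-weighted likelihood. The LLM-based neural variant is not shown, and the RGB feature space and equal priors are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_color_model(pixels):
    """pixels: (N, 3) array of RGB values taken from labelled training regions."""
    return multivariate_normal(mean=pixels.mean(axis=0), cov=np.cov(pixels.T))

def segment(image, obj_model, bg_model, prior_obj=0.5):
    """image: (H, W, 3) array; returns a boolean object mask."""
    flat = image.reshape(-1, 3).astype(float)
    p_obj = obj_model.pdf(flat) * prior_obj
    p_bg = bg_model.pdf(flat) * (1.0 - prior_obj)
    return (p_obj > p_bg).reshape(image.shape[:2])
```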

10.
Detecting and recognizing pedestrians and vehicles is an important part of autonomous driving. To meet this field's requirements on detection accuracy, an improved recognition algorithm for vehicle-mounted images is proposed on the basis of the conventional single shot multibox detector (SSD). Since the traditional SSD detection algorithm cannot fully exploit local features and global semantic features, and suffers from a conflict between object localization and recognition, a fusion method for the relevant feature layers of the SSD model is proposed, which regenerates the model's object detection pyramid (ODP). The improved algorithm combines the low-level detail features and the high-level semantic features of the objects to be detected in the input image, reducing the conflict between localization and recognition and thereby improving detection accuracy. The model is trained on a vehicle-mounted image dataset captured with dashboard cameras; experimental results show that the improved SSD algorithm reaches 79.2% accuracy on the test set of this image dataset, an improvement of 2.3% over the traditional SSD algorithm.
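A PyTorch-style sketch of the feature-layer fusion idea: a deeper, semantically richer map is upsampled and merged with a shallower, higher-resolution map before prediction. The layer names, channel counts and concatenation-plus-1x1-convolution fusion are illustrative assumptions, not the exact structure of the improved SSD.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseLayer(nn.Module):
    """Fuse a shallow (detail) feature map with a deep (semantic) feature map."""
    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, shallow, deep):
        # Upsample the deep map to the shallow map's resolution, then fuse.
        deep_up = F.interpolate(deep, size=shallow.shape[2:], mode='bilinear',
                                align_corners=False)
        fused = torch.cat([shallow, deep_up], dim=1)
        return F.relu(self.bn(self.reduce(fused)))

# Example: fuse a hypothetical conv4_3-like map (512 ch, 38x38) with a
# conv7-like map (1024 ch, 19x19) of a VGG-based SSD.
fuse = FuseLayer(512, 1024, 512)
out = fuse(torch.randn(1, 512, 38, 38), torch.randn(1, 1024, 19, 19))
print(out.shape)  # torch.Size([1, 512, 38, 38])
```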

11.
Learning a new object class from cluttered training images is very challenging when the location of object instances is unknown, i.e. in a weakly supervised setting. Many previous works require objects covering a large portion of the images. We present a novel approach that can cope with extensive clutter as well as large scale and appearance variations between object instances. To make this possible we exploit generic knowledge learned beforehand from images of other classes for which location annotation is available. Generic knowledge facilitates learning any new class from weakly supervised images, because it reduces the uncertainty in the location of its object instances. We propose a conditional random field that starts from generic knowledge and then progressively adapts to the new class. Our approach simultaneously localizes object instances while learning an appearance model specific for the class. We demonstrate this on several datasets, including the very challenging Pascal VOC 2007. Furthermore, our method allows training any state-of-the-art object detector in a weakly supervised fashion, although it would normally require object location annotations.

12.
3D object recognition from local features is robust to occlusions and clutter. However, local features must be extracted from a small set of feature-rich keypoints to avoid computational complexity and ambiguous features. We present an algorithm for the detection of such keypoints on 3D models and partial views of objects. The keypoints are highly repeatable between partial views of an object and its complete 3D model. We also propose a quality measure to rank the keypoints and select the best ones for extracting local features. Keypoints are identified at locations where a unique local 3D coordinate basis can be derived from the underlying surface in order to extract invariant features. We also propose an automatic scale selection technique for extracting multi-scale and scale invariant features to match objects at different unknown scales. Features are projected to a PCA subspace and matched to find correspondences between a database and query object. Each pair of matching features gives a transformation that aligns the query and database object. These transformations are clustered and the biggest cluster is used to identify the query object. Experiments on a public database revealed that the proposed quality measure relates correctly to the repeatability of keypoints and the multi-scale features have a recognition rate of over 95% for up to 80% occluded objects.
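A simplified sketch of the matching stage, assuming descriptors are projected into a PCA subspace learned from the database and matched by nearest neighbour; the transformation clustering that identifies the object is only indicated by a comment, and the thresholds are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def build_index(db_descriptors, n_components=20):
    """db_descriptors: (N, D) local feature descriptors from the model database."""
    pca = PCA(n_components=n_components).fit(db_descriptors)
    nn = NearestNeighbors(n_neighbors=1).fit(pca.transform(db_descriptors))
    return pca, nn

def match(query_descriptors, pca, nn, max_dist=0.5):
    """Return (query index, database index) pairs of putative correspondences."""
    dist, idx = nn.kneighbors(pca.transform(query_descriptors))
    # In the full method, each correspondence defines a rigid transform; the
    # transforms are clustered and the largest cluster identifies the object.
    return [(q, int(i)) for q, (d, i) in enumerate(zip(dist.ravel(), idx.ravel()))
            if d < max_dist]
```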

13.
Evidence-based recognition of 3-D objects
An evidence-based recognition technique is defined that identifies 3-D objects by looking for their notable features. This technique makes use of an evidence rule base, which is a set of salient or evidence conditions with corresponding evidence weights for various objects in the database. A measure of similarity between the set of observed features and the set of evidence conditions for a given object in the database is used to determine the identity of an object in the scene or reject the object(s) in the scene as unknown. This procedure has polynomial time complexity and correctly identifies a variety of objects in both synthetic and real range images. A technique for automatically deriving the evidence rule base from training views of objects is shown to generate evidence conditions that successfully identify new views of those objects.
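A toy sketch of the evidence-matching idea, assuming the rule base maps each object to a list of (evidence condition, weight) pairs and that conditions are tested by simple set membership; the similarity measure, rejection threshold and condition names are illustrative, not those of the paper.

```python
def evidence_score(observed, rules):
    """observed: set of feature predicates detected in the scene.
    rules: [(condition, weight), ...] for one object in the rule base."""
    total = sum(w for _, w in rules)
    hit = sum(w for cond, w in rules if cond in observed)
    return hit / total if total else 0.0

def recognize(observed, rule_base, reject_below=0.6):
    """Return the best-scoring object, or 'unknown' if no score is high enough."""
    scores = {obj: evidence_score(observed, rules) for obj, rules in rule_base.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= reject_below else 'unknown'

# Hypothetical rule base with made-up evidence conditions and weights.
rule_base = {'mug':   [('cylindrical_surface', 2.0), ('handle_arc', 1.5)],
             'wedge': [('planar_face', 1.0), ('sharp_dihedral_edge', 2.0)]}
print(recognize({'cylindrical_surface', 'handle_arc'}, rule_base))  # mug
```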

14.
Scalability is an important issue in object recognition, as it reduces database storage and recognition time. In this paper, we propose a new scalable 3D object representation and a learning method to recognize many everyday objects. The key proposal for scalable object representation is to combine the concept of feature sharing with multi-view clustering in a part-based object representation, in particular a common-frame constellation model (CFCM). For this representation scheme, we also propose a fully automatic learning method: appearance-based automatic feature clustering and sequential construction of clustered CFCMs from labeled multi-view images of multiple objects. We evaluated the scalability of the proposed method on the COIL-100 database and applied the learning scheme to 112 objects with 620 training views. Experimental results show that the scalable learning scheme yields almost constant recognition performance as the number of objects increases.

15.
16.
Real-world object images often exhibit large intra-class variation, and describing an entire category with a single prototype leads to semantic ambiguity. To address this, a superpixel-based multi-prototype generation module is proposed in which multiple prototypes represent different semantic regions of an object, and a graph neural network performs prototype correction among the generated prototypes using contextual information, so that the sub-prototypes remain orthogonal. To obtain more accurate prototype representations, a Transformer-based semantic alignment module is designed to mine the semantic information contained in the query image features and in the background features of the support images. In addition, a multi-scale feature fusion structure is proposed to guide the model to focus on features that appear in both the support and query images, improving robustness to changes in object scale. Experiments on the PASCAL-5i dataset show that the proposed model improves the mean intersection-over-union by 6% over the baseline model.
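A rough numpy/scikit-image sketch of the superpixel-based multi-prototype idea: SLIC superpixels are computed on the support image, features are average-pooled within each superpixel restricted to the object mask to form prototypes, and cosine similarity maps against the query features are returned. The graph-based prototype correction and the Transformer alignment module are not shown, and all parameter values and array layouts are assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_prototypes(feat, mask, image, n_segments=50):
    """feat: (H, W, C) feature map aligned with the support image;
    mask: (H, W) binary object mask; returns an array of prototype vectors."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    protos = []
    for s in np.unique(segments):
        sel = (segments == s) & (mask > 0)
        if sel.sum() > 0:
            protos.append(feat[sel].mean(axis=0))  # masked average pooling
    return np.stack(protos) if protos else np.zeros((0, feat.shape[-1]))

def similarity_maps(query_feat, prototypes):
    """Cosine similarity between every query location and every prototype."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    return np.einsum('hwc,kc->hwk', q, p)
```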

17.
18.
Advanced Robotics, 2013, 27(15): 2035-2057
This paper presents a method to self-organize object features that describe object dynamics using bidirectional training. The model is composed of a dynamics learning module and a feature extraction module. Recurrent Neural Network with Parametric Bias (RNNPB) is utilized for the dynamics learning module, learning and self-organizing the sequences of robot and object motions. A hierarchical neural network is linked to the input of RNNPB as the feature extraction module for self-organizing object features that describe the object motions. The two modules are simultaneously trained through bidirectional training using image and motion sequences acquired from the robot's active sensing with objects. Experiments are performed with the robot's pushing motion with a variety of objects to generate sliding, falling over, bouncing and rolling motions. The results have shown that the model is capable of self-organizing object dynamics based on the self-organized features.

19.
In many cases, a single view of an object may not contain sufficient features to recognize it unambiguously. This paper presents a new online recognition scheme based on next-view planning for the identification of an isolated 3D object using simple features. The scheme uses a probabilistic reasoning framework for recognition and planning. Our knowledge representation scheme encodes feature-based information about objects as well as the uncertainty in the recognition process, and this is used both in the probability calculations and in planning the next view. Results clearly demonstrate the effectiveness of our strategy on a reasonably complex experimental set.
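A toy sketch of probabilistic next-view planning, assuming a known table of feature likelihoods P(feature | object, view): the belief over object hypotheses is updated with Bayes' rule, and the next view is chosen to minimize the expected posterior entropy. This is a generic formulation, not the paper's specific knowledge representation or planning criterion.

```python
import numpy as np

def bayes_update(belief, likelihood):
    """belief: P(object) over hypotheses; likelihood: P(observation | object)."""
    post = belief * likelihood
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def plan_next_view(belief, likelihoods):
    """likelihoods[v, f, o] = P(feature f observed | object o, view v).
    Choose the view with the lowest expected posterior entropy."""
    best_view, best_h = None, np.inf
    for v in range(likelihoods.shape[0]):
        h = 0.0
        for f in range(likelihoods.shape[1]):
            p_obs = float(belief @ likelihoods[v, f])  # P(observe f at view v)
            if p_obs > 0:
                h += p_obs * entropy(bayes_update(belief, likelihoods[v, f]))
        if h < best_h:
            best_view, best_h = v, h
    return best_view
```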

20.
