Similar Literature
20 similar articles retrieved.
1.
Over successive stages, the ventral visual system develops neurons that respond with view, size and position invariance to objects, including faces. A major challenge is to explain how invariant representations of individual objects could develop given visual input from environments containing multiple objects. Here we show that the neurons in a 1-layer competitive network learn to represent combinations of three objects simultaneously present during training if the number of objects in the training set is low (e.g. 4), to represent combinations of two objects as the number of objects is increased (e.g. to 10), and to represent individual objects as the number of objects in the training set is increased further (e.g. to 20). We next show that translation invariant representations can be formed even when multiple stimuli are always present during training, by including a temporal trace in the learning rule. Finally, we show that these concepts can be extended to a multi-layer hierarchical network model (VisNet) of the ventral visual system. This approach provides a way to understand how a visual system can, by self-organizing competitive learning, form separate invariant representations of each object even when each object is presented in a scene together with multiple other objects, as in natural visual scenes.
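A minimal sketch (not the authors' code) of the 1-layer competitive network described above: winner-take-all competition followed by a normalized Hebbian update on scenes containing two superimposed objects. Layer sizes, sparseness, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons, n_objects = 100, 20, 10

# Each object is a random sparse binary feature vector over the input array.
objects = (rng.random((n_objects, n_inputs)) < 0.1).astype(float)

W = rng.random((n_neurons, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)       # unit-length weight vectors

eta = 0.1
for _ in range(5000):
    # Training scene: two objects simultaneously present (superimposed).
    pair = rng.choice(n_objects, size=2, replace=False)
    x = np.clip(objects[pair].sum(axis=0), 0, 1)

    winner = np.argmax(W @ x)                        # hard competition
    W[winner] += eta * x                             # Hebbian update for the winner
    W[winner] /= np.linalg.norm(W[winner])           # renormalize

# Inspect which objects each neuron has come to respond to.
responses = W @ objects.T                            # (n_neurons, n_objects)
for i in range(n_neurons):
    print(i, np.flatnonzero(responses[i] > 0.7 * responses[i].max()))
```

With this toy setup, whether a neuron's strong responses cover one object or a combination depends on the size of the training set, mirroring the abstract's observation.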

2.
Individual cells that respond preferentially to particular objects have been found in the ventral visual pathway. How the brain is able to develop neurons that exhibit these object-selective responses poses a significant challenge for computational models of object recognition. Typically, many objects make up a complex natural scene and are never presented in isolation. Nonetheless, the visual system is able to build invariant object-selective responses. In this paper, we present a model of the ventral visual stream, VisNet, which can solve the problem of learning object-selective representations even when multiple objects are always present during training. Past research with the VisNet model has shown that the network can operate successfully in a similar training paradigm, but only when training comprises many different object pairs. Numerous pairings are required for statistical decoupling between objects. In this research, we show for the first time that VisNet is capable of utilizing the statistics inherent in independent rotation to form object-selective representations when training with just two objects, always presented together. Crucially, our results show that in a dependent rotation paradigm, the model fails to build object-selective representations and responds as if the two objects are in fact one. If the objects begin to rotate independently, the network forms representations for each object separately.

3.
To form view-invariant representations of objects, neurons in the inferior temporal cortex may associate together different views of an object, which tend to occur close together in time under natural viewing conditions. This can be achieved in neuronal network models of this process by using an associative learning rule with a short-term temporal memory trace. It is postulated that within a view, neurons learn representations that enable them to generalize within variations of that view. When three-dimensional (3D) objects are rotated within small angles (up to, e.g., 30 degrees), their surface features undergo geometric distortion due to the change of perspective. In this article, we show how trace learning could solve the problem of in-depth rotation-invariant object recognition by developing representations of the transforms that features undergo when they are on the surfaces of 3D objects. Moreover, we show that having learned how features on 3D objects transform geometrically as the object is rotated in depth, the network can correctly recognize novel 3D variations within a generic view of an object composed of a new combination of previously learned features. These results are demonstrated in simulations of a hierarchical network model (VisNet) of the visual system that show that it can develop representations useful for the recognition of 3D objects by forming perspective-invariant representations to allow generalization within a generic view.
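The temporal-trace association can be written compactly. Below is an illustrative formulation in the spirit of the trace rule used in VisNet-style models; the decay constant eta, learning rate alpha, and row renormalization are conventional assumptions rather than values from the article.

```python
import numpy as np

def trace_learning_step(W, x, y, y_trace, eta=0.6, alpha=0.05):
    """One step of trace learning:
       y_trace(t) = (1 - eta) * y(t) + eta * y_trace(t - 1)
       W         += alpha * outer(y_trace(t), x(t)), rows renormalized.
    """
    y_trace = (1.0 - eta) * y + eta * y_trace
    W = W + alpha * np.outer(y_trace, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # competitive nets keep unit weights
    return W, y_trace
```

Because the trace carries activity across successive frames, views of the same object seen close together in time drive the same postsynaptic neurons, which is what binds the views together.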

4.
Roth D, Yang MH, Ahuja N. Neural Computation, 2002, 14(5): 1071-1103
A learning account for the problem of object recognition is developed within the probably approximately correct (PAC) model of learnability. The key assumption underlying this work is that objects can be recognized (or discriminated) using simple representations in terms of syntactically simple relations over the raw image. Although the potential number of these simple relations could be huge, only a few of them are actually present in each observed image, and a fairly small number of those observed are relevant to discriminating an object. We show that these properties can be exploited to yield an efficient learning approach in terms of sample and computational complexity within the PAC model. No assumptions are needed on the distribution of the observed objects, and the learning performance is quantified relative to its experience. Most importantly, the success of learning an object representation is naturally tied to the ability to represent it as a function of some intermediate representations extracted from the image. We evaluate this approach in a large-scale experimental study in which the SNoW learning architecture is used to learn representations for the 100 objects in the Columbia Object Image Library. Experimental results exhibit good generalization and robustness properties of the SNoW-based method relative to other approaches. SNoW's recognition rate degrades more gracefully when the training data contains fewer views, and it shows similar behavior in some preliminary experiments with partially occluded objects.
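SNoW (Sparse Network of Winnows) trains linear target units over sparse active features with Winnow's multiplicative promotion/demotion updates, which is the source of the sample and computational efficiency discussed above. A minimal single-target sketch; the promotion/demotion factors and threshold are conventional illustrative choices, not the paper's settings.

```python
import numpy as np

class WinnowNode:
    def __init__(self, n_features, theta=None, promote=1.5, demote=0.5):
        self.w = np.ones(n_features)
        self.theta = theta if theta is not None else n_features / 2
        self.promote, self.demote = promote, demote

    def predict(self, active):
        # `active`: indices of the features present in this image.
        return self.w[active].sum() >= self.theta

    def update(self, active, label):
        # Mistake-driven: weights change only when the prediction is wrong,
        # and only for the features that were active.
        pred = self.predict(active)
        if pred and not label:
            self.w[active] *= self.demote     # false positive: demote
        elif label and not pred:
            self.w[active] *= self.promote    # miss: promote
```

Because updates touch only active features and mistakes, training cost scales with the number of features actually present per image, not with the huge space of potential relations.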

5.
We address the problem of automatically learning the recurring associations between the visual structures in images and the words in their associated captions, yielding a set of named object models that can be used for subsequent image annotation. In previous work, we used language to drive the perceptual grouping of local features into configurations that capture small parts (patches) of an object. However, model scope was poor, leading to poor object localization during detection (annotation), and ambiguity was high when part detections were weak. We extend and significantly revise our previous framework by using language to drive the perceptual grouping of parts, each a configuration in the previous framework, into hierarchical configurations that offer greater spatial extent and flexibility. The resulting hierarchical multipart models remain scale, translation and rotation invariant, but are more reliable detectors and provide better localization. Moreover, unlike typical frameworks for learning object models, our approach requires no bounding boxes around the objects to be learned, can handle heavily cluttered training scenes, and is robust in the face of noisy captions, i.e., where objects in an image may not be named in the caption, and objects named in the caption may not appear in the image. We demonstrate improved precision and recall in annotation over the non-hierarchical technique and also show extended spatial coverage of detected objects.

6.
7.
Recently, large-scale image annotation datasets have been collected, with millions of images and thousands of possible annotations. Latent variable models, or embedding methods, that simultaneously learn semantic representations of object labels and image representations can provide tractable solutions on such tasks. In this work, we are interested in jointly learning representations both for the objects in an image, and the parts of those objects, because such deeper semantic representations could bring a leap forward in image retrieval or browsing. Despite the size of these datasets, annotated data for objects and parts is costly to obtain and may not be available. In this paper, we propose to bypass this cost with a method able to learn to jointly label objects and parts without requiring exhaustively labeled data. We design a model architecture that can be trained under a proxy supervision obtained by combining standard image annotation (from ImageNet) with semantic part-based within-label relations (from WordNet). The model itself is designed to model both object image to object label similarities, and object label to object part label similarities in a single joint system. Experiments conducted on our combined data and a precisely annotated evaluation set demonstrate the usefulness of our approach.

8.
Colour can potentially provide useful information for a variety of computer vision tasks such as image segmentation, image retrieval, object recognition and tracking. However, for it to be helpful in practice, colour must relate directly to the intrinsic properties of the imaged objects and be independent of imaging conditions such as scene illumination and the imaging device. To this end many invariant colour representations have been proposed in the literature. Unfortunately, recent work (Second Workshop on Content-based Multimedia Indexing) has shown that none of them provides good enough practical performance. In this paper we propose a new colour invariant image representation based on an existing grey-scale image enhancement technique: histogram equalisation. We show that provided the rank ordering of sensor responses is preserved across a change in imaging conditions (lighting or device), a histogram equalisation of each channel of a colour image renders it invariant to these conditions. We set out theoretical conditions under which the rank ordering of sensor responses is preserved, and we present empirical evidence which demonstrates that rank ordering is maintained in practice for a wide range of illuminants and imaging devices. Finally, we apply the method to an image indexing application and show that it outperforms all previous invariant representations, giving close to perfect illumination invariance and very good performance across a change in device.
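The proposed invariant is concrete enough to sketch directly: equalise each colour channel independently, so that any monotonic (rank-preserving) change of sensor response maps to the same output. A minimal numpy version for an 8-bit RGB image:

```python
import numpy as np

def equalize_channels(img):
    """img: (H, W, 3) uint8 array; returns the rank-equalised uint8 image."""
    out = np.empty_like(img)
    for c in range(3):
        channel = img[..., c]
        hist = np.bincount(channel.ravel(), minlength=256)
        cdf = hist.cumsum()
        # Map each intensity to its (scaled) rank in the channel's distribution.
        lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
        out[..., c] = lut[channel]
    return out
```

Since only the rank of each pixel within its channel matters, two images of the same scene under different illuminants or devices equalise to (approximately) the same representation, provided rank order is preserved.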

9.
A technique is developed to construct a representation of planar objects undergoing a general affine transformation. The representation can be used to describe planar or nearly planar objects in a three-dimensional space, observed by a camera under arbitrary orientations. The technique is based upon object contours, parameterized by an affine invariant parameter, and the dyadic wavelet transform. The role of the wavelet transform is the extraction of multiresolution affine invariant features from the affine invariant contour representation. A dissimilarity function is also developed and used to distinguish among different object representations. This function makes use of the extrema on the representations, thus making its computation very efficient. A study of the effect of using different wavelet functions and their order of vanishing moments is also carried out. Experimental results show that the performance of the proposed representation is better than that of other existing methods, particularly when objects are heavily corrupted with noise.

10.
Adaptive 3-D object recognition from multiple views
The authors address the problem of generating representations of 3-D objects automatically from exploratory view sequences of unoccluded objects. In building the models, processed frames of a video sequence are clustered into view categories called aspects, which represent characteristic views of an object invariant to its apparent position, size, 2-D orientation, and limited foreshortening deformation. The aspects as well as the aspect transitions of a view sequence are used to build (and refine) the 3-D object representations online in the form of aspect-transition matrices. Recognition emerges as the hypothesis that has accumulated the maximum evidence at each moment. The "winning" object continues to refine its representation until either the camera is redirected or another hypothesis accumulates greater evidence. This work concentrates on 3-D appearance modeling and succeeds under favorable viewing conditions by using simplified processes to segment objects from the scene and derive the spatial agreement of object features.
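A minimal sketch of recognition by evidence accumulation over aspect transitions, as described above: each object model is an aspect-transition matrix, each observed transition adds evidence to every hypothesis, and the current hypothesis is the object with maximum accumulated evidence. The matrix contents and scoring are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_aspects, n_objects = 6, 3

# Row-normalized aspect-transition matrices, one per learned object model.
T = rng.random((n_objects, n_aspects, n_aspects))
T /= T.sum(axis=2, keepdims=True)

evidence = np.zeros(n_objects)
observed = [(0, 2), (2, 5), (5, 2)]     # aspect transitions from the view sequence
for a, b in observed:
    evidence += T[:, a, b]              # each model scores the observed transition
    print("current hypothesis:", int(np.argmax(evidence)))
```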

11.
Mel BW, Fiser J. Neural Computation, 2000, 12(4): 731-762
We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors: von der Malsburg's classical binding problem. We begin by deriving an analytical model that makes explicit how recognition performance is affected by the number of objects that must be distinguished, the number of features included in the representation, the complexity of individual objects, and the clutter load, that is, the amount of visual material in the field of view in which multiple objects must be simultaneously recognized, independent of pose, and without explicit segmentation. Using the domain of text to model object recognition in cluttered scenes, we show that with corrections for the nonuniform probability and nonindependence of text features, the analytical model achieves good fits to measured recognition rates in simulations involving a wide range of clutter loads, word size, and feature counts. We then introduce a greedy algorithm for feature learning, derived from the analytical model, which grows a representation by choosing those conjunctive features that are most likely to distinguish objects from the cluttered backgrounds in which they are embedded. We show that the representations produced by this algorithm are compact, decorrelated, and heavily weighted toward features of low conjunctive order. Our results provide a more quantitative basis for understanding when spatially invariant conjunctive features can support unambiguous perception in multiobject scenes, and lead to several insights regarding the properties of visual representations optimized for specific recognition tasks.
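A minimal sketch of the greedy feature-learning idea above: from a pool of candidate conjunctive features, repeatedly add the one that best separates target objects from cluttered backgrounds. The scoring function (difference of detection rates) is an illustrative stand-in for the paper's model-derived criterion, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates, n_object_samples, n_clutter_samples = 200, 100, 100

# Binary detection tables: did candidate feature j fire in sample i?
on_objects = rng.random((n_object_samples, n_candidates)) < rng.random(n_candidates)
on_clutter = rng.random((n_clutter_samples, n_candidates)) < 0.2

def greedy_select(on_obj, on_clut, k):
    chosen, remaining = [], list(range(on_obj.shape[1]))
    for _ in range(k):
        # Score: fires reliably on objects, stays silent on clutter.
        scores = on_obj[:, remaining].mean(0) - on_clut[:, remaining].mean(0)
        chosen.append(remaining.pop(int(np.argmax(scores))))
    return chosen

print(greedy_select(on_objects, on_clutter, k=10))
```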

12.
Mel BW, Fiser J. Neural Computation, 2000, 12(2): 247-278

13.
It has been proposed that invariant pattern recognition might be implemented using a learning rule that utilizes a trace of previous neural activity which, given the spatio-temporal continuity of the statistics of sensory input, is likely to reflect the same object, though with differing transforms, over short time scales. Recently, it has been demonstrated that a modified Hebbian rule which incorporates a trace of previous activity, but no contribution from the current activity, can offer substantially improved performance. In this paper we show how this rule can be related to error correction rules, and explore a number of error correction rules that can be applied to this problem and produce good invariant pattern recognition. An explicit relationship to temporal difference learning is then demonstrated, and from this further learning rules related to temporal difference learning are developed. This relationship to temporal difference learning allows us to begin to exploit established analyses of temporal difference learning to provide a theoretical framework for better understanding the operation and convergence properties of these learning rules and, more generally, of rules useful for learning invariant representations. The efficacy of these different rules for invariant object recognition is compared using VisNet, a hierarchical competitive network model of the operation of the visual system.
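One hedged reading of the rule families compared above, in sketch form; the exact update equations in the article may differ, and the constants are illustrative.

```python
import numpy as np

def trace_rule(W, x, y, tr_prev, eta=0.6, alpha=0.05):
    # Standard trace rule: associate the input with the current trace.
    tr = (1 - eta) * y + eta * tr_prev
    return W + alpha * np.outer(tr, x), tr

def prev_trace_rule(W, x, y, tr_prev, eta=0.6, alpha=0.05):
    # Modified Hebbian rule: the postsynaptic term is the trace from t-1
    # only, with no contribution from the current activity.
    W = W + alpha * np.outer(tr_prev, x)
    return W, (1 - eta) * y + eta * tr_prev

def error_correction_rule(W, x, y, tr_prev, alpha=0.05):
    # Error-correction / TD-flavoured form: move the current response
    # toward the previous trace, treating the trace as a prediction target.
    return W + alpha * np.outer(tr_prev - y, x), tr_prev
```

Writing the update as a difference between a target (the trace) and the current activity is what exposes the connection to temporal difference learning and its convergence analyses.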

14.
Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to simultaneously learn the names and appearances of the objects. Only a small fraction of local features within any given image are associated with a particular caption word, and captions may contain irrelevant words not associated with any image object. We propose a novel algorithm that uses the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to learn meaningful feature configurations (representing named objects). We also introduce a graph-based appearance model that captures some of the structure of an object by encoding the spatial relationships among the local visual features. In an iterative procedure, we use language (the words) to drive a perceptual grouping process that assembles an appearance model for a named object. Results of applying our method to three data sets in a variety of conditions demonstrate that, from complex, cluttered, real-world scenes with noisy captions, we can learn both the names and appearances of objects, resulting in a set of models invariant to translation, scale, orientation, occlusion, and minor changes in viewpoint or articulation. These named models, in turn, are used to automatically annotate new, uncaptioned images, thereby facilitating keyword-based image retrieval.

15.
We develop a novel method for class-based feature matching across large changes in viewing conditions. The method is based on the property that when objects share a similar part, the similarity is preserved across viewing conditions. Given a feature and a training set of object images, we first identify the subset of objects that share this feature. The transformation of the feature's appearance across viewing conditions is determined mainly by properties of the feature, rather than of the object in which it is embedded. Therefore, the transformed feature will be shared by approximately the same set of objects. Based on this consistency requirement, corresponding features can be reliably identified from a set of candidate matches. Unlike previous approaches, the proposed scheme compares feature appearances only in similar viewing conditions, rather than across different viewing conditions. As a result, the scheme is not restricted to locally planar objects or affine transformations. The approach also does not require examples of correct matches. We show that by using the proposed method, a dense set of accurate correspondences can be obtained. Experimental comparisons demonstrate that matching accuracy is significantly improved over previous schemes. Finally, we show that the scheme can be successfully used for invariant object recognition.
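A minimal sketch of the consistency requirement described above: a feature and its correct counterpart under new viewing conditions should be shared by roughly the same subset of training objects, so candidate matches can be scored by the overlap of their object subsets. The data and the Jaccard scoring are illustrative assumptions.

```python
def sharing_consistency(objs_with_feature, objs_with_candidate):
    """Jaccard overlap between the sets of objects sharing each feature."""
    a, b = set(objs_with_feature), set(objs_with_candidate)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# A feature seen in objects {1, 4, 7}; two candidate matches under new conditions.
print(sharing_consistency({1, 4, 7}, {1, 4, 9}))   # high overlap: plausible match
print(sharing_consistency({1, 4, 7}, {2, 3, 5}))   # no overlap: inconsistent match
```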

16.
We introduce an object recognition and localization system in which objects are represented as a sparse and spatially organized set of local (bent) line segments. The line segments correspond to binarized Gabor wavelets or banana wavelets, which are bent and stretched Gabor wavelets. These features can be metrically organized; the metric enables an efficient learning of object representations. It is essential for learning that only corresponding local areas are compared with each other; i.e., the correspondence problem has to be solved. We achieve correspondence (and in this way autonomous learning) by utilizing motor-controlled feedback, i.e., by interaction of arm movement and camera tracking. The learned representations are used for fast and efficient localization and discrimination of objects in complex scenes.
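The local features above are binarized Gabor (and bent "banana") wavelets. A minimal numpy sketch of a standard Gabor kernel follows; the banana generalisation adds bending and stretching parameters not shown here, and the kernel parameters are illustrative.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))   # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength)          # oriented sinusoid
    return envelope * carrier

kernel = gabor_kernel(theta=np.pi / 4)
print(kernel.shape)   # (31, 31): an oriented edge/line detector
```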

17.
Separating style and content with bilinear models
Perceptual systems routinely separate "content" from "style," classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.
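A minimal sketch of fitting an asymmetric bilinear model with the SVD, in the spirit described above: stack the mean observation for each (style, content) pair into a matrix whose SVD factors it into style-specific bases and content vectors. Dimensions and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
S, C, K, J = 4, 5, 30, 3        # styles, content classes, obs. dim, model dim

Y = rng.random((S * K, C))      # row-block s holds the K-dim mean vector for style s

U, sv, Vt = np.linalg.svd(Y, full_matrices=False)
A = U[:, :J] * np.sqrt(sv[:J])          # (S*K, J): style-specific basis vectors
B = np.sqrt(sv[:J])[:, None] * Vt[:J]   # (J, C):   content vectors

# Reconstruction: the observation for style s, content c is A_s @ b_c.
A_s = A.reshape(S, K, J)
recon = np.einsum('skj,jc->skc', A_s, B)
print(np.abs(recon.reshape(S * K, C) - Y).max())   # small when J is near the rank of Y
```

The factorization separates who/how (style matrices A_s) from what (content vectors b_c), so new style-content combinations can be synthesized by mixing factors never observed together.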

18.
A method to construct a representation of space curves based on the zero-crossings of the dyadic wavelet transform is introduced. The principal axes of inertia of these space curves, referred to as objects, are considered as the reference system. The representation is translation, rotation and size invariant. Instances of objects in images are recognised by matching their representations with those of the models. A string-matching technique is adapted and used for this purpose. Experimental results show that the representation is robust and efficient in extracting and matching object information.
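A minimal sketch of the reference-frame step described above: rotate a space curve into its principal axes of inertia and normalise for position and size, making the subsequent representation translation, rotation and size invariant. The normalisation choices are illustrative.

```python
import numpy as np

def canonical_frame(points):
    """points: (N, 3) samples along a space curve."""
    centred = points - points.mean(axis=0)            # translation invariance
    cov = centred.T @ centred / len(centred)
    _, vecs = np.linalg.eigh(cov)                     # principal axes of inertia
    aligned = centred @ vecs                          # rotation invariance
    return aligned / np.linalg.norm(aligned)          # size invariance

curve = np.cumsum(np.random.default_rng(4).normal(size=(100, 3)), axis=0)
print(canonical_frame(curve)[:3])
```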

19.
Most existing 2D object recognition algorithms are not perspective (or projective) invariant, and hence are not suitable for many real-world applications. By contrast, one of the primary goals of this research is to develop a flat object matching system that can identify and localise an object even when seen from different viewpoints in 3D space. In addition, we also strive to achieve good scale invariance and robustness against partial occlusion, as in any practical 2D object recognition system. The proposed system uses multi-view model representations, and objects are recognised by self-organised dynamic link matching. The merit of this approach is that it offers a compact framework for concurrent assessments of multiple match hypotheses by promoting competitions and/or co-operations among several local mappings of model and test image feature correspondences. Our experiments show that the system is very successful in recognising objects subject to perspective distortion, even in rather cluttered scenes. Received: 29 May 1998; Received in revised form: 12 October 1998; Accepted: 26 October 1998

20.