Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Progress in scene understanding requires reasoning about the rich and diverse visual environments that make up our daily experience. To this end, we propose the Scene Understanding database, a nearly exhaustive collection of scenes categorized at the same level of specificity as human discourse. The database contains 908 distinct scene categories and 131,072 images. Given this data, with both scene and object labels available, we perform an in-depth analysis of co-occurrence statistics and the contextual relationships between scenes and objects. To better understand this large-scale taxonomy of scene categories, we perform two human experiments: we quantify human scene recognition accuracy, and we measure how typical each image is of its assigned scene category. Next, we perform computational experiments: scene recognition with global image features, indoor versus outdoor classification, and “scene detection,” in which we relax the assumption that one image depicts only one scene category. Finally, we relate the human experiments to machine performance, exploring the relationship between human and machine recognition errors and the relationship between image “typicality” and machine recognition accuracy.

2.
Objective: Image aesthetic attribute assessment can supply rich aesthetic elements and greatly improve the interpretability of image aesthetics. However, existing image aesthetic attribute assessment methods do not take the diversity of image scene categories into account, which limits their performance. To address this, we propose a deep multi-task convolutional neural network (multi task convolutional neural network, MTCNN) model that uses scene information to assist in predicting the aesthetic attributes of images. Method: The model consists of a two-stream deep residual network. One stream is trained on a scene-prediction task to extract scene features of the image; the other stream extracts aesthetic features. The two kinds of features are then fused and the network is trained via multi-task learning to predict the image's aesthetic attributes and overall aesthetic score. Results: To validate the model, experiments were conducted on the aesthetics and attributes database (AADB). In terms of the Spearman rank-order correlation coefficient (SRCC), the proposed method improves aesthetic attribute prediction by an average of 6.1% over the best competing results, and improves overall aesthetic score prediction by 6.2% over the best competing results. Conclusion: The proposed image aesthetic attribute...
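The multi-task training described above combines an attribute-prediction loss with an overall-score loss. A minimal sketch of such an objective is given below; the function name, the use of MSE for both terms, and the weighting factor `alpha` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def multitask_loss(pred_attrs, true_attrs, pred_score, true_score, alpha=0.5):
    # Attribute regression term: mean squared error over the attribute vector.
    attr_loss = np.mean((pred_attrs - true_attrs) ** 2)
    # Overall aesthetic score regression term.
    score_loss = (pred_score - true_score) ** 2
    # Weighted combination of the two task losses (alpha is a hyperparameter).
    return alpha * attr_loss + (1 - alpha) * score_loss
```

With perfect attribute predictions and a score off by 1, the loss reduces to `(1 - alpha) * 1`.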

3.
We present a data-driven method for synthesizing 3D indoor scenes by inserting objects progressively into an initial, possibly empty, scene. Instead of relying on a few hundred hand-crafted 3D scenes, we take advantage of existing large-scale annotated RGB-D datasets, in particular the SUN RGB-D database consisting of 10,000+ depth images of real scenes, to form the prior knowledge for our synthesis task. Our object insertion scheme follows a co-occurrence model and an arrangement model, both learned from the SUN dataset. The former selects a highly probable combination of object categories along with the number of instances per category, while a plausible placement is defined by the latter model. Compared to previous work on probabilistic learning for object placement, we make two contributions. First, we learn various classes of higher-order object-object relations, including symmetry, distinct orientation, and proximity, from the database. These relations make it possible to consider objects in semantically formed groups rather than individually. Second, while our algorithm inserts objects one at a time, it attains holistic plausibility of the whole current scene while offering controllability through progressive synthesis. We conducted several user studies to compare our scene synthesis performance to results obtained by manual synthesis, state-of-the-art object placement schemes, and variations of parameter settings for the arrangement model.
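The co-occurrence step above can be illustrated in miniature: given pairwise co-occurrence counts, choose the next category with the highest affinity to the objects already in the scene. The count matrix and the argmax scoring below are hypothetical stand-ins for statistics learned from SUN RGB-D, not the paper's actual model.

```python
import numpy as np

def next_category(cooc, scene):
    # cooc[i, j]: how often category j appears alongside category i.
    # Sum the co-occurrence rows of categories already in the scene
    # to get an affinity score for each candidate category.
    scores = cooc[scene].sum(axis=0)
    # Pick the most compatible category (a greedy, deterministic choice).
    return int(np.argmax(scores))
```

In practice one would sample from the normalized scores rather than take the argmax, so that repeated synthesis yields varied scenes.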

4.
Recently, various bag-of-features (BoF) methods have shown good resistance to within-class variations and occlusions in object categorization. In this paper, we present a novel approach for multi-object categorization within the BoF framework. The approach addresses two issues in BoF-related methods simultaneously: how to avoid scene modeling, and how to predict the labels of an image when multiple categories of objects co-exist. We employ a biased sampling strategy that combines bottom-up, biologically inspired saliency information with loose, top-down class prior information for object class modeling. This biased sampling component is then integrated with a multi-instance multi-label learning and classification algorithm. With the proposed biased sampling strategy, we can perform multi-object categorization within an image without semantic segmentation. Experimental results on PASCAL VOC2007 and SUN09 show that the proposed method significantly improves the discriminative ability of BoF methods and achieves good performance in multi-object categorization tasks.
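The biased sampling idea can be sketched as drawing patch locations from a distribution that fuses a saliency map with a class prior map. The multiplicative fusion below is an assumption chosen for simplicity; the paper's exact combination rule may differ.

```python
import numpy as np

def biased_sample(saliency, prior, n, rng=None):
    # Fuse bottom-up saliency and top-down class prior into one
    # sampling distribution over pixel locations (multiplicative fusion
    # is an illustrative assumption).
    rng = np.random.default_rng(rng)
    weights = (saliency * prior).ravel()
    probs = weights / weights.sum()
    idx = rng.choice(weights.size, size=n, p=probs)
    # Convert flat indices back to (row, col) coordinates.
    return np.column_stack(np.unravel_index(idx, saliency.shape))
```

Locations with zero fused weight are never sampled, which concentrates feature extraction on likely object regions.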

5.
Three-dimensional scene flow
Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow. When many cameras are used to image the scene, there are then two choices when computing scene flow: 1) perform the regularization in the images, or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms: the first two compute scene flow from optical flows, and the third constrains scene structure from the inconsistencies in multiple optical flows.
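A simplified sketch of the multi-camera idea: each camera relates the 3D scene flow `dX` (a 3-vector) to its observed 2D optical flow through the 2x3 Jacobian of its projection, `u_i ≈ J_i @ dX`. Stacking the equations from all cameras gives an overdetermined linear system solved by least squares. This is an illustration of the geometry only, not the paper's full regularized algorithms.

```python
import numpy as np

def scene_flow(jacobians, flows):
    # jacobians: list of (2, 3) projection Jacobians, one per camera.
    # flows: list of (2,) observed optical flow vectors, one per camera.
    A = np.vstack(jacobians)      # (2N, 3) stacked constraints
    b = np.concatenate(flows)     # (2N,) stacked observations
    # Least-squares estimate of the 3-D motion of the scene point.
    dX, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dX
```

With two or more cameras in general position the system is well-constrained, which is why multiple views remove the aperture-style ambiguity that limits single-view optical flow.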

6.
Instance-based attribute identification in database integration
Most research on attribute identification in database integration has focused on integrating attributes using schema and summary information derived from the attribute values. No research has attempted to fully explore the use of attribute values to perform attribute identification. We propose an attribute identification method that employs schema and summary instance information as well as properties of attributes derived from their instances. Unlike other attribute identification methods that match only single attributes, our method matches attribute groups for integration. Because our attribute identification method fully explores data instances, it can identify corresponding attributes to be integrated even when schema information is misleading. Three experiments were performed to validate our attribute identification method. In the first experiment, the heuristic rules derived for attribute classification were evaluated on 119 attributes from nine public domain data sets. The second was a controlled experiment validating the robustness of the proposed attribute identification method by introducing erroneous data. The third experiment evaluated the proposed attribute identification method on five data sets extracted from online music stores. The results demonstrated the viability of the proposed method. Received: 30 August 2001; Accepted: 31 August 2002; Published online: 31 July 2003. Edited by L. Raschid.
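A toy illustration of instance-based matching: summarize each attribute's values by simple statistics and pair attributes whose profiles are closest. The particular statistics and the Euclidean distance below are assumptions for illustration, not the paper's actual heuristic rules.

```python
import numpy as np

def profile(values):
    # Summary-instance profile of one attribute column.
    v = np.asarray(values, dtype=float)
    return np.array([v.mean(), v.std(), v.min(), v.max()])

def best_match(attr_values, candidates):
    # Find the candidate attribute whose instance profile is closest,
    # regardless of what the schema calls it.
    p = profile(attr_values)
    dists = {name: np.linalg.norm(p - profile(vals))
             for name, vals in candidates.items()}
    return min(dists, key=dists.get)
```

Because matching is driven by the data instances, two attributes with different schema names but the same value distribution still pair up.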

7.
We aim to identify the salient objects in an image by applying a model of visual attention. We automate the process by predicting the objects in an image that are most likely to be the focus of someone's visual attention. Concretely, we first generate fixation maps from eye-tracking data, which express the ground truth of people's visual attention for each training image. Then, we extract high-level features based on the bag-of-visual-words image representation as input attributes, along with the fixation maps, to train a support vector regression model. With this model, we can predict the saliency of a new query image. Our experiments show that the model provides a good estimate of human visual attention on test image sets containing one salient object as well as multiple salient objects. In this way, we seek to reduce the redundant information within the scene and thus provide a more accurate depiction of it.
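The pipeline above trains a regressor from bag-of-visual-words histograms to fixation-derived saliency values. As a dependency-free stand-in for support vector regression, the sketch below fits ridge regression in closed form; the substitution and the variable names are assumptions made to keep the example self-contained.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y.
    # X: (n_samples, n_features) BoVW histograms; y: saliency targets.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict(w, X):
    # Predicted saliency for new query images.
    return X @ w
```

An actual SVR (e.g. with an RBF kernel) can capture non-linear feature-to-saliency mappings that this linear sketch cannot.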

8.
There are many correlated attributes in a database. Conventional attribute selection methods are not able to handle such correlations and tend to eliminate important rules that exist in correlated attributes. In this paper, we propose an attribute selection method that preserves important rules on correlated attributes. We first compute a ranking of attributes using conventional attribute selection methods. In addition, we compute two-dimensional rules for each pair of attributes and evaluate their importance for predicting a target attribute. Then, we evaluate the shapes of important two-dimensional rules to identify hidden important attributes that are underestimated by conventional attribute selection methods. After the shape evaluation, we re-calculate the ranking so that the important correlations are preserved. Extensive experiments show that the proposed method can select important correlated attributes that are eliminated by conventional methods.
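Why pairwise rules matter can be shown with a tiny score: measure how well the joint value of two attributes predicts the target via per-group majority vote. The scoring function below is an illustrative stand-in, not the paper's rule-evaluation measure.

```python
from collections import Counter

def pair_score(col_a, col_b, target):
    # Group target values by the joint (a, b) value of the attribute pair.
    groups = {}
    for a, b, t in zip(col_a, col_b, target):
        groups.setdefault((a, b), []).append(t)
    # Majority-vote accuracy: how many targets the most common label
    # in each group would predict correctly.
    correct = sum(Counter(ts).most_common(1)[0][1] for ts in groups.values())
    return correct / len(target)
```

For an XOR-like target, each attribute alone scores only 0.5 (chance), while the pair scores 1.0; this is exactly the kind of correlated importance that single-attribute rankings miss.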

9.
10.
Scenes are closely related to the kinds of objects that may appear in them. Objects are widely used as features for scene categorization. On the other hand, landscapes with more spatial structure are representative of scene categories. In this paper, we propose a deep-learning-based algorithm for scene categorization. Specifically, we design two-pathway convolutional neural networks to exploit both the object attributes and the spatial structures of scene images. Unlike conventional deep learning methods, which usually focus on only one aspect of images, each pathway of the proposed architecture is tuned to capture a different aspect of images. As a result, complementary information about image contents can be utilized effectively. In addition, to deal with the feature redundancy caused by combining features from different sources, we adopt the ℓ2,1 norm during classifier training to control the selectivity of each type of feature. Extensive experiments are conducted to evaluate the proposed method, and the obtained results demonstrate that it achieves superior performance over conventional methods. Moreover, the proposed method is a general framework that can easily be extended to more pathways and applied to other problems.
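The ℓ2,1 norm mentioned above sums the Euclidean norms of a weight matrix's rows; penalizing it drives whole rows (i.e. whole feature dimensions) to zero, which is what makes it suitable for selecting among features from different pathways. A minimal implementation:

```python
import numpy as np

def l21_norm(W):
    # Sum of the Euclidean (l2) norms of the rows of W.
    # Each row corresponds to one feature dimension's weights across classes;
    # penalizing this sum encourages entire rows to vanish (feature selection).
    return np.sqrt((W ** 2).sum(axis=1)).sum()
```

Adding `lam * l21_norm(W)` to a classifier's training loss yields row-sparse weights, so redundant features from either pathway are suppressed as a group rather than coefficient by coefficient.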

11.
Statistics of natural image categories
In this paper we study the statistical properties of natural images belonging to different categories and their relevance for scene and object categorization tasks. We discuss how second-order statistics are correlated with image categories, scene scale and objects. We propose how scene categorization could be computed in a feedforward manner in order to provide top-down and contextual information very early in the visual processing chain. Results show how visual categorization based directly on low-level features, without grouping or segmentation stages, can benefit object localization and identification. We show how simple image statistics can be used to predict the presence and absence of objects in the scene before exploring the image.
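Second-order statistics of an image are captured by its power spectrum; averaging spectral energy over a few radial frequency bands gives a tiny holistic descriptor in the spirit of the paper. The specific radial binning below is an illustrative assumption.

```python
import numpy as np

def spectral_signature(img, bins=4):
    # Power spectrum of the image (second-order statistics).
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = f.shape
    ys, xs = np.mgrid[:h, :w]
    # Radial frequency of each spectral coefficient.
    r = np.hypot(ys - h / 2, xs - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, bins + 1)
    # Mean spectral energy in each radial band, low to high frequency.
    return np.array([f[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Such low-dimensional spectral summaries differ systematically across scene categories (e.g. man-made vs. natural), which is what makes feedforward categorization from them feasible.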

12.
Breakthrough performances have been achieved in computer vision by utilizing deep neural networks. In this paper we propose to use a random forest to classify image representations obtained by concatenating multiple layers of learned features of deep convolutional neural networks for scene classification. Specifically, we first use deep convolutional neural networks pre-trained on the large-scale image database Places to extract features from scene images. Then, we concatenate multiple layers of features of the deep neural networks as image representations. After that, we use a random forest as the classifier for scene classification. Moreover, to reduce feature redundancy in the image representations, we derive a novel feature selection method for selecting features that are suitable for random forest classification. Extensive experiments are conducted on two benchmark datasets, i.e. MIT-Indoor and UIUC-Sports, and the results demonstrate the effectiveness of the proposed method. The contributions of the paper are as follows. First, by extracting features from multiple layers of deep neural networks, we can exploit more information about image contents for determining their categories. Second, we propose a novel feature selection method that reduces redundancy in features obtained from deep neural networks for classification based on random forest. In addition, since deep learning methods can be used to augment expert systems by having the systems essentially train themselves, and since the proposed framework is general and easily extended to other intelligent systems that utilize deep learning, the proposed method provides a potential way to improve the performance of other expert and intelligent systems.
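The representation step above can be sketched as concatenating per-image features from several network layers and then filtering dimensions. The variance-based filter below is a simple stand-in for the paper's forest-oriented feature selection, which would require a trained random forest.

```python
import numpy as np

def concat_and_select(layer_feats, k):
    # layer_feats: list of (n_images, d_i) feature matrices, one per layer.
    # Concatenate along the feature axis to form the image representation.
    X = np.hstack(layer_feats)
    # Keep the k most variable dimensions (a stand-in for the paper's
    # selection criterion), preserving their original order.
    keep = np.argsort(X.var(axis=0))[::-1][:k]
    return X[:, np.sort(keep)]
```

Dropping near-constant dimensions before training keeps the random forest's splits focused on informative features.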

13.
There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. A context model can rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit of context models has been limited because most of the previous methods were tested on data sets with only a few object categories, in which most images contain one or two object categories. In this paper, we introduce a new data set with images that contain many instances of different object categories, and propose an efficient model that captures the contextual information among more than a hundred object categories using a tree structure. Our model incorporates global image features, dependencies between object categories, and outputs of local detectors into one probabilistic framework. We demonstrate that our context model improves object recognition performance and provides a coherent interpretation of a scene, which enables a reliable image querying system by multiple object categories. In addition, our model can be applied to scene understanding tasks that local detectors alone cannot solve, such as detecting objects out of context or querying for the most typical and the least typical scenes in a data set.

14.
15.
This paper is concerned with three-dimensional (3D) analysis, and analysis-guided synthesis, of images showing the 3D motion of an observer relative to a scene. The paper has two objectives. First, it presents an approach to recovering 3D motion and structure parameters from multiple cues present in a monocular image sequence, such as point features, optical flow, regions, lines, texture gradient, and the vanishing line. Second, it introduces the notion that the cues contributing the most to 3D interpretation are also the ones that yield the most realistic synthesis, suggesting an approach to analysis-guided 3D representation. For concreteness, the paper focuses on flight image sequences of a planar, textured surface. The integration of information from these diverse cues is carried out using optimization. For reliable estimation, a sequential batch method is used to compute motion and structure. Synthesis is done using (i) image attributes extracted from the image sequence and (ii) simple, artificial image attributes that are not present in the original images. For display, real and/or artificial attributes are shown as a monocular or a binocular sequence. Performance evaluation is done through experiments with one synthetic sequence and two real image sequences digitized from a commercially available videotape and a laserdisc. The attribute-based representation of these sequences compressed their sizes by factors of 502 and 367. The visualization sequence appears very similar to the original sequence in informal monocular as well as stereo viewing on a workstation monitor.

16.
Fine-grained image classification is a challenging research topic because of the high degree of similarity among categories and the high degree of dissimilarity within a category caused by differing poses and scales. Cultural heritage images are fine-grained in this sense, since images of different sites are often highly similar, which makes distinguishing cultural heritage architecture by classification difficult. This study proposes a cultural heritage content retrieval method using adaptive deep learning for fine-grained image retrieval. The key contribution of this research is a retrieval model that can handle incremental streams of new categories while maintaining its performance on old categories and without losing the existing categorization of cultural heritage images. The goal of the proposed method is to perform a retrieval task for classes. Incremental learning for new classes is conducted to reduce the re-training process; in this step the original classes are not needed for re-training, which we call an adaptive deep learning technique. Cultural heritage, in the case of Thai archaeological site architecture, is retrieved through machine learning and image processing. We analyze the experimental results of incremental learning for fine-grained images using images of Thai archaeological site architecture from world heritage provinces in Thailand, which share a similar architecture. Using a fine-grained image retrieval technique for this group of cultural heritage images can solve the problem of high inter-category similarity and high intra-category dissimilarity. The proposed method retrieves the correct image from a database with an average accuracy of 85 percent, and this adaptive deep learning approach to fine-grained image retrieval outperforms state-of-the-art methods.

17.
Image-to-image translation has been widely studied. Since real-world images can often be described by multiple attributes, it is useful to manipulate several of them at the same time. However, most methods focus on transforming between two domains, and when multiple single-attribute transform networks are chained together, the results depend on the order of chaining, and performance drops because intermediate results fall out of the training domain. Existing multi-domain transfer methods mostly manipulate multiple attributes by appending a list of attribute labels to the network features, but they suffer from interference between attributes and perform worse when multiple attributes are manipulated. We propose a novel approach to multi-attribute image-to-image translation using several parallel latent transform networks, in which multiple attributes are manipulated in parallel and simultaneously, eliminating both issues. To avoid interference between attributes, we introduce a novel soft independence constraint on the changes caused by different attributes. Extensive experiments show that our method outperforms state-of-the-art methods.

18.
This study introduces a novel conditional recycle generative adversarial network for facial attribute transformation, which can transform high-level semantic face attributes without changing the identity. In our approach, we input a source facial image to the conditional generator with the target attribute condition to generate a face with the target attribute. Then we recycle the generated face back through the same conditional generator with the source attribute condition. A face is generated which should be similar to the source face in personal identity and facial attributes. Hence, we introduce a recycle reconstruction loss to enforce the final generated facial image and the source facial image to be identical. Evaluations on the CelebA dataset demonstrate the effectiveness of our approach. Qualitative results show that our approach can learn and generate high-quality identity-preserving facial images with specified attributes.
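The recycle reconstruction loss described above can be sketched as follows: generate with the target attribute, feed the result back with the source attribute, and penalize any difference from the original image. `G` below is a stand-in for the conditional generator, and the L1 penalty is an assumption (the paper only states that the images should be identical).

```python
import numpy as np

def recycle_loss(G, x, src_attr, tgt_attr):
    # Translate the source image to the target attribute...
    x_tgt = G(x, tgt_attr)
    # ...then recycle it back to the source attribute.
    x_back = G(x_tgt, src_attr)
    # Penalize any deviation of the recycled image from the original
    # (L1 reconstruction), enforcing identity preservation.
    return np.abs(x_back - x).mean()
```

A generator that perfectly inverts its own attribute edits incurs zero recycle loss, so minimizing this term pushes the edits to be reversible and identity-preserving.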

19.
20.