8 search results
1.
Semantic scene classification based only on low-level vision cues has had limited success on unconstrained image sets. Camera metadata related to capture conditions, on the other hand, provides cues that are independent of the captured scene content and can be used to improve classification performance. We consider three problems: indoor-outdoor classification, sunset detection, and manmade-natural classification. Analysis of camera metadata statistics for images of each class revealed which metadata fields, such as exposure time, flash fired, and subject distance, are most discriminative for each problem. A Bayesian network is employed to fuse content-based and metadata cues in the probability domain; it degrades gracefully even when specific metadata inputs are missing (a practical concern). Finally, we present extensive experimental results on the three problems, using content-based and metadata cues, to demonstrate the efficacy of the proposed integrated scene classification scheme.
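The fusion idea in item 1 can be sketched as a simple odds-product combination of per-cue likelihood ratios, in which a missing metadata field is simply skipped, so the classifier degrades gracefully. This is a minimal illustration only: the naive-Bayes-style independence assumption, the field names, and all likelihood-ratio values are hypothetical, not taken from the paper's Bayesian network.

```python
def fuse_cues(prior, cue_likelihoods):
    """Fuse per-cue likelihood ratios, e.g. P(cue | indoor) / P(cue | outdoor).

    Cues whose value is None (a missing metadata field) are skipped,
    i.e. treated as uninformative, so the posterior degrades gracefully.
    """
    # Start from the prior odds of the positive class.
    odds = prior / (1.0 - prior)
    for ratio in cue_likelihoods.values():
        if ratio is None:          # metadata field absent in this image
            continue
        odds *= ratio
    return odds / (1.0 + odds)     # back to a posterior probability

# Hypothetical likelihood ratios for an indoor/outdoor decision.
p_full = fuse_cues(0.5, {"exposure_time": 4.0, "flash_fired": 3.0, "content": 2.0})
p_missing = fuse_cues(0.5, {"exposure_time": 4.0, "flash_fired": None, "content": 2.0})
```

With all three cues present the posterior is 0.96; dropping the flash cue lowers the confidence but still yields a usable answer, which is the "graceful degradation" the abstract refers to.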
2.
Scene Parsing Using Region-Based Generative Models
Semantic scene classification is a challenging problem in computer vision. In contrast to the common approach of using low-level features computed from the whole scene, we propose "scene parsing," which utilizes semantic object detectors (e.g., sky, foliage, and pavement) together with region-based scene-configuration models. Because semantic detectors are faulty in practice, it is critical to develop a region-based generative model of outdoor scenes based on the characteristic objects in a scene and the spatial relationships between them. Since a fully connected scene-configuration model is intractable, we model pairwise relationships between regions and estimate scene probabilities using loopy belief propagation on a factor graph. We demonstrate the promise of this approach on a set of over 2,000 outdoor photographs, comparing it with existing discriminative approaches and with approaches using low-level features.
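The pairwise region model in item 2 can be illustrated with a toy sum-product loopy belief propagation routine. This is a sketch only: the two-label setup, the single shared symmetric compatibility matrix, and every number below are assumptions made for illustration, not the paper's detectors or potentials.

```python
def loopy_bp(unary, edges, psi, iters=30):
    """Sum-product loopy BP on a pairwise model over image regions.

    unary[i][s] : detector score for region i having label s
    edges       : undirected region-adjacency pairs (i, j)
    psi[s][t]   : shared symmetric compatibility between neighbor labels
    Returns per-region marginal beliefs.
    """
    n, k = len(unary), len(unary[0])
    nbrs = {i: set() for i in range(n)}
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    # One normalized message per directed edge, initialized flat.
    msg = {(i, j): [1.0 / k] * k for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            m = []
            for t in range(k):            # candidate label of receiver j
                total = 0.0
                for s in range(k):        # label of sender i, summed out
                    prod = unary[i][s] * psi[s][t]
                    for u in nbrs[i] - {j}:
                        prod *= msg[(u, i)][s]
                    total += prod
                m.append(total)
            z = sum(m)
            new[(i, j)] = [v / z for v in m]
        msg = new
    # Beliefs: local evidence times all incoming messages, normalized.
    beliefs = []
    for i in range(n):
        b = []
        for s in range(k):
            v = unary[i][s]
            for u in nbrs[i]:
                v *= msg[(u, i)][s]
            b.append(v)
        z = sum(b)
        beliefs.append([v / z for v in b])
    return beliefs

# Two labels; regions 0 and 1 have confident detectors, region 2 is
# ambiguous; psi rewards neighboring regions that agree on a label.
unary = [[0.9, 0.1], [0.8, 0.2], [0.5, 0.5]]
edges = [(0, 1), (1, 2), (0, 2)]          # a small cycle, hence "loopy"
psi = [[2.0, 1.0], [1.0, 2.0]]
beliefs = loopy_bp(unary, edges, psi)
```

After message passing, the ambiguous region is pulled toward the label its confident neighbors agree on, which is how faulty individual detectors are compensated by the scene-configuration model.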
3.
Automatic image orientation detection for natural images is a useful yet challenging research topic. Humans use scene context and semantic object recognition to identify the correct image orientation. It is difficult for a computer to perform the task in the same way, however, because current object recognition algorithms are extremely limited in scope and robustness. As a result, existing orientation detection methods are built upon low-level vision features such as spatial distributions of color and texture, and discrepant detection rates have been reported for them in the literature. We have developed a probabilistic approach to image orientation detection via confidence-based integration of low-level and semantic cues within a Bayesian framework. Our current accuracy is 90 percent on unconstrained consumer photos, a figure that is impressive in light of the findings of a recent psychophysical study. The proposed framework is an attempt to bridge the gap between computer and human vision systems and is applicable to other problems involving semantic scene content understanding.
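One way to picture the confidence-based integration in item 3 is to shrink each cue's posterior toward the uniform distribution in proportion to its confidence before multiplying the cues together. The shrinkage form, the cue names, and the numbers are hypothetical illustrations, not the paper's actual Bayesian network.

```python
def integrate_cues(posteriors, confidences):
    """Confidence-weighted fusion of per-cue posteriors over the four
    candidate orientations (0, 90, 180, 270 degrees)."""
    combined = [1.0] * 4
    for post, conf in zip(posteriors, confidences):
        for r in range(4):
            # Shrink each cue toward uniform in proportion to (1 - conf):
            # a low-confidence cue barely moves the final answer.
            combined[r] *= conf * post[r] + (1.0 - conf) * 0.25
    z = sum(combined)
    return [v / z for v in combined]

# Hypothetical outputs: an unsure low-level (color/texture) cue and a
# confident semantic cue (e.g., a detected upright face).
low_level = [0.4, 0.3, 0.2, 0.1]
semantic = [0.7, 0.1, 0.1, 0.1]
fused = integrate_cues([low_level, semantic], [0.5, 0.9])
```

The high-confidence semantic cue dominates the fused posterior, while a cue reported with low confidence is nearly ignored rather than allowed to corrupt the decision.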
4.
Semantic scene classification is an open problem in computer vision, especially when information from only a single image is employed. In applications involving image collections, however, images are clustered sequentially, allowing surrounding images to be used as temporal context. We present a general probabilistic temporal context model in which the first-order Markov property is used to integrate content-based and temporal context cues. The model uses elapsed time-dependent transition probabilities between images to enforce the fact that images captured within a shorter period of time are more likely to be related. The model is general in that it allows arbitrary elapsed times between images, making it suitable for classifying image collections. In addition, we derive a variant of the model for use with ordered image collections for which no timestamp information is available, such as film scans. We applied the proposed context models to two problems, achieving significant gains in accuracy in both cases. The two algorithms used to implement inference within the context model, Viterbi and belief propagation, yielded similar results, with a slight edge to belief propagation.

Matthew Boutell received the BS degree in Mathematical Science from Worcester Polytechnic Institute, Massachusetts, in 1993, the MEd degree from the University of Massachusetts at Amherst in 1994, and the PhD degree in Computer Science from the University of Rochester, Rochester, NY, in 2005. He served for several years as a mathematics and computer science instructor at Norton High School and Stonehill College, and as a research intern/consultant at Eastman Kodak Company. Currently, he is an Assistant Professor of Computer Science and Software Engineering at Rose-Hulman Institute of Technology in Terre Haute, Indiana. His research interests include image understanding, machine learning, and probabilistic modeling.
Jiebo Luo received his PhD degree in Electrical Engineering from the University of Rochester, Rochester, NY, in 1995. He is a Senior Principal Scientist with the Kodak Research Laboratories. He was a member of the Organizing Committee of the 2002 IEEE International Conference on Image Processing and the 2006 IEEE International Conference on Multimedia and Expo, a guest editor for the Journal of Wireless Communications and Mobile Computing Special Issue on Multimedia Over Mobile IP and the Pattern Recognition journal Special Issue on Image Understanding for Digital Photos, and a member of the Kodak Research Scientific Council. He is on the editorial boards of the IEEE Transactions on Multimedia, Pattern Recognition, and the Journal of Electronic Imaging. His research interests include image processing, pattern recognition, computer vision, medical imaging, and multimedia communication. He has authored over 100 technical papers and holds over 30 granted US patents. He is a Kodak Distinguished Inventor and a Senior Member of the IEEE.

Chris Brown (BA Oberlin 1967, PhD University of Chicago 1972) is Professor of Computer Science at the University of Rochester. He has published in many areas of computer vision and robotics. He wrote COMPUTER VISION with his colleague Dana Ballard, and influential work on the "active vision" paradigm was reported in two special issues of the International Journal of Computer Vision. He edited the first two volumes of ADVANCES IN COMPUTER VISION for Erlbaum and (with D. Terzopoulos) REAL-TIME COMPUTER VISION for Cambridge University Press. He is the co-editor of VIDERE, the first entirely online refereed computer vision journal (MIT Press). His most recent PhD students have done research in infrared tracking and face recognition, features and strategies for image understanding, augmented reality, and three-dimensional reconstruction algorithms.
He supervised the undergraduate team that twice won the AAAI Host Robot competition (and came in third in the Robot Rescue competition in 2003).
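The temporal context model of item 4 can be sketched with a small Viterbi decoder whose transition matrix depends on the elapsed time between consecutive images. The two-class setup and the particular gap-dependent transition function are assumptions for illustration, not the paper's actual parameters.

```python
def viterbi(content_probs, gaps, transition):
    """Most likely class sequence for a run of images.

    content_probs[t][c] : content-based probability of class c for image t
    gaps[t]             : elapsed time between image t-1 and image t
    transition(gap)     : gap-dependent matrix T[c1][c2]
    """
    k = len(content_probs[0])
    score = list(content_probs[0])
    back = []
    for t in range(1, len(content_probs)):
        T = transition(gaps[t])
        new, ptr = [], []
        for c in range(k):
            best = max(range(k), key=lambda p: score[p] * T[p][c])
            new.append(score[best] * T[best][c] * content_probs[t][c])
            ptr.append(best)
        score, back = new, back + [ptr]
    # Trace back the best path.
    path = [max(range(k), key=lambda c: score[c])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

def transition(gap_seconds):
    # Hypothetical form: images shot within a minute of each other are
    # very likely to share a class; after that, the context weakens.
    stay = 0.9 if gap_seconds < 60 else 0.5
    return [[stay, 1 - stay], [1 - stay, stay]]

# Three images a few seconds apart; the middle one has weak content
# evidence for the other class, which temporal context overrides.
content_probs = [[0.9, 0.1], [0.4, 0.6], [0.8, 0.2]]
gaps = [0, 5, 10]   # gaps[0] is unused
path = viterbi(content_probs, gaps, transition)
```

Decoded jointly, the middle image is relabeled to agree with its temporal neighbors, which is exactly the gain the abstract attributes to elapsed-time-dependent transition probabilities.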
5.
In classic pattern recognition problems, classes are mutually exclusive by definition, and classification errors occur when the classes overlap in the feature space. We examine a different situation, in which the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene classification, document classification, and medical diagnosis. We present a framework for handling such problems and apply it to semantic scene classification, where a natural scene may contain multiple objects, so that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches to training and testing in this scenario and introduce new metrics for evaluating individual examples, per-class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.
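Multi-label evaluation of the kind item 5 describes can be sketched as follows. The per-example score below (Jaccard overlap between true and predicted label sets) is one common choice and an assumption here, not necessarily the paper's exact metric; the label names are invented for the example.

```python
def multilabel_scores(true_sets, pred_sets):
    """Per-example accuracy plus per-class recall and precision when an
    image may carry several class labels at once."""
    labels = set().union(*true_sets, *pred_sets)
    # Per-example score: Jaccard overlap of true and predicted sets.
    example_acc = sum(len(t & p) / len(t | p)
                      for t, p in zip(true_sets, pred_sets)) / len(true_sets)
    recall, precision = {}, {}
    for c in labels:
        tp = sum(1 for t, p in zip(true_sets, pred_sets) if c in t and c in p)
        recall[c] = tp / max(1, sum(1 for t in true_sets if c in t))
        precision[c] = tp / max(1, sum(1 for p in pred_sets if c in p))
    return example_acc, recall, precision

# A field-with-mountain scene for which only "field" was predicted,
# and a beach scene that was labeled correctly.
true_sets = [{"field", "mountain"}, {"beach"}]
pred_sets = [{"field"}, {"beach"}]
example_acc, recall, precision = multilabel_scores(true_sets, pred_sets)
```

Note how the first image is neither fully right nor fully wrong: it earns partial credit per example, while the missed "mountain" label shows up only in that class's recall, which is why per-class and per-example metrics are both needed.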
6.
Two proteins, VP19C (50,260 Da) and VP23 (34,268 Da), make up the triplexes which connect adjacent hexons and pentons in the herpes simplex virus type 1 capsid. VP23 was expressed in Escherichia coli and purified to homogeneity by Ni-agarose affinity chromatography. In vitro capsid assembly experiments demonstrated that the purified protein was functionally active. Its physical status was examined by differential scanning calorimetry, ultracentrifugation, size exclusion chromatography, circular dichroism, fluorescence spectroscopy, and 8-anilino-1-naphthalene sulfonate binding studies. These studies established that the bacterially expressed VP23 exhibits properties consistent with its being in a partially folded, molten globule state. We propose that the molten globule represents a functionally relevant intermediate which is necessary to allow VP23 to undergo interaction with VP19C in the process of capsid assembly.
7.
The performance of an exemplar-based scene classification system depends largely on the size and quality of its set of training exemplars, which can be limited in practice. In addition, in nontrivial data sets, variations in scene content, as well as distracting regions, may exist in many test images, preventing good matches with the exemplars. Various boosting schemes have been proposed in machine learning, focusing on the feature space. We introduce the novel concept of image-transform bootstrapping, which uses transforms in the image space to address such issues. In particular, we describe three major schemes that exploit this concept to augment training, testing, or both. We have successfully applied it to three applications of increasing difficulty: sunset detection, outdoor scene classification, and automatic image orientation detection. It is shown that appropriate transforms and meta-classification methods can be selected to boost performance according to the domain of the problem and the features and classifier used.
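The testing-side scheme in item 7 can be sketched as scoring an image together with transformed copies of itself and merging the scores. Everything below is a toy stand-in: the "image" is a list of pixel intensities, the "classifier" is mean brightness, and the mirror and center-crop transforms are hypothetical choices for illustration, not the paper's transforms or meta-classifier.

```python
def classify_with_transforms(image, classifier, transforms, combine=max):
    """Test-time image-transform bootstrapping: score the original image
    plus transformed copies and merge the scores with `combine`."""
    scores = [classifier(image)] + [classifier(t(image)) for t in transforms]
    return combine(scores)

# Toy example: a transform (here, cropping away dark borders) can expose
# a better match than the original image alone.
image = [0.2, 0.9, 0.1]
mirror = lambda im: im[::-1]
center_crop = lambda im: im[1:-1]
score = classify_with_transforms(image, lambda im: sum(im) / len(im),
                                 [mirror, center_crop])
```

Taking the maximum over transformed copies lets any one variant that matches an exemplar well carry the decision; the training-side scheme is the mirror image of this, adding transformed copies to the exemplar set instead.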
8.
Considerable research has been devoted to multimedia indexing and retrieval in the past decade. Limited by the state of the art in image understanding, however, the majority of existing content-based image retrieval (CBIR) systems have taken a relatively low-level approach and fallen short of higher-level interpretation and knowledge. Recent research has begun to focus on bridging the semantic and conceptual gap that exists between man and computer by integrating knowledge-based techniques, human perception, scene content understanding, psychology, and linguistics. In this article, we provide an overview of exploiting context for semantic scene content understanding.