1.
We propose a novel method for unsupervised face recognition from time-varying sequences of face images obtained in real-world environments. The method uses the high level of sensory variation contained in the input image sequences to autonomously organize the data into an incrementally built graph structure, without relying on category-specific information provided in advance. This is achieved by "chaining" together similar views across the spatio-temporal representations of the face sequences in image space, using two types of connecting edges chosen according to local measures of similarity. The method was tested on real-world data gathered over a period of several months, including both frontal and side-view faces from 17 different subjects, and achieved a correct self-organization rate of 88.6%. The proposed method can be used in video surveillance systems or for content-based information retrieval.
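A minimal sketch of the chaining idea follows, assuming one precomputed feature vector per face view, a Euclidean similarity measure, and two illustrative thresholds for the strong and weak edge types; none of these specifics are stated in the abstract.

    # Sketch: incrementally chain face views into a graph by local similarity.
    # Feature extraction, the distance metric and both thresholds are assumptions.
    import numpy as np
    import networkx as nx

    def add_view(graph, features, view_id, strong_th=0.3, weak_th=0.6):
        """Insert one face view and link it to similar existing views."""
        graph.add_node(view_id, feat=features)
        for other, data in graph.nodes(data=True):
            if other == view_id:
                continue
            d = np.linalg.norm(features - data["feat"])  # local similarity measure
            if d < strong_th:
                graph.add_edge(view_id, other, kind="strong")  # near-duplicate view
            elif d < weak_th:
                graph.add_edge(view_id, other, kind="weak")    # plausibly same identity

    # Candidate identities then fall out as connected components:
    # clusters = list(nx.connected_components(graph))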
2.
3.
In this paper, we propose a method for semantic segmentation of pedestrian trajectories based on pedestrian behavior models, or agents. The agents model the dynamics of pedestrian movement in two-dimensional space using a linear dynamics model together with common start and goal locations of trajectories. First, agent models are estimated from trajectories obtained from image sequences. Our method builds on the Mixture model of Dynamic pedestrian Agents (MDA), but improves the MDA's trajectory modeling and estimation. The trajectories are then divided into semantically meaningful segments: the sub-segments of a trajectory are modeled by applying a hidden Markov model that uses the estimated agent models. Experimental results on a real trajectory dataset show the effectiveness of the proposed method compared with the well-known classical Ramer-Douglas-Peucker algorithm and with the original MDA model.
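As an illustration of the HMM-based segmentation step, the sketch below assigns each trajectory point to one of several agent models by Viterbi decoding; the isotropic Gaussian point likelihood, the self-transition probability, and representing each agent by a mean position are simplifying assumptions, not the paper's formulation.

    # Sketch: segment a 2D trajectory by labelling each point with an agent model.
    import numpy as np

    def segment_trajectory(points, agent_means, agent_cov=1.0, stay_prob=0.95):
        """Viterbi decoding; points: (n, 2) array, agent_means: list of (2,) arrays."""
        n, k = len(points), len(agent_means)
        # log-likelihood of each point under each agent (isotropic Gaussian assumption)
        ll = np.array([[-np.sum((p - m) ** 2) / (2 * agent_cov) for m in agent_means]
                       for p in points])
        trans = np.full((k, k), np.log((1 - stay_prob) / (k - 1)))
        np.fill_diagonal(trans, np.log(stay_prob))
        score = ll[0].copy()
        back = np.zeros((n, k), dtype=int)
        for t in range(1, n):
            cand = score[:, None] + trans      # cand[i, j]: best path ending i -> j
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + ll[t]
        states = [int(score.argmax())]
        for t in range(n - 1, 0, -1):
            states.append(int(back[t][states[-1]]))
        return states[::-1]                    # one agent label per point -> segments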
4.
The paper proposes a method for generating a sequence of images with a smooth change of illumination from two input images taken under different lighting conditions. The idea is based on image morphing: while conventional image morphing changes object shapes between two input images, here we focus on changing the illumination between the two images. The proposed method uses isoluminance curves as the feature primitive. Isoluminance curves acquired from the images are warped based on the correspondence of the curves between the two images, and transformed luminance distributions are generated from the warped curves. The proposed method, called "illumination morphing", can generate a smooth transition of luminance between two color images and requires no information about the light sources or 3D object models. It is a promising technique for applications that require scenes with a variety of lighting effects, such as movies and video games.
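The sketch below illustrates only the overall goal of producing intermediate-lighting frames between two registered photographs, by blending the luminance channel directly; the actual method instead warps corresponding isoluminance curves, so treat this as a simplified stand-in rather than the paper's algorithm.

    # Sketch: intermediate-lighting frames via direct luminance interpolation (simplification).
    import cv2
    import numpy as np

    def illumination_frames(img_a, img_b, steps=10):
        """img_a, img_b: registered BGR uint8 images of the same scene."""
        lab_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2LAB).astype(np.float32)
        lab_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2LAB).astype(np.float32)
        frames = []
        for t in np.linspace(0.0, 1.0, steps):
            lab = lab_a.copy()
            # blend only the L (luminance) channel; keep chrominance from image A
            lab[..., 0] = (1 - t) * lab_a[..., 0] + t * lab_b[..., 0]
            frames.append(cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR))
        return frames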
5.
We propose a new method for user-independent gesture recognition from time-varying images. The method uses relative-motion extraction and discriminant analysis to provide online learning and recognition abilities, and achieves efficient and robust extraction of motion information. It is computationally inexpensive, which allows real-time operation on a personal computer. The performance of the proposed method has been tested on several data sets and good generalization has been observed: it is robust to changes in background and illumination conditions and to the users' external appearance and spatial location, and it successfully copes with non-uniform gesture speed. No manual segmentation of any kind, or use of markers, is necessary. With these features, the method could be used as part of more refined human-computer interfaces.

Bisser R. Raytchev: He received his BS and MS degrees in electronics from Tokai University, Japan, in 1995 and 1997, respectively. He is currently a doctoral student in electronics and information sciences at Tsukuba University, Japan. His research interests include biological and computer vision, pattern recognition, and neural networks.

Osamu Hasegawa, Ph.D.: He received the B.E. and M.E. degrees in Mechanical Engineering from the Science University of Tokyo in 1988 and 1990, respectively, and the Ph.D. degree in Electrical Engineering from the University of Tokyo in 1993. He is currently a senior research scientist at the Electrotechnical Laboratory (ETL), Tsukuba, Japan. His research interests include computer vision and multi-modal human interfaces. Dr. Hasegawa is a member of the AAAI, the Institute of Electronics, Information and Communication Engineers, Japan (IEICE), the Information Processing Society of Japan, and others.

Nobuyuki Otsu, Ph.D.: He received B.S., M.Eng., and Dr.Eng. degrees in Mathematical Engineering from the University of Tokyo in 1969, 1971, and 1981, respectively. Since joining ETL in 1971, he has been engaged in theoretical research on pattern recognition, multivariate data analysis, and, in particular, their applications to image recognition. After serving as Head of the Mathematical Informatics Section (from 1985) and ETL Chief Senior Scientist (from 1990), he has been Director of the Machine Understanding Division since 1991 and, concurrently, a professor at the graduate school of Tsukuba University since 1992. He has been involved in the Real World Computing program, directing the project's R&D as Head of the Real World Intelligence Center at ETL. Dr. Otsu is a member of the Behaviormetric Society, the IEICE of Japan, and others.
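As a rough illustration of such a pipeline, the sketch below uses frame differencing as a crude relative-motion feature and scikit-learn's linear discriminant analysis as the classifier; the grid pooling, feature design, and classifier settings are assumptions for illustration rather than the authors' formulation.

    # Sketch: motion features from frame differencing, then discriminant analysis.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def motion_feature(frames, grid=(8, 8)):
        """frames: (T, H, W) grayscale array; returns a pooled motion-energy vector."""
        diff = np.abs(np.diff(frames.astype(np.float32), axis=0)).sum(axis=0)
        h, w = diff.shape
        gh, gw = grid
        cropped = diff[:h // gh * gh, :w // gw * gw]        # crop to grid multiples
        pooled = cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
        return pooled.ravel()

    # Training and recognition on labelled gesture sequences (hypothetical data):
    # clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    # label = clf.predict([motion_feature(new_sequence)])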
6.