Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Most work on multi-biometric fusion is based on static fusion rules. One prominent limitation of static fusion is that it cannot respond to changes in the environment or in the individual users. This paper proposes context-aware multi-biometric fusion, which can dynamically adapt the fusion rules to the real-time context. As a typical application, the context-aware fusion of gait and face for human identification in video is investigated. Two significant context factors that may affect the relationship between gait and face in the fusion are considered: view angle and subject-to-camera distance. Fusion methods adaptable to these two factors, based on either prior knowledge or machine learning, are proposed and tested. Experimental results show that the context-aware fusion methods perform significantly better than both the individual biometric traits and the widely adopted static fusion rules SUM, PRODUCT, MIN, and MAX. Moreover, context-aware fusion based on machine learning shows superiority over that based on prior knowledge.
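For reference, the static score-level fusion rules that the paper compares against can be sketched as follows (a minimal illustration assuming matching scores already normalized to [0, 1]; the example scores and the function interface are hypothetical, not the authors' implementation):

```python
import numpy as np

def static_fusion(scores, rule="SUM"):
    """Combine per-modality matching scores (assumed normalized to [0, 1])
    with one of the static rules the paper compares against."""
    scores = np.asarray(scores, dtype=float)
    if rule == "SUM":
        return scores.sum()
    if rule == "PRODUCT":
        return scores.prod()
    if rule == "MIN":
        return scores.min()
    if rule == "MAX":
        return scores.max()
    raise ValueError(f"unknown rule: {rule}")

# Example: gait and face scores for one probe subject (hypothetical values).
gait_score, face_score = 0.62, 0.81
for rule in ("SUM", "PRODUCT", "MIN", "MAX"):
    print(rule, static_fusion([gait_score, face_score], rule))
```

Context-aware fusion, by contrast, would adjust how the two scores are combined depending on view angle and subject-to-camera distance rather than applying one fixed rule.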

2.
Early detection of human actions is essential in a wide spectrum of applications ranging from video surveillance to health care. While human action recognition has been extensively studied, little attention has been paid to the problem of detecting an ongoing human action early, i.e. detecting an action as soon as it begins, but before it finishes. This study aims at training a detector capable of recognizing a human action when only a partial action sample has been seen. To do so, a hybrid technique is proposed which combines the benefits of computer vision with fuzzy set theory, based on the fuzzy Bandler and Kohout sub-triangle product (BK subproduct). The novelty lies in the construction of a frame-by-frame membership function for each kind of possible movement. Detection is triggered once a pre-defined threshold is reached. Experimental results on a publicly available dataset demonstrate the benefits and effectiveness of the proposed method.
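The abstract does not spell out the fusion step, but the BK sub-triangle product it relies on is a standard fuzzy-relational composition; a minimal sketch, assuming membership matrices and the Łukasiewicz implication (the choice of implication is an assumption, not stated in the abstract), might look like this:

```python
import numpy as np

def lukasiewicz_implication(a, b):
    # I(a, b) = min(1, 1 - a + b), one common choice of fuzzy implication.
    return np.minimum(1.0, 1.0 - a + b)

def bk_subproduct(R, S):
    """BK sub-triangle product (R <| S)(x, z) = inf_y I(R[x, y], S[y, z])
    for fuzzy relations given as membership matrices."""
    X, Y = R.shape
    Y2, Z = S.shape
    assert Y == Y2, "inner dimensions must match"
    out = np.empty((X, Z))
    for x in range(X):
        for z in range(Z):
            out[x, z] = lukasiewicz_implication(R[x, :], S[:, z]).min()
    return out

# Hypothetical example: degrees to which observed frames exhibit low-level
# features (R) and degrees to which features indicate each action class (S).
R = np.array([[0.9, 0.2], [0.4, 0.7]])
S = np.array([[0.8, 0.1], [0.3, 0.9]])
print(bk_subproduct(R, S))
```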

3.
Understanding human behavior from motion imagery (total citations: 3; self: 0; others: 3)
Computer vision is gradually making the transition from image understanding to video understanding, owing to the enormous success achieved in recent years in analyzing sequences of images. The main shift in the paradigm has been from recognition followed by reconstruction (shape from X) to motion-based recognition. Since most videos are about people, this work has focused on the analysis of human motion. In this paper, I present my perspective on understanding human behavior. Automatically understanding human behavior from motion imagery involves extraction of relevant visual information from a video sequence, representation of that information in a suitable form, and interpretation of the visual information for the purpose of recognition and learning about human behavior. Significant progress has been made in human tracking over the last few years. Compared with tracking, not much progress has been made in understanding human behavior, and the issue of representation has largely been ignored. I present my opinion on possible reasons and hurdles for the slower progress in understanding human behavior, briefly present our work in tracking, representation, and recognition, and comment on the next steps in all three areas. (Published online: 28 August 2003)

4.
5.
In this article, we describe a knowledge-based control platform using program supervision techniques. This platform eases the creation and configuration of video surveillance systems. Several issues need to be addressed to obtain a correct system configuration: (1) choosing, from a library of programs, those which best satisfy a given user request, (2) assigning a correct value to each program parameter, and (3) evaluating performance and guaranteeing a performance rate that satisfies end-user requirements. The platform is composed of three main components: the library of programs, the knowledge base, and the control component. The knowledge is either given by experts or learnt by the system. The control is generic in the sense that it is independent of any application. To validate this platform, we have built and evaluated six video surveillance systems featuring three properties: adaptability, reliability, and real-time processing.
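As a rough illustration of the configuration problem described above (choosing a program from the library and assigning its parameter values), here is a toy stand-in; the program names, knowledge-base entries, and performance scores are invented, not taken from the platform:

```python
# Hypothetical knowledge base: for each library program, the requests it can
# satisfy and default parameter values suggested by experts (or learnt).
KNOWLEDGE_BASE = {
    "blob_tracker":  {"handles": {"track_people"}, "params": {"min_blob_area": 200}},
    "dense_tracker": {"handles": {"track_people", "track_vehicles"},
                      "params": {"min_blob_area": 80, "max_targets": 40}},
}

def configure(request, performance):
    """Pick the programs that satisfy the request, then keep the one whose
    measured performance (a score in [0, 1]) is best; return it with its
    parameter assignment. A toy stand-in for the control component."""
    candidates = [name for name, entry in KNOWLEDGE_BASE.items()
                  if request in entry["handles"]]
    best = max(candidates, key=lambda name: performance.get(name, 0.0))
    return best, KNOWLEDGE_BASE[best]["params"]

print(configure("track_people", {"blob_tracker": 0.72, "dense_tracker": 0.65}))
```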

6.
Understanding pair-wise activities is an essential step towards studying complex group and crowd behaviors in video. However, such research is often hampered by a lack of datasets that concentrate specifically on Atomic Pair Actions. [Here, we distinguish between the atomic motion of individual objects and the atomic motion of pairs of objects. The term action in Atomic Pair Action means an atomic interaction movement of two objects in video; a pair activity, then, is composed of multiple actions by a pair, or by multiple pairs, of interacting objects. Please see Section 1 for details.] In addition, the general dearth in computer vision of a standardized, structured approach for reproducing and analyzing the efficacy of different models limits the ability to compare approaches. In this paper, we introduce the ISI Atomic Pair Actions dataset, a set of 90 videos that concentrate on the Atomic Pair Actions of objects in video, namely converging, diverging, and moving in parallel. We further incorporate a structured, end-to-end analysis methodology, based on workflows, that allows easy, automated, and standardized testing of state-of-the-art models, as well as inter-operability of varied codebases and incorporation of novel models. We demonstrate the efficacy of our structured framework by testing several models on the new dataset. In addition, we make the full dataset (the videos, along with their associated tracks and ground truth, and the exported workflows) publicly available to the research community for free use and extension at <http://research.sethi.org/ricky/datasets/>.
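The three atomic pair actions named above (converging, diverging, moving in parallel) can be illustrated with a naive rule over two object tracks; this is only a sketch for intuition, not one of the benchmarked models:

```python
import numpy as np

def classify_pair_action(track_a, track_b, tol=0.1):
    """Label a pair of 2-D trajectories (arrays of shape [T, 2]) as
    'converging', 'diverging', or 'parallel' from the net change in their
    inter-object distance. Hypothetical rule for illustration only."""
    d = np.linalg.norm(np.asarray(track_a) - np.asarray(track_b), axis=1)
    n = max(len(d) // 4, 1)
    change = d[-n:].mean() - d[:n].mean()   # net change in separation
    if change < -tol:
        return "converging"
    if change > tol:
        return "diverging"
    return "parallel"

t = np.linspace(0, 1, 50)[:, None]
walker = np.hstack([t, np.zeros_like(t)])            # moves along +x
stander = np.tile(np.array([[2.0, 0.0]]), (50, 1))   # stays at x = 2
print(classify_pair_action(walker, stander))         # -> 'converging'
```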

7.
8.
    
This study focuses on the question of how humans can be inherently integrated into cyber-physical systems (CPS) to reinforce their involvement in increasingly automated industrial processes. After a use-case-oriented review of the related research literature, a human-integration framework and associated data models are presented as part of a multi-agent IoT middleware called CHARIOT. The framework enables human actors to be semantically represented and registered, together with other IoT entities, in a common service directory, thereby facilitating their inclusion in complex service chains. To validate and evaluate the proposed framework, a user study is conducted on a setup where a human and a robot arm collaborate on a "pick-assemble-place" job on a conveyor belt. Based on the human skill-set parameters obtained from the user study, online and offline variants of task assignment on the conveyor-belt setup are implemented and analyzed using the presented framework. The results illustrate possible efficiency gains through the consolidated online monitoring and control of all cyber-physical system entities, including human actors.
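A minimal sketch of the kind of service-directory registration and skill-based task assignment described above; the data model, actor names, and timing values are hypothetical, not CHARIOT's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """Hypothetical directory entry for a CPS entity (human or robot)."""
    name: str
    kind: str                                   # "human" or "robot"
    skills: dict = field(default_factory=dict)  # skill -> estimated time (s)

directory = [
    Actor("operator_1", "human", {"pick": 4.0, "assemble": 12.0, "place": 3.0}),
    Actor("robot_arm_1", "robot", {"pick": 2.5, "place": 2.0}),
]

def assign(task):
    """Greedy (offline-style) assignment: pick the registered actor with the
    lowest estimated time for the task; illustrative only."""
    capable = [a for a in directory if task in a.skills]
    return min(capable, key=lambda a: a.skills[task]).name if capable else None

for task in ("pick", "assemble", "place"):
    print(task, "->", assign(task))
```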

9.
Recognizing context for annotating a live life recording (total citations: 1; self: 1; others: 1)
In the near future, it will be possible to continuously record and store the entire audio–visual lifetime of a person together with all digital information that the person perceives or creates. While the storage of this data will be possible soon, retrieval and indexing into such large data sets are unsolved challenges. Since today's retrieval cues seem insufficient, we argue that additional cues, obtained from body-worn sensors, make associative retrieval by humans possible. We present three approaches to create such cues, each along with an experimental evaluation: the user's physical activity from acceleration sensors, his social environment from audio sensors, and his interruptibility from multiple sensors.
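As an illustration of the first cue (physical activity from acceleration sensors), a crude window-level heuristic is sketched below; the thresholds and labels are made up, and the paper's actual features and classifiers are not specified here:

```python
import numpy as np

def activity_cue(acc_window, still_thresh=0.05, walk_thresh=0.5):
    """Very rough physical-activity cue from one window of 3-axis
    accelerometer samples (shape [N, 3], in g). Thresholds are invented."""
    magnitude = np.linalg.norm(acc_window, axis=1)
    variance = magnitude.var()
    if variance < still_thresh:
        return "resting"
    if variance < walk_thresh:
        return "walking"
    return "vigorous activity"

window = np.random.normal([0, 0, 1], 0.02, size=(100, 3))  # nearly still sensor
print(activity_cue(window))  # -> 'resting'
```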

10.
A Human Activity Recognition Method Based on Deep Learning with Convolutional Neural Networks (total citations: 2; self: 0; others: 2)
王忠民, 曹洪江, 范琳. Computer Science (《计算机科学》), 2016, 43(Z11): 56-58, 87
To improve the accuracy of human activity recognition on smart terminals, a human activity recognition method based on deep learning with convolutional neural networks is proposed. The method applies only simple preprocessing to the raw data and feeds it directly into a convolutional neural network, which performs local feature analysis; the resulting feature outputs are passed directly to a Softmax classifier, which recognizes five activities: walking, running, going upstairs, going downstairs, and standing. Comparative experiments show a recognition rate of 84.8% across different subjects, demonstrating the effectiveness of the method.
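A minimal sketch of such a network (raw sensor windows in, Softmax over five activity classes out), written in PyTorch; the layer sizes, channel count, and window length are guesses, not the architecture reported in the paper:

```python
import torch
import torch.nn as nn

class HARConvNet(nn.Module):
    """Minimal 1-D CNN in the spirit of the abstract: raw sensor windows in,
    Softmax over five activity classes out. Layer sizes are assumptions."""
    def __init__(self, channels=3, window=128, classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (window // 4), classes)

    def forward(self, x):                  # x: [batch, channels, window]
        z = self.features(x).flatten(1)
        return torch.softmax(self.classifier(z), dim=1)

model = HARConvNet()
print(model(torch.randn(8, 3, 128)).shape)   # -> torch.Size([8, 5])
```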

11.
12.
Multi-Camera Human Activity Monitoring (total citations: 1; self: 0; others: 1)
With the proliferation of security cameras, the approach taken to the monitoring and placement of these cameras is critical. This paper presents original work in the area of multiple-camera human activity monitoring. First, a system is presented that tracks pedestrians across a scene of interest and recognizes a set of human activities. Next, a framework is developed for the placement of multiple cameras to observe a scene. This framework was originally used in a limited X, Y, pan formulation but is extended to include height (Z) and tilt. Finally, an active dual-camera system for task recognition at multiple resolutions is developed and tested. All of these systems are tested under real-world conditions and are shown to produce usable results. This work has been supported by the NSF through grants #IIS-0219863, #CNS-0224363, #CNS-0324864, #IIP-0443945, #CNS-0420836, #IIP-0726109, and #CNS-0708344.

13.
The study of human activity is applicable to a large number of science and technology fields, such as surveillance, biomechanics, and sports applications. This article presents BB6-HM, a block-based human model for real-time monitoring of a large number of visual events and states related to human activity analysis, which can be used as components of a library for describing more complex activities in important settings such as surveillance: luggage at airports, clients' behaviour in banks, and patients in hospitals. BB6-HM is inspired by the proportionality rules commonly used in the visual arts, i.e., dividing the human silhouette into six rectangles of the same height. The major advantage of this proposal is that the analysis of the human figure can easily be broken down into regions, from which information about activities can be obtained. The computational load is very low, so a very fast implementation is possible. Finally, this model has been applied to build classifiers for the detection of primitive events and visual attributes using heuristic rules and machine learning techniques.
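The core idea of splitting the silhouette into six equal-height blocks can be sketched as follows; the per-block feature here is just the foreground ratio, whereas the actual BB6-HM descriptors are richer:

```python
import numpy as np

def six_block_features(silhouette_mask):
    """Split a binary silhouette mask (bounding-box crop, shape [H, W]) into
    six horizontal blocks of equal height and return the foreground ratio of
    each block. A simple illustration of the block-based idea."""
    blocks = np.array_split(silhouette_mask, 6, axis=0)
    return [float(b.mean()) for b in blocks]

mask = np.zeros((60, 30), dtype=np.uint8)
mask[5:55, 10:20] = 1                      # crude upright figure
print(six_block_features(mask))
```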

14.
Recognition of Standard Frontal Face Images (total citations: 7; self: 0; others: 7)
This paper selects 27 characteristic points on the face as the basic facial features. Based on the geometric structure of the face and combined with the psychological characteristics of face recognition, a novel, simple, and highly accurate "search-for-existence" method is proposed, which greatly improves the speed and accuracy of feature-point extraction. After a detailed analysis of the statistical properties of these 27 points, 15 information-rich inter-point distances and distance ratios are selected to form a feature vector that replaces the full face description, and a weighted Euclidean distance is used to measure the similarity between feature vectors. In two types of experiments, recognition rates of 100% and 98% are achieved.
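The similarity measure described above, a weighted Euclidean distance over the selected inter-point distances and ratios, can be sketched as follows; the vectors and weights below are invented for illustration:

```python
import numpy as np

def weighted_euclidean(u, v, w):
    """Weighted Euclidean distance between two feature vectors. In the paper
    the weights would come from the statistical analysis of the 27 landmarks;
    the values below are made up."""
    u, v, w = map(np.asarray, (u, v, w))
    return float(np.sqrt(np.sum(w * (u - v) ** 2)))

probe   = np.array([0.42, 0.63, 1.10, 0.85])   # hypothetical distance ratios
gallery = np.array([0.40, 0.66, 1.08, 0.90])
weights = np.array([1.5, 1.0, 0.8, 1.2])
print(weighted_euclidean(probe, gallery, weights))
```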

15.
Detection of anomalies is a broad field of study, applied in areas such as data monitoring, navigation, and pattern recognition. In this paper we propose two measures to detect anomalous behaviors in an ensemble of classifiers by monitoring their decisions: one based on the Mahalanobis distance and another based on information theory. These approaches are useful when an ensemble of classifiers is used and a decision is made by ordinary classifier fusion methods, while each classifier is devoted to monitoring part of the environment. Upon detection of anomalous classifiers, we propose a strategy that attempts to minimize the adverse effects of faulty classifiers by excluding them from the ensemble. We applied this method to an artificial dataset and to sensor-based human activity datasets, with different sensor configurations and two types of noise (additive and rotational, on inertial sensors). We compared our method with two other well-known approaches, the generalized likelihood ratio (GLR) and the One-Class Support Vector Machine (OCSVM), which detect anomalies at the data/feature level.
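A minimal sketch of the Mahalanobis-based measure: score how far the current vector of classifier decisions lies from its recent history, a large value suggesting that one or more classifiers are behaving anomalously. The data, dimensions, and regularization are illustrative only:

```python
import numpy as np

def mahalanobis_score(decisions, history):
    """Mahalanobis distance of the current decision vector (shape [C]) from
    the distribution of past decision vectors (shape [T, C])."""
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False) + 1e-6 * np.eye(history.shape[1])
    diff = decisions - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
history = rng.normal(0.8, 0.05, size=(200, 3))   # three well-behaved classifiers
print(mahalanobis_score(np.array([0.82, 0.79, 0.20]), history))  # large -> anomaly
```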

16.
17.
This paper provides a novel approach to detecting unattended packages in public venues. Unlike previous works on this topic, which are mostly limited to detecting static objects with no human nearby, we provide a solution which can detect an unattended package with people in its close proximity but not its owner. Mimicking the human logic in detecting such an event, our decision-making is based on understanding human activity and the relationships between humans and packages. There are three main contributions in this paper. First, an efficient method is provided to categorize moving objects online into predefined classes using eigen-features and support vector machines (SVM). Second, utilizing the classification results, a method is developed to recognize human activities with hidden Markov models (HMM) and decide package ownership. Finally, unattended package detection is achieved by analyzing multiple object relationships: package ownership, and spatial and temporal distance relationships.
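The final decision step can be caricatured as a rule that combines ownership with spatial and temporal distance; the thresholds and data layout below are invented, not the paper's actual HMM-based pipeline:

```python
def is_unattended(package, people, dist_thresh=3.0, time_thresh=30.0, now=0.0):
    """Toy decision rule: a package is unattended once its owner has been
    farther than dist_thresh for longer than time_thresh, regardless of how
    close non-owners are. Thresholds are illustrative."""
    owner = next((p for p in people if p["id"] == package["owner_id"]), None)
    if owner is None:
        return True                                  # owner left the scene
    far = owner["dist_to_package"] > dist_thresh
    too_long = now - package["owner_last_near"] > time_thresh
    return far and too_long

people = [{"id": "p1", "dist_to_package": 12.0}, {"id": "p2", "dist_to_package": 0.5}]
package = {"owner_id": "p1", "owner_last_near": 0.0}
print(is_unattended(package, people, now=45.0))      # -> True
```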

18.
19.
We describe a middleware framework for the adaptive delivery of context information to context-aware applications. The framework abstracts the applications from the sensors that provide context. Furthermore, applications define utility functions over the quality-of-context attributes that describe the context providers. Then, given multiple alternatives for providing the same type of context, the middleware applies the utility function to each alternative and chooses the one with maximum utility. By allowing applications to delegate the selection of the context source to the middleware, our middleware can implement autonomic properties, such as self-configuration when new context providers appear and resilience to failures of context providers.
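A minimal sketch of the utility-driven provider selection described above, assuming hypothetical quality-of-context attribute names and an invented utility function:

```python
def pick_provider(providers, utility):
    """Choose the context provider that maximizes the application-supplied
    utility over its quality-of-context attributes. Attribute names are
    hypothetical; the middleware's real data model is not shown here."""
    return max(providers, key=lambda p: utility(p["qoc"]))

providers = [
    {"name": "gps",  "qoc": {"accuracy_m": 5.0,  "freshness_s": 1.0}},
    {"name": "wifi", "qoc": {"accuracy_m": 30.0, "freshness_s": 0.2}},
]
# This application cares mostly about accuracy, a little about freshness.
utility = lambda q: 10.0 / q["accuracy_m"] + 0.1 / q["freshness_s"]
print(pick_provider(providers, utility)["name"])     # -> 'gps'
```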

20.
    
Sensor-based human activity recognition (HAR), with the ability to recognise human activities from wearable or embedded sensors, has been playing an important role in many applications including personal health monitoring, smart homes, and manufacturing. The real-world, long-term deployment of these HAR systems raises a critical research question: how to evolve the HAR model automatically over time to accommodate changes in the environment or in activity patterns. This paper presents an online continual learning (OCL) scenario for HAR, where sensor data arrives in a streaming manner and contains unlabelled samples from already-learnt activities or from new activities. We propose a technique, OCL-HAR, that makes real-time predictions on the streaming sensor data while at the same time discovering and learning new activities. We have empirically evaluated OCL-HAR on four third-party, publicly available HAR datasets. Our results show that this OCL scenario is challenging for state-of-the-art continual learning techniques, which significantly underperform. Our technique OCL-HAR consistently outperforms them in all experimental setups, leading to improvements of up to 0.17 and 0.23 in micro and macro F1 scores, respectively.
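The OCL scenario itself can be sketched as a streaming loop that predicts, flags poorly explained windows as possible new activities, and extends the model once enough evidence accumulates; this is a generic nearest-centroid sketch, not the OCL-HAR algorithm:

```python
import numpy as np

class StreamingHAR:
    """Generic sketch of the online continual-learning scenario: classify each
    incoming (unlabelled) sensor window with a nearest-centroid model, treat
    poorly explained windows as evidence of a new activity, and add a class
    once enough such windows accumulate."""
    def __init__(self, centroids, novelty_dist=2.0, min_evidence=20):
        self.centroids = list(centroids)     # one feature centroid per known activity
        self.novelty_dist = novelty_dist
        self.min_evidence = min_evidence
        self.buffer = []

    def step(self, x):
        d = [np.linalg.norm(x - c) for c in self.centroids]
        best = int(np.argmin(d))
        if d[best] > self.novelty_dist:      # poorly explained: maybe a new activity
            self.buffer.append(x)
            if len(self.buffer) >= self.min_evidence:
                self.centroids.append(np.mean(self.buffer, axis=0))
                self.buffer.clear()
                return len(self.centroids) - 1   # index of newly discovered activity
            return None                           # undecided for now
        return best                               # index of a known activity

har = StreamingHAR(centroids=[np.zeros(3), np.ones(3) * 5])
print(har.step(np.array([0.1, -0.2, 0.05])))      # -> 0 (known activity)
```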
