1.
This paper presents a new method for three-dimensional object tracking that fuses information from stereo vision and stereo audio. From the audio data, directional information about an object is extracted by Generalized Cross Correlation (GCC), and the object's position in the video data is detected using the Continuously Adaptive Mean shift (CAMshift) method. The resulting localization estimates, combined with confidence measurements, are then fused to track the object using Particle Swarm Optimization (PSO). In our approach the particles move in 3D space and iteratively evaluate their current position against the localization estimates of the audio and video modules and their confidences, which allows the object's three-dimensional position to be determined directly. The technique has low computational complexity, and, unlike classical methods, its tracking performance does not depend on any model, statistics, or assumptions. The confidence measurements further increase the robustness and reliability of the entire tracking system and enable adaptive, dynamic fusion of heterogeneous sensor information.
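The fusion step can be sketched as follows. The cost function, confidence weighting, and all parameter values are illustrative assumptions, not taken from the paper: particles roam 3D space and are scored by a confidence-weighted sum of their angular deviation from the audio direction estimate and their distance to the video position estimate.

```python
import math
import random

def pso_fuse(audio_dir, video_pos, conf_audio, conf_video,
             n_particles=30, n_iters=50, bounds=(-5.0, 5.0)):
    """Estimate a 3D position by minimising a confidence-weighted cost
    combining an audio direction estimate (unit vector) and a video
    position estimate. Cost and weights are illustrative only."""
    def cost(p):
        # Angular deviation of the particle direction from the audio direction.
        norm = math.sqrt(sum(x * x for x in p)) or 1e-9
        cos_a = sum(pi * di for pi, di in zip(p, audio_dir)) / norm
        audio_err = 1.0 - max(-1.0, min(1.0, cos_a))
        # Euclidean distance to the video position estimate.
        video_err = math.sqrt(sum((pi - vi) ** 2 for pi, vi in zip(p, video_pos)))
        return conf_audio * audio_err + conf_video * video_err

    rng = random.Random(0)
    pos = [[rng.uniform(*bounds) for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest
```

With the audio direction pointing at the video estimate and equal confidences, the swarm converges to the common optimum; lowering one modality's confidence shifts the estimate toward the other, which is what makes the fusion adaptive.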
2.
Performing and composing for an interactive audiovisual system present many challenges to the performer. Working with visual, sonic, and gestural components requires new skills and new ways of thinking about performance, yet few studies focus on the performer's experience with interactive systems. We present the work Blue Space for oboe and interactive audiovisual system, highlighting the evolving, collaborative process through which the work was developed. We consider how musical and technical demands interact in this process and outline the challenges of performing with interactive systems. Using the development of Blue Space as a self-reflective case study, we examine the role of gestures in interactive audiovisual works and identify new modes of performance.
3.
This study aims to design and implement new learning methodologies and resources that advance the development and assessment of critical thinking, one of the fundamental competencies in any Business Administration degree. It is an exploratory study of computing for human learning, specifically the learning of key competencies for business. We use an audiovisual case methodology based on short film clips, usually of real stories, to help students understand the practical implications of the theoretical concepts explained in the classroom. The theoretical model is tested with data from 32 business students at the Open University of Catalonia. Initial results show positive attitudes toward the new technological resource, audiovisual cases: the tool can improve comprehension of a problem and its origins while stimulating learning, and it helps develop the critical-thinking competency. The study makes important contributions to e-learning environments and their applicability to the workplace, since it is the first research on the impact of audiovisual cases on the acquisition of critical thinking. Furthermore, the methodology promotes collaborative learning.
4.
This article highlights the importance of complementing classes with educational videos, especially in disciplines taught exclusively through lectures. The proposition is illustrated with basic concepts of alternating current in single-phase and three-phase circuits, which are critical in the training of electrical, mechanical, and chemical engineers, among others. The main objective of producing educational videos is to make learning more attractive and to stimulate interest in mastering certain topics, given their importance in professional life across engineering fields. The videos, filmed in the laboratory and short in duration, aim to complement and consolidate the content taught in classroom lectures.
5.
Current audiovisual speech separation models mostly concatenate video and audio features directly, without fully modelling the relationships between the modalities, so visual information is under-exploited and separation quality suffers. Taking the interdependence of visual and audio features into account, this paper combines a multi-head attention mechanism with the time-domain convolutional separation model (Conv-TasNet) and the dual-path recurrent neural network (DPRNN) to propose a multi-head-attention time-domain audiovisual speech separation (MHATD-AVSS) model. An audio encoder and a visual encoder extract audio features and lip features from the video; multi-head attention then fuses the two modalities to obtain joint audiovisual features, which are passed through the DPRNN separation network to recover the speech of the individual speakers. Experiments on the VoxCeleb2 dataset use the perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and signal-to-noise ratio (SNR) metrics. When separating mixtures of two, three, or four speakers, the proposed method improves SDR over conventional separation networks by at least 1.87 dB, and by up to 2.29 dB. The method thus exploits the phase information of the audio signal and the correlation between visual and audio information, extracting more accurate audiovisual features and achieving better separation.
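The cross-modal fusion step can be sketched as follows. The random projection matrices stand in for learned weights, and the function name, shapes, and residual connection are illustrative assumptions, not the paper's implementation: audio frames act as queries that attend to the lip features.

```python
import numpy as np

def multihead_cross_attention(audio, visual, n_heads=4, seed=0):
    """Sketch of cross-modal fusion: audio frames attend to lip features.
    Shapes: audio (T, d), visual (T, d); d must be divisible by n_heads.
    Random projections stand in for learned weights (illustrative only)."""
    T, d = audio.shape
    dh = d // n_heads
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q, K, V = audio @ Wq, visual @ Wk, visual @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)        # (T, T)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)          # row-wise softmax
        heads.append(attn @ V[:, s])                      # (T, dh)
    fused = np.concatenate(heads, axis=-1) @ Wo           # (T, d)
    return audio + fused                                  # residual fusion
```

Each head sees a different slice of the feature space, so the fused output can weight different lip-movement cues for different audio frames; in the actual model the projections are trained end to end with the DPRNN separator.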
6.
To address data errors caused by interference when an HDMI 2.0 repeater transmits data, a scheme is proposed that uses forward error correction (FEC) to correct the erroneous video and audio data caused by corrupted control-period data. The concrete procedure for combining FEC with the HDMI 2.0 protocol is given, and a data error-correction module for the HDMI 2.0 interface is designed. Synthesizable Verilog code implementing the circuit was written on the Cadence platform, and the module was verified with systematic test methods. The results show that the error-correction module integrates cleanly with the HDMI 2.0 interface, effectively corrects errors arising during the repeater's data transmission, improves transmission reliability, and enhances audiovisual quality.
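The abstract does not specify which FEC code the module uses; as a minimal sketch of the principle, a Hamming(7,4) code adds three parity bits to four data bits and can locate and correct any single flipped bit in the 7-bit block.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits.
    Layout: [p1, p2, d1, p3, d2, d3, d4] (bit positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the corrupted bit
    if syndrome:
        c[syndrome - 1] ^= 1         # flip it back
    return [c[2], c[4], c[5], c[6]]
```

The syndrome directly encodes the position of the bad bit, so correction costs only a few XOR gates per block, which is why this family of codes maps naturally onto a hardware module like the one described.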
7.
8.
Kwahk J, Han SH. Applied Ergonomics, 2002, 33(5): 419-431
Usability evaluation is now considered an essential procedure in consumer product development, and many studies have developed techniques and methods of usability evaluation to help evaluators choose appropriate ones. However, planning and conducting a usability evaluation requires consideration of many factors surrounding the evaluation process, including product, user, activity, and environmental characteristics. From this perspective, this study proposes a new usability evaluation methodology built on a simple, structured framework. The framework has three major components: the interface features of a product as design variables; the evaluation context, consisting of user, product, activity, and environment, as context variables; and the usability measures as dependent variables. Based on this framework, the study establishes methods to specify product interface features, define the evaluation context, and measure usability. The effectiveness of the methodology is demonstrated through case studies in which the usability of audiovisual products was evaluated with the methods developed here. The study is expected to help usability practitioners in the consumer electronics industry in various ways. Most directly, it supports evaluators in planning and conducting usability evaluation sessions in a systematic, structured manner. It can also be applied to other categories of consumer products (such as appliances, automobiles, and communication devices) with minor modifications as necessary.
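The three-component framework might be organised as a simple data structure; the field names and example values below are illustrative assumptions, not taken from the study.

```python
from dataclasses import dataclass, field

@dataclass
class UsabilityEvaluation:
    """Sketch of the three-component framework: design variables,
    context variables, and usability measures as dependent variables."""
    design_variables: dict = field(default_factory=dict)   # product interface features
    context_variables: dict = field(default_factory=dict)  # user, product, activity, environment
    usability_measures: dict = field(default_factory=dict) # dependent variables

# Hypothetical evaluation session for an audiovisual product.
ev = UsabilityEvaluation(
    design_variables={"display": "LCD", "controls": "remote"},
    context_variables={"user": "novice", "activity": "playback",
                       "environment": "living room"},
    usability_measures={"task_time_s": 42.0, "satisfaction": 4.1},
)
```

Separating the three variable groups mirrors the framework's intent: the same design variables can be re-evaluated under a different context, and the measures collected stay comparable across sessions.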
9.
Automatic detection of the level of human interest is highly relevant for many technical applications, such as automatic customer care or tutoring systems. However, recognizing spontaneous interest in natural, subject-independent conversations remains a challenge. Identifying human affective states from a single modality is often impossible, even for humans, since different modalities carry partially disjoint cues. Multimodal approaches to human affect recognition generally boost recognition performance, yet they have been evaluated only in restrictive laboratory settings. Here we introduce a fully automatic processing combination of Active-Appearance-Model-based facial expression, vision-based eye-activity estimation, acoustic features, linguistic analysis, non-linguistic vocalisations, and temporal context information in an early feature fusion process. We provide detailed subject-independent results for classification and regression of the Level of Interest using Support Vector Machines on an audiovisual interest corpus (AVIC) of spontaneous, conversational speech, demonstrating the "theoretical" effectiveness of the approach. Further, to evaluate real-life usability, a user study is conducted as proof of "practical" effectiveness.
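Early feature fusion as described above can be sketched as follows; the feature dimensions, names, and the context-stacking helper are illustrative assumptions, not the paper's feature set. Per-frame vectors from every modality are concatenated into one joint vector, optionally augmented with neighbouring frames for temporal context, before a single classifier or regressor sees them.

```python
import numpy as np

def early_fusion(modalities):
    """Early feature fusion: per-frame feature vectors from each modality
    are concatenated into a single joint vector before classification."""
    return np.concatenate(modalities, axis=-1)

def with_context(frames, k=1):
    """Append k neighbouring frames on each side (zero-padded) to add
    temporal context to each frame's feature vector."""
    T, d = frames.shape
    padded = np.vstack([np.zeros((k, d)), frames, np.zeros((k, d))])
    return np.hstack([padded[i:i + T] for i in range(2 * k + 1)])

# One frame: facial-expression, eye-activity, acoustic, linguistic features.
face = np.array([0.2, 0.7])
eyes = np.array([0.1])
acoustic = np.array([0.5, 0.3, 0.9])
linguistic = np.array([1.0, 0.0])
joint = early_fusion([face, eyes, acoustic, linguistic])  # shape (8,)
```

Because fusion happens before learning, a single model (here an SVM in the paper) can exploit correlations between modalities, at the cost of a larger joint feature space.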
10.