  Paid full text   132 articles
  Free   3 articles
  Free (domestic)   4 articles
Electrical engineering   2 articles
General   1 article
Machinery and instrumentation   3 articles
Building science   2 articles
Energy and power   1 article
Light industry   5 articles
Radio electronics   25 articles
General industrial technology   10 articles
Metallurgical industry   3 articles
Automation technology   87 articles
  2021   2 articles
  2019   2 articles
  2018   2 articles
  2017   4 articles
  2016   5 articles
  2015   5 articles
  2014   8 articles
  2013   8 articles
  2012   20 articles
  2011   6 articles
  2010   6 articles
  2009   3 articles
  2008   5 articles
  2007   5 articles
  2006   10 articles
  2005   8 articles
  2004   7 articles
  2003   5 articles
  2002   6 articles
  2001   4 articles
  2000   1 article
  1999   4 articles
  1998   1 article
  1997   2 articles
  1995   1 article
  1994   2 articles
  1993   1 article
  1992   3 articles
  1990   1 article
  1986   1 article
  1982   1 article
Sort order: 139 results found in total (search time: 15 ms)
1.
孙暐  吴镇扬 《信号处理》2006,22(4):559-563
Following the work of Fletcher and others, sub-band recognition methods based on the assumption of perceptual independence have been applied to noise-robust speech recognition. This paper extends the sub-band approach by adopting a multi-band framework built on a noise-contamination assumption to reduce the influence of noise. The paper not only analyzes theoretically the potential advantage in recognition performance of the noise-contamination multi-band framework, but also proposes a robust speech recognition algorithm for the multi-band setting. The study shows that the multi-band framework avoids the requirement of the perceptual-independence assumption and, compared with the sub-band approach, reduces the influence of noise more effectively and improves the recognition performance of the system.
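For readers unfamiliar with sub-band/multi-band processing, a minimal Python sketch of the general idea follows: split a speech frame's spectrum into contiguous bands, compute one feature per band, and combine per-band scores with weights that can be lowered for bands judged to be noise-contaminated. The function names, band count, and weighting are illustrative assumptions, not the authors' specific algorithm.

```python
import numpy as np

def subband_log_energies(frame, n_bands=4, n_fft=256):
    """Split one speech frame's power spectrum into contiguous sub-bands
    and return a log-energy feature per band (toy stand-in for real features)."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    return np.array([np.log(spectrum[lo:hi].sum() + 1e-10)
                     for lo, hi in zip(edges[:-1], edges[1:])])

def combine_band_scores(band_scores, band_weights):
    """Merge per-band recognizer scores; down-weighting bands judged to be
    noise-contaminated is how a multi-band system limits band-limited noise."""
    return float(np.dot(band_weights, band_scores))

# Toy usage: a 32 ms frame of noise standing in for speech sampled at 8 kHz.
frame = np.random.randn(256)
features = subband_log_energies(frame)
score = combine_band_scores(features, band_weights=np.full(4, 0.25))
print(features, score)
```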
2.
STUDY ON PHASE PERCEPTION IN SPEECH   Cited by: 4 (self-citations: 0, citations by others: 4)
The perceptual effect of phase information in speech has been studied through subjective listening tests. With the phase spectrum of the speech changed while the amplitude spectrum is kept unchanged, the tests show that: (1) if the envelope of the reconstructed speech signal is unchanged, there is no perceptible auditory difference between the original and the reconstructed speech; (2) the auditory perception of the reconstructed speech depends mainly on the amplitude of the derivative of the additive phase; (3) letting td denote the maximum relative time shift between different frequency components of the reconstructed speech signal, speech quality is excellent when td < 10 ms, good when 10 ms < td < 20 ms, fair when 20 ms < td < 35 ms, and poor when td > 35 ms.
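As a rough illustration of the manipulation described in this abstract, the Python sketch below keeps a signal's amplitude spectrum while adding a frequency-dependent phase term, and reports the resulting maximum relative time shift td between frequency components. The test signal, the linear delay profile, and all parameter values are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def shift_phase_keep_magnitude(signal, sample_rate, max_delay_s=0.008):
    """Reconstruct a signal with its amplitude spectrum unchanged but with an
    additive phase whose derivative (a frequency-dependent delay) shifts the
    highest frequency component by max_delay_s relative to the lowest."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    delay = max_delay_s * freqs / freqs[-1]          # 0 s at DC, max_delay_s at Nyquist
    additive_phase = -2.0 * np.pi * freqs * delay
    new_spectrum = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + additive_phase))
    reconstructed = np.fft.irfft(new_spectrum, n=len(signal))
    td = delay.max() - delay.min()                   # maximum relative time shift
    return reconstructed, td

fs = 16000
t = np.arange(fs) / fs                               # 1 s of a two-tone test signal
test = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)
y, td = shift_phase_keep_magnitude(test, fs)
print(f"td = {td * 1000:.1f} ms")                    # 8 ms, i.e. below the 10 ms threshold
```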
3.
With the increasing popularity of touch screen mobile devices, improving the usability and the user experience of text input on these devices is becoming increasingly important. Most conventional touch screen keyboards on mobile devices rely heavily on visual feedback, while the auditory feedback seldom carries any useful information about what the user is typing; it usually simply replicates the sounds produced by a physical keyboard. This paper describes the development of an enhanced auditory feedback (EAF) mechanism for a Korean touch screen keyboard. EAF provides subtle phonetic auditory feedback generated from the acoustic phonetic features of human speech, so that users typing with EAF acquire non-invasive auditory clues about the keys pressed. In this work, we compare the conventional auditory feedback of touch screen keyboards on mobile devices with that of EAF and explore the possibility of using enhanced auditory feedback for touch screen keyboards.
4.
In recent years there has been a call within composition to include sound, among other modes, such as word and image in writing. Some of this call relies on a movement to multimodal composition in order to capture both the richness of rhetorical possibility and the reality of communities of practice, and some is in response to a perceived shift in writing due to digital media tools and environments. Regardless of the impetus for including the auditory realm in the composition classroom, it is important for the field of composition and rhetoric to develop further pedagogies of sound so that students are not simply offered the opportunity to produce diverse texts, but instead, are invited to enter “the playing field.” In order to do this I first explore an approach to teaching auditory rhetoric based on ways of knowing sound from an acoustics and musicology perspective, then I consider a phenomenological approach based on listening, and finally I construct a model of “tuning the sonic playing field” that draws on the literal, material practice of tuning as a metaphor for how sound may be taught in composition. The “tuning” approach to teaching sound draws on attention, embodiment, listening, and negotiation. Rather than simply offering students opportunities to use sound in rhetorically sensitive ways, this final approach asks instructors to become “attuned” to how different auditory epistemologies influence students’ ability to design and compose in sound.
5.
We have designed an auditory guidance system for the blind using ultrasonic-to-audio signal transformation. We first investigated the system requirements, and designed a simple but useful portable guidance system for the blind. The system derives visual information using multiple ultrasonic sensors, and transforms it to binaural auditory information using a suitable technique. The user can recognize the position of obstacles and the surrounding environment. The system is composed of two parts. One is a glasses-type system, and the other is a cane-type system with guide wheels. The former functions as an environment sensor, and the latter functions as a clear-path indicator. Wide-beam-angle ultrasonic sensors are used to detect objects over a broader range. The system is designed as a battery-supplied portable model. Our design is focused on low power consumption, small size, light weight, and easy manipulation.
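The abstract does not specify the ultrasonic-to-audio mapping, so the following Python sketch is only a hypothetical illustration of how a range/bearing reading from an ultrasonic sensor could be rendered as a binaural cue (pitch and loudness from distance, left/right level difference from bearing). Every constant and function name here is an assumption made for illustration.

```python
import numpy as np

def obstacle_to_binaural(distance_m, bearing_deg, sample_rate=16000, duration_s=0.2):
    """Turn one ultrasonic range/bearing reading into a short stereo cue:
    nearer obstacles give a higher, louder tone; bearing sets the left/right
    level difference. All mapping constants here are made up for illustration."""
    freq_hz = 2000.0 / max(distance_m, 0.1)          # 1 m -> 2 kHz, 2 m -> 1 kHz
    level = min(1.0, 1.0 / max(distance_m, 0.1))     # closer sounds louder
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    tone = level * np.sin(2.0 * np.pi * freq_hz * t)
    pan = (bearing_deg + 90.0) / 180.0               # -90 deg (left) -> 0, +90 deg (right) -> 1
    left, right = tone * (1.0 - pan), tone * pan     # simple interaural level difference
    return np.stack([left, right], axis=1)           # stereo buffer for headphone playback

cue = obstacle_to_binaural(distance_m=1.5, bearing_deg=-30.0)
print(cue.shape)                                     # (3200, 2)
```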
6.
《Ergonomics》2012,55(1-3):68-87
Multimodal interfaces offer great potential to humanize interactions with computers by employing a multitude of perceptual channels. This paper reports on a novel multimodal interface using auditory, haptic and visual feedback in a direct manipulation task to establish new recommendations for multimodal feedback, in particular uni-, bi- and trimodal feedback. A close examination of combinations of uni-, bi- and trimodal feedback is necessary to determine which enhances performance without increasing workload. Thirty-two participants were asked to complete a task consisting of a series of ‘drag-and-drops’ while the type of feedback was manipulated. Each participant was exposed to three unimodal feedback conditions, three bimodal feedback conditions and one trimodal feedback condition that used auditory, visual and haptic feedback alone, and in combination. Performance under the different conditions was assessed with measures of trial completion time, target highlight time and a self-reported workload assessment captured by the NASA Task Load Index (NASA-TLX). The findings suggest that certain types of bimodal feedback can enhance performance while lowering self-perceived mental demand.
7.
《Ergonomics》2012,55(6):807-827
The goal of the present study was to investigate the human factors issues related to acoustic beacons used for auditory navigation. Specific issues addressed were: (1) the effect of various beacon characteristics on human accuracy in turning toward the direction of the acoustic beacon; (2) the difference between real and virtual environments on human accuracy in turning toward the acoustic beacon; and (3) the perceived sound quality of various acoustic beacons. Three experiments were conducted in which acoustic beacons were presented in a background of 80 dBA pink noise. Results of the localization tasks revealed that (a) presentation mode (continuous versus pulsed beacon sound) did not affect the overall localization accuracy or number of front-back confusion errors; and (b) the type of acoustic beacon affected the size of localization error. Results of the sound quality assessment indicated that listeners had definite preferences regarding the type of sound being used as a beacon, with (a) non-speech beacons preferred over speech beacons, (b) a beacon repetition rate of 1.1 rps preferred over either the 0.7 or 2.5 rps rates, and (c) a continuous operation of a beacon preferred over a pulsed operation. Finally, sound quality ratings and localization errors were highly negatively correlated. This finding demonstrates the usefulness and practical values of sound quality judgements for audio display design and evaluation.
8.
《Ergonomics》2012,55(1):61-74
Speech displays and verbal response technologies are increasingly being used in complex, high workload environments that require the simultaneous performance of visual and manual tasks. Examples of such environments include the flight decks of modern aircraft, advanced transport telematics systems providing in-vehicle route guidance and navigational information and mobile communication equipment in emergency and public safety vehicles. Previous research has established an optimum range for speech intelligibility. However, the potential for variations in presentation levels within this range to affect attentional resources and cognitive processing of speech material has not been examined previously. Results of the current experimental investigation demonstrate that as presentation level increases within this ‘optimum’ range, participants in high workload situations make fewer sentence-processing errors and generally respond faster. Processing errors were more sensitive to changes in presentation level than were measures of reaction time. Implications of these findings are discussed in terms of their application for the design of speech communications displays in complex multi-task environments.
9.
《Ergonomics》2012,55(9):1233-1248
In the context of emergency warnings, auditory icons, which convey information about system events by analogy with everyday events, have the potential to be understood more quickly and easily than abstract sounds. To test this proposal, an experiment was carried out to evaluate the use of auditory icons for an in-vehicle collision avoidance application. Two icons, the sounds of a car horn and of skidding tyres, were compared with two conventional warnings, a simple tone and a voice saying ‘ahead’. Participants sat in an experimental vehicle with a road scene projected ahead, and they were required to brake in response to on-screen collision situations and their accompanying warning sounds. The auditory icons produced significantly faster reaction times than the conventional warnings, but suffered from more inappropriate responses, where drivers reacted with a brake press to a non-collision situation. The findings are explained relative to the perceived urgency and inherent meaning of each sound. It is argued that optimal warnings could be achieved by adjusting certain sound attributes of auditory icons, as part of a structured, user-centred design and evaluation procedure.
10.
Recordings of the Earth's surface oscillation as a function of time (seismograms) can be sonified by compressing time so that most of the signal's frequency spectrum falls in the audible range. The pattern-recognition capabilities of the human auditory system can then be applied to the auditory analysis of seismic data. In this experiment, we sonify a set of seismograms associated with a magnitude-5.6 Oklahoma earthquake recorded at 17 broadband stations within a radius of ∼300 km from the epicenter, and a group of volunteers listen to our sonified seismic data set via headphones. Most of the subjects have never heard a sonified seismogram before. Given the lack of studies on this subject, we prefer to make no preliminary hypotheses on the categorization criteria employed by the listeners: we follow the “free categorization” approach, asking listeners to simply group sounds that they perceive as “similar.” We find that listeners tend to group together sonified seismograms sharing one or more underlying physical parameters, including source–receiver distance, source–receiver azimuth, and, possibly, crustal structure between source and receiver and/or at the receiver. This suggests that, if trained to do so, human listeners can recognize subtle features in sonified seismic signals. It remains to be determined whether auditory analysis can complement or lead to improvements upon the standard visual and computational approaches in specific tasks of geophysical interest.
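The core of the sonification described above is time compression: replaying data recorded at a low sample rate at a much higher rate multiplies every frequency by the speedup factor. The Python sketch below (assuming NumPy and SciPy are available) writes such a time-compressed trace to a WAV file; the 200x speedup, the synthetic trace, and the file name are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.io import wavfile

def sonify_seismogram(samples, recorded_rate_hz, speedup=200):
    """Time-compress a seismogram by relabelling its sample rate: data recorded
    at 100 Hz and played back at 100 * speedup Hz has every frequency multiplied
    by speedup, e.g. a 0.05-5 Hz seismic band moves up to 10-1000 Hz (audible)."""
    playback_rate = int(recorded_rate_hz * speedup)
    pcm = np.int16(32767 * samples / (np.max(np.abs(samples)) + 1e-12))
    return playback_rate, pcm

# Toy example: a decaying 1 Hz oscillation standing in for a real broadband record.
rate_hz = 100.0
t = np.arange(0, 600, 1.0 / rate_hz)                      # 10 minutes of data
trace = np.exp(-t / 120.0) * np.sin(2 * np.pi * 1.0 * t)
playback_rate, pcm = sonify_seismogram(trace, rate_hz)
wavfile.write("sonified_trace.wav", playback_rate, pcm)   # ~3 s clip, tone near 200 Hz
```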