Similar Literature
1.
A computer-assisted music-learning system (CAMLS) has been developed to help the hearing impaired practice playing a musical melody. Music-learning performance was evaluated to test the usability of the system. The system can serve as a computer-supported learning tool for the hearing impaired, helping them understand what pitch and tempo are and then learn to play songs, thereby increasing their interest in music classes and enhancing their learning performance. The results indicated that CAMLS can enhance hearing-impaired students' learning performance in a music course. A questionnaire survey also showed that the computer-aided method benefited hearing-impaired students in their music learning. The system can also be applied to non-disabled students as a supportive tool for music learning.

2.
When one sensory input, hearing, is blocked altogether or reduced to some degree, a greater load of communication is placed on vision. Not surprisingly, the deaf and hearing-impaired have long relied on two visual substitutes for speech: lip reading and sign language. To make these skills easier to learn, two contestants in the Johns Hopkins University Search for Applications of Personal Computing to Aid the Handicapped have devised ways of simulating lip positions and hand signs on a display. In both cases the main intent of the software packages is to train not only the deaf and hearing-impaired, but also those who want to communicate with them.

3.
Touch-based interaction is becoming increasingly popular and is commonly used as the main interaction paradigm for self-service kiosks in public spaces. Touch-based interaction is known to be visually intensive, and current non-haptic touch-display technologies are often criticized for excluding blind users. This study set out to demonstrate that touch-based kiosks can be designed to include blind users without compromising the user experience for non-blind users. Most touch-based kiosks are based on absolutely positioned virtual buttons, which are difficult to locate without tactile, audible or visual cues. Simple stroke gestures, by contrast, rely on relative movements, so the user does not need to hit a target at a specific location on the display. In this study, a touch-based train ticket sales kiosk based on simple stroke gestures was developed and tested on a panel of blind and visually impaired users, a panel of blindfolded non-visually impaired users, and a control group of non-visually impaired users. The tests demonstrate that all the participants managed to discover, learn and use the touch-based self-service terminal and complete a ticket purchasing task. The majority of the participants completed the task in less than 4 min on the first attempt.
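Because such gestures depend only on relative movement, a recognizer can ignore where on the display a stroke starts. Below is a minimal sketch of this idea, classifying a stroke into one of four directions from its start and end points; the function name and the 30-pixel tap-rejection threshold are illustrative assumptions, not details from the study.

```python
import math

def stroke_direction(start, end, min_dist=30):
    """Classify a touch stroke by relative movement only.

    Because only the displacement matters, a blind user can begin the
    stroke anywhere on the display. `min_dist` (pixels) rejects taps;
    the value is an assumption.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < min_dist:
        return None                          # too short to be a stroke
    angle = math.degrees(math.atan2(dy, dx)) % 360   # y grows downward
    for name, center in (("right", 0), ("down", 90),
                         ("left", 180), ("up", 270)):
        # signed angular distance to the sector center, within +/- 45 deg
        if abs((angle - center + 180) % 360 - 180) <= 45:
            return name
```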

4.
Computer-based optical music recognition technology
Computer-based optical music recognition (OMR) is a development and application of computer technology in the field of music. Drawing mainly on image processing, pattern recognition, and document image analysis, it converts score images into common digital music formats. By digitizing traditional printed scores, OMR has broad application prospects in computer music, computer-assisted music instruction, digital music libraries, and many other areas. OMR comprises several main stages: score image preprocessing, staff-line detection and removal, recognition of primitive note objects, and interpretation and reassembly of characteristic note objects. The recognition, interpretation, and reassembly of note objects are the difficult and critical parts of the process.
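A minimal sketch of the staff-line detection and removal stage named above, using a horizontal projection profile on a binarized score image. The row-density threshold and the two-pixel neighborhood check are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def detect_staff_lines(binary_img, row_threshold=0.6):
    """Detect staff-line rows in a binarized score image (1 = ink).

    A row whose ink density exceeds `row_threshold` of the page width
    is treated as part of a staff line. The threshold is an assumption.
    """
    h, w = binary_img.shape
    row_density = binary_img.sum(axis=1) / w          # horizontal projection
    return np.where(row_density > row_threshold)[0]   # candidate staff rows

def remove_staff_lines(binary_img, staff_rows):
    """Erase staff-line rows while keeping pixels belonging to symbols.

    A staff-line pixel is kept only if there is ink just above or below
    it, i.e. a note symbol crosses the line at that column.
    """
    img = binary_img.copy()
    for r in staff_rows:
        above = binary_img[max(r - 2, 0)]
        below = binary_img[min(r + 2, img.shape[0] - 1)]
        img[r] = binary_img[r] & (above | below)      # keep symbol crossings
    return img
```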

5.
The Internet has become an ordinary and widely accepted alternative social environment—known as cyberspace—in which many people take part in numerous activities. For the hearing-impaired, cyberspace provides extra benefits for two basic reasons: its means of communication are primarily visual (text and images) rather than auditory, and it conveniently allows users to conceal their handicap from other users, thus gaining more security and a sense of equality. The purpose of the current study was to examine characteristics, intensity, and types of Internet use by hearing-impaired adolescents compared to an equivalent group of normal-hearing participants, with gender and adolescence stage (ages 12–15 or 16–19) as additional independent variables. In addition, the intensity of Internet use as a possible moderator of deaf participants' well-being was examined by comparing measures of loneliness and self-esteem between low- and high-intensity hearing-impaired users on the one hand, and hearing participants on the other. Questionnaires were administered to 114 hearing-impaired and 100 hearing participants, matched for intelligence and socio-economic status. Main results showed that for both genders and both adolescence stages, hearing-impaired participants were motivated to use, and actually did use, the Internet more intensively than their hearing counterparts. Furthermore, the hearing-impaired used the Internet more than hearing participants did for both personal and group communication. Hearing participants and deaf participants who used the Internet intensively were similar in level of well-being, both higher than that of deaf participants who used the Internet less intensively. The Internet may thus be viewed as an empowering agent for the hearing-impaired.

6.
A music score editor supporting pen input is designed and implemented. Using a pen and tablet, the user writes score gesture symbols, which are recognized by a single-stroke gesture recognition algorithm based on grid encoding to generate the corresponding notation; the editor also supports real-time playback. Compared with score editors built on conventional interfaces, this style of interaction better matches how people write and think about musical notation, making score entry simpler, more natural, and more efficient.
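A minimal sketch of grid-encoded single-stroke recognition, the technique the abstract names: the stroke's bounding box is divided into a grid, the stroke is encoded as the sequence of cells it visits, and the code is looked up in a template table. The 3x3 grid size and the template entries are assumptions for illustration.

```python
def grid_code(points, n=3):
    """Encode a pen stroke as the sequence of n x n grid cells it visits.

    `points` is a list of (x, y) tuples in tablet coordinates.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = max(max(xs) - x0, 1e-6)          # guard against zero-width strokes
    h = max(max(ys) - y0, 1e-6)
    code, last = [], None
    for x, y in points:
        col = min(int((x - x0) / w * n), n - 1)
        row = min(int((y - y0) / h * n), n - 1)
        cell = row * n + col
        if cell != last:                 # collapse consecutive repeats
            code.append(cell)
            last = cell
    return tuple(code)

# Hypothetical template table mapping grid codes to score symbols.
TEMPLATES = {
    (0, 4, 8): "stem stroke (diagonal)",
    (0, 1, 2): "beam stroke (horizontal)",
}

def classify(points):
    return TEMPLATES.get(grid_code(points), "unknown gesture")
```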

7.
Research in the field of embodied music cognition has shown the importance of coupled processes of body activity (action) and multimodal representations of these actions (perception) in how music is processed. Technologies from the field of human–computer interaction (HCI) provide excellent means to intervene in, and extend, these coupled action–perception processes. In this article, the model is applied to a concrete HCI application called the "Conducting Master." The application allows multiple users to interact with the system in real time in order to explore and learn how musical meter can be articulated in body movements (i.e., meter-mimicking gestures). Techniques are provided to model and automatically recognize these gestures so that multimodal feedback streams can be returned to the users. These techniques are based on template-based methods that approach meter-mimicking gestures explicitly from a spatiotemporal perspective. To conclude, some concrete setups are presented in which the functionality of the Conducting Master was evaluated.
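As a concrete illustration of template-based, spatiotemporal gesture matching, the sketch below compares a recorded hand trajectory against a stored template with dynamic time warping. DTW is one common choice for this kind of matching and is used here only as an assumed stand-in; the article's exact method may differ.

```python
import numpy as np

def dtw_distance(seq, template):
    """Dynamic-time-warping distance between two gesture trajectories.

    Both arguments are (T, 3) NumPy arrays of hand positions over time,
    so the comparison is explicitly spatiotemporal: it tolerates tempo
    variation while penalizing spatial deviation from the template.
    """
    n, m = len(seq), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq[i - 1] - template[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]   # smaller = closer to the meter-mimicking template
```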

8.
We have developed a gesture input system that provides a common interaction technique across mobile, wearable and ubiquitous computing devices of diverse form factors. In this paper, we combine our gestural input technique with speech output and test whether or not the absence of a visual display impairs usability in this kind of multimodal interaction. This is of particular relevance to mobile, wearable and ubiquitous systems where visual displays may be restricted or unavailable. We conducted the evaluation using a prototype for a system combining gesture input and speech output to provide information to patients in a hospital Accident and Emergency Department. A group of participants was instructed to access various services using gestural inputs. The services were delivered by automated speech output. Throughout their tasks, these participants could see a visual display on which a GUI presented the available services and their corresponding gestures. Another group of participants performed the same tasks but without this visual display. It was predicted that the participants without the visual display would make more incorrect gestures and take longer to perform correct gestures than the participants with the visual display. We found no significant difference in the number of incorrect gestures made. We also found that participants with the visual display took longer than participants without it. It was suggested that for a small set of semantically distinct services with memorable and distinct gestures, the absence of a GUI visual display does not impair the usability of a system with gesture input and speech output.

9.
The Continuator is a usable musical instrument combining techniques from interactive and automatic learning systems. It learns and interactively plays with a user in the user's style. Music-generation systems have traditionally belonged to one of two categories: interactive systems in which players trigger musical phrases, events, or effects, such as the Karma musical workstation, and systems such as Risset's interactive piano, which allow for user input such as keystrokes or chords but can't learn, relying instead on preprogrammed musical styles. Most of these systems propose musical effects libraries (a term used in the Karma workstation meaning the generation of musical material based on user input). Although some of these effects are musically impressive, these systems can't be considered cybernetic musicians or even musical companions, because they use preprogrammed reactions and have no memory or facility for evolving.

10.
This paper examines in depth techniques for visualizing high-dimensional music information with visual thumbnails. Representative music visualization techniques, and visual thumbnail techniques in particular, were surveyed extensively, and a series of user surveys identified the basic characteristics that a visual thumbnail of musical content should have. On this basis, a novel visual thumbnail, ThumbnailDJ, is proposed and evaluated in a series of user tests. Following an analysis of the experimental results, the application prospects, future directions, and key research topics of music information visualization are discussed. Visual descriptions of high-dimensional musical content can make browsing and retrieving music collections more efficient, and this work helps narrow the semantic gap between visual descriptions of music and users' perception of it. The results can also serve as a useful reference for information visualization of other high-dimensional data.

11.

For many years, musicians have composed music based on images they have had in their minds. Conversely, music stirs listeners' imagination as they hear it. This research provides a method that can transform shape into music and music into shape. The method defines musical notations for horizontal, diagonal, and vertical line segments, filled circles, and curves in different colors, which are the basis of many shapes, when transforming shapes into music. These primary mappings are then generalized to more complex forms so that any shape can be transformed. Conversely, music can be transformed into shape by the same method. For this transformation, primary musical notations are defined, such as simple notes, notes joined by a legato, notes with a staccato, notes joined by a legato with a crescendo or decrescendo, and notes with an accent or a trill. These primary notations are generalized to more complex forms to transform any music into shape. The method can also be used in music cryptography: it maps notes of a twelve-tone equal-temperament musical system into shapes, and maps shapes of equal line width and different colors into music.

12.
In this paper, we propose a novel approach to generating a sequence of dance motions, using music similarity as the criterion for finding appropriate motions for a new musical input. Based on the observation that dance motions used in similar musical pieces can be a good reference when choreographing a new dance, we first construct a music-motion database comprising a number of segment-wise music-motion pairs. When a new musical input is given, it is divided into short segments, and for each segment our system suggests dance motion candidates by finding in the database the music cluster most similar to the input. After a user selects the best motion segment, we perform music-dance synchronization by means of cross-correlation between the two music segments, using their novelty functions as input. We evaluate our system's performance in a user study, and the results show that the dance motion sequence generated by our system achieves significantly higher ratings than one generated randomly.
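A minimal sketch of the synchronization step as described: cross-correlating the novelty functions of the database segment and the input segment to estimate the best alignment offset. The variable names and the novelty frame rate are assumptions for illustration.

```python
import numpy as np

def sync_offset(novelty_db, novelty_in, frame_rate=100.0):
    """Estimate the time offset that best aligns two novelty curves.

    `novelty_db` and `novelty_in` are 1-D onset-novelty functions of the
    database music segment and the new input segment; `frame_rate` is
    the novelty frame rate in Hz (an assumed value).
    """
    # z-normalize so the correlation peak reflects shape, not energy
    a = (novelty_db - novelty_db.mean()) / (novelty_db.std() + 1e-9)
    b = (novelty_in - novelty_in.mean()) / (novelty_in.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)   # lag in frames at the peak
    return lag / frame_rate                 # offset in seconds
```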

13.
Advanced Robotics, 2013, 27(3–4): 363–381
Music has long been used to strengthen bonds between humans. In our research, we develop musical co-player robots in the hope that music may improve human–robot symbiosis as well. In this paper, we underline the importance of non-verbal, visual communication for ensemble synchronization at the start, during, and at the end of a piece. We propose three cues for inter-player communication, and present a theremin-playing, singing robot that can detect them and adapt its play to a human flutist. Experiments with two naive flutists suggest that the system can recognize naturally occurring flutist gestures without requiring specialized user training. In addition, we show how audio-visual aggregation allows the robot to adapt to tempo changes quickly.

14.
Human beings perceive their surroundings based on sensory information from diverse channels. For human–computer interaction, however, we mostly restrict the user to visual perception. In this paper, we contribute to the investigation of tactile feedback as an additional perception modality. We first discuss existing user studies and provide a classification scheme for tactile feedback techniques. We then present and discuss a comparative evaluation study based on ISO 9241-9 [Ergonomic requirements for office work with visual display terminals (VDTs) – Part 9: requirements for non-keyboard input devices, 2000]. The 20 participants performed horizontal and vertical one-directional tapping tasks using hand gesture input, with and without tactile feedback, in front of a large, high-resolution display. In contrast to previous research, we cannot confirm a benefit of tactile feedback on user performance. Our results show no significant effect in terms of throughput (effective index of performance, IPe) and even a significantly higher error rate for horizontal target alignment when using tactile feedback. Based on these results, we suggest that tactile feedback can interfere with other senses in a negative way, resulting in the observed higher error rate for horizontal targets. More systematic research is therefore needed to clarify the factors influencing the usefulness of tactile feedback. Beyond these results, we found a significant difference in favor of horizontal target alignment compared with vertical alignment in terms of the effective index of performance (IPe), confirming the work by Dennerlein et al. [Force feedback improves performance for steering and combined steering–targeting tasks, in: CHI '00: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 2000, pp. 423–429].
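For readers unfamiliar with the throughput measure cited above, this is the standard ISO 9241-9 effective-index-of-performance computation (IPe = IDe / MT, with effective target width We = 4.133 times the standard deviation of the endpoint scatter). The input values in the sketch are placeholders, not data from the study.

```python
import math

def effective_throughput(distances, movement_times, endpoint_sd):
    """ISO 9241-9 effective index of performance, IPe = IDe / MT.

    distances      : per-trial movement distances (same unit as endpoint_sd)
    movement_times : per-trial movement times in seconds
    endpoint_sd    : standard deviation of the selection endpoints
    """
    De = sum(distances) / len(distances)        # mean effective distance
    We = 4.133 * endpoint_sd                    # effective target width
    IDe = math.log2(De / We + 1)                # effective index of difficulty
    MT = sum(movement_times) / len(movement_times)
    return IDe / MT                             # throughput in bits/s

# Placeholder trial data, purely illustrative:
print(effective_throughput([300, 310, 295], [0.62, 0.58, 0.65], 18.0))
```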

15.
This paper describes an evaluation of software developed with the aim of enabling hearing-impaired children to explore and experience the use and effects of language. The software allows interactive graphics to be discussed and controlled by means of a simple 'natural' language interface, thereby allowing the user to hold a written conversation with the computer. It is argued that by making written language interactive in this way, prelingually deaf children should be able to use, in the written medium, language-learning strategies similar to those normally used by hearing children in the spoken medium.

16.
Despite the existence of advanced functions in smartphones, most blind people still use old-fashioned phones with familiar layouts and tactile buttons they can depend on. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers. However, these features only provide feedback on user commands or input; it remains a challenge for blind people to discover functions on the screen and to enter commands. Although voice commands are supported in smartphones, they are difficult for a system to recognize in noisy environments. At the same time, smartphones integrate sophisticated motion sensors, and motion gestures based on device tilt have been gaining attention for eyes-free input. We believe that such motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users, and they are aware neither of the affordances available in smartphones nor of the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a study to elicit user-defined gestures with 13 blind participants. Using the gesture set and design heuristics from the user study, we implemented motion gesture based interfaces with speech and vibration feedback for browsing phone books and making a call. We then conducted a second study to investigate the usability of the motion gesture interface and user experiences with the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. From the study results, we derive implications for designing smartphone interfaces.
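As a rough illustration of tilt-based motion gestures, the sketch below maps a single accelerometer reading to one of four tilt gestures. The axis convention, function name, and 30-degree threshold are all assumptions; the gesture set elicited in the study is not reproduced here.

```python
import math

def tilt_gesture(ax, ay, az, thresh_deg=30.0):
    """Map one accelerometer reading to a tilt gesture, if any.

    Axes follow a common mobile convention (x right, y up, z out of the
    screen, device flat reads roughly (0, 0, g)); the 30-degree
    threshold is an assumed design choice.
    """
    roll = math.degrees(math.atan2(ax, az))    # left/right tilt angle
    pitch = math.degrees(math.atan2(ay, az))   # forward/back tilt angle
    if abs(roll) >= thresh_deg:
        return "tilt-right" if roll > 0 else "tilt-left"
    if abs(pitch) >= thresh_deg:
        return "tilt-up" if pitch > 0 else "tilt-down"
    return None                                # device is roughly level
```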

17.
Haptic technologies and applications have received enormous attention in the last decade. The incorporation of the haptic modality into multimedia applications adds excitement and enjoyment to an application. It also lends a more natural feel to multimedia applications that would otherwise be limited to vision and audition, by engaging the user's sense of touch as well, giving a more intrinsic feel essential for ambient intelligence applications. However, how far the addition of haptic feedback improves an application's Quality of Experience (QoE) is still not completely understood. The research presented in this paper focuses on the effect of haptic feedback and what it potentially adds to the user's experience compared with traditional visual and auditory feedback. In essence, it investigates certain issues regarding stylus-based haptic education applications and haptic-enhanced entertainment videos. To this end, we used two haptic applications: a haptic handwriting learning tool, to experiment with force-feedback haptic interaction, and a tactile YouTube application, for tactile haptic feedback. For both applications, our analysis shows that the addition of haptic feedback increases QoE in the absence of fatigue or discomfort for this category of applications. This implies that the incorporation of the haptic modality (both force feedback and tactile feedback) contributed positively to the overall QoE for the users.

18.
In this article, we present a data-driven texture rendering method applied to a tactile display based on electrostatic attraction. The proposed method was examined in two steps. First, accelerations occurring due to sliding a tool on three different surfaces were measured, and then the collected data were replayed on an electrostatic tactile display. The proposed data-driven texture rendering method was evaluated against a conventional method in which a standard input such as a square wave was used for texture representation. Second, data from the Penn Haptic Texture Toolkit were used to generate virtual textures on the same tactile display. Psychophysical experiments were carried out for both steps, during which subjects rated similarities among the rendered virtual textures and the real samples. Confusion matrices were created, and multidimensional scaling (MDS) analysis was performed to create a perceptual space for further examination and to extract underlying dimensions of the textures. The results show that the virtual textures generated using the data-driven method were similar to the real textures. Roughness and stickiness were the primary dimensions of texture perception. Together with the supporting results from the MDS analysis, this study showed that the data-driven method is a viable solution for realistic texture rendering with electrostatic attraction.
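To make the MDS step above concrete, here is a minimal sketch that turns a pairwise dissimilarity matrix derived from similarity ratings into a two-dimensional perceptual space with scikit-learn. All numbers are invented for illustration, and the study's own matrices and dimensionality choices may differ.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities (1 - normalized similarity
# rating) for four texture stimuli; values are invented.
D = np.array([[0.0, 0.3, 0.7, 0.8],
              [0.3, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.2],
              [0.8, 0.7, 0.2, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per stimulus; in studies
print(coords)                   # like this the axes are then interpreted,
                                # e.g. as roughness and stickiness
```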

19.
To compensate for the nonverbal cues missing from text-based computer-mediated communication, those who can see often use emojis or emoticons. Although emojis for the sighted have evolved over the years into animated forms with added sound effects, emojis for visually impaired people remain underdeveloped. This study tested how tactile emojis based on visual imagery, combined with the Braille system, can enhance clarity in the computer-mediated communication environment for those with visual impairments. Results of this study confirmed three things: visually impaired subjects were able to connect emotional emojis to the emotions they represented without any prior guidance; image-based (picture-based) and non-image-based (abstraction-based) tactile emojis were equally learnable; and the clarity of intended meaning improved when an emoji was used with text (Braille). Thirty visually impaired subjects matched an average of 67% of emotions without prior guidance, and three of the four subjects who matched perfectly both before and after guidance were congenitally blind. The subjects had the most trouble distinguishing the facial expression of "fear" from "sadness" or "surprise," as these share similar traits. After guidance, the image-based tactile design elicited an average of 81% correct answers, whereas the non-image-based tactile design elicited an average of 37%, showing that the image-based design was more effective for learning the meaning of emojis. The clarity of sentences was also improved. This study shows that image-based tactile emojis can improve the texting experience of visually impaired individuals to a level where they can communicate subtle emotional cues through tactile imagery. This advance could narrow the service gap between sighted and visually impaired people and offer a much richer computer-mediated communication environment for visually impaired individuals.

20.
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
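A minimal sketch of the kind of attractive force field the study describes: a spring-like pull toward the target that is active only within a surrounding radius. The gain and radius values are illustrative assumptions, not the parameters used in the experiment.

```python
import numpy as np

def attraction_force(cursor, target, radius=80.0, k=0.05):
    """Spring-like attractive force pulling the cursor toward a target.

    Active only inside `radius` (pixels); `k` is the spring gain. Both
    values are assumptions. The returned vector would be sent to the
    force-feedback device each update cycle.
    """
    d = np.asarray(target, float) - np.asarray(cursor, float)
    dist = np.linalg.norm(d)
    if dist == 0.0 or dist > radius:
        return np.zeros(2)        # cursor outside the force field (or on target)
    return k * d                  # pull proportional to remaining distance
```

Distracting fields, as in the study's more realistic condition, could then be simulated by summing the outputs of several such fields centered on non-target widgets.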
