Similar Documents
20 similar documents found (search time: 31 ms)
1.
In order to help the visually impaired as they navigate unfamiliar environments such as public buildings, this paper presents a novel smart phone, vision-based indoor localization and guidance system called Seeing Eye Phone. The system requires a smart phone carried by the user and a server. The smart phone captures images of the scene in front of the user and transmits them to the server. The server processes the phone images to detect and describe 2D features with SURF and then matches them to the 2D features of stored map images, which include the corresponding 3D information of the building. After features are matched, the Direct Linear Transform is run on a subset of correspondences to find a rough initial pose estimate, and the Levenberg–Marquardt algorithm further refines it toward a more optimal solution. With the estimated pose and the camera’s intrinsic parameters, the location and orientation of the user are calculated using the 3D location correspondence data stored for the features of each image. Positional information is then transmitted back to the smart phone and communicated to the user via text-to-speech. This indoor guidance system uses efficient algorithms such as SURF, homographies, multi-view geometry, and 3D-to-2D reprojection to solve a unique problem that will benefit the visually impaired. The experimental results demonstrate the feasibility of using a simple machine-vision system design to accomplish a complex task and the potential of building a commercial product based on this design.
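For readers who want a concrete picture of the matching-then-pose pipeline described above, the following is a minimal sketch using OpenCV. The function name, the map data layout, and the use of RANSAC PnP standing in for the DLT initialization are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_pose(query_img, map_descriptors, map_keypoints_3d, K):
    """Match SURF features against one stored map image and solve for pose.

    map_descriptors:  SURF descriptors of the map image (hypothetical layout)
    map_keypoints_3d: 3D building coordinates stored for those features
    K:                the phone camera's intrinsic matrix
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_q, desc_q = surf.detectAndCompute(query_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_q, map_descriptors, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test

    obj_pts = np.float32([map_keypoints_3d[m.trainIdx] for m in good])  # 3D map points
    img_pts = np.float32([kp_q[m.queryIdx].pt for m in good])           # 2D detections

    # Robust initial pose (standing in for the DLT step in the abstract),
    # then Levenberg-Marquardt refinement as described.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    rvec, tvec = cv2.solvePnPRefineLM(obj_pts, img_pts, K, None, rvec, tvec)

    R, _ = cv2.Rodrigues(rvec)
    user_position = -R.T @ tvec   # camera (user) position in building coordinates
    return user_position, R
```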

2.
To compensate for the nonverbal cues missing from text-based computer-mediated communication, sighted users often use emojis or emoticons. Although emojis for the sighted have evolved over the years into animated forms with added sound effects, emojis for visually impaired people remain underdeveloped. This study tested how tactile emojis based on visual imagery, combined with the Braille system, can enhance clarity in the computer-mediated communication environment for those with visual impairments. The results confirmed three things: visually impaired subjects were able to connect emotional emojis to the emotions they represented without any prior guidance; image-based (picture-based) and non-image-based (abstraction-based) tactile emojis were equally learnable; and the clarity of intended meaning improved when an emoji was used with text (Braille). Thirty visually impaired subjects matched an average of 67% of emotions without prior guidance, and three of the four subjects who matched perfectly both before and after guidance were congenitally blind. Subjects had the most trouble distinguishing the facial expression of “fear” from “sadness” and “surprise”, as these share similar traits. After guidance, the image-based tactile design elicited an average of 81% correct answers, whereas the non-image-based design elicited an average of 37%, showing that the image-based tactile design was more effective for learning the meaning of emojis. The clarity of sentences was also improved. This study shows that image-based tactile emojis can improve the texting experience of visually impaired individuals to a level where they can communicate subtle emotional cues through tactile imagery. This advance could narrow the service gap between sighted and visually impaired people and offer a much richer computer-mediated communication environment for visually impaired individuals.

3.
Through rehabilitation and training, visually impaired people can be placed in types of jobs that are compatible with their abilities. A functional assessment approach should be established to measure the physical ability of disabled people in response to specific task and environmental demands. The objective of this study is to develop an integrated computerized system, entitled VITAL (Vision Impaired Task and Assignment Lexicon), to measure a vision-impaired worker's residual capabilities and to provide the necessary recommendations for job accommodations. VITAL includes two major modules: the disability index and the ergonomics consultation module. A single measure, the Disability Index (DI), which represents the capacities of vision-impaired individuals across a range of skill tests, is developed via Multiple Attribute Decision Making (MADM) procedures. The resulting DI can be used to identify the functional deficits and limitations of the visually impaired worker and to match visually impaired people to appropriate employment. This information is also used in the ergonomics consultation module to provide recommendations regarding job and workplace design for the vision-impaired worker.
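The abstract does not specify which MADM procedure VITAL uses; as a purely hypothetical illustration of how skill-test results could be aggregated into a single index, here is a simple weighted-sum scheme. The test names, maxima, and weights are invented for the example.

```python
def disability_index(scores, max_scores, weights):
    """Aggregate normalized skill-test scores into one index in [0, 1].

    All three dicts are keyed by (hypothetical) test name; weights sum to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[t] * scores[t] / max_scores[t] for t in scores)

scores     = {"tactile": 42, "mobility": 30, "auditory": 55}   # raw test results
max_scores = {"tactile": 50, "mobility": 40, "auditory": 60}   # test maxima
weights    = {"tactile": 0.40, "mobility": 0.35, "auditory": 0.25}
print(f"DI = {disability_index(scores, max_scores, weights):.2f}")  # DI = 0.83
```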

4.
In order to increase the power of virtual environments, several attempts have been made to incorporate sound interactivity in some form. For example, several implementations of virtual environments permit playing a previously recorded sound file when an associated event is triggered; the user may then, for instance, perceive the sound of a creaky door when one is opened. However, a more effective system for generating joint audio and visual responses can be derived using physical modelling techniques. We have undertaken a pilot investigation in which virtual objects are implemented such that they implicitly possess vibration properties analogous to those of the real world. Consequently, these objects are able to vibrate in response to stimuli. The vibrations may be perceived visually as, for example, wave patterns on the surface of an object, and acoustically by mapping values representative of surface displacement to a loudspeaker. This paper discusses the current state of the project.
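As a rough illustration of the approach described (one physical model driving both the visual and the audio response), the sketch below simulates a damped 1D string by finite differences; the node displacements could animate a surface, while the displacement of a single "pickup" node is taken as the loudspeaker signal. All parameters are illustrative, not the paper's.

```python
import numpy as np

def struck_string(n=64, steps=44100, c=0.5, damping=0.0002):
    """Finite-difference simulation of a damped string after a strike."""
    u_prev = np.zeros(n)           # displacement at t-1
    u = np.zeros(n)                # displacement at t
    u[n // 3] = 1.0                # the "stimulus": strike one node
    samples = np.empty(steps)
    for t in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap[0] = lap[-1] = 0.0     # fixed ends
        u_next = (2 * u - u_prev + c ** 2 * lap) * (1.0 - damping)
        u_prev, u = u, u_next      # advance one time step
        samples[t] = u[n // 2]     # pickup node -> loudspeaker sample
        # at this point, u could also drive the visual wave pattern
    return samples

audio = struck_string()            # one second of audio at 44.1 kHz
```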

5.
This paper presents a cooperative design-view environment for interactive partitioning applications. The environment provides the user with a comprehensive viewing facility that describes the potentially complex relationships between various design objects. Using it, the user can evaluate and analyse design results visually throughout the entire partitioning process. We have developed a graphical user interface (GUI) environment for the InterPar system, which supports mixed automatic and manual partitioning for multiple field-programmable gate array (FPGA) designs. Preliminary experiments have shown that the use of InterPar may open a new direction for exploring partitioning approaches based on circuit-structure analysis.

6.
In robotics, human–robot interaction has been receiving considerable attention lately. In this paper, we describe a multi-modal system for generating a map of the environment through interaction between a human and a home robot. The system enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the sizes, positions, and topological relations of objects, and produces a map of the room based on knowledge learned through communication with the human. The developed system consists of several components: natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track a human hand, so that the user can point toward an object of interest and guide the robot either to go near it or to locate that object's position in the room. The positions of objects in the room are determined by monocular camera vision using the depth-from-focus method.
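Depth from focus, mentioned at the end of the abstract, infers distance by finding the focus setting at which an image region is sharpest. A minimal sketch with OpenCV follows; the focal-distance table and region of interest are assumptions.

```python
import cv2
import numpy as np

def depth_from_focus(focal_stack, focus_distances, roi):
    """Pick the focus distance that maximizes sharpness inside a region.

    focal_stack:     list of grayscale frames taken at known focus settings
    focus_distances: distance (metres) at which each frame was focused
    roi:             (x, y, w, h) region containing the object of interest
    """
    x, y, w, h = roi
    sharpness = [
        cv2.Laplacian(frame[y:y + h, x:x + w], cv2.CV_64F).var()  # focus measure
        for frame in focal_stack
    ]
    return focus_distances[int(np.argmax(sharpness))]
```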

7.
8.
Many public facility layouts have been developed with little consideration of the visually impaired, producing difficult and unpleasant wayfinding experiences. Not all wayfinding elements can be applied universally to all environments; several are specific to the type of industry being considered. No known research has been conducted within healthcare systems to find the wayfinding limitations of visually impaired users during navigation. The purpose of this study was to analyze the current issues in a wayfinding task for the visually impaired and the normally sighted to identify wayfinding design deficits. Normally sighted participants (m = 25, f = 25) wore one of five vision simulator goggles, each simulating a specific visual impairment (diabetic retinopathy, glaucoma, cataracts, macular degeneration, or hemianopsia), and were then given directions to a specific series of departments within a hospital campus. Participants then navigated a second time (using a different but similar series of paths) without the vision simulator goggles (normal vision) so comparisons could be made. During wayfinding, behaviors such as stopping, looking around, touching walls, and becoming lost and/or confused were recorded, with the location of each instance marked on a map. Questionnaires about the surrounding environment were completed after each condition. The results identified several design elements involving signage, paths/target sites, lighting, and flooring that created wayfinding issues under both experimental conditions. The effects of these issues on participants ranged from tripping to becoming lost in the surrounding environment. Enhancing wayfinding for the most highly visually impacted individuals may also improve wayfinding for those with normal vision via universal design. The hospital design flaws identified by this study provide key areas and elements (not previously investigated) for further research to analyze more comprehensively and ultimately to support sound design recommendations for effective wayfinding.

Relevance to the industry

This paper offers information relevant to an expanding healthcare sector facing an aging population with growing needs. Applying the organizational, architectural, and design principles from this paper can improve patient satisfaction, safety, and patient flow within the hospital setting, for the visually impaired and for those without visual impairment alike.

9.
Although several electronic assistive devices have been developed for the visually impaired over the past few decades, relatively few solutions have been devised to aid them in recognizing generic objects in their environment, particularly indoors. Nevertheless, research in this area is gaining momentum. Among the various technologies being utilized for this purpose, computer vision based solutions are emerging as one of the most promising options, mainly due to their affordability and accessibility. This paper provides an overview of the technologies developed in recent years to assist the visually impaired in recognizing generic objects in indoor environments, with a focus on approaches based on computer vision. It aims to introduce researchers to the latest trends in this area and to serve as a resource for developers who wish to incorporate such solutions into their own work.

10.
Assistive Device Art (ADA) derives from the integration of Assistive Technology and Art, involving the mediation of sensorimotor functions and perception through both psychophysical methods and the conceptual mechanics of sensory embodiment. This paper describes the concept of ADA and its origins by observing the phenomena that surround the aesthetics of prosthesis-related art. It also analyzes one case study, the Echolocation Headphones, relating their provenance and performance to this new conceptual and psychophysical approach to tool design. This ADA tool is designed to aid human echolocation, facilitating the experience of sonic vision as a way of reflecting on and learning about the construct of our spatial perception. The Echolocation Headphones are a pair of opaque goggles that disable the participant’s vision. The device emits a focused sound beam that activates the space with directional acoustic reflection, giving the user the ability to navigate and perceive space through audition. The directional properties of parametric sound provide the participant with a focal echo, similar to the focal point of vision. This study analyzes the effectiveness of this wearable sensory extension for aiding auditory spatial location in three experiments: optimal sound type and distance for object location, perceptual resolution by just-noticeable difference, and goal-directed spatial navigation for open-pathway detection, all conducted at the Virtual Reality Lab of the University of Tsukuba, Japan. The Echolocation Headphones have been designed for a diverse participant base: they have the potential both to aid auditory spatial perception for the visually impaired and to train sighted individuals in gaining human echolocation abilities. Furthermore, this Assistive Device artwork prompts participants to contemplate the plasticity of their sensorimotor architecture.

11.
Advanced Robotics, 2013, 27(5): 499–517
We are developing a helper robot that carries out tasks ordered by users through speech. The robot needs a vision system to recognize the objects mentioned in those orders. However, conventional vision systems cannot recognize objects in complex scenes: they may find many objects and cannot determine which is the target. This paper proposes a method that uses a conversation with the user to solve this problem. The robot asks a question that the user can easily answer and whose answer can efficiently reduce the number of candidate objects. It considers the characteristics of the features used for object identification, such as how easily humans can specify them in words, to generate a user-friendly and efficient sequence of questions. Experimental results show that the robot can detect target objects by asking the questions generated by this method.
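One standard way to realize "an answer that efficiently reduces the candidates", offered here as a plausible reading of the abstract rather than the authors' exact criterion, is to ask about the attribute whose answer splits the current candidate set most evenly, i.e. the one with the highest entropy. (The paper additionally weighs how easily humans can name a feature in words, which could scale this score.) The attribute names below are invented.

```python
import math
from collections import Counter

def question_entropy(candidates, attribute):
    """Entropy of the answer distribution: higher = more even split."""
    counts = Counter(obj[attribute] for obj in candidates)
    n = len(candidates)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_question(candidates, attributes):
    """Ask about the attribute whose answer prunes the most on average."""
    return max(attributes, key=lambda a: question_entropy(candidates, a))

objects = [
    {"color": "red", "size": "small"}, {"color": "red", "size": "large"},
    {"color": "blue", "size": "small"}, {"color": "green", "size": "small"},
]
print(best_question(objects, ["color", "size"]))   # -> "color"
```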

12.
为提高钢结构工业厂房设计的准确性、精确度和工作效率,为某钢结构工业厂房CAD/CAM软件研制出基于三维实体模型的后处理系统.该系统基于面向对象的编程技术,抽象出描述实际结构零件几何信息、结构特征和设计条件的智能型实体对象.将对象以及对象之间的层次关系和逻辑关系存储在AutoCAD图形数据库中,形成三维实体模型,用于直观、准确和完整地表现具有复杂空间关系和细部构造的真实结构;实现数据库对象的建立、编辑、查询、显示和数据整理等操作.该系统结合具体结构形式的特点,实现用于施工图设计和深化设计的全部关键功能.  相似文献   

13.
This paper describes a user study on the benefits and drawbacks of simultaneous spatial sounds in auditory interfaces for visually impaired and blind computer users. Two different auditory interfaces, each in a spatial and a non-spatial condition, were proposed to represent the hierarchical menu structure of a simple word-processing application. In the horizontal interface, the sound sources (menu items) were located in the horizontal plane on a virtual ring surrounding the user’s head, while in the vertical interface the sound sources were aligned one above the other in front of the user. In the vertical interface, the central pitch of the sound sources at different elevations was varied in order to improve the otherwise relatively poor localization performance in the vertical dimension. Interaction with the interfaces was based on a standard computer keyboard for input and a pair of studio headphones for output. Twelve blind or visually impaired test subjects were asked to perform ten different word-processing tasks under four experimental conditions. Task completion times, navigation performance, overall satisfaction, and cognitive workload were evaluated. The initial hypothesis, i.e. that spatial auditory interfaces with multiple simultaneous sounds would prove faster and more efficient than non-spatial ones, was not confirmed. On the contrary, the spatial auditory interfaces proved significantly slower due to high cognitive workload and temporal demand. The majority of users did in fact finish tasks with less navigation and key pressing; however, they required much more time. They reported the spatial auditory interfaces to be hard to use for longer periods due to the high temporal and mental demand, especially with regard to the comprehension of multiple simultaneous sounds. The comparison between the horizontal and vertical interfaces showed no significant differences between the two. It is important to point out that all participants were novice users of the system; it is therefore possible that overall performance would change with more extensive use of the interfaces and an increased number of trials or experiment sets. Our interviews with visually impaired and blind computer users showed that they are used to sharing their auditory channel in order to perform multiple simultaneous tasks, such as listening to the radio, talking to somebody, or using the computer. As the perception of multiple simultaneous sounds requires the entire capacity of the auditory channel and the total concentration of the listener, it therefore does not enable such multitasking.
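As a rough illustration of placing menu-item sounds on a horizontal ring around the listener's head, the sketch below uses simple interaural time and level differences instead of the full spatial rendering a system like this would employ; the head model and constants are illustrative assumptions.

```python
import numpy as np

def place_on_ring(mono, azimuth_deg, sr=44100, head_radius=0.0875, c=343.0):
    """Approximate a source at the given azimuth (0 = front, +90 = right)."""
    az = np.radians(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))     # Woodworth ITD approximation
    delay = int(round(abs(itd) * sr))             # interaural delay in samples
    gain_l = np.sqrt(0.5 * (1.0 - np.sin(az)))    # simple level panning
    gain_r = np.sqrt(0.5 * (1.0 + np.sin(az)))
    left  = np.concatenate([np.zeros(delay if itd > 0 else 0), mono]) * gain_l
    right = np.concatenate([np.zeros(delay if itd < 0 else 0), mono]) * gain_r
    n = max(len(left), len(right))                # pad both channels to length
    return np.pad(left, (0, n - len(left))), np.pad(right, (0, n - len(right)))
```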

14.
The overall quality of haptic user interfaces designed to support visually impaired students' science learning through sensory feedback was systematically studied to investigate task performance and user behavior. Fourteen 6th- to 11th-grade students with visual impairments, recruited from a state-funded school for the blind, were asked to perform three main tasks (i.e., menu selection, structure exploration, and force recognition) using haptic user interfaces and a haptic device. The study used several dependent measures in three categories: (a) task performance, including success rate, workload, and task completion time; (b) user behavior, defined as cursor movements proportionately represented from the user's cursor positional data; and (c) user preference. Results showed that interface type has significant effects on task performance, user behavior, and user preference, with varying degrees of impact on participants with severe visual impairments performing the tasks. The results of this study, together with a set of refined design guidelines and principles, should provide insights for future research on haptic user interfaces when developing haptically enhanced science learning systems for the visually impaired.

15.

In this paper, we introduce a novel computer vision-based perception system dedicated to the autonomous navigation of visually impaired people. A first feature concerns the real-time detection and recognition of obstacles and moving objects present in potentially cluttered urban scenes. To this end, a motion-based, real-time object detection and classification method is proposed that requires no a priori information about obstacle type, size, position, or location. In order to enhance the navigation/positioning capabilities offered by traditional GPS-based approaches, which are often unreliable in urban environments, a building/landmark recognition approach is also proposed. Finally, for the specific case of indoor applications, the system can learn a set of user-defined objects of interest; multi-object identification and tracking are applied to guide the user in localizing such objects. Feedback is presented to the user through audio warnings, alerts, and indications. Bone-conduction headphones are employed so that visually impaired users can hear the system's warnings without obstructing sounds from the environment. At the hardware level, the system is fully integrated on an Android smartphone, which makes it easy to wear, non-invasive, and low-cost.
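For illustration only, the sketch below shows a bare-bones motion-based detector using OpenCV background subtraction. Note the simplifying assumption of a quasi-static camera; the paper's method is designed for a moving, body-worn camera and is considerably more elaborate.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # camera stream (assumption)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # foreground = moving pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        if cv2.contourArea(cnt) > 500:          # ignore small motion blobs
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("moving obstacles", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```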


16.
Mobile robots capable of auditory perception usually adopt the stop-perceive-act principle to avoid the motor noise they generate while moving. Although this principle reduces the complexity of auditory processing for mobile robots, it restricts their auditory capabilities. In this paper, sound and visual tracking are investigated as complements to each other's drawbacks so as to attain robust object tracking: visual tracking may be difficult under occlusion, while sound tracking may be ambiguous in localization due to the nature of auditory processing. For this purpose, we present an active audition system for a humanoid robot. The audition system of this highly intelligent humanoid requires localization of sound sources and identification of the meaning of sounds in the auditory scene. The active audition reported in this paper focuses on improved sound-source tracking by integrating audition, vision, and motor control. Given multiple sound sources in the auditory scene, the humanoid SIG actively moves its head to improve localization by aligning its microphones orthogonally to the sound source and by capturing possible sound sources by vision. The system adaptively cancels motor noise using motor control signals. The experimental results demonstrate the effectiveness of sound and visual tracking.
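A minimal sketch of binaural sound-source localization of the kind such a system builds on, estimating the interaural time difference by cross-correlation of two microphone signals; the microphone spacing and sign convention are illustrative, and the actual system fuses this with vision and motor control.

```python
import numpy as np

def estimate_azimuth(left, right, sr=16000, mic_distance=0.18, c=343.0):
    """Estimate source azimuth (degrees; 0 = straight ahead, + = right)."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)  # +lag: left mic hears later
    itd = lag / sr                                 # interaural time difference
    s = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```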

17.
Degradation of the visual system can lead to a dramatic reduction in mobility by limiting a person to the senses of touch and hearing. This paper presents the development of an obstacle detection system for visually impaired people: while moving through the environment, the user is alerted to close obstacles in range. The proposed system detects obstacles around the user with a multi-sonar array and sends appropriate vibrotactile feedback. The system aims at increasing the mobility of visually impaired people by offering new sensing abilities.
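An illustrative mapping from sonar range readings to vibrotactile intensity, with closer obstacles producing stronger vibration on the corresponding motor; the sensor names, ranges, and thresholds are assumptions, not the paper's values.

```python
def vibration_levels(ranges_m, max_range=3.0, min_range=0.3):
    """Map each sonar's distance reading to a vibration level in [0, 1]."""
    levels = {}
    for sensor, d in ranges_m.items():
        if d >= max_range:
            levels[sensor] = 0.0                 # nothing in range: motor off
        else:
            d = max(d, min_range)                # clamp very close readings
            levels[sensor] = (max_range - d) / (max_range - min_range)
    return levels

print(vibration_levels({"front": 0.8, "left": 2.5, "right": 3.5}))
# {'front': 0.81..., 'left': 0.18..., 'right': 0.0}
```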

18.
Applied Soft Computing, 2007, 7(1): 257–264
The main objective of this work is to develop an electronic travel aid to assist blind people in identifying obstacles during navigation. The navigation assistance for visually impaired (NAVI) system presented in this paper consists of a single-board processing system (SBPS), a vision sensor mounted on headgear, and a pair of stereo earphones. The vision sensor captures the scene in front of the user, and the image is processed by a new real-time image processing scheme using fuzzy clustering algorithms. The processed image is mapped onto specially structured stereo acoustic patterns and transferred to the stereo earphones. Blind individuals were trained with the NAVI system and tested on obstacle identification. Suggestions from the blind volunteers regarding the pleasantness and discriminability of the sound patterns were also incorporated into the prototype. The proposed processing methodology is found to be effective for object identification and for producing stereo sound patterns in the NAVI system.
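The abstract names fuzzy clustering as the core of the image-processing stage; a compact fuzzy c-means implementation over pixel intensities is sketched below as one plausible reading. Cluster count, fuzzifier, and iteration budget are illustrative.

```python
import numpy as np

def fuzzy_cmeans(x, k=3, m=2.0, iters=20, seed=0):
    """Fuzzy c-means on a 1-D array of pixel intensities.

    Returns the membership matrix (n, k) and the k cluster centers.
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(k), size=len(x))   # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)      # weighted cluster means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        p = 2.0 / (m - 1.0)
        u = d ** -p / np.sum(d ** -p, axis=1, keepdims=True)
    return u, centers

pixels = np.random.default_rng(1).integers(0, 256, 4096).astype(float)
memberships, centers = fuzzy_cmeans(pixels)
print(np.sort(centers))                          # e.g. dark / mid / bright
```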

19.
Braille is an important medium through which visually impaired people acquire information and learn. However, current Braille learning methods based on paper books provide only tactile stimulation of the Braille dot positions and suffer from poor portability, poor usability, and outdated content. To address this, this paper proposes a digital Braille learning method with synchronized visual, auditory, and tactile stimulation that can improve the Braille learning efficiency of visually impaired people. Based on a multi-sensory Braille learning machine, a multi-sensory information matching algorithm is designed that outputs text, sound, and Braille dot patterns carrying the same content, enabling barrier-free Braille learning for visually impaired people. Short-term-memory Braille learning experiments show that: (1) learning efficiency is highest under combined visual, auditory, and tactile stimulation, i.e., adding visual stimulation during Braille learning significantly improves learning efficiency for people with residual vision; (2) learning efficiency under combined auditory and tactile stimulation alone is not high, i.e., fully blind learners face real difficulty in mastering Braille and need a longer learning curve; and (3) learning efficiency under auditory stimulation alone is very low, i.e., developing a voice-only Braille learning app has little practical value.
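As a small illustration of the synchronized-output idea (the same content rendered as text, speech, and Braille dot positions), the fragment below maps characters to Unicode Braille cells via their dot numbers. The table covers only a few letters; a real system would use a complete Braille standard (and, for this paper, Chinese Braille).

```python
BRAILLE_DOTS = {                  # letter -> raised dot positions (1-6)
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def to_braille_unicode(text):
    """Render text as Unicode Braille cells (U+2800 block)."""
    cells = []
    for ch in text.lower():
        dots = BRAILLE_DOTS.get(ch, ())           # unknown chars -> blank cell
        cells.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
    return "".join(cells)

print(to_braille_unicode("abcde"))                # -> ⠁⠃⠉⠙⠑
```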

20.
Although a large amount of research has been conducted on building interfaces for the visually impaired that allow users to read web pages and to generate and access information on computers, little development addresses two problems faced by blind users. First, sighted users can rapidly browse and select information they find useful; second, sighted users can make much useful information portable through the recent proliferation of personal digital assistants (PDAs). Neither possibility is currently available to blind users. This paper describes an interface, built on a standard PDA, that allows its user to browse the information stored on it through a combination of screen touches and auditory feedback. The system also supports the storage and management of personal information, so that addresses, music, directions, and other supportive information can be readily created and then accessed anytime and anywhere by the PDA user. The paper describes the system along with the related design choices and design rationale. A user study is also reported.
