Similar documents
 19 similar documents found (search time: 156 ms)
1.
Haptic information presentation is an effective mode of human-computer interaction and information transfer. As an important information channel, it has been applied to navigation and route-guidance aids for blind people, compensating for the shortcomings of audio feedback under certain conditions. This paper gives an overview of navigation/route-guidance aids for the blind, reviews the state of the art and applications of Haptic-based aids in the context of spatial cognition, analyzes development trends and open problems of Haptic-based navigation/route-guidance aids, and discusses their application prospects.

2.
This system aims to let blind people travel alone safely, addressing the inability of existing blind-navigation systems to guide users accurately and safely. The system uses RFID for navigation: road information is stored in electronic tags and converted into corresponding voice prompts, providing accurate, safe guidance for blind users. The planned functions have been completed, including RFID-based navigation, voice prompts, an input method for the blind, GPS navigation, voice reading of SMS messages, and GSM-related functions. The system can accurately guide blind users to their destinations and enables them to travel independently.

3.
A Survey of Autonomous Relative Navigation for Non-cooperative Spacecraft   Cited by: 3 (self-citations: 0, other citations: 3)
Autonomous relative navigation with respect to non-cooperative spacecraft is a key technology for space rendezvous and docking with non-cooperative targets and a major development direction for on-orbit servicing; its study has important theoretical value and engineering significance. Motivated by the need of on-orbit servicing missions for accurate autonomous relative navigation, this paper explains why the technology must be developed. It first summarizes the scope and state of the art of autonomous relative navigation for non-cooperative spacecraft, then reviews the key technologies involved: optical sensors, pose measurement, navigation filters, and ground experiments. Finally, based on this analysis, it identifies the main open problems and offers suggestions for future development.
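The navigation filter named above is usually a Kalman-type estimator. As a hedged illustration only (not any surveyed paper's actual filter, which would be a multi-state EKF over relative pose), a minimal one-dimensional constant-velocity Kalman filter tracking relative range from position-only measurements might look like:

```python
# Illustrative 1-D Kalman filter sketch: estimate relative range and range
# rate from noisy position-only measurements. All parameters are made up;
# a real relative-navigation filter estimates full 6-DOF relative pose.

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.25):
    """Constant-velocity model; returns a list of (range, range_rate) estimates."""
    x, v = measurements[0], 0.0           # state: range, range rate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    estimates = []
    for z in measurements[1:]:
        # Predict: propagate state with constant velocity, inflate covariance
        x += v * dt
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position-only measurement z
        S = P[0][0] + r                   # innovation covariance
        K0, K1 = P[0][0] / S, P[1][0] / S # Kalman gain
        y = z - x                         # innovation
        x += K0 * y
        v += K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        estimates.append((x, v))
    return estimates
```

Fed noiseless measurements of a target closing at 1 m/s, the velocity estimate converges toward the true range rate within a few updates.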

4.
With the continuing development of the Internet of Things and computer information technology, a wide variety of smart wearable devices have appeared in daily life. However, few wearable smart devices on the market target blind users, so they offer this group little effective help. To make daily life more convenient and safer for blind people, this paper presents a design for IoT-based smart navigation glasses for the blind. The design involves speech recognition, GPS positioning, and sensor technologies, and implements obstacle avoidance, GPS navigation, and human-computer interaction. The results show that the glasses can effectively improve blind users' daily convenience and free them from dependence on traditional aids, giving the design considerable practical value and social demand.

5.
Vibration-based information presentation is an effective mode of human-computer interaction and information transfer, and serves as an important channel in navigation/route-guidance aids for the blind, compensating for the shortcomings of audio under certain conditions. Exploiting blind users' sharper-than-average tactile sense and their specific perception of the route-guidance environment, a new guidance mode for the blind is developed that combines GPS, GIS, and Haptic (vibration) technology. On this basis, the paper further explores differentiated vibration patterns for key nodes such as intersections and for various cases of deviation from the planned route. The approach resists noise interference, provides timely feedback, and achieves high reception efficiency, giving it strong application prospects.

6.
To address the difficulty blind people currently face in traveling independently, a design combining smartphone inertial navigation with RFID is proposed, drawing on the respective strengths of the two technologies. Based on a grid of fixed RFID tags, the scheme generates a route map for the blind user and combines a portable RFID reader with a personal smartphone to perform positioning, route planning, and navigation prompts. Experimental results show that the system provides safe and convenient navigation for blind users and helps solve the problem of independent travel.
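The fusion idea above can be sketched as follows; the tag map, step length, and event format are invented for illustration and are not from the paper. Dead reckoning from step events accumulates drift, while each read of a fixed RFID tag with a known position restores an absolute fix:

```python
import math

# Illustrative sketch (made-up tag map and parameters): pedestrian dead
# reckoning drifts over time; a read of a fixed RFID tag with a known
# surveyed position snaps the estimate back to that position.

TAG_POSITIONS = {"tag-07": (20.0, 0.0), "tag-12": (40.0, 0.0)}  # tag coords (m)

def navigate(events, step_len=0.7):
    """events: ('step', heading_rad) or ('rfid', tag_id). Returns final (x, y)."""
    x, y = 0.0, 0.0
    for kind, value in events:
        if kind == "step":                       # dead-reckon one stride
            x += step_len * math.cos(value)
            y += step_len * math.sin(value)
        elif kind == "rfid" and value in TAG_POSITIONS:
            x, y = TAG_POSITIONS[value]          # absolute fix from tag map
    return x, y
```

A production system would blend the two sources (e.g. with a filter) rather than hard-resetting, but the reset form shows why fixed tags bound the dead-reckoning error.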

7.
Computer technology has brought convenience to inspection work in the electric power sector, yet most existing inspection systems are not integrated with navigation systems, leaving inspection work without spatial positioning services. This paper describes the rationale for combining a navigation system with a power-line inspection system, providing GPS positioning, route navigation, equipment discovery, and other spatial positioning services for inspection work, and implements the system using J2ME.

8.
This paper presents the architecture, key technologies, design, and implementation of an intelligent travel-itinerary navigation system. Combining the Oracle Spatial database with MapXtreme, and integrating location-based services, geographic information systems, and data mining, the system uses a tourist's category, personal preferences, characteristics, and needs to provide intelligent, personalized services: travel user-information management, intelligent itinerary planning, intelligent navigation, rescue, and so on.

9.
A navigation system for the blind is developed using 3G and GIS to provide navigation services. The implementation integrates third-generation mobile communications, geographic information systems (GIS), and the Global Positioning System (GPS).

10.
Global navigation satellite systems (GNSS) have largely solved real-time, high-accuracy positioning in open outdoor environments. With accelerating urbanization, however, there is great demand for pedestrian navigation in densely built-up areas where GNSS signals are degraded, which has driven considerable progress in indoor positioning in recent years. Since no single, universal positioning method yet handles the seamless transition between indoor and outdoor environments, seamless navigation has become a new research focus aimed at solving the "last mile" problem in navigation. This paper surveys multi-sensor fusion techniques for pedestrian indoor navigation: (1) it analyzes and compares the strengths and limitations of individual sensors for indoor positioning, from radio-frequency signals to non-radio signals; (2) it reviews fusion approaches for indoor positioning, including multimodal fingerprint fusion, geometric-ranging fusion, and PDR-based fusion. Finally, it examines how indoor positioning can be applied to seamless navigation and presents the challenges and prospects of seamless indoor-outdoor positioning. This work is intended as a reference for subsequent research on high-accuracy seamless positioning.
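One elementary fusion step behind the approaches listed above, combining independent position fixes such as a fingerprint match and a PDR prediction, is inverse-variance weighting. This is a generic textbook scheme offered as a sketch, not a method taken from the surveyed paper:

```python
# Generic inverse-variance fusion sketch: each positioning source reports a
# 2-D fix and its error variance; less certain fixes get less weight.

def fuse(estimates):
    """estimates: list of ((x, y), variance) tuples, e.g. a Wi-Fi
    fingerprint fix and a PDR prediction. Returns the fused (x, y)."""
    wsum = sum(1.0 / var for _, var in estimates)
    fx = sum(x / var for (x, _), var in estimates) / wsum
    fy = sum(y / var for (_, y), var in estimates) / wsum
    return fx, fy
```

With equal variances this reduces to a plain average; a fix three times noisier contributes a third of the weight.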

11.
In this paper, an empirically based study is described, conducted to gain a deeper understanding of the challenges faced by the visually impaired community when accessing the Web. The study, involving 30 blind and partially sighted computer users, identified navigation strategies and perceptions of page layout and graphics when using assistive devices such as screen readers. Analysis of the data revealed that current assistive technologies impose navigational constraints and provide limited information on web page layout. Conveying additional spatial information could enhance the exploration process for visually impaired Internet users. It could also assist collaboration between blind and sighted users when performing web-based tasks. The findings from the survey have informed the development of a non-visual interface, which uses the benefits of multimodal technologies to present spatial and navigational cues to the user.

12.
This paper describes a user study on the benefits and drawbacks of simultaneous spatial sounds in auditory interfaces for visually impaired and blind computer users. Two different auditory interfaces, in spatial and non-spatial conditions, were proposed to represent the hierarchical menu structure of a simple word-processing application. In the horizontal interface, the sound sources (the menu items) were located in the horizontal plane on a virtual ring surrounding the user's head, while the sound sources in the vertical interface were aligned one above the other in front of the user. In the vertical interface, the central pitch of the sound sources at different elevations was varied to improve the otherwise relatively low localization performance in the vertical dimension. Interaction with the interfaces was based on a standard computer keyboard for input and a pair of studio headphones for output. Twelve blind or visually impaired test subjects were asked to perform ten different word-processing tasks in four experiment conditions. Task completion times, navigation performance, overall satisfaction, and cognitive workload were evaluated. The initial hypothesis, that spatial auditory interfaces with multiple simultaneous sounds would prove faster and more efficient than non-spatial ones, was not confirmed. On the contrary, the spatial auditory interfaces proved significantly slower due to high cognitive workload and temporal demand. The majority of users did in fact finish tasks with less navigation and key pressing, but they required much more time. They reported the spatial auditory interfaces to be hard to use for longer periods because of the high temporal and mental demand, especially with regard to comprehending multiple simultaneous sounds. The comparison between the horizontal and vertical interfaces showed no significant differences between the two.
It is important to point out that all participants were novice users of the system; it is therefore possible that overall performance would change with more extensive use of the interfaces and an increased number of trials or experiment sets. Our interviews with visually impaired and blind computer users showed that they are used to sharing their auditory channel to perform multiple simultaneous tasks, such as listening to the radio, talking to somebody, or using the computer. Since the perception of multiple simultaneous sounds requires the entire capacity of the auditory channel and the listener's total concentration, it does not permit such multitasking.

13.
This paper introduces a novel interface designed to help blind and visually impaired people explore and navigate the Web. In contrast to traditionally used assistive tools such as screen readers and magnifiers, the new interface employs a combination of audio and haptic features to provide spatial and navigational information to users. The haptic features are presented via a low-cost force-feedback mouse, allowing blind people to interact with the Web in a fashion similar to their sighted counterparts. The audio provides navigational and textual information through non-speech sounds and synthesised speech. Interacting with the multimodal interface offers a novel experience to target users, especially those with total blindness. A series of experiments was conducted to ascertain the usability of the interface and compare its performance to that of a traditional screen reader. Results have shown the advantages the new multimodal interface offers blind and visually impaired people, including enhanced perception of the spatial layout of Web pages and navigation towards elements on a page. Issues regarding the design of the haptic and audio features raised in the evaluation are discussed and presented as recommendations for future work.

14.
Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For blind people, creating cognitive maps of spaces such as home or office buildings can be a long process, during which the individual may repeat various paths numerous times. While this is typically done on-site, it is of interest to investigate to what extent the task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for preparing to navigate new and unknown environments. The main goal of the present research is therefore to investigate the possibilities of helping blind individuals learn a spatial environment configuration by listening to audio events and interacting with those events within a virtual reality experience. Two types of learning through auditory exploration were compared: in situ real displacement and active navigation in a virtual architecture, where the virtual navigation rendered only acoustic information. Results for two groups of five participants showed that interactive exploration of virtual acoustic room simulations can provide sufficient information for constructing coherent spatial mental maps, although some variation was found between the two environments tested in the experiments. Furthermore, the mental representation of the virtually navigated environments preserved topological and metric properties, as was found through actual navigation.

15.
This paper presents a mixed reality tool developed for training the visually impaired, based on haptic and auditory feedback. The proposed approach focuses on a highly interactive and extensible Haptic Mixed Reality training system that allows visually impaired users to navigate real-size Virtual Reality environments. The system is based on the CyberGrasp™ haptic device. An efficient collision detection algorithm based on superquadrics is also integrated into the system to allow real-time collision detection in complex environments. A set of evaluation tests was designed to identify the importance of haptic, auditory, and multimodal feedback, and to compare the MR cane against the existing Virtual Reality cane simulation system.

16.
An Embedded Blind-Guidance System Based on μClinux   Cited by: 2 (self-citations: 0, other citations: 2)
Yang Chao, Zhao Qunfei. Computer Engineering, 2008, 34(24): 282-284
This paper proposes an embedded system based on tactile-paving recognition to provide navigation for blind or low-vision users. An image-processing hardware platform is built around the high-performance fixed-point DSP ADSP-BF533 and the SAA7113 video decoder chip; the embedded operating system is ported, and the algorithm implementation and optimization are described. Experimental results show that the system meets the required real-time processing and decision accuracy on natural tactile paving and can effectively support independent travel by blind users.

17.
A method is proposed to detect obstacle-free paths in real time as part of a cognitive navigation aid system for visually impaired people. It is based on the analysis of disparity maps obtained from a stereo vision system carried by the blind user. The detection method consists of a fuzzy logic system that assigns to each group of pixels a certainty of being part of a free path, depending on the parameters of a planar-model fit. We also present experimental results on different real outdoor scenarios, showing that our method is the most reliable in the sense that it minimizes the false-positive rate.
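The fuzzy certainty assignment might be sketched as a simple membership function over the residual of the planar (ground-plane) fit; the thresholds and ramp shape below are invented for illustration and are not taken from the paper:

```python
# Illustrative fuzzy membership sketch (thresholds are made up): pixels
# whose disparity closely matches the fitted ground plane get high
# free-path certainty; the certainty ramps down as the residual grows.

def free_path_certainty(residual, lo=0.5, hi=2.0):
    """Certainty that a pixel group belongs to a free path, given its
    disparity residual from the planar fit.
    residual <= lo -> 1.0 (clearly ground plane)
    residual >= hi -> 0.0 (clearly an obstacle)
    otherwise a linear ramp between the two."""
    if residual <= lo:
        return 1.0
    if residual >= hi:
        return 0.0
    return (hi - residual) / (hi - lo)
```

A final crisp free/blocked decision could then threshold this certainty, trading off false positives against missed obstacles.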

18.
Contemporary knowledge systems have given too much importance to visual symbols, the written word for instance, as the repository of knowledge. The primacy of the written word, and the representational world built around it, is under debate, especially in light of recent insights from cognitive science that seek to bring action, intent, and emotion back into its core (Freeman and Nunez in J Consciousness Stud 6(11/12), 1999). It is argued that other sensory experiences, apart from the visual, along with desires (or intent) and emotions such as pain, pleasure, sorrow, or joy, constitute equally important building blocks that shape an individual's cognition of the surrounding world. This multi-sensory cognition, colored by emotions, inspires action and hence is valid knowledge. This is probably nowhere more apparent than in the world of the visually impaired. Deprived of visual sensory capability, they must perforce depend on other senses. But the dominant discourse in wider society plays a major role in determining what blind people can do. A society built around visual symbols and the written word underplays other elements of cognition and in the process undervalues them. This is also reflected in the construction of social artifacts of various kinds, such as the educational certification system, which is based primarily on skills with the written word. (The Braille system is an attempt to make the written world accessible to the blind through tactile signals, so that words are 'felt' and 'read.' But it is quite cumbersome: even a blind person highly skilled in writing Braille cannot match the writing speed of an ordinary sighted literate person. Effective and efficient computer-based voice-text-voice converters might solve this problem better.) Linguistic ability becomes most valuable, and at another level the written word gains salience over the spoken word.
Blind people therefore hardly have a chance, except through concessions or piety. A practice built around the imagery of an empowered blind person must thus depart from mainstream conceptualization, for power is derived from what one has rather than from what one lacks. It must begin by tapping and valorizing one's own endowments. This paper attempts to identify such a departure based on the experience of Blind Opera, a theatre group of the blind working in Kolkata, India. It seeks to provide an exposition, in the written word, of an experience that can be only partially captured within the confines of a text. It is therefore an incomplete account, and may be taken as an attempt to reach out and seek an exchange of experiences and insights.

19.
Research on spatial cognition and blind navigation suggests that a device aimed at helping blind people to shop independently should provide the shopper with effective interfaces to the locomotor and haptic spaces of the supermarket. In this article, we argue that robots can act as effective interfaces to haptic and locomotor spaces in modern supermarkets. We also present the design and evaluation of three product selection modalities (browsing, typing, and speech) that allow the blind shopper to select a desired product from a repository of thousands of products.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号