Similar documents
20 similar documents found (search time: 15 ms)
1.
Although several electronic assistive devices have been developed for the visually impaired in the past few decades, relatively few solutions have been devised to aid them in recognizing generic objects in their environment, particularly indoors. Nevertheless, research in this area is gaining momentum. Among the various technologies being utilized for this purpose, computer vision based solutions are emerging as one of the most promising options, mainly due to their affordability and accessibility. This paper provides an overview of the various technologies that have been developed in recent years to assist the visually impaired in recognizing generic objects in an indoor environment, with a focus on approaches based on computer vision. It aims to introduce researchers to the latest trends in this area and to serve as a resource for developers who wish to incorporate such solutions into their own work.

2.
Degradation of the visual system can lead to a dramatic reduction in mobility by limiting a person to their senses of touch and hearing. This paper presents the development of an obstacle detection system for visually impaired people. While moving through the environment, the user is alerted to nearby obstacles in range. The proposed system detects obstacles around the user with a multi-sonar system and sends appropriate vibrotactile feedback. The system aims at increasing the mobility of visually impaired people by offering new sensing abilities.
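The abstract does not specify how sonar readings are translated into vibrotactile feedback, but the core idea can be sketched as a transfer function: each sonar distance is mapped to a vibration intensity that grows as the obstacle gets closer. The function name, the 3 m range, and the linear mapping below are assumptions for illustration, not the paper's actual design.

```python
def vibration_levels(distances_m, max_range_m=3.0):
    """Map per-sonar distance readings to vibration intensities in [0, 1].

    Closer obstacles produce stronger vibration; readings at or beyond
    max_range_m (or missing readings) produce none. The linear mapping
    is a hypothetical choice for illustration.
    """
    levels = []
    for d in distances_m:
        if d is None or d >= max_range_m:
            levels.append(0.0)  # nothing in range: motor off
        else:
            levels.append(round(1.0 - max(d, 0.0) / max_range_m, 3))
    return levels

# One reading per sonar (e.g. front, left, right), in metres.
front_left_right = vibration_levels([0.5, 2.9, None])
```

A real device would additionally smooth these levels over time to avoid motor chatter when a reading flickers around the range threshold.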

3.
Multimedia Tools and Applications - Video games are changing how we interact and communicate with each other. They can provide an authentic and collaborative platform for building new communities...

4.
Individuals with visual impairments often face challenges in their daily lives, particularly in terms of independent mobility. To address this issue, we present a mixed reality-based assistive system for visually impaired individuals, which comprises a Microsoft HoloLens 2 device and a website and utilizes a simultaneous localization and mapping (SLAM) algorithm to capture various large indoor scenes in real time. This system incorporates remote multi-person assistance technology and navigation technology to aid visually impaired individuals. To evaluate the effectiveness of our system, we conducted an experiment in which several participants completed a large indoor scene maintenance task. Our experimental results demonstrate that the system is robust and can be utilized in a wide range of indoor environments. Additionally, the system enhances environmental perception and enables visually impaired individuals to navigate independently, thus facilitating successful task completion.

5.
6.
This paper describes a user study on the benefits and drawbacks of simultaneous spatial sounds in auditory interfaces for visually impaired and blind computer users. Two different auditory interfaces in spatial and non-spatial condition were proposed to represent the hierarchical menu structure of a simple word processing application. In the horizontal interface, the sound sources or the menu items were located in the horizontal plane on a virtual ring surrounding the user’s head, while the sound sources in the vertical interface were aligned one above the other in front of the user. In the vertical interface, the central pitch of the sound sources at different elevations was changed in order to improve the otherwise relatively low localization performance in the vertical dimension. The interaction with the interfaces was based on a standard computer keyboard for input and a pair of studio headphones for output. Twelve blind or visually impaired test subjects were asked to perform ten different word processing tasks within four experiment conditions. Task completion times, navigation performance, overall satisfaction and cognitive workload were evaluated. The initial hypothesis, i.e. that the spatial auditory interfaces with multiple simultaneous sounds should prove to be faster and more efficient than non-spatial ones, was not confirmed. On the contrary—spatial auditory interfaces proved to be significantly slower due to the high cognitive workload and temporal demand. The majority of users did in fact finish tasks with less navigation and key pressing; however, they required much more time. They reported the spatial auditory interfaces to be hard to use for a longer period of time due to the high temporal and mental demand, especially with regards to the comprehension of multiple simultaneous sounds. The comparison between the horizontal and vertical interface showed no significant differences between the two. 
It is important to point out that all participants were novice users of the system; it is therefore possible that overall performance could change with more extensive use of the interfaces and an increased number of trials or experiment sets. Our interviews with visually impaired and blind computer users showed that they are used to sharing their auditory channel in order to perform multiple simultaneous tasks such as listening to the radio, talking to somebody, using the computer, etc. As the perception of multiple simultaneous sounds requires the entire capacity of the auditory channel and the listener's full concentration, it therefore does not permit such multitasking.

7.
The author runs an educational experiment at his university where blind and visually impaired people can study computer science or mathematics under conditions suited to their disabilities. The project is now in its first semester. The paper describes the problems blind and visually impaired students currently face, and it describes the methods used in our educational experiment to overcome these difficulties. It also reports the experience gained with the project so far. An appendix is devoted to a brief survey of the technology used to make computers accessible to blind and visually impaired people.

8.
9.
The eyes are an essential tool for human observation and perception of the world, helping people perform their tasks. Visual impairment causes many inconveniences in the lives of visually impaired people, so it is necessary to focus on the needs of this community. Researchers work from different angles to help visually impaired people live normal lives. The advent of the digital age has profoundly changed the lives of the visually impaired community, making life more convenient. Deep learning, as a promising technology, is also expected to improve the lives of visually impaired people; it is increasingly being used in the diagnosis of eye diseases and the development of visual aids. The earlier a doctor accurately diagnoses an eye disease, the sooner the patient can receive appropriate treatment and the better the chances of a cure. This paper summarises recent research on artificial intelligence-based eye disease diagnosis and visual aids. The research is divided, according to the purpose of the study, into deep learning methods applied to diagnosing eye diseases and smart devices that help visually impaired people in their daily lives. Finally, a summary is given of the directions in which artificial intelligence may be able to assist the visually impaired in the future. In addition, this overview provides some background on deep learning for beginners. We hope this paper will inspire future work on these subjects.

10.
NAVIG: augmented reality guidance system for the visually impaired
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The NAVIG project (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to the desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly with a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in line with guidance directives established through participative design with potential users and educators for the visually impaired.
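Guidance in a head-centered reference frame, as described above, requires the bearing of a geolocated target relative to the user's current heading before a spatial audio renderer can place the sound. The sketch below uses an equirectangular approximation suitable for short guidance distances; the function name and signature are illustrative and not taken from the NAVIG system.

```python
import math

def head_relative_bearing(user_lat, user_lon, user_heading_deg,
                          tgt_lat, tgt_lon):
    """Bearing of a geolocated target relative to the user's heading,
    in degrees in (-180, 180]; negative means the target is to the left.

    Uses an equirectangular approximation (adequate over the short
    distances involved in pedestrian guidance). Hypothetical helper,
    not NAVIG's actual implementation.
    """
    dlat = tgt_lat - user_lat
    dlon = (tgt_lon - user_lon) * math.cos(math.radians(user_lat))
    absolute = math.degrees(math.atan2(dlon, dlat))  # 0 deg = true north
    # Wrap the difference into (-180, 180] so left/right is unambiguous.
    return (absolute - user_heading_deg + 180.0) % 360.0 - 180.0

# User in Paris heading east (90 deg); target due north of them.
cue = head_relative_bearing(48.85, 2.35, 90.0, 48.86, 2.35)  # negative: left
```

The wrapped angle can then drive the azimuth of a 3D audio source so the sound appears to come from the target's direction.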

11.
At first glance, making electronic music seems to be a domain that is also well suited to people with limited eyesight. However, a closer analysis reveals that standard software and hardware are both strongly dominated by graphical output. In order to close this gap for visually impaired musicians, we developed a MIDI (Musical Instrument Digital Interface) sequencer with audio feedback and a new interaction paradigm that eliminates interaction with the PC's keyboard and screen. The blind musician relies solely on input via the instrument itself. He can both record and play music via the keyboard's black and white keys and at the same time control all functions of a multi-track MIDI sequencer without ever taking his hands off the instrument. We also use the MIDI connection to code different kinds of feedback to the user in an efficient way. The software, which runs on a PC connected to an electronic instrument, has been evaluated and improved extensively.

12.
As a special group, visually impaired people (VIP) find it difficult to access and use visual information in the same way as sighted individuals. In recent years, benefiting from the development of computer hardware and deep learning techniques, significant progress has been made in assisting VIP with visual perception. However, most existing datasets are annotated in a single scenario and lack sufficient annotations for diverse obstacles to meet the realistic needs of VIP. To address this issue, we propose a new dataset called Walk On The Road (WOTR), which has nearly 190 K objects, with approximately 13.6 objects per image. Specifically, WOTR contains 15 categories of common obstacles and 5 categories of road-judging objects, covering multiple scenarios: walking on sidewalks, tactile pavings, crossings, and other locations. Additionally, we offer a series of baselines by training several advanced object detectors on WOTR. Furthermore, we propose a simple but effective PC-YOLO that obtains excellent detection results on the WOTR and PASCAL VOC datasets. The WOTR dataset is available at https://github.com/kxzr/WOTR.

13.
Technology advances and the continuing convergence of computing and telecommunications have made an unprecedented amount of information available to the public. For many people with disabilities, however, accessibility issues limit the impact of such widespread availability. Of the many types of disabilities (mobility, hearing, and learning impairments, for example), vision impairments are the most pervasive in the general population, especially among seniors. The world's rapidly aging population is redefining "visually impaired," which refers to individuals with low vision (that is, people for whom ordinary eyeglasses, contact lenses, or intraocular lens implants don't provide clear vision), color blindness, and blindness. In 1998, the US Congress amended the Rehabilitation Act, strengthening provisions covering access to government-posted information for people with disabilities. As amended, Section 508 requires federal agencies to ensure that all assets and technologies are accessible and usable by employees and the public, regardless of physical, sensory, or cognitive disabilities. Most current assistive technologies for visually impaired users are expensive, difficult to use, and platform dependent. A new approach by the US National Library of Medicine (NLM), National Institutes of Health (NIH), addresses these weaknesses by locating the assistive capability at the server, thus freeing visually impaired individuals from the software expense, technical complexity, and substantial learning curve of other assistive technologies. NLM's Senior Health Web site (http://nihseniorhealth.gov), a talking Web (a Web application that presents Web content as speech to users), demonstrates the approach's effectiveness.

14.
15.
Computer work is a visually demanding task associated with adverse eye symptoms. Frequent use of digital displays is known to cause a deterioration of so-called binocular control. Direct glare further reduces the capacity for binocular coordination during computer work, leading to reduced reading ability and increased eye symptoms. The purpose of this study was to investigate the effect of different luminance levels of direct glare on binocular eye movement control and reading ability in a computer work environment.

Sixteen participants with normal binocular vision performed equal reading tasks in a balanced study. Three controlled lighting conditions of direct glare (2000, 4000 and 6000 cd/m2) were tested, in addition to no glare. After each trial, the participants answered survey questionnaires regarding their understanding of the text, as well as their subjective experience of workload and perceived vision. Horizontal fixation disparity (FD) was measured before and after the reading tasks to evaluate binocular eye movement control.

When comparing the responses of visual experience, a significant difference in reported eye symptoms was found between lighting conditions. Based on the variation (SD), a significant difference was found within mean values of repeated measurements of horizontal FD, and a significantly higher variation was found when comparing initial FD values measured under the no-glare condition to final values measured in all three glare conditions. Reading ability was significantly negatively affected by the adversity of the lighting conditions.

This study supports the contention that binocular eye movement control is reduced by direct glare. Even lower degrees of disability glare caused eye symptoms. The results strengthen the argument that working with flat screens raises visual demands.

16.
17.
At present, a visually impaired kayaker needs the assistance of a kayaking coach in charge of guiding him/her with loud vocal instructions or a sound device during the sea ride. However, neither the coach nor the visually impaired kayaker feels at ease with such guiding, due to the amount of noise generated by the frequent vocal interactions. This paper describes a novel concept of sensory navigation guide meant to help kayakers with visual disabilities practice sea kayaking autonomously. The innovation consists in providing kayakers with wristbands that automatically vibrate left or right, depending on the predefined trajectory to be followed. The main contributions of this work include an original navigation algorithm based on GPS feedback to track a corridor-shaped path on the sea and the definition of specific metrics to analyze the performance of the kayakers along the ride. Experiments carried out on a population of 10 visually impaired kayakers and 10 sighted kayakers showed convincing results. A satisfaction survey confirmed that all participants acknowledged the added value of the system in terms of increased autonomy. Most of the visually impaired participants also enjoyed a greater sports entertainment experience thanks to this guiding system, which can be extended to other kinds of water sports.
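The corridor-tracking idea above can be sketched as a decision rule on the signed cross-track error (the kayak's GPS-derived distance from the corridor centreline): inside the corridor, no cue; outside, vibrate the band that steers the paddler back. The threshold, sign convention, and function name are assumptions for illustration; the paper's actual control law is not reproduced here.

```python
def wristband_cue(cross_track_m, corridor_half_width_m=5.0):
    """Decide which wristband to vibrate from the signed cross-track error.

    cross_track_m > 0 is taken to mean the kayak has drifted right of the
    corridor centreline, so the left band vibrates to steer it back (and
    vice versa). Inside the corridor no cue is issued. Threshold and sign
    convention are hypothetical.
    """
    if abs(cross_track_m) <= corridor_half_width_m:
        return None  # inside the corridor: stay quiet
    return "left" if cross_track_m > 0 else "right"

# 7 m right of the centreline with a 5 m half-width corridor:
cue = wristband_cue(7.0)  # the left band fires
```

In practice the cross-track error itself would come from projecting consecutive GPS fixes onto the planned corridor segment, with some filtering to absorb GPS jitter.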

18.
This paper presents a context-aware smartphone-based visual obstacle detection approach to aid visually impaired people in navigating indoor environments. The approach is based on processing two consecutive frames (images), computing optical flow, and tracking certain points to detect obstacles. The frame rate of the video stream is determined using a context-aware data fusion technique for the sensors on smartphones. Through an efficient and novel algorithm, a point dataset on each pair of consecutive frames is designed and evaluated to check whether the points belong to an obstacle. In addition to determining the points based on the texture in each frame, our algorithm also considers the heading of user movement to find critical areas on the image plane. We validated the algorithm through experiments by comparing it against two comparable algorithms. The experiments were conducted in different indoor settings, and the results, based on precision, recall, accuracy, and F-measure, were compared and analyzed. The results show that, in comparison to the other two widely used algorithms for this process, our algorithm is more precise. We also considered the time-to-contact parameter for clustering the points and present the resulting improvement in clustering performance.
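The time-to-contact (TTC) parameter mentioned above has a classic approximation from optical flow: for a tracked point moving radially away from the focus of expansion, TTC is roughly its distance from that focus divided by its radial expansion rate. The sketch below shows that textbook form, not the paper's exact formulation.

```python
def time_to_contact(radius_px, radial_speed_px_per_s):
    """Time-to-contact estimate for a tracked image point.

    For a point expanding radially from the focus of expansion,
    TTC ~= r / (dr/dt): distance from the focus divided by the
    expansion rate. A small TTC flags an imminent obstacle.
    Classic approximation; not the paper's exact method.
    """
    if radial_speed_px_per_s <= 0:
        return float("inf")  # not expanding: nothing approaching
    return radius_px / radial_speed_px_per_s

# A point 120 px from the focus of expansion, moving outward at 60 px/s,
# suggests roughly 2 seconds until contact.
ttc = time_to_contact(120.0, 60.0)
```

Points with similar TTC values can then be clustered together, which is consistent with the paper's reported use of TTC to improve clustering.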

19.
A flexible vision-based algorithm for a book sorting system is presented. The algorithm is based on a discrimination model that is adaptively generated for the current object classes by learning. The algorithm consists of an image normalization process, a feature element extraction process, a learning process, and a recognition process. The image normalization process extracts the contour of the object in an image and geometrically normalizes the image. The feature extraction process converts the normalized image to a pyramidal representation, and a feature element is extracted from each resolution level. The learning process generates a discrimination model, which represents the differences between classes, based on hierarchical clustering. In the recognition process, the input images are hierarchically discriminated under the control of the decision tree. To evaluate the algorithm, a simulation system was implemented on a general-purpose computer and an image processor was developed.
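The pyramidal representation mentioned above is commonly built by repeatedly downsampling the image, e.g. by 2x2 block averaging, so that a feature can be read off at each resolution level. The sketch below is a generic stand-in, as the abstract does not specify the construction used.

```python
def build_pyramid(image, levels=3):
    """Pyramidal representation by repeated 2x2 block averaging.

    `image` is a list of equal-length rows of grey values; each level
    halves both dimensions (odd trailing rows/columns are dropped).
    A generic construction, assumed for illustration.
    """
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        if h == 0 or w == 0:
            break  # cannot halve further
        nxt = [[(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1] +
                 prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(w)] for y in range(h)]
        pyramid.append(nxt)
    return pyramid
```

Coarse levels give cheap class separation early in a decision tree, with finer levels consulted only when coarse features are ambiguous.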

20.
A ship is constructed from blocks, which are the basic units in shipbuilding. Each block is designed and assembled individually, and the blocks are welded together to form an entire ship. Therefore, the assembly of blocks within a manufacturing schedule is important for the timely delivery of a ship. To maintain the block assembly schedule, the current status of the block assembly must be monitored and fed back to the schedule operator. Currently, monitoring of the assembly status is performed manually by the worker, who determines the status of assembly of a block based on his/her experience. Therefore, the efficiency and accuracy of the work cannot be guaranteed in current practice. To address this problem, a vision-based system for monitoring block assembly is proposed in this work. The system consists of segmentation, identification and estimation units. Cameras acquire images of the blocks during assembly. The images are subsequently processed to extract the areas of the blocks. Next, the extracted blocks are identified and compared with CAD data to estimate the assembly progress. The estimated information is provided to the operator for efficient management of the block assembly schedule. The proposed system was tested with real examples that demonstrate its potential for use at a real assembly site.
