A visual reasoning-based approach for driving experience improvement in the AR-assisted head-up displays
Affiliation: 1. State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China; 2. School of Mechanical and Electrical Engineering, Central South University, Changsha 410083, China; 3. National NC System Engineering Research Center, Huazhong University of Science and Technology, Wuhan 430074, China
Abstract: Enabled by advanced data analytics and intelligent computing, augmented reality head-up displays (AR-HUDs) are expected to evolve into intelligent in-car assistance systems that offer drivers greater convenience and safer traffic. Nevertheless, current AR-HUD systems lack cognitive intelligence: they do not translate perceptual results into recommended driving strategies and instead rely solely on the driver's own decision-making. To pave the way, this work proposes a visual reasoning-based approach that presents perceptual, predictive, and reasoning information to drivers on AR-HUDs. First, a Driving Scenario Knowledge Graph comprising road elements and empirical driving knowledge is established. Then, by analyzing video streams and images captured by an in-car camera, the driving scene is perceived comprehensively, including 1) identification of road elements and 2) recognition of moving elements' intentions. Next, a graph-based driving scenario reasoning model, a driving scenario-adaptive KAGNET, is built to generate driving strategy recommendations. The resulting information is shown on the HUD via pre-defined AR graphics to support drivers intuitively. Finally, a case study demonstrates the feasibility of the approach. As an exploratory study, its limitations and directions for future work are highlighted to encourage further research and open discussion toward better implementations of AR-HUDs.
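To make the pipeline described above more concrete, the following minimal Python sketch mimics its three stages: a hand-built driving-scenario knowledge graph, a stubbed perception step, and a simple rule-based reasoning pass that maps perceived facts to a recommended strategy for the HUD overlay. All entity names, relations, and the recommend_strategy helper are hypothetical illustrations; the paper's actual reasoning model is a driving scenario-adaptive KAGNET graph neural network, which is not reproduced here.

from dataclasses import dataclass, field

@dataclass
class ScenarioKG:
    """Driving Scenario Knowledge Graph stored as (head, relation, tail) triples."""
    triples: set = field(default_factory=set)

    def add(self, head, rel, tail):
        self.triples.add((head, rel, tail))

    def query(self, head=None, rel=None, tail=None):
        # Return all triples matching the given (possibly partial) pattern.
        return [t for t in self.triples
                if (head is None or t[0] == head)
                and (rel is None or t[1] == rel)
                and (tail is None or t[2] == tail)]

def perceive(frame):
    """Stand-in for the visual perception stage (element detection + intention recognition)."""
    # A real system would run detection and intention-recognition networks on the camera frame.
    return {"elements": ["pedestrian", "crosswalk"],
            "intentions": {"pedestrian": "crossing"}}

def recommend_strategy(kg, perception):
    """Toy reasoning: follow empirical-knowledge edges to pick a driving strategy."""
    for element in perception["elements"]:
        intent = perception["intentions"].get(element)
        situation = f"{element}_{intent}" if intent else element
        # Empirical rule edges of the form (situation, "suggests", strategy).
        for _, _, strategy in kg.query(head=situation, rel="suggests"):
            return strategy
    return "maintain_speed"

if __name__ == "__main__":
    kg = ScenarioKG()
    kg.add("pedestrian_crossing", "suggests", "slow_down_and_yield")
    kg.add("red_light", "suggests", "stop")

    perception = perceive(frame=None)  # the frame would come from the in-car camera
    strategy = recommend_strategy(kg, perception)
    print(f"AR-HUD overlay: {strategy}")  # rendered as a pre-defined AR graphic in practice

In this toy form the "reasoning" is a lookup over hand-written rule edges; the paper replaces this with learned graph reasoning so that recommendations generalize beyond explicitly encoded situations.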
Keywords: Visual reasoning  Smart traffic  Graph neural network  Augmented reality  Head-up display
This article is indexed in ScienceDirect and other databases.