Knowledge-guided inference for voice-enabled CAD
Authors:XY Kou  ST Tan
Affiliation:Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong
Abstract:Voice-based human-computer interaction has attracted much interest and found a variety of applications. However, many existing voice-based interfaces support only commands drawn from fixed vocabularies or preset expressions. This paper investigates an approach to implementing a flexible voice-enabled CAD system in which users are no longer constrained by predefined commands: designers can communicate with CAD modelers far more flexibly, using natural language conversation. To accomplish this, a knowledge-guided approach is proposed to infer the semantics of voice input. Semantic inference is formulated as a template matching problem, in which the semantic units parsed from the voice input are the “samples” to be inspected and the semantic units in a predefined library are the feature templates. The proposed behavioral glosses, together with CAD-specific synonyms, hyponyms and hypernyms, are used extensively both in parsing semantic units and in the subsequent template matching. With these sources of knowledge, all semantically equivalent expressions can be mapped to the same command set, so the Voice-enabled Computer Aided Design (VeCAD) system can process expressions it has never encountered before and infer their semantics at runtime. Experiments show that this knowledge-guided approach enhances the robustness of semantic inference and effectively eliminates overestimation and underestimation in design-intent interpretation.
Keywords:Voice  Speech  Voice-enabled CAD  Knowledge  Synonym expansion  Template matching  Semantic inference
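The core idea of the abstract, mapping semantically equivalent utterances to one command via synonym expansion and template matching, can be sketched as follows. This is a minimal illustration, not the authors' VeCAD implementation; the lexicon, template set, and function names are all hypothetical.

```python
# CAD-specific synonym lexicon (hypothetical): each surface form is
# normalized to one canonical semantic unit.
SYNONYMS = {
    "make": "create", "build": "create", "create": "create",
    "box": "block", "cube": "block", "block": "block",
    "remove": "delete", "erase": "delete", "delete": "delete",
}

# Feature templates (hypothetical): a set of canonical semantic units
# is matched against the template library to yield a CAD command.
TEMPLATES = {
    frozenset({"create", "block"}): "CREATE_BLOCK",
    frozenset({"delete", "block"}): "DELETE_BLOCK",
}

def infer_command(utterance):
    """Parse an utterance into canonical semantic units, then match templates.

    Returns the matched command name, or None if no template matches.
    """
    units = {SYNONYMS[w] for w in utterance.lower().split() if w in SYNONYMS}
    return TEMPLATES.get(frozenset(units))

# Semantically equivalent expressions map to the same command:
print(infer_command("make a box"))    # CREATE_BLOCK
print(infer_command("build a cube"))  # CREATE_BLOCK
print(infer_command("erase the block"))  # DELETE_BLOCK
```

Because matching is done on canonical units rather than literal words, an expression never seen before (e.g. "build a cube") still resolves correctly, which is the flexibility the paper claims over fixed-vocabulary voice commands.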
This document is indexed by ScienceDirect and other databases.