Similar Articles
1.
Sign language synthesis for Uyghur helps deaf Uyghur people communicate naturally with hearing people, and can also be applied to computer-assisted Uyghur sign language teaching and Uyghur-language television broadcasting. A Uyghur sign gesture database is the foundation of Uyghur sign language synthesis. After analyzing the characteristics of Uyghur sign language, keyframe interpolation is used to control the hand gestures of a VRML virtual human, and a Uyghur gesture-editing system is implemented with Visual C++ and OpenGL; gesture motion data drive the virtual human to display the current hand state in real time. With this system, gesture motion data were collected for commonly used Uyghur words and the 32 Uyghur letters.
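The keyframe interpolation mentioned above can be sketched minimally: given stored key poses (e.g. joint-angle vectors) at key times, the pose at any intermediate time is a linear blend of the two surrounding keyframes. The function name and data layout are illustrative, not from the paper.

```python
import numpy as np

def interpolate_keyframes(key_poses, key_times, t):
    """Linearly interpolate a joint-angle vector between stored keyframes.

    key_poses: (K, D) array of poses, key_times: (K,) increasing times.
    """
    key_times = np.asarray(key_times, dtype=float)
    key_poses = np.asarray(key_poses, dtype=float)
    t = np.clip(t, key_times[0], key_times[-1])
    # index of the keyframe at or before t
    i = np.searchsorted(key_times, t, side="right") - 1
    i = min(i, len(key_times) - 2)
    t0, t1 = key_times[i], key_times[i + 1]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * key_poses[i] + w * key_poses[i + 1]
```

In a real system a smoother spline (e.g. Catmull-Rom) would usually replace the linear blend to avoid velocity discontinuities at keyframes.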

2.
Sign language is a method of communication for deaf-mute people, commonly using articulated gestures and postures of the hands and fingers. This paper presents a system that recognizes Korean sign language (KSL) and translates it into normal Korean text. A pair of data-gloves is used as the sensing device for detecting motions of the hands and fingers. For efficient recognition of gestures and postures, a technique for efficient classification of motions is proposed, and a fuzzy min-max neural network is adopted for on-line pattern recognition.
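The fuzzy min-max classifier used above represents each class by hyperboxes and scores a pattern by a fuzzy membership that is 1 inside the box and decays outside it. The sketch below follows Simpson's classic membership function for a single hyperbox (features assumed normalized to [0, 1]); it is an illustration, not the paper's exact network.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of pattern x in the hyperbox [v, w].

    Returns 1.0 when x lies inside the box; outside, membership falls
    off linearly with distance, at a rate set by the sensitivity gamma.
    """
    x, v, w = (np.asarray(a, dtype=float) for a in (x, v, w))
    # penalty for exceeding the box's max corner w, per dimension
    above = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    # penalty for falling below the box's min corner v, per dimension
    below = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean((above + below) / 2.0))
```

Classification picks the class whose hyperboxes give the highest membership; training grows boxes to enclose new patterns, contracting them when classes overlap.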

3.
4.
Sign language communication includes not only lexical sign gestures but also grammatical processes which represent inflections through systematic variations in sign appearance. We present a new approach to analyse these inflections by modelling the systematic variations as parallel channels of information with independent feature sets. A Bayesian network framework is used to combine the channel outputs and infer both the basic lexical meaning and inflection categories. Experiments using a simulated vocabulary of six basic signs and five different inflections (a total of 20 distinct gestures) obtained from multiple test subjects yielded 85.0% recognition accuracy. We also propose an adaptation scheme to extend a trained system to recognize gestures from a new person by using only a small set of data from the new person. This scheme yielded 88.5% recognition accuracy for the new person while the unadapted system yielded only 52.6% accuracy.
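Combining independent channels in a Bayesian framework reduces, in the simplest case, to multiplying per-channel posteriors and renormalizing (the naive-Bayes assumption). This is a minimal sketch of that fusion step, not the paper's full network:

```python
import numpy as np

def fuse_channels(channel_posteriors, prior=None):
    """Combine per-channel class posteriors assuming conditional independence.

    channel_posteriors: (C, K) array, one row of K class scores per channel.
    Returns a normalized K-vector of fused posteriors.
    """
    channel_posteriors = np.asarray(channel_posteriors, dtype=float)
    score = channel_posteriors.prod(axis=0)
    if prior is not None:
        score *= np.asarray(prior, dtype=float)
    return score / score.sum()
```

With channels reporting [0.9, 0.1] and [0.8, 0.2], fusion sharpens the decision toward class 0, since independent agreeing evidence compounds multiplicatively.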

5.
Information and mathematical models are proposed for the animation of sign language communication based on a virtual human model. A model is developed to fix sign language morphemes and is used to create a technology and software to generate, store, and reproduce gestures. Algorithmic solutions are proposed for computing human-like trajectories of hand and body movements when passing from one gesture to another, as well as for facial expressions and articulation.

6.
We live in a society that depends on high-tech devices for assistance with everyday tasks, including everything from transportation to health care, communication, and entertainment. Tedious tactile input interfaces to these devices result in inefficient use of our time. Appropriate use of natural hand gestures will result in more efficient communication if the underlying meaning is understood. Overcoming the challenges of natural hand gesture understanding is vital to meet the needs of these increasingly pervasive devices in our everyday lives. This work presents a graph-based approach to understanding the meaning of hand gestures by associating dynamic hand gestures with known concepts and relevant knowledge. Conceptual-level processing is emphasized to robustly handle noise and ambiguity introduced during generation, data acquisition, and low-level recognition. A simple recognition stage is used to help relax scalability limitations of conventional stochastic language models. Experimental results show that this graph-based approach to hand gesture understanding is able to successfully understand the meaning of ambiguous sets of phrases consisting of three to five hand gestures. The presented approximate graph-matching technique to understand human hand gestures supports practical and efficient communication of complex intent to the increasingly pervasive high-tech devices in our society.

7.
Application of HMMs with Different Numbers of State Nodes to Chinese Sign Language Recognition
Chinese Sign Language is the language used by deaf people in China; meaning is conveyed mainly through hand gestures, so sign language recognition is a problem of recognizing dynamic, continuous signals. Most current sign language recognition systems use HMMs (hidden Markov models) as the recognizer. Because different words contain different numbers of basic gestures, building every model with the same number of state nodes hurts recognition accuracy, while setting the state count for each word by hand is rarely exact. The described system therefore estimates the number of state nodes with a dynamic-programming method, and implements training and recognition based on HMMs whose state counts differ per word. Experimental results show that the system improves both recognition speed and recognition accuracy.
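One way to estimate a per-word state count by dynamic programming, in the spirit of the approach above (the exact algorithm in the paper may differ), is to find the optimal piecewise-constant segmentation of a word's feature sequence: each segment costs its within-segment squared error plus a fixed penalty, so the optimal number of segments balances fit against model size.

```python
import numpy as np

def estimate_state_count(seq, penalty=1.0):
    """Estimate an HMM state count for a word by DP segmentation
    of a 1-D feature sequence into piecewise-constant segments."""
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    prefix = np.concatenate([[0.0], np.cumsum(seq)])
    prefix2 = np.concatenate([[0.0], np.cumsum(seq ** 2)])

    def sse(i, j):
        # squared error of fitting seq[i:j] with its mean
        s, s2, m = prefix[j] - prefix[i], prefix2[j] - prefix2[i], j - i
        return s2 - s * s / m

    best = np.full(n + 1, np.inf)   # best[j]: min cost of seq[:j]
    segs = np.zeros(n + 1, dtype=int)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j] = c
                segs[j] = segs[i] + 1
    return int(segs[n])
```

A sequence with three flat regimes yields three segments, suggesting a three-state left-to-right HMM for that word.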

8.
A new approach is presented to deal with the problem of modelling and simulating the control mechanisms underlying planned arm movements. We adopt a synergetic view in which we assume that the movement patterns are not explicitly programmed but rather are emergent properties of a dynamic system constrained by physical laws in space and time. The model automatically translates a high-level command specification into a complete movement trajectory. This is an inverse problem, since the dynamic variables controlling the current state of the system have to be calculated from movement outcomes such as the position of the arm endpoint. The proposed method is based on an optimization strategy: the dynamic system evolves towards a stable equilibrium position according to the minimization of a potential function. This system, which could well be described as a feedback control loop, obeys a set of non-linear differential equations. The gradient descent provides a solution to the problem which proves to be both numerically stable and computationally efficient. Moreover, the addition into the control loop of elements whose structure and parameters have a pertinent biological meaning allows for the synthesis of gestural signals whose global patterns keep the main invariants of human gestures. The model can be exploited to handle more complex gestures involving planning strategies of movement. Finally, the extension of the approach to the learning and control of non-linear biological systems is discussed.
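The core idea of evolving a state toward a stable equilibrium by minimizing a potential can be sketched as plain gradient descent. The quadratic potential below (distance of the arm endpoint to a target) is an illustrative stand-in for the paper's biologically motivated potential:

```python
import numpy as np

def settle(grad, x0, step=0.1, tol=1e-6, max_iter=10000):
    """Evolve state x by gradient descent on a potential until the
    gradient vanishes, i.e. a stable equilibrium is reached."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = np.asarray(grad(x), dtype=float)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Example potential U(x) = ||x - target||^2 / 2, so grad U = x - target:
target = np.array([1.0, 2.0])
endpoint = settle(lambda x: x - target, x0=[0.0, 0.0])
```

The trajectory traced by the intermediate iterates, not just the endpoint, is what the model interprets as the movement.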

9.
Schaeffer's sign language consists of a reduced set of gestures designed to help children with autism or cognitive learning disabilities to develop adequate communication skills. Our automatic recognition system for Schaeffer's gesture language uses the information provided by an RGB‐D camera to capture body motion and recognize gestures using dynamic time warping combined with k‐nearest neighbors methods. The learning process is reinforced by the interaction with the proposed system that accelerates learning itself thus helping both children and educators. To demonstrate the validity of the system, a set of qualitative experiments with children were carried out. As a result, a system which is able to recognize a subset of 11 gestures of Schaeffer's sign language online was achieved.
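The DTW-plus-kNN recognition used above can be sketched directly: DTW aligns two sequences of possibly different length and speed, and the query is assigned the label of its nearest template(s). This is a generic 1-D sketch, not the system's multi-joint implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, templates, labels, k=1):
    """Label a query gesture by its k nearest templates under DTW."""
    d = [dtw_distance(query, t) for t in templates]
    order = np.argsort(d)[:k]
    vals, counts = np.unique([labels[i] for i in order], return_counts=True)
    return vals[np.argmax(counts)]
```

Because DTW tolerates tempo variation, a gesture performed slowly still matches its template, which matters for children with heterogeneous motor skills.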

10.
Sign language is a special language that communicates through movement and vision, and during signing, head movements carry semantic and emotional information. This paper analyzes the motion correlation between hand gestures and head movements in sign language expression, models each discrete head-movement representation with hidden Markov models (HMMs), and generates smooth head-motion animation based on a first-order Markov model and an interpolation algorithm.

11.

12.
Sign language is the most important means of communication for deaf people. Given that most hearing people are unfamiliar with it, a translator system that facilitates deaf people's communication with their surroundings is necessary. A system that translates sign language into spoken language must identify the gestures in sign language videos. Consequently, this study presents a machine-vision system that recognizes signs in continuous Persian sign language video. The system consists of two main phases: sign word extraction and classification. The extraction phase involves several stages, including tracking and separating the sign words; the most challenging part of this process is separating sign words from the video sequence. To do this, a new algorithm is presented that detects accurate word boundaries in Persian sign language video. The algorithm decomposes the video into sign words using motion and hand-shape features, yielding more favorable results than other methods presented in the literature. In the classification phase, the separated words are classified and recognized using a hidden Markov model and a hybrid KNN-DTW algorithm, respectively. Because no suitable Persian sign language database existed, the authors prepared one containing several sentences and words performed by three signers. Simulation of the proposed boundary-detection and classification algorithms on this database led to promising results: an average accuracy of 93.73% for word-boundary detection, and average word recognition rates of 92.4% and 92.3% using hand motion and hand-shape features, respectively.
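A simple motion-based heuristic illustrates the word-boundary idea: frames where the hand barely moves for several consecutive frames suggest a pause between sign words. This sketch (thresholds and the pause criterion are assumptions, not the paper's algorithm, which also uses hand-shape features) segments a 2-D hand trajectory:

```python
import numpy as np

def word_boundaries(positions, energy_thresh=0.5, min_pause=2):
    """Split a hand trajectory into candidate sign words at low-motion pauses.

    positions: (T, 2) array of per-frame hand coordinates.
    Returns a list of (start_frame, end_frame) pairs.
    """
    positions = np.asarray(positions, dtype=float)
    # per-frame motion energy = displacement magnitude between frames
    energy = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    moving = energy > energy_thresh
    words, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # motion onset: word begins
        elif not m and start is not None:
            # end the word only if the pause lasts at least min_pause frames
            if not moving[i:i + min_pause].any():
                words.append((start, i))
                start = None
    if start is not None:
        words.append((start, len(moving)))
    return words
```

Requiring a minimum pause length avoids splitting a word at a brief mid-sign hesitation.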

13.
Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures, as they appear in continuous signing, and developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes, which result in systematic variations in sign appearance, are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems: dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendices), which contain several illustrative examples and videos of signing, can be found at www.computer.org/publications/dlib.

14.
15.
Supervisory control theory enables control system designers to specify a model of the uncontrolled system in combination with control requirements, and subsequently use a synthesis algorithm for automatic controller generation. The use of supervisory control synthesis can significantly reduce development time of supervisory controllers as a result of unambiguous specification of control requirements, and synthesis of controllers that by definition are nonblocking and satisfy the control requirements. This is especially important for evolving systems, where requirements change frequently. For successful industrial application, the specification formalism should be expressive and intuitive enough to be used by domain experts, who define control requirements, and software experts, who implement control requirements and synthesize controllers. This paper defines such a supervisory control specification formalism that consists of automata, synchronizing actions, guards, updates, invariants, independent and dependent variables, where the values of the dependent variables can be defined in terms of functions on the independent variables. We also show how the language enables systematic, compositional specification of a control system for a patient communication system of an MRI scanner. We show that our specification formalism can deal with both event-based and state-based interfaces. To support systematic, modular specification of models for supervisory control synthesis, we introduce state trackers that record sequences of events in terms of states. The synthesized supervisor has been successfully validated by means of interactive user-guided simulation.
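The state-tracker idea can be sketched minimally: a tracker is an observer automaton that maps the event history to a discrete state, so state-based requirements can refer to "what has happened so far" instead of raw event sequences. This is an illustrative sketch, not the paper's formalism:

```python
class StateTracker:
    """Records a sequence of events as a discrete state.

    transitions: dict mapping (state, event) -> next_state; events
    with no matching transition leave the state unchanged.
    """

    def __init__(self, transitions, initial):
        self.transitions = transitions
        self.state = initial

    def observe(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A hypothetical door tracker: requirements can then use a guard like
# "door.state == 'closed'" rather than reasoning about event order.
door = StateTracker(
    {("closed", "open"): "opened", ("opened", "close"): "closed"},
    initial="closed",
)
```

Because the tracker only observes events and never disables them, adding it to a model does not restrict the plant; it merely exposes state for guards and invariants.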

16.
Many previous researchers have tried to develop sign language recognition systems in general, and for Arabic sign language specifically. They achieved acceptable results at the isolated-gesture level, but none investigated recognition of connected sequences of gestures. This paper focuses on recognizing real-time connected sequences of gestures using a graph-matching technique, and on how the continuous input gestures are segmented and classified. Graphs are a general and powerful data structure for representing various objects and concepts. This work is a component of a real-time Arabic Sign Language Recognition system that applied a pulse-coupled neural network for static posture recognition in its first phase. The approach can be adapted and applied to other sign languages and recognition problems.
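To give a flavor of graph matching for gesture sequences, the toy sketch below scores two graphs (each a pair of node and edge sets) by Jaccard overlap. Real approximate graph matching, including whatever algorithm this paper uses, involves costlier structural alignment; this is only the simplest comparable measure.

```python
def graph_similarity(g1, g2):
    """Jaccard-style similarity over the node and edge sets of two graphs.

    Each graph is a (frozen) pair: (set of node labels, set of edge tuples).
    Returns a value in [0, 1]; 1.0 means identical node and edge sets.
    """
    n1, e1 = g1
    n2, e2 = g2
    nodes = len(n1 & n2) / len(n1 | n2) if (n1 | n2) else 1.0
    edges = len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 1.0
    return 0.5 * (nodes + edges)
```

In a recognizer, an observed gesture-sequence graph would be matched against stored template graphs and assigned the label of the best-scoring one.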

17.
The aim of a cooperative system is to coordinate and support group activities. Cooperative Systems Design Language (CSDL) is an experimental language designed to support the development of cooperative systems from specification to implementation. In CSDL, a system is defined as a collection of reusable entities implementing floor control disciplines and shared workspaces. CSDL tries to address the difficulties of integrating different aspects of cooperative systems: cooperation control, communication, and system modularization. This paper presents CSDL as a specification language. Basic units are coordinators that can be combined hierarchically. A coordinator is composed of a specification, a body, and a context. The specification defines the cooperation policy; the body controls the underlying communication channels; and the context defines coordinators' interaction in modular systems.

18.
Sign language in the Arab world has only recently been recognized and documented, and there have been no serious attempts to develop a recognition system that can serve as a means of communication between hearing-impaired people and others. This paper introduces the first automatic Arabic sign language (ArSL) recognition system based on hidden Markov models (HMMs). A large set of samples has been used to recognize 30 isolated words from Standard Arabic sign language. The system operates in different modes: offline, online, signer-dependent, and signer-independent. Experimental results on real ArSL data collected from deaf people demonstrate that the proposed system has a high recognition rate in all modes. In the signer-dependent case, the system obtains word recognition rates of 98.13% on the training data in offline mode, 96.74% on the test data in offline mode, and 93.8% on the test data in online mode. In the signer-independent case, it obtains word recognition rates of 94.2% and 90.6% for offline and online modes, respectively. The system does not rely on data gloves or other input devices, allowing deaf signers to perform gestures freely and naturally.
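HMM-based isolated-word recognition as described above scores an observation sequence against each word's model and picks the highest-likelihood word. The scoring step is the scaled forward algorithm, sketched here for discrete observations (the actual system's feature modeling is not specified in the abstract):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow.

    pi: (N,) initial state probs; A: (N, N) transitions;
    B: (N, M) emission probs; obs: list of symbol indices.
    """
    pi, A, B = (np.asarray(a, dtype=float) for a in (pi, A, B))
    alpha = pi * B[:, obs[0]]
    logp = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()          # normalizer; accumulate its log
        logp += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return logp + np.log(alpha.sum())
```

Recognition then amounts to `argmax` over the per-word models' log-likelihoods for the same observation sequence.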

19.
Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is communication through hand gestures, so recognition of hand gestures has become an important challenge in sign language recognition. Many existing models can produce good accuracy, but when tested on rotated or translated images they may struggle to maintain that performance. To address these challenges, we propose a rotation-, translation- and scale-invariant sign word recognition system using a convolutional neural network (CNN). Our work follows three steps: generation of a rotated, translated and scaled (RTS) version of the dataset, gesture segmentation, and sign word classification. First, we enlarged a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS dataset. We then applied a three-level gesture segmentation technique: (i) Otsu thresholding in the YCbCr color space, (ii) morphological analysis (dilation through opening morphology), and (iii) the watershed algorithm. Finally, our CNN model was trained to classify the hand gesture and hence the sign word. The model was evaluated on the twenty-sign-word dataset, a five-sign-word dataset, and the RTS versions of both. We achieved 99.30% accuracy on the twenty-sign-word dataset, 99.10% on its RTS version, 100% on the five-sign-word dataset, and 98.00% on its RTS version. Furthermore, our model achieves results competitive with state-of-the-art methods in sign word recognition.
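The first segmentation level, Otsu thresholding, picks the gray level that maximizes between-class variance of the foreground/background split. A NumPy-only sketch (on a plain grayscale channel; the paper applies it in the YCbCr color space):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's global threshold for an 8-bit grayscale image.

    Returns the level t maximizing the between-class variance
    sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))).
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # background probability up to t
    mu = np.cumsum(p * np.arange(256))     # cumulative mean up to t
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # 0/0 at empty classes -> 0
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold are kept as the hand mask, which the morphological and watershed stages then clean up and separate.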

20.
This paper presents a novel method for rapidly generating 3D architectural models based on hand motion and design gestures captured by a motion capture system. A set of sign language-based gestures, architectural hand signs (AHS), has been developed. AHS is performed on the left hand to define various “components of architecture”, while “location, size and shape” information is defined by the motion of Marker-Pen on the right hand. The hand gestures and motions are recognized by the system and then transferred into 3D curves and surfaces correspondingly. This paper demonstrates the hand gesture-aided architectural modeling method with some case studies.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号