Similar Literature
20 similar records found (search time: 31 ms)
1.
2.
With the recent development of deep learning technology has come the wide use of artificial intelligence (AI) models across domains. AI performs well on definite-purpose tasks such as image recognition and text classification; recognition performance on individual tasks now exceeds that of hand-crafted feature engineering, enabling work that was not possible before. In addition, with the development of generation technology (e.g., GPT-3), AI models show stable performance on both recognition and generation tasks. However, few studies have focused on how to integrate these models efficiently to achieve comprehensive human interaction. Each model grows in size as its performance improves, requiring more computing power and more complicated training designs than before; this added complexity, together with the need for more paired data, makes model integration difficult. This study surveys visual language integration, taking a hierarchical approach to review recent trends in AI models used as the interaction component across research communities. We also compare the strengths of existing AI models and integration approaches and the limitations they face. Furthermore, we discuss current related issues and the research still needed for visual language integration. More specifically, we identify four aspects of visual language integration models: multimodal learning, multi-task learning, end-to-end learning, and embodiment for embodied visual language interaction. Finally, we discuss open issues and challenges and conclude the survey with possible future directions.

3.
Effective teaching should focus the attention of learners on its essential aspects. It follows that instructional software can be designed in a way that allows learners to experience the important variations in the critical aspects of the content to be learned. This paper reports on the experience of designing such special kinds of instructional learning objects for the learning of Chinese characters. The design of these learning objects takes into consideration not only what Chinese characters are all about but also how learners commonly make errors while they learn to write the characters. From the analysis of these learners' errors, variations in the structural features of Chinese characters were identified and embodied in the design of the learning objects. Learners tinkering with the learning objects can thus implicitly develop a sense of the structural features or regularity of Chinese characters, which, most importantly, should prepare them to learn more new characters in the future. The main proposal of this paper is the notion of this variation-affording instructional software that allows learners to attend to the essential aspects of what is to be learned. Furthermore, the idea of the learning object also differs from other instructional software in its small, self-contained and reusable nature, such that teachers can flexibly embed the learning objects into their own teaching materials.

4.
Research suggests that certain visual instructional aids can reduce levels of disorientation and increase learning performance in, and positive attitudes towards, hypermedia learning systems (HLS) for learners with specific individual differences. However, existing studies have looked at only one or two individual differences at a time, and/or considered only a small number of visual instructional aids. No study has considered the impact of the three most commonly studied individual differences – cognitive style, domain knowledge and computer experience – on learning performance, disorientation and attitudes in an HLS incorporating a full range of visual instructional aids. The study reported here addresses this shortcoming, examining the effects of, and between, these three individual differences in relation to learning performance, disorientation and attitudes in two HLS versions: one that incorporated a full set of visual instructional aids and one that did not. Significant effects were found between the three individual differences with respect to disorientation, learning performance and attitudes in the HLS that provided no instructional aids, whereas no such effects were found for the other HLS version. Analysis of the results led to a set of HLS design guidelines, presented in the paper, and the development of an agenda for future research. Limitations of the study and their implications for the generalizability of the findings are also presented.

5.
Cross-task generalization is a significant outcome that defines mastery in natural language understanding. Humans show a remarkable aptitude for this, and can solve many different types of tasks, given definitions in the form of textual instructions and a small set of examples. Recent work with pre-trained language models mimics this learning style: users can define and exemplify a task for the model to attempt as a series of natural language prompts or instructions. While prompting approaches have led to higher cross-task generalization compared to traditional supervised learning, analyzing 'bias' in the task instructions given to the model is a difficult problem, and has thus been relatively unexplored. For instance, are we truly modeling a task, or are we modeling a user's instructions? To help investigate this, we develop LINGO, a novel visual analytics interface that supports an effective, task-driven workflow to (1) help identify bias in natural language task instructions, (2) alter (or create) task instructions to reduce bias, and (3) evaluate pre-trained model performance on debiased task instructions. To robustly evaluate LINGO, we conduct a user study with both novice and expert instruction creators, over a dataset of 1,616 linguistic tasks and their natural language instructions, spanning 55 different languages. For both user groups, LINGO promotes the creation of more difficult tasks for pre-trained models, with higher linguistic diversity and lower instruction bias. We additionally discuss how the insights learned in developing and evaluating LINGO can aid in the design of future dashboards that aim to minimize the effort involved in prompt creation across multiple domains.

6.
7.
A Survey of Formal Grammar Descriptions for Visual Languages   (cited 4 times: 1 self-citation, 3 by others)
Xu Hongxia, Zhang Li. Computer Science (《计算机科学》), 2005, 32(4): 201-204
Visualization is a primary form of human–computer interaction, and visual languages are an important research area in computer science; grammars provide a valuable formal method for describing them. Starting from the characteristics of visual languages, this paper introduces the basic theory of formal grammar frameworks for describing visual languages, analyzes several typical formal models, and discusses the main current research topics and open challenges.
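For illustration only, the sketch below shows one way a grammar production for a visual language can be phrased over graphical symbols and spatial relations; the symbol kinds, the `touches` predicate, and the production are my own assumptions, not taken from the surveyed work.

```python
# Toy relational-grammar style production (illustrative only):
# terminals are graphical symbols, and the production constrains a spatial relation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    kind: str      # e.g. "Rectangle" or "Label"
    x: float
    y: float

def touches(a: Symbol, b: Symbol, tol: float = 5.0) -> bool:
    """Toy spatial relation used as a grammar predicate."""
    return abs(a.x - b.x) <= tol and abs(a.y - b.y) <= tol

# Production:  LabeledBox ::= Rectangle Label   where  touches(Label, Rectangle)
def reduce_labeled_boxes(symbols):
    rects = [s for s in symbols if s.kind == "Rectangle"]
    labels = [s for s in symbols if s.kind == "Label"]
    return [(r, l) for r in rects for l in labels if touches(r, l)]

sentence = [Symbol("Rectangle", 10, 10), Symbol("Label", 12, 8), Symbol("Label", 90, 90)]
print(reduce_labeled_boxes(sentence))   # exactly one LabeledBox is recognized
```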

8.
In an effort to enhance instruction and reach more students, educators design engaging online learning experiences, often in the form of online videos. While many instructional videos feature a picture-in-picture view of the instructor, it is not clear how instructor presence influences learners' visual attention and what it contributes to learning and affect. Given this knowledge gap, this study explored the impact of instructor presence on learning, visual attention, and perceived learning in mathematics instructional videos of varying content difficulty. Thirty-six participants each viewed two 10-min-long mathematics videos (easy and difficult topics), with the instructor either present or absent. Findings suggest that the instructor attracted considerable visual attention, particularly when learners viewed the video on an easy topic. Although no significant difference in learning transfer was found for either topic, participants' recall of information from the video was better for the easy topic when the instructor was present. Finally, instructor presence positively influenced participants' perceived learning and satisfaction for both topics and led to a lower level of self-reported mental effort for the difficult topic.

9.
In this paper, we present a usability study aimed at assessing a visual language-based tool for developing adaptive e-learning processes. The tool implements the adaptive self-consistent learning object set (ASCLO-S) visual language, a special case of flow diagrams, to be used by instructional designers to define classes of learners through stereotypes and to specify the most suitable adaptive learning process for each class of learners. The usability study is based on the combined use of two techniques: a questionnaire-based survey and an empirical analysis. The survey has been used to gather feedback from the subjects' point of view. In particular, it has been useful to capture the subjects' perceived usability. The outcomes show that both the proposed visual notation and the system prototype are suitable for instructional designers with or without experience with computer usage and with tools for defining e-learning processes. This result is further confirmed by the empirical analysis we carried out by analysing the correlation between the effort to develop adaptive e-learning processes and some measures suitably defined for those processes. Indeed, the empirical analysis revealed that the effort required to model e-learning processes is not influenced by the experience of the instructional designer with the use of e-learning tools, but only depends on the size of the developed process.

10.
Editors for visual languages should provide a user-friendly environment supporting end users in the composition of visual sentences in an effective way. Syntax-aware editors are a class of editors that prompt users into writing syntactically correct programs by exploiting information on the visual language syntax. In particular, they do not constrain users to enter only correct syntactic states in a visual sentence. They merely inform the user when visual objects are syntactically correct. This means detecting both syntax and potential semantic errors as early as possible and providing feedback on such errors in a non-intrusive way during editing. As a consequence, error handling strategies are an essential part of this style of editing visual sentences. In this work, we develop a strategy for the construction of syntax-aware visual language editors by integrating incremental subsentence parsers into free-hand editors. The parser combines the LR-based techniques for parsing visual languages with the more general incremental Generalized LR parsing techniques developed for string languages. This approach has been profitably exploited to introduce a non-correcting error recovery strategy and to suggest, during editing, possible continuations of what the user is drawing.
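As a rough, self-contained sketch of the non-intrusive, syntax-aware editing style described above (my own toy code, not the authors' parser; the diagram model and the dangling-edge check are assumed purely for illustration), an editor can accept every edit and merely report diagnostics:

```python
# Toy "syntax-aware" editor: edits are never rejected; the diagram is re-checked
# after each change and problems are collected as non-blocking diagnostics.
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: str
    dst: str

@dataclass
class Diagram:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)
    diagnostics: list = field(default_factory=list)

    def add_node(self, name):
        self.nodes.add(name)
        self._recheck()

    def add_edge(self, src, dst):
        # The edit is accepted even if the sentence is (still) syntactically incomplete.
        self.edges.append(Edge(src, dst))
        self._recheck()

    def _recheck(self):
        # Only a local well-formedness condition is re-validated here.
        self.diagnostics = [
            f"edge {e.src}->{e.dst} refers to an undefined node"
            for e in self.edges
            if e.src not in self.nodes or e.dst not in self.nodes
        ]

d = Diagram()
d.add_edge("start", "task")        # allowed; reported as a diagnostic, not rejected
d.add_node("start"); d.add_node("task")
print(d.diagnostics)               # [] once the sentence becomes well formed
```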

11.
In this paper, a prototype of a visual specification language called Visual Coordination Diagrams (VCD) is presented for the high-level design of concurrent systems with heterogeneous coordination models. The key property of VCD is the separation of behavioral aspects from coordination aspects. We also highlight the heterogeneity of VCD, which operates at two levels. First, it allows different coordination models to be mixed in a particular specification. Second, different formalisms can be incorporated into VCD for the specification of behavioral aspects. This paper contains an overview of the language followed by its formal definition. An example of using the language is also given.

12.
An important step in the design of visual languages is the specification of the graphical objects and the composition rules for constructing feasible visual sentences. The presence of different typologies of visual languages, each with specific graphical and structural characteristics, creates the need for models and tools that unify the design steps for different types of visual languages. To this end, in this paper we present a formal framework of visual language classes. Each class characterizes a family of visual languages based upon the nature of their graphical objects and composition rules. The framework has been embedded in the Visual Language Compiler–Compiler (VLCC), a graphical system for the automatic generation of visual programming environments.
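The following minimal sketch illustrates the general idea of a visual language class that fixes object kinds and composition rules; the class names and the single composition rule are hypothetical and are not the VLCC implementation.

```python
# Illustrative only: a "visual language class" fixes the kinds of graphical
# objects and the composition rules that feasible sentences must satisfy.
from abc import ABC, abstractmethod

class VisualLanguageClass(ABC):
    object_kinds: set = set()

    @abstractmethod
    def composition_ok(self, objects, connections) -> bool:
        """Return True if the sentence respects this class's composition rules."""

class ConnectionBasedLanguage(VisualLanguageClass):
    """Family of languages whose sentences connect declared objects."""
    object_kinds = {"box", "arrow"}

    def composition_ok(self, objects, connections):
        # Rule: every connection must link two declared objects.
        return all(a in objects and b in objects for a, b in connections)

lang = ConnectionBasedLanguage()
print(lang.composition_ok({"A", "B"}, [("A", "B")]))   # True: feasible sentence
print(lang.composition_ok({"A"}, [("A", "B")]))        # False: dangling connection
```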

13.
14.
Recent advances in computing devices push researchers to envision new interaction modalities that go beyond traditional mouse and keyboard input. Typical examples are large displays, for which researchers hope to create more "natural" means of interaction by using human gestures and body movements as input. In this article, we reflect on this goal of designing gestures that people can easily understand and use, and on how designers of gestural interaction can capitalize on the experience of 30 years of research on visual languages to achieve it. Concretely, we argue that gestures can be regarded as "visual expressions to convey meaning" and thus are a visual language. Based on what we have learned from visual language research in the past, we then explain why the design of a generic gesture set or language that spans many applications and devices is likely to fail. We also discuss why we recommend using gestural manipulations that enable users to directly manipulate on-screen objects instead of issuing commands with symbolic gestures whose meaning varies among different users, contexts, and cultures.

15.
This systematic review study synthesizes research findings pertaining to the use of augmented reality (AR) in language learning. Published research from 2014 to 2019 has been explored and specific inclusion and exclusion criteria have been applied, resulting in 54 relevant publications. Our findings identified: (a) the devices and software employed for AR, the languages and contexts in which AR had been applied, the theoretical perspectives adopted for guiding the use of AR, the number of participants in AR activities, and the benefits of using AR as an educational tool in the language classroom; (b) the alignment of the affordances of AR with the KSAVE (Knowledge, Skills, Attitudes, Values, Ethics) 21st-century skills framework; (c) future directions in AR research and practice. The main findings from this review demonstrate the popularity of mobile-based AR for supporting vocabulary (23.9%), reading (12.7%), speaking (9.9%), writing (8.5%) or generic language skills (9.9%). Our findings also uncovered areas that merit future attention in the application of AR in language learning – for instance, learning theories were not often considered in the implementation of AR. The study concludes with suggestions for future research, especially in the areas of instructional design and user experience.

16.
This paper presents, illustrates and discusses theories and practices about the application of a domain-specific modeling (DSM) approach to facilitate the specification of Visual Instructional Design Languages (VIDLs) and the development of dedicated graphical editors. Although this approach still requires software engineering skills, it tackles the need to build VIDLs allowing both visual models for human-interpretation purposes (explicit designs, communication, thinking, etc.) and machine-readable notations for deployment or other instructional design activities. This article proposes a theoretical application and a categorization, based on a domain-oriented separation of concerns of instructional design. It also presents some practical illustrations from experiments with specific DSM tooling. Key lessons learned, as well as observed obstacles and challenges, are discussed in order to further develop the approach.

17.
Representing design decisions for complex software systems, tracing them to code, and enforcing them throughout the lifecycle are pressing concerns for software architects and developers. To be of practical use, specification and modeling languages for software design need to combine rigor with abstraction and simplicity, and be supported by automated design verification tools that require minimal human intervention. This paper closely examines the use of the visual language of Codecharts for representing design decisions and demonstrates the process of verifying the conformance of a program to the chart. We explicate the abstract semantics of segments of the Java package java.awt as finite structures, specify the Composite design pattern as a Codechart and unpack it as a set of formulas, and prove that the structure representing the program satisfies the formulas. We also describe a set of tools for modeling design patterns with Codecharts and for verifying the conformance of native (plain) Java programs to the charts.
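A hedged sketch of the verification idea, assuming a toy finite model of a program; the relation names `implements` and `aggregates` and the predicate below are my own simplification, not the Codecharts semantics.

```python
# Illustrative only: a design decision encoded as a predicate over a finite model
# of a program, then checked against it. Here, a Composite-style decision:
# some class must both implement the component type and aggregate it.
program_model = {
    "classes":    {"Component", "Leaf", "Container"},
    "implements": {("Leaf", "Component"), ("Container", "Component")},
    "aggregates": {("Container", "Component")},   # Container holds Components
}

def satisfies_composite(model, component="Component"):
    """True if at least one class both implements and aggregates the component type."""
    impls = {c for (c, i) in model["implements"] if i == component}
    aggs = {c for (c, i) in model["aggregates"] if i == component}
    return bool(impls & aggs)

print(satisfies_composite(program_model))   # True: Container plays the Composite role
```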

18.
Diagrammatic visual languages can increase the ability of engineers to model and understand complex systems. However, to effectively use visual models, the syntax and semantics of these languages should be defined precisely. Since most diagrammatic visual models that are currently used to specify systems can be described as (directed) typed graphs, graph grammars have been identified as a suitable formalism to describe the abstract syntax of visual modeling languages. In this article, we investigate how advanced graph-transformation techniques, such as conditional, structure-generic and type-generic graph-transformation rules, can help to improve and simplify the specification of the abstract syntax of a visual modeling language. To demonstrate the practicability of an approach that unifies these advanced graph-transformation techniques, we define the abstract syntax of behavior trees (BTs), a graphical specification language for functional requirements. Additionally, we provide a translational semantics of BTs by formalizing a translation scheme to the input language of the SAL model checking tool for each of the graph-transformation rules.
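To make the typed-graph view concrete, here is a small hypothetical sketch of a conditional transformation rule over a typed graph; the node type `BTNode`, the edge label, and the negative application condition are assumptions for illustration, not the paper's rules.

```python
# Illustrative only: a typed graph and one conditional transformation rule that
# adds a well-typed edge only when a negative application condition holds
# (the edge of that type is not already present).
from dataclasses import dataclass, field

@dataclass
class TypedGraph:
    node_types: dict = field(default_factory=dict)   # node id -> type
    edges: set = field(default_factory=set)          # (src, label, dst)

def apply_add_child(graph: TypedGraph, parent: str, child: str) -> bool:
    """Rule: BTNode --child--> BTNode, applicable only if the edge is absent."""
    typed_ok = (graph.node_types.get(parent) == "BTNode"
                and graph.node_types.get(child) == "BTNode")
    edge = (parent, "child", child)
    if typed_ok and edge not in graph.edges:          # negative application condition
        graph.edges.add(edge)
        return True
    return False

g = TypedGraph({"n1": "BTNode", "n2": "BTNode"})
print(apply_add_child(g, "n1", "n2"))   # True: rule applied
print(apply_add_child(g, "n1", "n2"))   # False: condition blocks re-application
```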

19.
Carsten Schmidt, Uwe Kastens. 《Software》, 2003, 33(15): 1471-1505
The implementation of visual languages requires a wide range of conceptual and technical knowledge, from issues of user interface design and graphical implementation to aspects of analysis and transformation for languages in general. We present a powerful toolset that incorporates such knowledge. Our toolset generates editors from high-level specifications. A language is specified by identifying certain patterns in the language structure and selecting a visual representation from a set of precoined solutions. Visual programs are represented by attributed abstract trees. Therefore, further phases of processing visual programs can be generated by state-of-the-art tools for language implementation. We demonstrate that even challenging visual languages can be implemented with reasonably little effort and with rather limited technical knowledge. The approach is suitable for a large variety of visual language styles.
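As a minimal sketch of the attributed-abstract-tree representation mentioned above (the node structure and the synthesized `size` attribute are my own illustrative choices, not the toolset's internals):

```python
# Illustrative only: a visual program stored as an attributed abstract tree,
# with one synthesized attribute ("size") computed bottom-up, in the spirit of
# attribute-grammar-based language tools.
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    construct: str                       # language construct this node represents
    children: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)

def compute_size(node: TreeNode) -> int:
    node.attributes["size"] = 1 + sum(compute_size(c) for c in node.children)
    return node.attributes["size"]

program = TreeNode("Module", [TreeNode("Process", [TreeNode("State"), TreeNode("State")])])
print(compute_size(program))   # 4 nodes in the attributed tree
```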

20.
In this paper we present a framework for the fast prototyping of visual languages exploiting their local-context-based specification. In previous research, the local context specification has been used as a weak form of syntactic specification to define when visual sentences are well formed. In this paper we add new features to the local context specification in order to fully specify complex constructs of visual languages such as entity–relationship, use case, and class diagrams. One of the advantages of this technique is its simplicity of application and, to show this, we present a tool implementing our framework. Moreover, we describe a user study aimed at evaluating effectiveness and user satisfaction when prototyping a visual language.
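The sketch below is one possible reading of a local context specification, in which each symbol type lists the neighbor types it may attach to; the ER-style symbol types and the `LOCAL_CONTEXT` table are hypothetical, not the paper's notation.

```python
# Illustrative only: each symbol type declares which neighbor types may attach
# to it; a sentence is accepted when every connection satisfies both endpoints'
# local contexts.
LOCAL_CONTEXT = {
    "Entity":       {"Relationship"},            # an entity attaches only to relationships
    "Relationship": {"Entity", "Attribute"},
    "Attribute":    {"Relationship"},
}

def locally_well_formed(symbols, connections):
    """symbols: {name: type}; connections: iterable of (name, name) pairs."""
    def ok(a, b):
        return symbols[b] in LOCAL_CONTEXT[symbols[a]]
    return all(ok(a, b) and ok(b, a) for a, b in connections)

symbols = {"Student": "Entity", "Enrolls": "Relationship", "Course": "Entity"}
print(locally_well_formed(symbols, [("Student", "Enrolls"), ("Enrolls", "Course")]))  # True
print(locally_well_formed(symbols, [("Student", "Course")]))                          # False
```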
