A total of 20 similar documents were found (search time: 31 ms)
1.
Speech interfaces are becoming increasingly popular as a means of interacting with virtual environments, but their development
and integration are usually still ad hoc; in particular, the speech grammar of the speech interface is commonly created by hand.
In this paper, we introduce an approach that automatically generates a speech grammar from semantic information. This semantic
information is represented through ontologies and gathered during the conceptual modelling phase of the virtual environment
application. User utterances are resolved by querying these ontologies, so that the meaning of each utterance can be determined.
For validation purposes we augmented a city park designer with our approach. Informal tests support the approach: they reveal
that users mainly use words represented in the semantic data, and therefore words that are incorporated in the automatically
generated speech grammar.
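The grammar-generation step can be sketched in a few lines. This is a minimal illustration under invented assumptions: a toy ontology given as concept-to-word mappings and a JSGF-like rule syntax; the paper's actual ontology representation and grammar formalism may differ.

```python
# Sketch: derive speech-grammar rules from semantic data.
# The ontology layout and the JSGF-like rule syntax are illustrative
# assumptions, not the paper's actual formats.

def generate_grammar(ontology):
    """Emit one alternatives rule per ontology concept."""
    rules = []
    for concept, words in sorted(ontology.items()):
        rules.append(f"<{concept}> = {' | '.join(sorted(words))};")
    return "\n".join(rules)

# Toy semantic data, e.g. for a city park designer.
ontology = {
    "action": {"create", "move", "delete"},
    "object": {"tree", "bench", "fountain"},
}
print(generate_grammar(ontology))
```

Because the rules are derived directly from the semantic data, any word a user draws from that data is, by construction, covered by the grammar.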
Corresponding author: Karin Coninx.
2.
Automatically Conflating Road Vector Data with Orthoimagery (total citations: 2; self-citations: 2; citations by others: 0)
Recent growth of the geospatial information on the web has made it possible to easily access a wide variety of spatial data.
The ability to combine various sets of geospatial data into a single composite dataset has been one of the central issues of
modern geographic information processing. By conflating diverse spatial datasets, one can support a rich set of queries that
could not have been answered given any of these sets in isolation. However, automatically conflating geospatial data from different
data sources remains a challenging task. This is because geospatial data obtained from various data sources may have different
projections, different accuracy levels and different formats (e.g., raster or vector format), thus resulting in various positional
inconsistencies. Most of the existing algorithms only deal with vector to vector data conflation or require human intervention
to accomplish vector data to imagery conflation. In this paper, we describe a novel geospatial data fusion approach, named
AMS-Conflation, which achieves automatic vector to imagery conflation. We describe an efficient technique to automatically
generate control point pairs from the orthoimagery and vector data by exploiting the information from the vector data to perform
localized image processing on the orthoimagery. We also evaluate a filtering technique to automatically eliminate inaccurate
pairs from the generated control points. We show that these conflation techniques can automatically align the roads in orthoimagery,
such that 75% of the conflated roads are within 3.6 meters of the real road axes, compared to 35% for the original vector
data, for partial areas of the county of St. Louis, MO.
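The filtering idea can be conveyed with a much simpler stand-in than AMS-Conflation's actual technique: discard control-point pairs whose displacement vector strays from the median displacement. The points and tolerance below are invented for illustration.

```python
# Sketch: filter inaccurate control-point pairs by their displacement
# vectors. A median-deviation rule is an illustrative stand-in for the
# paper's actual filtering technique.
import statistics

def filter_pairs(pairs, tolerance=2.0):
    """Keep pairs whose displacement is close to the median displacement."""
    dx = [ix - vx for (vx, vy), (ix, iy) in pairs]
    dy = [iy - vy for (vx, vy), (ix, iy) in pairs]
    mx, my = statistics.median(dx), statistics.median(dy)
    kept = []
    for (vx, vy), (ix, iy) in pairs:
        if abs((ix - vx) - mx) <= tolerance and abs((iy - vy) - my) <= tolerance:
            kept.append(((vx, vy), (ix, iy)))
    return kept

# (vector point, imagery point) pairs; the last one is a gross outlier.
pairs = [((0, 0), (3, 1)), ((10, 0), (13, 1)), ((5, 5), (8, 6)),
         ((2, 2), (20, 9))]
good = filter_pairs(pairs)
print(len(good))
```

Only pairs consistent with the dominant shift between the vector data and the orthoimagery survive, which is the property the paper's filter exploits.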
Corresponding author: Cyrus Shahabi.
3.
Partial cognates are pairs of words in two languages that have the same meaning in some, but not all contexts. Detecting the
actual meaning of a partial cognate in context can be useful for Machine Translation tools and for Computer-Assisted Language
Learning tools. We propose a supervised and a semi-supervised method to disambiguate partial cognates between two languages:
French and English. The methods use only automatically-labeled data; therefore they can be applied to other pairs of languages
as well. The aim of our work is to automatically detect the meaning of a French partial cognate word in a specific context.
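A deliberately simplified stand-in for such disambiguation is a bag-of-words overlap between the ambiguous word's context and automatically labeled training sentences; the senses and sentences below are invented, and the paper's supervised and semi-supervised methods are considerably more elaborate.

```python
# Sketch: pick the sense of a partial cognate from context-word overlap
# with labeled training sentences. Senses and data are illustrative
# (e.g. French "librairie": bookshop, vs. the English false friend "library").

def train(labeled):
    """Count context words per sense from (sense, sentence) pairs."""
    counts = {}
    for sense, sentence in labeled:
        bag = counts.setdefault(sense, {})
        for w in sentence.lower().split():
            bag[w] = bag.get(w, 0) + 1
    return counts

def disambiguate(counts, context):
    """Pick the sense whose training contexts overlap the input most."""
    words = context.lower().split()
    scores = {s: sum(bag.get(w, 0) for w in words) for s, bag in counts.items()}
    return max(scores, key=scores.get)

labeled = [
    ("bookshop", "bought a novel at the shop"),
    ("bookshop", "the shop sells cheap paperbacks"),
    ("library", "borrowed books to return next week"),
    ("library", "quiet reading room with borrowed books"),
]
model = train(labeled)
print(disambiguate(model, "I borrowed two books"))
```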
Corresponding author: Diana Inkpen.
4.
Maarek Y.S. Berry D.M. Kaiser G.E. 《IEEE Transactions on Software Engineering》1991,17(8):800-813
A technology for automatically assembling large software libraries that promote software reuse by helping the user locate the components closest to her/his needs is described. Software libraries are automatically assembled from a set of unorganized components by using information retrieval techniques. The construction of the library is done in two steps. First, attributes are automatically extracted from natural language documentation by using an indexing scheme based on the notions of lexical affinities and quantity of information. Then a hierarchy for browsing is automatically generated using a clustering technique that draws only on the information provided by the attributes. Due to the free-text indexing scheme, tools following this approach can accept free-style natural language queries.
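A toy version of free-text indexing by lexical affinities (co-occurring word pairs within a small window) might look as follows; the window size, the short-word filter, and the sample documentation strings are illustrative assumptions.

```python
# Sketch: index components by lexical affinities (word pairs co-occurring
# within a window) extracted from free-text documentation, then match a
# free-style query against the index. All parameters are illustrative.

def lexical_affinities(doc, window=5):
    """Return the set of word pairs co-occurring within the window."""
    words = [w for w in doc.lower().split() if len(w) > 3]
    pairs = set()
    for i, w in enumerate(words):
        for other in words[i + 1:i + window]:
            pairs.add(tuple(sorted((w, other))))
    return pairs

docs = {
    "qsort": "sorts an array of integers in ascending order",
    "bsearch": "searches a sorted array for a given integer",
}
index = {name: lexical_affinities(text) for name, text in docs.items()}

query = lexical_affinities("sort integers in an array")
best = max(index, key=lambda n: len(index[n] & query))
print(best)
```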
5.
This paper presents a method for automatically annotating files created on portable devices with contextual metadata. We achieve this through the combination of two system components. One is a context dissemination mechanism which allows devices in a personal area network (PAN) to maintain a shared aggregate contextual perception. The other is a storage management system that uses such context information to automatically decorate files created on personal devices with annotations. As a result, the user is able to flexibly browse and lookup files that were generated on the move, based on the contextual situation at the time of their creation. What is equally important is that the user is relieved from the cumbersome task of having to manually provide annotations in an explicit fashion. This is especially valuable when generating files on the move, using U/I-restricted portable devices.
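The interplay of the two components can be sketched as a single in-memory store; the context fields and file names are invented, and the real system disseminates context across a PAN rather than within one process.

```python
# Sketch: decorate files at creation time with the current aggregate
# context, then browse by contextual attributes. Fields are illustrative.

class ContextStore:
    """Tiny stand-in for the context-dissemination + storage components."""

    def __init__(self):
        self.context = {}       # shared aggregate contextual perception
        self.annotations = {}   # file name -> metadata snapshot

    def update_context(self, **fields):
        self.context.update(fields)

    def create_file(self, name):
        # Annotate the file at creation time with a copy of the context.
        self.annotations[name] = dict(self.context)

    def lookup(self, **query):
        return [f for f, meta in self.annotations.items()
                if all(meta.get(k) == v for k, v in query.items())]

store = ContextStore()
store.update_context(location="city park", companion="alice")
store.create_file("photo_001.jpg")
store.update_context(location="office")
store.create_file("notes.txt")
print(store.lookup(location="city park"))
```

The user never types an annotation; the lookup works purely on the context captured when each file was created.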
Corresponding author: Spyros Lalis.
6.
Gerardo Canfora Andrea De Lucia Giuseppe A. Di Lucca 《Automated Software Engineering》1999,6(3):233-263
The paper presents a case study in the development of software modularisation tools. The tools are produced by using a system for developing code analysers that uses a database to store both a no-loss, fine-grained intermediate representation and the analyses' results. The analysers are automatically generated from a high-level specification of the desired analyses, expressed in a domain-oriented language. We use a program intermediate representation, called F(p), as the user-visible database conceptual model. Analysers are specified in a declarative language, called F(p) – , which allows an analysis to be specified in the form of a traversal of an algebraic expression, with accesses to, and stores of, the database information the algebraic expression indexes. A foreign-language interface allows the analysers to be embedded into C programs. This is useful, for example, to implement the user interface of an analyser or to facilitate interoperation of the generated analysers with pre-existing tools.
7.
Brian K. Smith Jeana Frost Meltem Albayrak Rajneesh Sudhakar 《Personal and Ubiquitous Computing》2007,11(4):273-286
Glucometers measure the accumulation of glucose in the bloodstream and are essential for avoiding health complications related
to diabetes. Despite their value as tools to record and present physiological data, they lack the ability to capture the behaviors
that cause fluctuations in blood glucose levels, activities that ultimately need to be understood and managed in order to
maintain good health. In this paper, we describe an intervention that introduces digital photography into diabetes self-management
routines to augment glucometer data and facilitate the sharing of experiences that affect long-term health. Two studies of
the approach are presented to illustrate the ways that diabetics use visualizations of past activities to reflect on their
health. We also discuss design suggestions for augmented memory systems based on our findings, focusing on ways to enhance
learning with repositories of past experiences collected automatically and/or manually.
Corresponding author: Rajneesh Sudhakar.
8.
We describe a suite of standards, resources and tools for computational encoding and processing of Modern Hebrew texts. These
include an array of XML schemas for representing linguistic resources; a variety of text corpora, raw, automatically processed
and manually annotated; lexical databases, including a broad-coverage monolingual lexicon, a bilingual dictionary and a WordNet;
and morphological processors which can analyze, generate and disambiguate Hebrew word forms. The resources are developed under
centralized supervision, so that they are compatible with each other. They are freely available and many of them have already
been used for several applications, both academic and industrial.
Corresponding author: Shuly Wintner.
9.
H. Sözer 《Software》2015,45(10):1359-1373
Static code analysis tools automatically generate alerts for potential software faults that can lead to failures. However, these tools usually generate a very large number of alerts, some of which are false positives. Because of limited resources, it is usually hard to inspect all the alerts. As a complementary approach, runtime verification techniques verify dynamic system behavior with respect to a set of specifications. However, these specifications are usually created manually based on system requirements and constraints. In this paper, we introduce a novel approach and a toolchain for integrated static code analysis and runtime verification. Alerts that are generated by static code analysis tools are utilized for automatically generating runtime verification specifications. Conversely, runtime verification results are used for automatically generating filters for static code analysis tools to eliminate false positives. The approach is illustrated for the static analysis and runtime verification of an open-source bibliography reference manager. Copyright © 2014 John Wiley & Sons, Ltd.
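The two feedback directions can be sketched abstractly; the alert format, the null-dereference predicate, and the observation log below are invented stand-ins for real analyzer output and instrumentation.

```python
# Sketch: alerts -> runtime checks, and runtime results -> false-positive
# filters. Alert fields and observations are illustrative assumptions.

def alerts_to_specs(alerts):
    """Derive one runtime predicate per supported alert kind."""
    specs = {}
    for a in alerts:
        if a["kind"] == "null-deref":
            specs[a["id"]] = lambda value: value is not None
    return specs

def candidate_false_positives(specs, observations):
    """Alert ids whose predicate never failed at runtime."""
    violated = {aid for aid, values in observations.items()
                for v in values if not specs[aid](v)}
    return set(specs) - violated

alerts = [{"id": "A1", "kind": "null-deref"},
          {"id": "A2", "kind": "null-deref"}]
specs = alerts_to_specs(alerts)
observed = {"A1": [1, 2, 3],      # never null: A1 looks like a false positive
            "A2": [5, None]}      # A2 really passes None at runtime
print(sorted(candidate_false_positives(specs, observed)))
```

Alerts that never trigger their runtime predicate become candidates for filtering, which is the feedback loop the toolchain automates.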
10.
陈慧南 (Chen Huinan) 《计算机工程与应用》(Computer Engineering and Applications) 1999,35(11):59-61,64
This paper discusses the automatic generation of event-driven graphical user interface prototypes. The authors use operation-sequence expressions to define human-computer interaction behavior, with an extended colored Petri net as the dialogue model. The paper presents a method that maps a set of operation-sequence expressions to an extended colored-net dialogue model and generates a user interface prototype within a visual programming environment.
11.
Some of the most significant challenges in automated CAD–FEA integration are information and model transformations between
CAD and FEA tools. These are especially labor-intensive and time-consuming in a newly characterized class of problems termed
highly coupled variable topology multi-body (HCVTMB) problems. This paper addresses these challenges with a knowledge-based
FEA modeling method called ZAP that consists of three stepping-stone information models and the mapping processes between
these models. The information and knowledge of a typical FEA modeling process are explicitly captured in semantically rich
information models to achieve benefits including knowledge sharing, system extension, and model modification. ZAP mapping
processes automatically transform abstract analytical concepts into tool-specific commands and functions that accomplish HCVTMB
model generation and solution management. This method enhances flexibility and reusability in FEA modeling and enables CAD–FEA
integration at the knowledge level. To demonstrate the efficacy of ZAP, we overview a sample HCVTMB problem—an electronic
chip package plastic ball grid array (PBGA) thermal analysis case study. Experience indicates that ZAP increases knowledge
capture and decreases modeling time from days/hours to hours/minutes compared to conventional methods, thus providing a key
enabler toward design optimization.
Corresponding author: Russell S. Peak.
12.
David E. Singh Florin Isaila Juan C. Pichel Jesús Carretero 《The Journal of supercomputing》2009,47(1):53-75
In this paper, we present a novel multiple phase I/O collective technique for generic block-cyclic distributions. The I/O
technique is divided into two stages: inspector and executor. During the inspector stage, the communication pattern is computed
and the required datatypes are automatically generated. This information is used during the executor stage in performing the
communication and file accesses. The two stages are decoupled, so that for repetitive file access patterns, the computations
from the inspector stage can be performed once and reused several times by the executor. This strategy allows the inspector
cost to be amortized over several I/O operations. In this paper, we evaluate the performance of the multiple-phase collective
I/O technique and compare it with other state-of-the-art approaches. Experimental results show that, for small access granularities,
our method outperforms other parallel I/O optimization techniques in the large majority of cases.
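The inspector/executor decoupling can be illustrated with a 1-D block-cyclic toy; the distribution math below is a simplification and not the paper's full collective-I/O implementation.

```python
# Sketch: inspector computes the access pattern once; the executor reuses
# it for every subsequent I/O operation. 1-D block-cyclic, illustrative only.

def inspector(rank, nprocs, block, total):
    """Compute once which global elements this rank owns (block-cyclic)."""
    offsets, start = [], rank * block
    while start < total:
        offsets.extend(range(start, min(start + block, total)))
        start += nprocs * block
    return offsets

def executor(offsets, file_data):
    """Reuse the precomputed pattern for each I/O operation."""
    return [file_data[i] for i in offsets]

pattern = inspector(rank=1, nprocs=2, block=2, total=8)  # inspector: run once
step1 = executor(pattern, list(range(8)))                # executor: reused
step2 = executor(pattern, [10 * x for x in range(8)])    # ...and reused again
print(pattern, step1, step2)
```

For repetitive access patterns, only the cheap executor runs per operation, which is exactly how the inspector cost is amortized.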
Corresponding author: Jesús Carretero.
13.
The use of guidelines to automatically verify Web accessibility (total citations: 1; self-citations: 1; citations by others: 0)
Accessibility is one of the key challenges that the Internet must currently face to guarantee universal inclusion. Accessible Web design requires knowledge and experience from the designer, who can be assisted by the use of broadly accepted guidelines. Nevertheless, guideline application may not be obvious, and many designers may lack experience to use them. The difficulty increases because, as the research on accessibility is progressing, existing sets of guidelines are updated and new sets are proposed by diverse institutions. Therefore, the availability of tools to evaluate accessibility, and eventually repair the detected bugs, is crucial. This paper presents a tool, EvalIris, developed to automatically check the accessibility of Websites using sets of guidelines that, by means of a well-defined XML structure, can be easily replaced or updated.
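The principle of keeping guidelines as replaceable data rather than hard-coded checks can be sketched as follows; the two sample guidelines, their regex form, and the sample page are illustrative assumptions, not EvalIris's actual XML guideline structure.

```python
# Sketch: guideline-driven accessibility checking. Guidelines live in a
# data structure so they can be replaced or updated; the two rules and
# the regex encoding are invented examples.
import re

GUIDELINES = [
    {"id": "img-alt", "pattern": r"<img(?![^>]*\balt=)[^>]*>",
     "message": "img element without alt text"},
    {"id": "html-lang", "pattern": r"<html(?![^>]*\blang=)[^>]*>",
     "message": "html element without a lang attribute"},
]

def check(page):
    """Report every guideline violation found in the page source."""
    return [(g["id"], g["message"])
            for g in GUIDELINES
            for _ in re.finditer(g["pattern"], page, re.IGNORECASE)]

page = ('<html><body><img src="logo.png">'
        '<img src="x.png" alt="x"></body></html>')
for gid, msg in check(page):
    print(gid, msg)
```

Swapping in an updated guideline set means replacing the data, not the checker, which is the design point the paper makes with its XML structure.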
Corresponding author: Julio Abascal. Phone: +34-943-018067; Fax: +34-943-219306.
14.
15.
Yasuyuki Sumi Sadanori Ito Tetsuya Matsuguchi Sidney Fels Shoichiro Iwasawa Kenji Mase Kiyoshi Kogure Norihiro Hagita 《Personal and Ubiquitous Computing》2007,11(4):265-271
This paper proposes a notion of interaction corpus, a captured collection of human behaviors and interactions among humans and artifacts. Digital multimedia and ubiquitous
sensor technologies create a venue to capture and store interactions that are automatically annotated. A very large-scale
accumulated corpus provides an important infrastructure for a future digital society for both humans and computers to understand
verbal/non-verbal mechanisms of human interactions. The interaction corpus can also be used as a well-structured stored experience,
which is shared with other people for communication and creation of further experiences. Our approach employs wearable and ubiquitous sensors, such as video cameras, microphones, and tracking tags, to capture all of the events from multiple viewpoints simultaneously.
We demonstrate an application of generating a video-based experience summary that is reconfigured automatically from the interaction
corpus.
Corresponding author: Yasuyuki Sumi.
16.
Automatic Customization of Software Tools for Transport-Triggered-Architecture ASIPs (total citations: 1; self-citations: 1; citations by others: 0)
Software tools play a very important role in ASIP design, and automatically customizing them is of great significance for raising the degree of automation in ASIP design. This paper analyzes in detail the problem of automatically customizing software tools for transport-triggered-architecture (TTA) ASIPs and proposes methods for automatically generating key architecture-description information such as extended instructions, target-code encodings, and reservation tables. Extended-instruction information is obtained by merging the syntax trees and other descriptive information of the related base instructions; target-code encodings are obtained by classifying function-unit ports and register ports and numbering them sequentially; instruction reservation tables are obtained by analyzing the timing of data transports and resource usage during instruction execution. Experimental results show that the method is simple and flexible: when the ASIP's instruction set or other architectural information changes, the corresponding software tools can be generated automatically while preserving their efficiency.
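The target-code encoding step described in the abstract (classify ports, then number each class sequentially) can be sketched directly; the port names below are invented.

```python
# Sketch: derive target-code encodings by classifying function-unit and
# register-file ports and numbering each class sequentially. Port names
# are illustrative assumptions.

def encode_ports(fu_ports, rf_ports):
    """Classify ports, then number each class sequentially."""
    encoding = {}
    for i, port in enumerate(sorted(fu_ports)):
        encoding[port] = ("FU", i)
    for i, port in enumerate(sorted(rf_ports)):
        encoding[port] = ("RF", i)
    return encoding

enc = encode_ports(fu_ports=["alu.in1", "alu.in2", "alu.out"],
                   rf_ports=["rf.read0", "rf.write0"])
print(enc["alu.in1"], enc["rf.write0"])
```

Because the encoding is derived from the architecture description, it regenerates automatically whenever ports are added or removed.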
17.
Occupant-generated work orders (WOs) are recognized as potentially valuable data for supporting Facility Management (FM) activities; however, they are unstructured and rarely contain the specific information engineers require to resolve the reported issues. Instead, multiple trips are often needed to identify the required trade, diagnose the problem and the required parts/tools, and resolve the issue. A key challenge is data quality: the free-form (unstructured) text collected frequently lacks the detail necessary for problem diagnosis. Machine learning provides new opportunities within the FM domain to improve the quality of information collected through online work order reporting systems by automatically classifying WOs and prompting building occupants in real time with appropriate, FM-team-developed questions to gather the required specific information in structured form. This paper presents the development, comparison, and application of two sets of supervised machine learning models to perform this classification for WOs generated from occupant complaints. A set of approximately 150,000 historical WOs was used for model development, and textual classification was tested with various term- and itemset-frequency approaches. Classifier prediction accuracies ranged from 46.6% to 81.3% for classification by detailed subcategory; this increased to between 68% (simple term frequency) and 90% (random forest) when the dataset included only the ten most common subcategories (accounting for 70% of all WOs). Hierarchical classification decreased performance. Finally, an FM-BIM integration approach is presented that uses the resultant classifiers to provide facility management teams with spatio-temporal visualization of work order categories across a series of buildings, helping to prioritize and streamline operations and maintenance task assignments.
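A minimal term-frequency classifier of the kind the study benchmarks, paired with category-specific follow-up questions, can be sketched as follows; the categories, training texts, and questions are invented, and the real models were trained on roughly 150,000 historical work orders.

```python
# Sketch: classify a free-text work order by term frequency, then prompt
# the occupant with an FM-team-developed question for that category.
# Categories, training data, and questions are illustrative assumptions.
from collections import Counter, defaultdict

def train(work_orders):
    """Accumulate per-category term frequencies."""
    freq = defaultdict(Counter)
    for category, text in work_orders:
        freq[category].update(text.lower().split())
    return freq

def classify(freq, text):
    """Score each category by summed term frequency of the input words."""
    scores = {c: sum(cnt[w] for w in text.lower().split())
              for c, cnt in freq.items()}
    return max(scores, key=scores.get)

training = [
    ("plumbing", "water leak under sink"),
    ("plumbing", "toilet leak in restroom"),
    ("hvac", "room too hot air conditioning broken"),
    ("hvac", "no cold air vent blowing hot"),
]
QUESTIONS = {
    "plumbing": "Is the water actively leaking right now?",
    "hvac": "What is the room number and current temperature?",
}
model = train(training)
category = classify(model, "leak near the restroom sink")
print(category, "->", QUESTIONS[category])
```

The prompt is the point of the pipeline: the classifier's only job is to pick which structured follow-up questions the occupant should answer.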
18.
19.
Many security problems are caused by vulnerabilities hidden in enterprise computer networks. It is very important for system
administrators to have knowledge about the security vulnerabilities. However, current vulnerability assessment methods may
encounter the issues of high false positive rates, long computational time, and requirement of developing attack codes. Moreover,
they are only capable of locating individual vulnerabilities on a single host without considering correlated effect of these
vulnerabilities on a host or a section of network with the vulnerabilities possibly distributed among different hosts. To
address these issues, an active vulnerability assessment system, NetScope, with a client/server architecture is developed for
evaluating computer network security based on an open vulnerability assessment language instead of simulating attacks. The
vulnerabilities and known attacks, with their prerequisites and consequences, are modeled based on predicate logic and are
correlated so as to automatically construct potential attack paths using the powerful relational operations of a relational
database management system. The results of a series of experiments show that the system has the advantages of a low false
positive rate, short running time, little impact on the performance of audited systems, and good scalability. Security
vulnerabilities that would be undetectable if assessed individually are discovered without the need to simulate attacks. It
is shown that the NetScope system is well suited for vulnerability assessment of large-scale computer networks such as campus
and enterprise networks. Moreover, it can also be easily integrated with other security tools based on relational databases.
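The predicate-logic chaining can be conveyed with a toy search (the paper performs the correlation inside a relational database instead); the vulnerabilities and predicates below are invented.

```python
# Sketch: chain vulnerabilities into attack paths via prerequisite and
# consequence predicates. The three vulns and their predicates are
# illustrative assumptions, not real CVE data.

VULNS = [
    {"id": "v1", "requires": {"net_access(hostA)"}, "grants": {"user(hostA)"}},
    {"id": "v2", "requires": {"user(hostA)"},       "grants": {"root(hostA)"}},
    {"id": "v3", "requires": {"root(hostA)"},       "grants": {"user(hostB)"}},
]

def attack_paths(initial, goal):
    """Breadth-first chaining of vulns whose prerequisites hold."""
    frontier = [(frozenset(initial), [])]
    found = []
    while frontier:
        state, path = frontier.pop(0)
        if goal in state:
            found.append(path)
            continue
        for v in VULNS:
            if v["requires"] <= state and v["id"] not in path:
                frontier.append((state | v["grants"], path + [v["id"]]))
    return found

print(attack_paths({"net_access(hostA)"}, "user(hostB)"))
```

No single vulnerability reaches the goal here; only their correlation reveals the path, which is the correlated effect the abstract emphasizes.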
Corresponding author: Xiaohong Guan.
20.
《Microprocessors and Microsystems》2005,29(2-3):51-62
The DEFACTO compilation and synthesis system is capable of automatically mapping computations expressed in high-level imperative programming languages such as C to FPGA-based systems. DEFACTO combines parallelizing compiler technology with behavioral VHDL synthesis tools to guide the application of high-level compiler transformations in the search for high-quality hardware designs. In this article we illustrate the effectiveness of this approach in automatically mapping several kernel codes to an FPGA quickly and correctly. We also present a detailed comparison of the performance of an automatically generated design against a manually generated implementation of the same computation. The design-space-exploration component of DEFACTO is able to explore a number of designs for a particular computation that would otherwise be impractical for any designer.
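The flavor of automated design-space exploration can be reduced to a toy model; the area and cycle-count estimates below are invented placeholders, whereas DEFACTO drives its search with behavioral synthesis estimates and compiler analysis.

```python
# Sketch: exhaustive design-space exploration over loop-unroll factors,
# keeping the fastest design that fits the area budget. The cost model
# (area = 100*u, cycles = 1000//u) is an invented illustration.

def explore(unroll_factors, area_budget):
    """Pick the fastest design whose estimated area fits the budget."""
    best = None
    for u in unroll_factors:
        area = 100 * u        # toy model: area grows with unrolling
        cycles = 1000 // u    # toy model: latency shrinks with unrolling
        if area <= area_budget and (best is None or cycles < best[1]):
            best = (u, cycles, area)
    return best

print(explore([1, 2, 4, 8, 16], area_budget=600))
```

Automating this loop over hundreds of transformation combinations is what makes exploration practical where a human designer could evaluate only a handful.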