20 similar documents were found (search time: 15 milliseconds).
1.
Inger Lytje 《AI & Society》1989,4(4):276-290
The article argues that cognitive linguistic theory may offer an alternative to the Montague paradigm for designing natural language understanding systems. Within this framework it describes a system that models language understanding as a dialogical process between user and computer. The system takes natural language texts as input and represents language meaning as entity-relationship diagrams.
2.
3.
4.
《Behaviour & Information Technology》2007,26(3):197-207
A natural language interface (NLI) makes information systems easier to use by supporting sophisticated human-computer interaction. To address the challenges that mobile devices pose for user interaction in information management, we propose an NLI as a promising solution. In this paper, we review state-of-the-art NLI technologies and analyse user requirements for managing notable information on mobile devices. To minimize the technical difficulties of developing NLI systems and to improve their usability, we develop general principles for NLI design, filling a gap in the literature. To satisfy user requirements for information management on mobile devices, we design an NLI-enabled information management architecture. Two usage scenarios show that the architecture can reduce user navigation effort and improve the efficiency and effectiveness of managing information on mobile devices. We conclude the article with the implications of this study and suggestions for future directions.
5.
This work analyzes the relative advantages of different metaheuristic approaches to the well-known natural language processing problem of part-of-speech tagging, which consists of assigning to each word of a text its disambiguated part of speech according to the context in which the word is used. We have applied a classic genetic algorithm (GA), a CHC algorithm, and simulated annealing (SA). Different ways of encoding solutions to the problem (integer and binary) have been studied, as well as the impact of using parallelism in each of the considered methods. We have performed experiments on different linguistic corpora and compared the results against other popular approaches as well as a classic dynamic programming algorithm. Our results show the high performance achieved by the parallel algorithms compared to the sequential ones, and highlight the particular advantages of each technique. Our algorithms and some of their components can serve as a new set of state-of-the-art procedures for complex tagging scenarios.
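The abstract gives no code; as a rough illustration of how one of the metaheuristics it names, simulated annealing, can be applied to tagging, here is a minimal Python sketch. The toy lexicon, the bigram compatibility scores, and the cooling schedule are invented for the example; the paper's corpora, integer/binary encodings, and parallel variants are not reproduced.

```python
import math
import random

# Toy lexicon: each word maps to its candidate tags (invented for this sketch).
LEXICON = {
    "time": ["NN", "VB"], "flies": ["VBZ", "NNS"], "like": ["IN", "VB"],
    "an": ["DT"], "arrow": ["NN"],
}

# Toy tag-bigram compatibility scores standing in for corpus statistics.
BIGRAM = {
    ("NN", "VBZ"): 2.0, ("VBZ", "IN"): 1.5, ("IN", "DT"): 2.0,
    ("DT", "NN"): 2.5, ("NN", "NNS"): 0.2, ("NNS", "VB"): 0.5,
}

def score(tags):
    """Fitness of a tag sequence: sum of bigram compatibilities."""
    return sum(BIGRAM.get(pair, 0.0) for pair in zip(tags, tags[1:]))

def sa_tag(words, steps=2000, t0=2.0, cooling=0.995, seed=0):
    """Simulated annealing over one candidate tag per word."""
    rng = random.Random(seed)
    tags = [rng.choice(LEXICON[w]) for w in words]
    cur = score(tags)
    best, best_score, t = list(tags), cur, t0
    for _ in range(steps):
        i = rng.randrange(len(words))            # pick a position to retag
        old = tags[i]
        tags[i] = rng.choice(LEXICON[words[i]])  # propose a new tag
        new = score(tags)
        # Always accept improvements; accept worse moves with prob. exp(delta/t).
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best_score:
                best, best_score = list(tags), cur
        else:
            tags[i] = old                        # reject the move
        t *= cooling                             # cool the temperature
    return best

print(sa_tag("time flies like an arrow".split()))
# expected: ['NN', 'VBZ', 'IN', 'DT', 'NN'] (the highest-scoring assignment)
```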
6.
7.
We present a program that understands Natural Language Commands, i.e. commands that have a free form but well-defined semantic content. We show that, using Structural Pattern Recognition techniques, the program “learns” the vocabulary and the structure of the sentences. An example, “the taxi driver robot”, which is directed by Natural Language Commands, is given.
8.
Guzen Erozel 《Information Sciences》2008,178(12):2534-2552
Video databases have become popular in various areas thanks to recent advances in technology, and video archive systems need user-friendly interfaces to retrieve video frames. In this paper, a natural language processing (NLP) based user interface to a video database system is described. The video database is built on a content-based spatio-temporal video data model. The data model focuses on semantic content, which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects, as well as trajectories of moving objects, can be queried with this data model. In this video database system, a natural language interface enables flexible querying. Queries, given as English sentences, are parsed using a link parser, and the semantic representations of the queries are extracted from their syntactic structures using information extraction techniques. The extracted semantic representations are used to call the relevant parts of the underlying video database system and return the results of the queries. With the help of the conceptual ontology module, not only exact matches but also similar objects and activities are returned from the database. This module is implemented using a distance-based method of semantic similarity search on the domain-independent ontology WordNet.
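The conceptual ontology module is described only at a high level; the general idea of distance-based similarity over WordNet can be sketched with NLTK's WordNet interface as below. This assumes nltk is installed and the wordnet corpus has been downloaded; the candidate labels and the threshold are made up for the example and are not taken from the paper's system.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def similar_terms(query_word, candidates, threshold=0.3):
    """Return candidate terms whose best WordNet path similarity to the query
    word exceeds a threshold, so a query for 'car' can also match 'automobile'."""
    q_synsets = wn.synsets(query_word)
    matches = []
    for term in candidates:
        best = 0.0
        for qs in q_synsets:
            for ts in wn.synsets(term):
                sim = qs.path_similarity(ts)
                if sim is not None and sim > best:
                    best = sim
        if best >= threshold:
            matches.append((term, round(best, 2)))
    return sorted(matches, key=lambda m: -m[1])

# Hypothetical object labels stored in the video database.
print(similar_terms("car", ["automobile", "truck", "pedestrian", "building"]))
```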
9.
Research on Semantics-Based Digital Watermarking of Natural Language Text
Digital watermarking is an important branch of information hiding research and one of the key techniques for network information security and digital media copyright protection today. Current research on digital watermarking concentrates mainly on the protection of still images and video, while text watermarking has received comparatively little attention. Focusing on the characteristics of natural language text, this paper analyzes and compares the main existing text watermarking methods and their technical features, proposes theoretical goals and an attack model for text watermarking, presents a semantics-based text watermarking algorithm, and finally discusses the prospects for future research on text watermarking.
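The abstract does not spell out the proposed algorithm. Purely as a generic illustration of how linguistic watermarking can embed bits in text, the sketch below encodes one bit per substitutable word by choosing between synonyms; the synonym table and bit convention are invented and should not be read as the paper's method.

```python
# Generic synonym-substitution watermarking sketch (illustrative only).
SYNONYMS = {        # word used for bit 0 -> word used for bit 1 (assumed table)
    "big": "large",
    "quick": "fast",
    "buy": "purchase",
}
BIT1_WORDS = set(SYNONYMS.values())
TO_BIT0 = {v: k for k, v in SYNONYMS.items()}

def embed(text, bits):
    """Encode one watermark bit at each substitutable word, left to right."""
    out, i = [], 0
    for word in text.split():
        base = TO_BIT0.get(word, word)           # normalize to the bit-0 form
        if base in SYNONYMS and i < len(bits):
            word = SYNONYMS[base] if bits[i] == 1 else base
            i += 1
        out.append(word)
    return " ".join(out)

def extract(text, n_bits):
    """Recover the bits by checking which synonym variant appears."""
    bits = []
    for word in text.split():
        if word in SYNONYMS or word in BIT1_WORDS:
            bits.append(1 if word in BIT1_WORDS else 0)
            if len(bits) == n_bits:
                break
    return bits

marked = embed("we buy a big and quick car", [1, 0, 1])
print(marked)               # we purchase a big and fast car
print(extract(marked, 3))   # [1, 0, 1]
```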
10.
Dan Klein 《Pattern recognition》2005,38(9):1407-1419
We present a generative probabilistic model for the unsupervised learning of hierarchical natural language syntactic structure. Unlike most previous work, we do not learn a context-free grammar, but rather induce a distributional model of constituents which explicitly relates constituent yields and their linear contexts. Parameter search with EM produces higher quality analyses for human language data than those previously exhibited by unsupervised systems, giving the best published unsupervised parsing results on the ATIS corpus. Experiments on Penn treebank sentences of comparable length show an even higher constituent F1 of 71% on non-trivial brackets. We compare distributionally induced and actual part-of-speech tags as input data, and examine extensions to the basic model. We discuss errors made by the system, compare the system to previous models, and discuss upper bounds, lower bounds, and stability for this task.
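The constituent-context representation the abstract refers to can be illustrated concretely: each candidate span of a tagged sentence is described by its yield (the tag sequence it covers) and its linear context (the tags immediately to its left and right). The sketch below enumerates these signatures for a toy sentence; it is not the paper's EM-based estimation procedure.

```python
def yields_and_contexts(tags):
    """Pair every span (i, j) of a tag sequence with its yield and its linear
    context: the tag just before and just after the span (None at boundaries)."""
    pairs = []
    n = len(tags)
    for i in range(n):
        for j in range(i + 1, n + 1):
            span_yield = tuple(tags[i:j])
            context = (tags[i - 1] if i > 0 else None,
                       tags[j] if j < n else None)
            pairs.append((span_yield, context))
    return pairs

# Hand-assigned tags for "the cat sat on the mat".
for y, c in yields_and_contexts(["DT", "NN", "VBD", "IN", "DT", "NN"]):
    print(y, "occurs in context", c)
```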
11.
Watermarking technology can be beneficial in digital rights protection. However, the industry's acceptance of the technology has been lukewarm, as experts have been able to hear audible artifacts introduced during the watermarking process. In this paper, we present what we believe to be a truly inaudible solution to this problem. Our proposed watermarking technique embeds the watermark signal in the phase of an audio signal, with secrecy as to which frequency components carry the watermark bits achieved via a pseudorandom generator. Inaudibility is realized by exploiting the human auditory system's insensitivity to absolute phase. Further, our algorithm includes a novel mechanism for segmenting an audio signal into variable frame-lengths to provide robustness against de-synchronization attacks such as jitter and time-scaling. It uses a short-time Fourier transform to first characterize local changes in the frequency content of an audio signal, from which pairs of frequencies satisfying specified conditions are identified to mark the start and end of a segment. The insertion of synchronization marks adds further robustness against such attacks. Robustness against other common attacks may be further enhanced through the use of concatenated error-control codes, which enable the correction of random and/or burst errors that may be introduced during an attack.
M. A. Armand
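As a drastically simplified sketch of the general idea, the code below embeds one bit per fixed-length frame by forcing the phase of one pseudorandomly chosen frequency bin to ±π/2 while keeping its magnitude. The frame length, band limits, and keying are invented for the example; the paper's variable-length segmentation, synchronization marks, and concatenated error-control codes are omitted.

```python
import numpy as np

def embed_phase_watermark(signal, bits, frame_len=1024, key=42):
    """Toy phase-domain embedding: in each non-overlapping frame, pick one
    mid-band bin with a keyed PRNG and set its phase to +pi/2 (bit 1) or
    -pi/2 (bit 0), keeping the magnitude unchanged."""
    rng = np.random.default_rng(key)
    out = signal.astype(float)
    n_frames = min(len(bits), len(signal) // frame_len)
    for k in range(n_frames):
        frame = out[k * frame_len:(k + 1) * frame_len]
        spec = np.fft.rfft(frame)
        bin_idx = int(rng.integers(10, frame_len // 4))   # keyed bin choice
        phase = np.pi / 2 if bits[k] == 1 else -np.pi / 2
        spec[bin_idx] = np.abs(spec[bin_idx]) * np.exp(1j * phase)
        out[k * frame_len:(k + 1) * frame_len] = np.fft.irfft(spec, n=frame_len)
    return out

def extract_phase_watermark(signal, n_bits, frame_len=1024, key=42):
    """Recover the bits by reading the phase sign at the keyed bins."""
    rng = np.random.default_rng(key)
    bits = []
    for k in range(n_bits):
        frame = signal[k * frame_len:(k + 1) * frame_len]
        spec = np.fft.rfft(frame)
        bin_idx = int(rng.integers(10, frame_len // 4))
        bits.append(1 if np.angle(spec[bin_idx]) > 0 else 0)
    return bits

audio = np.random.default_rng(0).standard_normal(16 * 1024)  # stand-in for audio
marked = embed_phase_watermark(audio, [1, 0, 1, 1, 0])
print(extract_phase_watermark(marked, 5))                     # [1, 0, 1, 1, 0]
```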
12.
A Capacity-Improving Algorithm for Natural Language Based Information Hiding
This paper presents a generalized model of text information hiding based on natural language processing, and then, on the basis of this model, discusses ways of increasing the hiding capacity. An algorithm that can effectively increase the information hiding capacity is then given. Both theoretical analysis and experiments show that the algorithm can increase the hiding capacity by roughly 25% or more.
13.
Computer animation and visualization can facilitate communication between the hearing impaired and those with normal speaking capabilities. This paper presents a model of a system capable of translating text from a natural language into animated sign language. Techniques have been developed to analyse the language and transform it into sign language in a systematic way. A hand motion coding method, as applied to hand motion representation and control, has also been investigated. Two translation examples are given to demonstrate the practicality of the system.
14.
15.
In knowledge discovery in a text database, extracting and returning a subset of information highly relevant to a user's query is a critical task. In a broader sense, this is essentially the identification of certain personalized patterns that drives such applications as Web search engine construction, customized text summarization and automated question answering. A related problem of text snippet extraction has been studied previously in information retrieval. In these studies, common strategies for extracting and presenting text snippets to meet user needs either process document fragments that have been delimited a priori or use a sliding window of a fixed size to highlight the results. In this work, we argue that text snippet extraction can be generalized if the user's intention is better utilized. Our approach overcomes the rigidity of existing methods by dynamically returning more flexible start-end positions for text snippets, which are also semantically more coherent. This is achieved by constructing and using statistical language models that effectively capture the commonalities between a document and the user intention. Experiments indicate that our proposed solutions provide effective personalized information extraction services.
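A minimal sketch of the flexible start-end idea: build a smoothed unigram model of the user's query and score every candidate span of the document by the average log-probability of its words, returning the best window rather than a fixed-size one. The smoothing scheme, span-length bounds, and tie-breaking here are invented; the paper's model estimation and coherence constraints are not reproduced.

```python
import math
from collections import Counter

def best_snippet(document, query, min_len=3, max_len=12):
    """Return the document span (flexible start and end) whose words have the
    highest average log-probability under an add-one smoothed unigram model
    of the query; ties are broken in favour of longer spans."""
    doc = document.split()
    q = Counter(w.lower() for w in query.split())
    vocab = {w.lower() for w in doc} | set(q)
    total = sum(q.values()) + len(vocab)              # add-one smoothing mass

    def logprob(word):
        return math.log((q[word.lower()] + 1) / total)

    best_key, best_span = (float("-inf"), 0), (0, min(min_len, len(doc)))
    for i in range(len(doc)):
        for j in range(i + min_len, min(i + max_len, len(doc)) + 1):
            span = doc[i:j]
            key = (sum(logprob(w) for w in span) / len(span), len(span))
            if key > best_key:
                best_key, best_span = key, (i, j)
    return " ".join(doc[best_span[0]:best_span[1]])

doc = ("Language models are statistical models of text . They capture the "
       "commonalities between a document and a user query , and have many uses .")
print(best_snippet(doc, "statistical language models of text"))
# statistical models of text
```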
16.
Manfred Stede 《Artificial Intelligence Review》1992,6(4):383-414
Practical natural language understanding systems used to be concerned only with very small miniature domains: they knew exactly what potential texts might be about and what kinds of sentence structure to expect. This optimistic assumption is no longer feasible if NLU is to scale up to deal with text that occurs naturally in the "real world". The key issue is robustness: the system needs to be prepared for cases where the input data do not correspond to the expectations encoded in the grammar. In this paper, we survey the approaches to the robustness problem that have been developed over the last decade. We examine techniques for overcoming both syntactically and semantically ill-formed input in sentence parsing, then look briefly at more recent ideas concerning the extraction of information from texts and the related question of the role that linguistic research plays in this endeavour. Finally, the robust sentence parsing schemes are classified at a more abstract level of analysis.
17.
Elisabeth 《Data & Knowledge Engineering》2002,41(2-3):247-272
Natural language and databases are core components of information systems. They are related to each other because they share the same purpose: conceptualizing aspects of the real world in order to deal with them in some way. Natural language processing (NLP) techniques may substantially enhance most phases of the information system lifecycle, starting with requirements analysis, specification and validation, and continuing through conflict resolution, result processing and presentation. Furthermore, natural language based query languages and user interfaces facilitate access to information for anyone and allow for new paradigms in the usage of computerized services. This paper investigates the use of NLP techniques in the design phase of information systems, and then reports on database querying and information retrieval enhanced with NLP.
18.
A. Arunkumar, S. Sharma, R. Agrawal, S. Chandrasekaran, C. Bryan 《Computer Graphics Forum》2023,42(3):409-421
Cross-task generalization is a significant outcome that defines mastery in natural language understanding. Humans show a remarkable aptitude for this, and can solve many different types of tasks, given definitions in the form of textual instructions and a small set of examples. Recent work with pre-trained language models mimics this learning style: users can define and exemplify a task for the model to attempt as a series of natural language prompts or instructions. While prompting approaches have led to higher cross-task generalization compared to traditional supervised learning, analyzing ‘bias’ in the task instructions given to the model is a difficult problem, and has thus been relatively unexplored. For instance, are we truly modeling a task, or are we modeling a user's instructions? To help investigate this, we develop LINGO, a novel visual analytics interface that supports an effective, task-driven workflow to (1) help identify bias in natural language task instructions, (2) alter (or create) task instructions to reduce bias, and (3) evaluate pre-trained model performance on debiased task instructions. To robustly evaluate LINGO, we conduct a user study with both novice and expert instruction creators, over a dataset of 1,616 linguistic tasks and their natural language instructions, spanning 55 different languages. For both user groups, LINGO promotes the creation of more difficult tasks for pre-trained models, that contain higher linguistic diversity and lower instruction bias. We additionally discuss how the insights learned in developing and evaluating LINGO can aid in the design of future dashboards that aim to minimize the effort involved in prompt creation across multiple domains.
19.
The authors have been working on natural language understanding based on the knowledge representation language Lmd and its application to robot manipulation by verbal suggestion. The most remarkable feature of Lmd is its capability of formalizing spatiotemporal events in good correspondence with human/robotic sensations and actions, which can lead to integrated computation of sensory, motor and conceptual information. This paper briefly describes the process from text to robot action via semantic representation in Lmd, and presents experimental results of robot manipulation driven by verbal suggestion. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
20.
Within the software industry software piracy is a great concern. In this article we address this issue through a prevention technique called software watermarking. Depending on how a software watermark is applied it can be used to discourage piracy; as proof of authorship or purchase; or to track the source of the illegal redistribution. In particular we analyze an algorithm originally proposed by Geneviève Arboit in "A Method for Watermarking Java Programs via Opaque Predicates". This watermarking technique embeds the watermark by adding opaque predicates to the application. We have found that the Arboit technique does withstand some forms of attack and has a respectable data-rate. However, it is susceptible to a variety of distortive attacks. One unanswered question in the area of software watermarking is whether dynamic algorithms are inherently more resilient to attacks than static algorithms. We have implemented and empirically evaluated both static and dynamic versions within the SandMark framework.
This work is supported by the NSF under grant CCR-0073483, by the AFRL under contract F33615-02-C-1146, and the GAANN Fellowship.
Ginger Myles is currently a research scientist at IBM's Almaden Research Center and is finishing her Ph.D. degree in computer science at the University of Arizona. She received a B.A. in mathematics from Beloit College in Beloit, Wisconsin and an M.S. in computer science from the University of Arizona. Her research focuses on all aspects of content protection.
Christian Collberg received his PhD from the Department of Computer Science at the University of Lund, Sweden, after which he was on the faculty at the University of Auckland, New Zealand. He is currently an Associate Professor at the University of Arizona. His primary research area is the protection of software from reverse engineering, tampering, and piracy. In particular, the SandMark tool (sandmark.cs.arizona.edu) developed at the University of Arizona is the premier tool for the study of software protection algorithms.
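The construction itself is not reproduced in the abstract, but the core notion of an opaque predicate, a branch condition whose truth value is fixed and known when the watermark is embedded yet non-obvious to an attacker's static analysis, can be illustrated with a toy example. It is shown in Python for brevity rather than Java bytecode, and the guarded "payload" is a placeholder, not Arboit's encoding.

```python
def opaquely_true(x: int) -> bool:
    """x * (x + 1) is a product of two consecutive integers, hence always even;
    the predicate is constantly True, but proving that requires reasoning about
    the arithmetic rather than reading it off the syntax."""
    return (x * (x + 1)) % 2 == 0

def guarded(data, x):
    result = sum(data)
    if opaquely_true(x):            # always taken, but not obviously so
        result += 0xBEEF * 0        # placeholder for watermark-bearing code
    return result

print(guarded([1, 2, 3], x=7))      # 6
```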