Similar Documents
20 similar documents retrieved.
1.
In this paper, we investigate the relative effect of two strategies for adding language resources for Japanese morphological analysis, a joint task of word segmentation and part-of-speech tagging. The first strategy is adding entries to the dictionary and the second is adding annotated sentences to the training corpus. The experimental results showed that adding annotated sentences to the training corpus is better than adding entries to the dictionary. In particular, adding annotated sentences is especially efficient when we add new words together with the contexts of several real occurrences as partially annotated sentences, i.e. sentences in which only some words are annotated with word boundary information. Based on this finding, we performed real annotation experiments on invention disclosure texts and measured word segmentation accuracy. Finally, we investigated various cases of language resource addition and introduced the notions of non-maleficence, asymmetricity, and additivity of language resources for a task. In the word segmentation case, we found that language resource addition is non-maleficent (adding new resources causes no harm in other domains) and sometimes additive (adding new resources helps other domains). We conclude that it is reasonable for us, as NLP tool providers, to distribute only one general-domain model trained from all the language resources we have.
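As a rough illustration of the kind of partially annotated data the abstract describes (only some word boundaries marked), here is a minimal sketch; the sentence fragment, tag names and helper function are hypothetical, not taken from the paper:

```python
# Minimal sketch of a partially annotated sentence for word segmentation (WS).
# Each character carries either a boundary label ("B" = begins a word,
# "I" = inside a word) or None, meaning "unannotated": the learner treats
# unannotated positions as unconstrained during training.
partially_annotated = [
    ("太", "B"), ("陽", "I"),      # the new word "太陽" is fully annotated
    ("光", None), ("発", None),    # surrounding context left unannotated
    ("電", None), ("を", None),
]

def annotated_spans(sentence):
    """Return (start, end) spans of the characters that carry labels."""
    return [(i, i + 1) for i, (_, tag) in enumerate(sentence) if tag is not None]

print(annotated_spans(partially_annotated))   # [(0, 1), (1, 2)]
```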

2.
In this work, we study the problem of annotating a large volume of financial text by learning from a small set of human-annotated training data. The training data is prepared by randomly selecting some text sentences from the large corpus of financial text. Conventionally, a bootstrapping algorithm is used to annotate a large volume of unlabeled data by learning from a small set of annotated data; however, that small set has to be carefully chosen as seed data. Our approach therefore departs from conventional bootstrapping in that we let the users select the seed data randomly. We show that our proposed algorithm has an accuracy of 73.56% in classifying the financial texts into the different categories (“Accounting”, “Cost”, “Employee”, “Financing”, “Sales”, “Investments”, “Operations”, “Profit”, “Regulations” and “Irrelevant”) even when the training data is just 30% of the total data set. Additionally, the accuracy improves by approximately 2% on average for every 10% increase in training data, reaching 77.91% when the training data is about 50% of the total data set. Since a dictionary of hand-chosen keywords prepared by domain experts is often used for financial text extraction, we assumed that the classes are separated by almost linearly separable hyperplanes and therefore used a Linear Support Vector Machine together with a modified version of the Label Propagation Algorithm, which exploits the notion of neighborhood (in Euclidean space) for classification. We believe that our proposed techniques will be of help to Early Warning Systems used in banks, where large volumes of unstructured text need to be processed for better insights about a company.
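The abstract names a Linear SVM as one of the two components. The sketch below shows only that supervised part, on TF-IDF features, with invented sentences and a subset of the listed categories; the modified Label Propagation step is not reproduced here:

```python
# Sketch: Linear SVM over TF-IDF features for financial-sentence classification.
# Sentences, labels and the query are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Quarterly revenue from product sales rose by 12 percent.",
    "The company raised 50 million through a new bond issue.",
    "Headcount was reduced by 200 employees in the second quarter.",
    "Net profit margin improved to 8.4 percent.",
]
train_labels = ["Sales", "Financing", "Employee", "Profit"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_sentences, train_labels)

print(model.predict(["A rights issue will finance the planned acquisition."]))
```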

3.
4.
Classification methods are becoming more and more useful as part of the standard data analyst’s toolbox in many application domains. The specific data and domain characteristics of social media tools used in online educational contexts present the challenging problem of training high-quality classifiers that bring important insight into the activity patterns of learners. Decision trees are currently a standard and very successful model for such classification tasks. In this paper, we introduce a custom-designed data analysis pipeline for predicting “spam” and “don’t care” learners in the eMUSE online educational environment. The trained classifiers rely on social media traces as independent variables and on the final grade of the learner as the dependent variable. The analysis evaluates the activities performed by learners and the similarity of two derived data models. Experiments performed on social media traces covering five years and 285 learners show satisfactory classification results that may be further used in a production environment. Accurate identification of “spam” and “don’t care” users may in turn have a great impact on producing better classification models for the remaining “regular” learners.
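A minimal sketch of the classification setting described above, using a decision tree over activity counts; the feature names, values and labels are hypothetical and do not come from the eMUSE data:

```python
# Sketch: decision tree over social-media activity traces, predicting whether
# a learner is "spam", "don't care", or "regular".
from sklearn.tree import DecisionTreeClassifier

# features: [blog_posts, wiki_edits, media_uploads, comments]
X = [
    [0, 0, 0, 41],    # many comments, no content -> looks like spam
    [0, 1, 0, 0],     # almost no activity        -> "don't care"
    [12, 7, 3, 9],    # balanced activity         -> regular learner
    [15, 5, 6, 4],
]
y = ["spam", "dont_care", "regular", "regular"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[1, 0, 0, 30]]))
```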

5.
In the safety-critical railway domain, requirements documents have to abide by strict quality criteria. Rule-based natural language processing (NLP) techniques have been developed to automatically identify quality defects in natural language requirements. However, the literature is lacking empirical studies on the application of these techniques in industrial settings. Our goal is to investigate to which extent NLP can be practically applied to detect defects in the requirements documents of a railway signalling manufacturer. To address this goal, we first identified a set of typical defect classes and, for each class, an engineer of the company implemented a set of defect-detection patterns by means of the GATE tool for text processing. After a preliminary analysis, we applied the patterns to a large set of 1866 requirements previously annotated for defects. The output of the patterns was further inspected by two domain experts to check the false positive cases. Additional discard-patterns were defined to automatically remove these cases. Finally, SREE, a tool that searches for typically ambiguous terms, was applied to the requirements. The experiments show that SREE and our patterns may play complementary roles in the detection of requirements defects. This is one of the first works in which defect-detection NLP techniques are applied to a very large set of industrial requirements annotated by domain experts. We contribute a comparison between the traditional manual techniques used in industry for requirements analysis and analysis performed with NLP. Our experience shows that several discrepancies can be observed between the two approaches. The analysis of the discrepancies offers hints for improving the capabilities of NLP techniques with company-specific solutions, and suggests that company practices also need to be modified to effectively exploit NLP tools.
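To make the pattern/discard-pattern workflow concrete, here is a toy sketch with regular expressions standing in for the GATE patterns; the defect classes, trigger terms and discard rule are illustrative, not the company's actual rules:

```python
# Sketch: rule-based defect detection over requirements sentences, with a
# discard-pattern that suppresses a known false positive.
import re

DEFECT_PATTERNS = {
    "vague_term":    re.compile(r"\b(appropriate|adequate|as needed|user-friendly)\b", re.I),
    "passive_voice": re.compile(r"\b(shall|must|will)\s+be\s+\w+ed\b", re.I),
    "open_ended":    re.compile(r"\b(etc\.?|and so on)\b", re.I),
}
# discard-pattern: ignore a "vague_term" hit when it is part of a defined term
DISCARD = re.compile(r"adequate braking distance", re.I)

def find_defects(requirement):
    hits = [name for name, pat in DEFECT_PATTERNS.items() if pat.search(requirement)]
    if "vague_term" in hits and DISCARD.search(requirement):
        hits.remove("vague_term")
    return hits

print(find_defects("The interlocking shall be configured as needed, etc."))
```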

6.
Detecting dense subgraphs such as cliques or quasi-cliques is an important graph mining problem. While this task is well established for simple graphs, today’s applications demand the analysis of more complex graphs. In this work, we consider a frequently observed type of graph in which edges represent different types of relations. These multiple edge types can also be viewed as different “layers” of a graph, which is then called a “multi-layer graph”. Additionally, each edge may be annotated with a label characterizing the given relation in more detail. By simultaneously exploiting all this information, the detection of more interesting subgraphs can be supported. We introduce the multi-layer coherent subgraph model, which defines clusters of vertices that are densely connected by edges with similar labels in a subset of the graph layers. We avoid redundancy in the result by selecting only the most interesting, non-redundant subgraphs for the output. Based on this model, we introduce the best-first search algorithm MiMAG. In thorough experiments, we demonstrate the strengths of MiMAG in comparison with related approaches on synthetic as well as real-world data sets.
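A small sketch of the coherence criterion described above (dense and label-similar in enough layers), under assumed thresholds and a toy two-layer graph; MiMAG's best-first search and redundancy removal are not shown:

```python
# Sketch: check whether a vertex set is densely connected, with similar edge
# labels, in at least min_layers of the graph layers.
from itertools import combinations

def dense_in_layer(vertices, layer_edges, min_density=0.8, label_tol=1.0):
    pairs = list(combinations(sorted(vertices), 2))
    labels = [layer_edges[p] for p in pairs if p in layer_edges]
    if not pairs or len(labels) / len(pairs) < min_density:
        return False
    return max(labels) - min(labels) <= label_tol   # labels must be similar

def coherent(vertices, layers, min_layers=2):
    return sum(dense_in_layer(vertices, layer) for layer in layers) >= min_layers

# two layers over vertices 1..3; edge labels are numeric annotations
layers = [
    {(1, 2): 0.9, (1, 3): 1.0, (2, 3): 1.1},   # dense, similar labels
    {(1, 2): 0.2, (2, 3): 5.0},                # neither dense enough nor similar
]
print(coherent({1, 2, 3}, layers, min_layers=1))   # True
```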

7.
The focus of this article is the creation of a collection of sentences manually annotated with respect to their sentence structure. We show that the concept of linear segments, linguistically motivated units that can be easily detected automatically, serves as a good basis for the identification of clauses in Czech. The segment annotation captures relationships such as subordination, coordination, apposition and parenthesis; based on segmentation charts, the individual clauses forming a complex sentence are identified. The annotation of sentence structure enriches a dependency-based framework with explicit syntactic information on relations among complex units such as clauses. We have gathered a collection of 3,444 sentences from the Prague Dependency Treebank, which were annotated with respect to their sentence structure (these sentences comprise 10,746 segments forming 6,341 clauses). The main purpose of the project is to obtain development data: promising results have already been reported for Czech NLP tools (such as a dependency parser or a machine translation system for related languages) that adopt the idea of clause segmentation. The collection of sentences with annotated sentence structure makes further improvement of such tools possible.
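As a very crude illustration of splitting a sentence into linear segments at boundary tokens, here is a sketch on English input; the boundary list and example are invented, and the paper's segmentation charts for Czech are far richer than this:

```python
# Sketch: split a token sequence into linear segments at "boundary" tokens
# (punctuation and a few conjunctions); clauses are then built from segments.
BOUNDARIES = {",", ";", "and", "but", "because", "which", "that"}

def segments(tokens):
    segs, current = [], []
    for tok in tokens:
        if tok.lower() in BOUNDARIES:
            if current:
                segs.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        segs.append(current)
    return segs

sentence = "The parser failed , because the clause boundaries were not annotated".split()
print(segments(sentence))
# [['The', 'parser', 'failed'], ['the', 'clause', 'boundaries', 'were', 'not', 'annotated']]
```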

8.
Human–computer dialogue systems interact with human users using natural language. We used the ALICE/AIML chatbot architecture as a platform to develop a range of chatbots covering different languages, genres, text types, and user groups, in order to illustrate qualitative aspects of natural language dialogue system evaluation. We present some of the evaluation techniques used for natural language dialogue systems, including black-box and glass-box, comparative, quantitative, and qualitative evaluation. Four aspects of NLP dialogue system evaluation are often overlooked: “usefulness” in terms of a user’s qualitative needs, “localizability” to new genres and languages, “humanness” or “naturalness” compared to human–human dialogues, and “language benefit” compared to alternative interfaces. We illustrate these aspects with respect to our work on machine-learnt chatbot dialogue systems; we believe these aspects are worthwhile for impressing potential new users and customers.

9.
Much of the vast literature on time series classification makes several assumptions about the data and the algorithm’s eventual deployment that are almost certainly unwarranted. For example, many research efforts assume that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and in later deployment. Another example is the common assumption that queries will be made at a constant rate that is known ahead of time, so that computational resources can be exactly budgeted. In this work, we argue that these assumptions are unjustified, and that this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. Likewise, gesture classification systems deployed on a device such as Google Glass may issue queries at frequencies that range over an order of magnitude, making it difficult to plan computational resources. We propose to mitigate these problems by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in these ten minutes of data, we see mostly normal heartbeats…”, and, by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate anytime classifiers. We demonstrate on several diverse real-world problems that, beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both extremely fast and significantly more accurate than current state-of-the-art approaches.
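To show what "alignment-free" can look like in practice, here is a minimal sketch: slide a fixed-length window over the stream and label each window by its nearest exemplar, with a rejection threshold. The window length, threshold and toy signals are assumptions, not the paper's framework:

```python
# Sketch: alignment-free window classification with nearest-exemplar matching.
import numpy as np

def windows(stream, width, step=1):
    return np.array([stream[i:i + width] for i in range(0, len(stream) - width + 1, step)])

def classify(window, exemplars, threshold=2.0):
    # exemplars: (label, prototype) pairs distilled from weakly annotated data
    label, dist = min(((lab, np.linalg.norm(window - proto)) for lab, proto in exemplars),
                      key=lambda x: x[1])
    return label if dist <= threshold else "unknown"

rng = np.random.default_rng(0)
exemplars = [("normal_beat", np.sin(np.linspace(0, 2 * np.pi, 32))),
             ("flat",        np.zeros(32))]
stream = np.concatenate([np.sin(np.linspace(0, 2 * np.pi, 32)), np.zeros(64)])
stream = stream + 0.05 * rng.normal(size=stream.size)
print([classify(w, exemplars) for w in windows(stream, 32, step=32)])
```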

10.
In this paper, we describe tools and resources for the study of African languages developed at the Collaborative Research Centre 632 “Information Structure”. These include deeply annotated data collections of 25 sub-Saharan languages that are described together with their annotation scheme, as well as the corpus tool ANNIS, which provides unified access to a broad variety of annotations created with a range of different tools. With the application of ANNIS to several African data collections, we illustrate its suitability for the purpose of language documentation, distributed access, and the creation of data archives.

11.
Extraction and normalization of temporal expressions from documents are important steps towards deep text understanding and a prerequisite for many NLP tasks such as information extraction, question answering, and document summarization. There are different ways to express (the same) temporal information in documents. However, once temporal expressions have been identified, they can be normalized to a standard format. This allows temporal information to be used in a term- and language-independent way. In this paper, we describe the challenges of temporal tagging in different domains, give an overview of existing annotated corpora, and survey existing approaches to temporal tagging. Finally, we present our publicly available temporal tagger HeidelTime, which is easily extensible to further languages due to its strict separation of source code from language resources such as patterns and rules. We present a broad evaluation on multiple languages and domains, on existing corpora as well as on a newly created corpus for a language/domain combination for which no annotated corpus had been available so far.
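The identify-then-normalize pipeline can be illustrated with a single toy rule; the regular expression and output format below are assumptions for illustration only and are not HeidelTime's actual rule resources:

```python
# Sketch: (1) identify a temporal expression with a pattern, (2) normalize it
# to an ISO-like value, so downstream components can use it language-independently.
import re

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june", "july",
     "august", "september", "october", "november", "december"])}
DATE_PATTERN = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2}),\s+(\d{4})\b", re.I)

def tag_dates(text):
    """Yield (surface_form, normalized_value) pairs, e.g. ('March 3, 2015', '2015-03-03')."""
    for m in DATE_PATTERN.finditer(text):
        month, day, year = MONTHS[m.group(1).lower()], int(m.group(2)), int(m.group(3))
        yield m.group(0), f"{year:04d}-{month:02d}-{day:02d}"

print(list(tag_dates("The document was summarized on March 3, 2015.")))
```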

12.
In the literature on logics of imperfect information it is often stated, incorrectly, that the Game-Theoretical Semantics of Independence-Friendly (IF) quantifiers captures the idea that the players of semantical games are forced to make some moves without knowledge of the moves of other players. We survey here the alternative semantics for IF logic that have been suggested in order to enforce this “epistemic reading” of sentences. We introduce some new proposals, and a more general logical language which distinguishes between “independence from actions” and “independence from strategies”. New semantics for IF logic can be obtained by choosing embeddings of the set of IF sentences into this larger language. We compare all the semantics proposed and their purported game-theoretical justifications, and disprove a few claims that have been made in the literature.
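For readers unfamiliar with the notation, the textbook IF example below (not taken from the article) shows a quantifier declared independent of an earlier one, together with its usual game-theoretical reading:

```latex
% A standard Independence-Friendly (IF) sentence: the choice of y must be
% made independently of the value chosen for x.
\[
  \forall x \, (\exists y / \{x\}) \; x = y
\]
% Under Game-Theoretical Semantics this is true in a structure M iff the
% verifier has a winning strategy that picks y without seeing x, i.e. a
% constant witness equal to every x, which holds exactly when |M| = 1.
```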

13.
Textual requirements are very common in software projects. However, this format often keeps relevant concerns (e.g., performance, synchronization, data access) out of the analyst’s view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. REAssistant then allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The tool has been evaluated in several case studies, showing good results when compared to a manual identification of concerns and to a third-party tool. In particular, the tool achieved remarkable recall in the detection of crosscutting concern effects.
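The "query annotations via concern-specific rules" idea can be sketched as predicates over per-sentence annotations; the annotation schema, rules and use-case sentences below are hypothetical, not REAssistant's actual representation:

```python
# Sketch: concern rules are predicates over NLP annotations (semantic roles,
# "domain actions") attached to each use-case sentence by an upstream pipeline.
annotated_use_case = [
    {"text": "The system stores the order in the database.",
     "domain_action": "persist", "roles": {"A1": "the order"}},
    {"text": "The response is returned within 2 seconds.",
     "domain_action": "respond", "roles": {"AM-TMP": "within 2 seconds"}},
]

CONCERN_RULES = {
    "persistence": lambda s: s["domain_action"] == "persist",
    "performance": lambda s: "AM-TMP" in s["roles"] and "second" in s["roles"]["AM-TMP"],
}

def query(concern, sentences):
    return [s["text"] for s in sentences if CONCERN_RULES[concern](s)]

print(query("performance", annotated_use_case))
```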

14.
Automatically generating program translators from source and target language specifications is a non-trivial problem. In this paper we focus on automating the process of building translators between operations languages, a family of DSLs used to program satellite operations procedures. We exploit their similarities to semi-automatically build transformation tools between these DSLs. The input to our method is a collection of annotated context-free grammars. To simplify the overall translation process even further, we also propose an intermediate representation common to all operations languages. Finally, we discuss how to enrich our annotated-grammar model with more advanced semantic annotations in order to provide a verification system for the translation process. We validate our approach by semi-automatically deriving translators between some real-world operations languages, using the prototype tool we implemented for that purpose.
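The "annotated grammar" idea, reduced to a toy, is that each source production carries a target-language template and translation is parse-then-emit. The two operations-language statements and templates below are invented for illustration; the real grammars and the intermediate representation are not shown:

```python
# Sketch: source-DSL productions annotated with target-DSL templates.
import re

# source rule: WAIT <seconds> SEC   -> target template: delay(<seconds>)
# source rule: SEND TC <name>       -> target template: transmit("<name>")
ANNOTATED_RULES = [
    (re.compile(r"^WAIT (\d+) SEC$"), r"delay(\1)"),
    (re.compile(r"^SEND TC (\w+)$"),  r'transmit("\1")'),
]

def translate(line):
    for pattern, template in ANNOTATED_RULES:
        if pattern.match(line):
            return pattern.sub(template, line)
    raise ValueError(f"no production matches: {line}")

print([translate(l) for l in ["WAIT 30 SEC", "SEND TC SWITCH_ON_HEATER"]])
# ['delay(30)', 'transmit("SWITCH_ON_HEATER")']
```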

15.
Most data hiding schemes change the least significant bits of the cover images to conceal messages. Matrix encoding is a well-known scheme in this field; the matrix encoding proposed by Crandall can be used in steganographic data hiding methods. Hamming codes are a kind of covering code. “Hamming + 1”, proposed by Zhang et al., is an improved version of matrix encoding steganography. The embedding efficiency of “Hamming + 1” is very high, but its embedding rate is low. Our proposed “Hamming + 3” scheme has a slightly reduced embedding efficiency, but improves the embedding rate and image quality. “Hamming + 3” is applied to overlapped blocks composed of 2^k + 3 pixels, where k = 3. We therefore propose verifying the embedding rate during the embedding and extraction phases. Experimental results show that the reconstructed secret messages are identical to the original secret message, and that the proposed scheme exhibits a good embedding rate compared to those of previous schemes.
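To make the matrix-encoding building block concrete, here is a sketch of classic Hamming-code matrix embedding, which underlies the "Hamming + 1" / "Hamming + 3" family: k = 3 message bits are hidden in 2^k − 1 = 7 cover LSBs by flipping at most one of them. The extra bits carried by the "+1"/"+3" schemes are not reproduced here:

```python
# Sketch: (1, 7, 3) matrix embedding with the Hamming parity-check matrix.
import numpy as np

k = 3
n = 2**k - 1
# Parity-check matrix: column j (1-based) is the binary representation of j.
H = np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(k)])

def embed(cover_bits, message_bits):
    x = np.array(cover_bits) % 2
    syndrome = H @ x % 2
    d = syndrome ^ np.array(message_bits)
    pos = int(sum(b << i for i, b in enumerate(d)))   # 0 means "flip nothing"
    if pos:
        x[pos - 1] ^= 1                               # flip at most one LSB
    return x

def extract(stego_bits):
    return H @ np.array(stego_bits) % 2

cover, msg = [1, 0, 1, 1, 0, 0, 1], [1, 0, 1]
stego = embed(cover, msg)
print(stego, extract(stego))   # the extracted bits equal msg
```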

16.
Distance automata are automata weighted over the semiring \((\mathbb {N}\cup \{\infty \},\min , +)\) (the tropical semiring). Such automata compute functions from words to \(\mathbb {N}\cup \{\infty \}\). It is known from Krob that the problems of deciding ‘\(f\le g\)’ or ‘\(f=g\)’, for f and g computed by distance automata, are undecidable. The main contribution of this paper is to show that an approximation of this problem is decidable. We present an algorithm which, given \(\varepsilon>0\) and two functions f, g computed by distance automata, answers “yes” if \(f\le(1-\varepsilon)g\), “no” if \(f\not\le g\), and may answer “yes” or “no” in all other cases. The core argument behind this quasi-decision procedure is an algorithm which is able to provide an approximated finite presentation of the closure under products of sets of matrices over the tropical semiring. Lastly, our theorem of affine domination gives better bounds on the precision of known decision procedures for cost automata, when restricted to distance automata.
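For readers unfamiliar with tropical matrix products, a small worked example (not taken from the paper) of how multiplication works over \((\mathbb{N}\cup\{\infty\},\min,+)\), which is the operation whose closure the algorithm approximates:

```latex
% In the tropical semiring, matrix product uses min in place of sum and + in
% place of product:  (A \otimes B)_{i,j} = \min_k (A_{i,k} + B_{k,j}).
\[
  A = \begin{pmatrix} 0 & 2 \\ \infty & 1 \end{pmatrix},\qquad
  (A \otimes A)_{1,2} = \min\bigl(A_{1,1}+A_{1,2},\; A_{1,2}+A_{2,2}\bigr)
                      = \min(0+2,\; 2+1) = 2 .
\]
% A distance automaton assigns to a word the minimal total weight over all
% accepting runs, i.e. an entry of the tropical product of its letter matrices.
```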

17.
Given a large collection of co-evolving online activities, such as searches for the keywords “Xbox”, “PlayStation” and “Wii”, how can we find patterns and rules? Are these keywords related? If so, are they competing against each other? Can we forecast the volume of user activity for the coming month? We conjecture that online activities compete for user attention in the same way that species in an ecosystem compete for food. We present EcoWeb (i.e., Ecosystem on the Web), an intuitive model designed as a non-linear dynamical system for mining large-scale co-evolving online activities. Our second contribution is a novel, parameter-free, and scalable fitting algorithm, EcoWeb-Fit, that estimates the parameters of EcoWeb. Extensive experiments on real data show that EcoWeb is effective, in that it can capture long-range dynamics and meaningful patterns such as seasonalities, and practical, in that it can provide accurate long-range forecasts. EcoWeb consistently outperforms existing methods in terms of both accuracy and execution speed.
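The abstract does not give EcoWeb's equations, so the sketch below is only a loose illustration of the competition analogy, using a discrete competitive Lotka-Volterra step rather than the actual model:

```python
# NOT EcoWeb's model: a generic competitive dynamics step in which each
# activity's volume grows logistically and is suppressed by competing volumes.
import numpy as np

def step(x, growth, interaction, dt=0.1):
    # x: current volumes; interaction[i, j]: how strongly activity j suppresses i
    return np.maximum(x + dt * x * (growth - interaction @ x), 0.0)

x = np.array([1.0, 0.8, 0.5])          # e.g. "Xbox", "PlayStation", "Wii" volumes
growth = np.array([1.0, 1.0, 1.0])
interaction = np.array([[1.0, 0.6, 0.4],
                        [0.6, 1.0, 0.4],
                        [0.5, 0.5, 1.0]])
for _ in range(200):
    x = step(x, growth, interaction)
print(x)   # volumes settle toward a coexistence equilibrium
```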

18.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation” (Potgieter and Rosinger 2010: 853). But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.

19.
This paper presents a new method to assist individuals with no background in linguistics in creating monolingual dictionaries such as those used by the morphological analysers of many natural language processing applications. The involvement of non-expert users is especially critical for under-resourced languages, which either lack a skilled workforce or cannot afford to recruit one. Adding a word to a morphological dictionary usually requires identifying its stem along with the inflection paradigm that can be used to generate all the word forms of the new entry. Our method works under the assumption that average speakers of a language can successfully answer the polar question “is x a valid form of the word w to be inserted?”, where x represents a tentative alternative (inflected) form of the new word w. The experiments show that, with a small number of polar questions, the correct stem and paradigm can be obtained from non-experts with high success rates. We study the impact of different heuristic and probabilistic approaches on the actual number of questions.
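A minimal sketch of the question-and-eliminate loop described above, with toy English paradigms and a simulated speaker; the paradigm definitions are assumptions, and the paper's heuristic and probabilistic question-ordering strategies are not modelled:

```python
# Sketch: narrow down the inflection paradigm of a new word by asking polar
# ("yes/no") questions about candidate word forms.
PARADIGMS = {
    "verb_regular":   lambda stem: {stem, stem + "s", stem + "ed", stem + "ing"},
    "noun_regular":   lambda stem: {stem, stem + "s"},
    "adjective_bare": lambda stem: {stem},
}

def identify(stem, ask):
    """Ask 'is this form valid?' questions until a single paradigm remains."""
    candidates = dict(PARADIGMS)
    asked = set()
    while len(candidates) > 1:
        forms = set.union(*(p(stem) for p in candidates.values())) - asked
        form = sorted(forms)[0]
        asked.add(form)
        answer = ask(form)   # the polar question "is `form` a valid form of the new word?"
        candidates = {name: p for name, p in candidates.items()
                      if (form in p(stem)) == answer}
    return next(iter(candidates))

VALID = {"tweet", "tweets", "tweeted", "tweeting"}   # forms the simulated speaker accepts
print(identify("tweet", lambda form: form in VALID))  # -> verb_regular
```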

20.
For each sufficiently large n, there exists a unary regular language L such that both L and its complement L^c are accepted by unambiguous nondeterministic automata with at most n states, while the smallest deterministic automata for these two languages still require a superpolynomial number of states, at least \(e^{\Omega(\sqrt[3]{n\cdot\ln^{2}n})}\). Moreover, L and L^c are “balanced” not only in the number of states: they are accepted by nondeterministic machines sharing the same transition graph, differing only in the distribution of their final states. As a consequence, the gap between the sizes of unary unambiguous self-verifying automata and deterministic automata is also superpolynomial.
