Similar Documents
20 similar documents found (search time: 62 ms)
1.
Given the current state of natural language processing research, literary language processing deserves far more attention than it receives. The art of classical Chinese poetry concentrates the figurative, emotional, and individual qualities of literary language, making it an excellent entry point for research on literary language processing. Style evaluation is an important and highly challenging topic in literary language processing. Taking poetic language as the concrete research object, and word-association-based natural language processing techniques as the technical background, this paper introduces and validates a word-association-based technique for evaluating poetic style. A computational method is proposed, and a questionnaire experiment on poetic style evaluation is designed. The results show that human judgments of poetic style share more commonality than individuality, and that the word-association-based technique can evaluate poetic style effectively.
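A minimal sketch of the lexicon-scoring idea behind such style evaluation. The two keyword sets and the single "bold vs. graceful" axis below are invented for illustration; they are not the paper's word-association lexicon or its computational method.

```python
# Illustrative sketch: score a poem on one style axis using two small
# keyword lexicons. BOLD and GRACEFUL are hypothetical cue-word sets,
# not the paper's word-association resource.

BOLD = {"大江", "千里", "铁马", "关山", "长风"}      # "bold/unconstrained" cues
GRACEFUL = {"杨柳", "细雨", "黄花", "帘", "相思"}    # "graceful/restrained" cues

def style_score(poem_words):
    """Return a value in [-1, 1]: positive leans bold, negative leans graceful."""
    bold = sum(w in BOLD for w in poem_words)
    graceful = sum(w in GRACEFUL for w in poem_words)
    total = bold + graceful
    return 0.0 if total == 0 else (bold - graceful) / total
```

A real system would replace the hand-picked sets with lexical association statistics learned from a poetry corpus.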

2.
3.
The present paper is a critique of quantitative studies of literature. It is argued that such studies are involved in an act of reification, in which, moreover, fundamental ingredients of the texts, e.g. their (highly important) range of figurative meanings, are eliminated from the analysis. Instead a concentration on lower levels of linguistic organization, such as grammar and lexis, may be observed, in spite of the fact that these are often the least relevant aspects of the text. In doing so, quantitative studies of literature significantly reduce not only the cultural value of texts, but also the generalizability of their own findings. What is needed, therefore, is an awareness of, and readiness to relate to, matters of textuality as an organizing principle underlying the cultural functioning of literary works of art.

Willie van Peer is Associate Professor of Literary Theory at the University of Utrecht (The Netherlands), author of Stylistics and Psychology: Investigations of Foregrounding (Croom Helm, 1986) and editor of The Taming of the Text: Explorations in Language, Literature and Culture (Routledge, 1988). His major research interests lie in theory formation and its epistemological problems, and in the interrelationship between literary form and function.

4.
We should follow Mark Olsen's lead and think with maximum ambition of the role of the computer in supporting literary research of the highest order. Thus the computer enables us to answer one of the great questions of literary criticism: how does a given writer contribute to the changing language? We can now chart the influence of given writers by correlating their words and phrasing with computerized dictionaries so as to produce profiles and histories of the way words have entered the language.

Dennis Taylor is a professor of English at Boston College and a nineteenth-century specialist. His two most recent books are Hardy's Metres and Victorian Prosody and Hardy's Literary Language and Victorian Philology, both from Clarendon Press, Oxford.

5.
Learning Syntax by Automata Induction (Total citations: 1; self-citations: 1; other citations: 0)
In this paper we propose an explicit computer model for learning natural language syntax based on Angluin's (1982) efficient induction algorithms, using a complete corpus of grammatical example sentences. We use these results to show how inductive inference methods may be applied to learn substantial, coherent subparts of at least one natural language, English, that are not susceptible to the kinds of learning envisioned in linguistic theory. As two concrete case studies, we show how to learn English auxiliary verb sequences (such as could be taking, will have been taking) and the sequences of articles and adjectives that appear before noun phrases (such as the very old big deer). Both systems can be acquired in a computationally feasible amount of time using either positive examples, or, in an incremental mode, with implicit negative examples (examples outside a finite corpus are considered to be negative examples). As far as we know, this is the first computer procedure that learns a full-scale range of noun subclasses and noun phrase structure. The generalizations and the time required for acquisition match our knowledge of child language acquisition for these two cases. More importantly, these results show that just where linguistic theories admit to highly irregular subportions, we can apply efficient automata-theoretic learning algorithms. Since the algorithm works only for fragments of language syntax, we do not believe that it suffices for all of language acquisition. Rather, we would claim that language acquisition is nonuniform and susceptible to a variety of acquisition strategies; this algorithm may be one of these.
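The flavor of inducing an acceptor for auxiliary sequences from positive examples can be sketched with a prefix-tree acceptor. This is a deliberately simplified stand-in: it memorizes exactly the finite corpus and omits the state merging that gives Angluin-style induction its ability to generalize.

```python
# Simplified sketch: build a prefix-tree acceptor from positive examples of
# English auxiliary sequences, then test membership. Not Angluin's (1982)
# algorithm, which additionally merges states to generalize beyond the corpus.

def build_acceptor(examples):
    """Store each token sequence as a path in a trie; mark sequence ends."""
    trie = {}
    for seq in examples:
        node = trie
        for tok in seq.split():
            node = node.setdefault(tok, {})
        node["<end>"] = True
    return trie

def accepts(trie, seq):
    """Follow the token path; accept only at a marked end state."""
    node = trie
    for tok in seq.split():
        if tok not in node:
            return False
        node = node[tok]
    return node.get("<end>", False)

corpus = ["could be taking", "will have been taking", "has taken"]
acceptor = build_acceptor(corpus)
```

State merging would then collapse compatible trie nodes so that unseen but well-formed sequences are also accepted.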

6.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts.

David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

7.
8.
The explicit consideration of literary theory has become increasingly important both in the field of textual studies generally and in undergraduate literature courses. But theory can seem vague and inconsequential to undergraduates. Our students use hypertext to model intertextuality and the Linear Modeling Kit (a software program we have developed) to model structuralist ideas about narrative. In making computer models, students explore the implications of analytic ideas by attempting to represent them in formal (in the sense of programmable) terms. Our experience shows that such modeling stimulates student questioning and discussion of marked precision and sophistication.

Peter Havholm and Larry Stewart, both Professors of English at The College of Wooster, have collaborated on the use of computers in the teaching of literature since 1987 and have published several papers on the subject. They won an EDUCOM/NCRIPTAL Award for Distinguished Curricular Innovation in 1989. Stewart is co-author of A Guide to Literary Criticism and Research (3rd ed., Harcourt Brace Jovanovich, 1996). Havholm recently returned to teaching after fifteen years in administration. He has published "Kipling and Fantasy," anthologized in Harold Orel, ed., Critical Essays on Rudyard Kipling (Boston: G.K. Hall, 1989), 92–105.

9.
We present a theoretical basis for supporting subjective and conditional probabilities in deductive databases. We design a language that allows a user greater expressive power than classical logic programming. In particular, a user can express the fact that A is possible (i.e. A has non-zero probability), B is possible, but (A ∧ B) as a whole is impossible. A user can also freely specify probability annotations that may contain variables. The focus of this paper is to study the semantics of programs written in such a language in relation to probability theory. Our model theory, which is founded on the classical one, captures the uncertainty described in a probabilistic program at the level of Herbrand interpretations. Furthermore, we develop a fixpoint theory and a proof procedure for such programs and present soundness and completeness results. Finally we characterize the relationships between probability theory and the fixpoint, model, and proof theory of our programs.

10.
This article uses recent work on the computer-aided analysis of texts by the French writer Céline as a framework to discuss Olsen's paper on the current state of computer-aided literary analysis. Drawing on analysis of syntactic structures, lexical creativity and use of proper names, it makes two points: (1) given a rich theoretical framework and sufficiently precise models, even simple computer tools such as text editors and concordances can make a valuable contribution to literary scholarship; (2) it is important to view the computer not as a device for finding what we as readers have failed to notice, but rather as a means of focussing more closely on what we have already felt as readers, and of verifying hypotheses we have produced as researchers.

Johanne Bénard is an Assistant Professor of French. She finished her Ph.D. thesis at the Université de Montréal in 1989 and is working on a book which can be described as an autobiographical reading of Céline's work. She has published various articles on Céline's correspondence (the latest being "La lettre du/au père," Colloque international de Toulouse L.-F. Céline, 1990) and on the theory of autobiography ("Le contexte de l'autobiographie," RSSI 11 [1991]). Her present interest is the linguistic aspects of Céline's text and the theory of orality.

Greg Lessard is an Associate Professor in the French Studies and Computing and Information Science departments. His research areas include natural language generation, computer-aided text analysis, and the linguistic analysis of second-language performance errors. Recent publications include articles in Research in Humanities Computing: 1989 on orality in Canadian French novels, and in Literary and Linguistic Computing, 6, 4 (1991) on repeated structures in literary texts.

11.
There is a great deal of research aimed toward the development of temporal logics and model checking algorithms which can be used to verify properties of systems. In this paper, we present a methodology and supporting tools which allow researchers and practitioners to automatically generate model checking algorithms for temporal logics from algebraic specifications. These tools are extensions of algebraic compiler generation tools and are used to specify model checkers as mappings of the form Ls → Lt, where Ls is a temporal logic source language and Lt is a target language representing sets of states of a model M, such that each formula of Ls is mapped to the set of states of M on which it holds. The algebraic specifications for a model checker define the logic source language, the target language representing sets of states in a model, and the embedding of the source language into the target language. Since users can modify and extend existing specifications or write original specifications, new model checking algorithms for new temporal logics can be easily and quickly developed; this allows the user more time to experiment with the logic and its model checking algorithm instead of developing its implementation. Here we show how this algebraic framework can be used to specify model checking algorithms for CTL, a real-time CTL, CTL*, and a custom extension called CTLe that makes use of propositions labeling the edges as well as the nodes of a model. We also show how the target language can be changed to a language of binary decision diagrams to generate symbolic model checkers from algebraic specifications.
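The view of a model checker as a map from formulas to state sets can be made concrete with a minimal explicit-state checker for two CTL operators over a hand-coded Kripke structure. This sketch shows only the semantics of EX and EU by fixpoint iteration; the paper's generation of such checkers from algebraic specifications is not reproduced.

```python
# Minimal explicit-state CTL checking over a Kripke structure given as
# successor lists. Each checker returns the set of states satisfying the
# formula, i.e. the "state set" target of the mapping Ls -> Lt.

def check_EX(succ, phi_states):
    """EX phi: states with at least one successor satisfying phi."""
    return {s for s, ts in succ.items() if any(t in phi_states for t in ts)}

def check_EU(succ, phi_states, psi_states):
    """E[phi U psi]: least fixpoint, computed by backward iteration."""
    result = set(psi_states)
    changed = True
    while changed:
        changed = False
        for s, ts in succ.items():
            if s in phi_states and s not in result and any(t in result for t in ts):
                result.add(s)
                changed = True
    return result

# A three-state model: 1 -> 2 -> 3 -> 3 (self-loop).
succ = {1: [2], 2: [3], 3: [3]}
p = {1, 2}   # states labeled with proposition p
q = {3}      # states labeled with proposition q
```

The remaining CTL operators reduce to these plus Boolean operations and an EG fixpoint, which is why such checkers are natural targets for generation from specifications.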

12.
Book Review     
This is a good book. Its main message is that a particular approach to natural language called type-logical grammar can, in principle, be equipped with a learning theory. In this review, I first identify what type-logical grammar is, then outline what the learning theory is. Then I try to articulate why this message is important for the logical, linguistic and information-theoretic parts of cognitive science. Overall, I think the book's main message is significant enough to warrant patience with its scientific limitations.

13.
International Journal of Computer Mathematics, 2012, 89(8): 1619–1628
A language A is left cancellative if from AB = AC it follows that B = C, for any two languages B and C. Semi-singular and inf-singular languages are two disjoint subsets of the left cancellative languages and were introduced by Hsieh and Shyr [Left cancellative elements in the monoid of languages, Soochow J. Math. 4 (1978), pp. 7–15]. In this paper, we study them further. It is shown that all non-dense and all maximal left cancellative languages are semi-singular, while all right dense left cancellative languages are inf-singular. Finally, a theorem shows that there is a left cancellative language which is neither semi-singular nor inf-singular.
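The definition can be checked directly on finite languages by treating them as sets of strings and concatenating setwise. The sketch below exhibits a language A that is not left cancellative; the particular witness languages are chosen for illustration and are not taken from the paper.

```python
# Check left cancellation on finite languages. A language A is left
# cancellative when AB = AC forces B = C; the language A = {ε, a} below
# fails this, since two different B and C give the same product.

def concat(A, B):
    """Setwise concatenation of two finite languages."""
    return {x + y for x in A for y in B}

A = {"", "a"}            # contains the empty word ε
B = {"", "aa"}
C = {"", "a", "aa"}      # B != C, yet AB = AC = {ε, a, aa, aaa}
```

By contrast, any singleton language {w} is left cancellative, since wB = wC lets each word be stripped of the prefix w.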

14.
Some aspects of the physical nature of language are discussed. In particular, physical models of language must exist that are efficiently implementable. The existence requirement is essential because without physical models no communication or thinking would be possible. Efficient implementability for creating and reading language expressions is discussed and illustrated with a quantum mechanical model. The reason for interest in language is that language expressions can have meaning, either as an informal language or as a formal language associated with mathematical or physical theories. It is noted that any universally applicable physical theory, or coherent theory of physics and mathematics together, includes in its domain physical models of expressions for both the informal language used to discuss the theory and the expressions of the theory itself. It follows that there must be some formulas in the formal theory that express some of their own physical properties. The inclusion of intelligent systems in the domain of the theory means that the theory, e.g., quantum mechanics, must describe, in some sense, its own validation. Maps of language expressions into physical states are discussed. A spin projection example is discussed, as are conditions under which such a map is a Gödel map. The possibility that language is also mathematical is very briefly discussed. PACS: 03.67-a; 03.65.Ta; 03.67.Lx

15.
Although many scholars in literature currently seem mainly interested in theory, the focus on literary texts is what defines literature studies. Computer technology and the statistical methods it fosters are applicable both to the theoretical and to the interpretative issues which scholars of literature habitually address. Genette's distinction between the homodiegetic and the autodiegetic perspective in first-person narrative can be confirmed statistically. Roquentin's loneliness in La nausée can be shown to be a formal characteristic of the type of novel he narrates, thus validating his commentary on his society. The computer can be used to deal with standard literary questions in a principled fashion, and a new orientation of literature studies on a cultural history model, which Mark Olsen recommends, is not necessary.

Paul A. Fortier is Distinguished Professor of French at the University of Manitoba and Vice-President of the Association for Computers and the Humanities. He has published extensively on the twentieth-century French novel and on the use of computers and statistics for the study of literature.

16.
Literary criticism places fictional work in historical, social and psychological contexts to offer insights about the way that texts are produced and consumed. Critical theory offers a range of strategies for analysing what a text says and, just as importantly, what it leaves unsaid. Literary analyses of scientific writing can also produce insights about how research agendas are framed and addressed. This paper provides three readings of a seminal ubiquitous computing scenario by Marc Weiser. Three approaches from literary and critical theory are demonstrated in deconstructive, psychoanalytic and feminist readings of the scenario. The deconstructive reading suggests that alongside the vision of convenient and efficient ubiquitous computing is a complex set of fears and anxieties that the text cannot quite subdue. A psychoanalytic reading considers what the scenario is asking us to desire and identifies the dream of surveillance without intrusion. A final feminist reading discusses gender and collapsing distinctions between public and private, office and home, family and work life. None of the readings are suggested as the final truth of what Weiser was "really" saying. Rather they articulate a set of issues and concerns that might frame design agendas differently. The scenario is then re-written in two pastiches that draw on source material with very different visions of ubiquitous computing. The Sal scenario is first rewritten in the style of Douglas Adams' Hitchhiker's Guide to the Galaxy. In this world, technology is broken, design is poor and users are flawed, fallible and vulnerable. The second rewrites the scenario in the style of Philip K. Dick's novel Ubik. This scenario serves to highlight what is absent in Weiser's scenario and indeed most design scenarios: money. The three readings and two pastiches underline the social conflict and struggle more often elided or ignored in the stories told in ubicomp literature. It is argued that literary forms of reading and writing can be useful in both questioning and reframing scientific writing and design agendas.

17.
It is envisaged that the application of the multilevel security (MLS) scheme will enhance the flexibility and effectiveness of authorization policies in shared enterprise databases and will replace cumbersome authorization enforcement practices through complicated view definitions on a per-user basis. However, the critical problem with the current model is that the belief at a higher security level is cluttered with irrelevant or inconsistent data, as no mechanism for attenuation is supported. Critics also argue that it is imperative for MLS database users to theorize about the beliefs of others, perhaps at different security levels, an apparatus that is currently missing and the absence of which is seriously felt.

The impetus for our current research is the need to provide an adequate framework for belief reasoning in MLS databases. In this paper, we show that these concepts can be captured in an F-logic style declarative query language, called MultiLog, for MLS deductive databases, for which proof theoretic, model theoretic and fixpoint semantics exist. This development is significant from a database perspective as it now enables us to compute the semantics of MultiLog databases in a bottom-up fashion. We also define a bottom-up procedure to compute unique models of stratified MultiLog databases. Finally, we establish the equivalence of MultiLog's three logical characterizations: model theory, fixpoint theory and proof theory.
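The bottom-up computation enabled by such a fixpoint semantics can be illustrated, in a drastically reduced form, by naive fixpoint evaluation of a single Datalog-style rule. This generic sketch omits everything MultiLog-specific (security levels, belief operators, probability annotations); it only shows the iterate-to-fixpoint pattern.

```python
# Generic bottom-up (naive) fixpoint evaluation: repeatedly apply the rule
# reach(x, z) :- reach(x, y), reach(y, z) to a set of base facts until no
# new facts are derivable. This is the evaluation pattern a fixpoint
# semantics licenses; MultiLog's annotations are not modeled here.

def transitive_closure(facts):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(known):
            for (y2, z) in list(known):
                if y == y2 and (x, z) not in known:
                    known.add((x, z))
                    changed = True
    return known

edges = {("a", "b"), ("b", "c"), ("c", "d")}
closure = transitive_closure(edges)
```

Stratified evaluation generalizes this by computing one such fixpoint per stratum, feeding each stratum's result into the next.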

18.
A Statistical Language Model Based on the Maximum Entropy Method (Total citations: 2; self-citations: 0; other citations: 2)
To address the excessive computational cost and heavy system overhead of existing statistical language models, this paper proposes a statistical language model based on the maximum entropy method. In the parameter estimation stage, the model introduces the Lagrange multiplier theorem from constrained optimization together with the Newton iteration algorithm, ensuring that optimal parameter values can be found under multiple constraints. In the feature selection stage, a parallel algorithm that computes approximate gains is adopted to address the computational cost and system overhead. Software experiments applying the model to Chinese sentence analysis show that it achieves high computational efficiency and robustness.
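A toy version of the parameter-estimation idea: fit a log-linear (maximum entropy) distribution whose feature expectation matches a given target. For simplicity this sketch uses plain gradient ascent on the dual rather than the Newton iteration the paper describes, and the outcome space, feature, and target value are all invented for illustration.

```python
# Toy maximum-entropy fit over four outcomes with one binary feature.
# The constraint is E[f] = 0.7; the maximum entropy solution spreads
# probability uniformly within each feature class.

import math

outcomes = [0, 1, 2, 3]
features = [lambda x: 1.0 if x < 2 else 0.0]   # one illustrative binary feature
target = [0.7]                                  # desired feature expectation

w = [0.0]                                       # one weight per feature
for _ in range(2000):
    scores = [math.exp(sum(wi * f(x) for wi, f in zip(w, features))) for x in outcomes]
    Z = sum(scores)
    probs = [s / Z for s in scores]
    for i, f in enumerate(features):
        grad = target[i] - sum(p * f(x) for p, x in zip(probs, outcomes))
        w[i] += 0.5 * grad                      # gradient ascent on the dual

# Recompute the converged distribution.
scores = [math.exp(sum(wi * f(x) for wi, f in zip(w, features))) for x in outcomes]
Z = sum(scores)
probs = [s / Z for s in scores]
```

Newton iteration, as in the paper, would converge in far fewer steps by also using the second derivative of the dual objective.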

19.
In this paper, we attempt to characterize the class of recursively enumerable languages with much smaller language classes than that of linear languages. Language classes of (i,j) linear languages and of (i,j) minimal linear languages are defined by posing restrictions on the form of production rules and the number of nonterminals. Homomorphic characterizations of the class of recursively enumerable languages are then obtained using these classes together with a class of minimal linear languages. That is, for any recursively enumerable language L over Σ, an alphabet Δ, a homomorphism h : Δ* → Σ*, and two languages L1 and L2 over Δ in the classes mentioned above can be found such that L = h(L1 ∩ L2). The main results place L1 and L2 as follows. (I) When restrictions are posed on the forms of production rules, a first characterization is obtained which is the best possible and cannot be improved using the class of minimal linear languages alone; however, by posing a further restriction on L2, a strengthened characterization is obtained. (II) When restrictions are posed on the number of nonterminals, a further characterization is obtained, which we also believe to be the best possible.

20.
This paper presents a numeric and information-theoretic model for the measuring of language change, without specifying the particular type of change. It is shown that this measurement is intuitively plausible and that meaningful measurements can be made from as few as 1000 characters. This measurement technique is extended to the task of determining the "rate" of language change based on an examination of brief excerpts from the National Geographic Magazine, determining both their linguistic distance from one another and the number of years of temporal separation. A statistical analysis of these results shows, first, that language change can be measured, and second, that the rate of language change has not been uniform: in particular, the period 1939–1948 had particularly slow change, while 1949–1958 and 1959–1968 had particularly rapid changes.
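One plausible concrete instantiation of a linguistic distance between short excerpts, sketched here as an assumption since the paper's exact information-theoretic measure is not reproduced, is to compare character-trigram frequency profiles.

```python
# Illustrative distance between two text excerpts: build character-trigram
# frequency profiles and compare them with cosine distance. This is a
# stand-in for the paper's measure; it only shows that a stable numeric
# distance can be computed from ~1000-character samples.

from collections import Counter
import math

def trigram_profile(text):
    """Count all overlapping character trigrams in the excerpt."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_distance(p, q):
    """0.0 for identical profiles, 1.0 for profiles sharing no trigrams."""
    keys = set(p) | set(q)
    dot = sum(p[k] * q[k] for k in keys)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / (norm_p * norm_q)
```

Computing this distance between excerpts from different decades, and regressing it against their temporal separation, is one way to operationalize a "rate" of language change.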


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号