Similar Documents
 20 similar documents found (search time: 703 ms)
1.
TACT, a freeware program from the University of Toronto's Centre for Computing in the Humanities, is a highly sophisticated tool for text retrieval; although written for experienced critics and researchers, it can teach undergraduate students to read literature in new, fresh ways. Without requiring that the user become a programmer, linguist, mathematician, or statistician, TACT introduces the literature student to the computer as a research tool. Studies of imagery and symbolism, of structural patterns, and of prosody can result from the student's careful tagging of a literary text and can yield significant insights into the work of literature. Students who use the computer as such a tool learn to read literary texts more closely and to think more clearly about literary problems. Mark Hawthorne, Professor of English at James Madison University, has published books on Maria Edgeworth and John and Michael Banim and articles in Modern Fiction Studies, Studies in Romanticism, Victorian Poetry, and Modern Language Notes. His research interests include Anglo-Irish literature, computer applications, and postmodern literature.

2.
The effect of explaining the value of text review was studied. Students (n = 136) were randomly assigned to read a text passage displayed by computer with or without an explanation and in three presentation modes: required or optional review when answers to adjunct questions were incorrect, or reading the text without questions. Review groups learned more than those merely reading the text, and an interaction between students' prior knowledge and explanations indicated that explanation facilitated the learning of students with little familiarity with the material, while slightly impairing knowledgeable students' performance. The implications of these findings for using computers for such training and for ATI research are discussed.

3.
Texts on-line     
The study of signs is divided between those scholars who use the Saussurian binary sign (semiology) and those who prefer Charles Peirce's tripartite sign (semiotics). The common view of the opposition between the two types of signs does not take into consideration the methodological conditions of applicability of these two types of signs. This is particularly important in the field of literary studies and hence for the preparation of electronic programs for text analysis. The Peircian sign explicitly entails the discovery of a truth of meaning that claims to be universal and not reducible to a collection of opinions based on fragmented information; it also imposes the task of elucidating a transhistorical and universal signification encoded in a text. Contrary to Peirce's view of the sign, our use of computer programs for text analysis, however, demonstrates that we implicitly treat every literary text as a set of linguistic data (letters, phonemes, syntagmatic segments, etc.) which are reducible to units that can be treated separately. A brief comparison of the results obtained from computer analyses of the French poet Stéphane Mallarmé's text, “Le Cygne,” with those obtained from two Peircian analyses (by Riffaterre and Champigny) of the same text demonstrates that our current methods of computer textual analysis are based on a Saussurian semiology, which is unidimensional and limited, and that these methods are still quite unable to produce a semiotic interpretation based on a totalizing hierarchy of the text's various discursive components.

4.
This article uses recent work on the computer-aided analysis of texts by the French writer Céline as a framework to discuss Olsen's paper on the current state of computer-aided literary analysis. Drawing on analysis of syntactic structures, lexical creativity and use of proper names, it makes two points: (1) given a rich theoretical framework and sufficiently precise models, even simple computer tools such as text editors and concordances can make a valuable contribution to literary scholarship; (2) it is important to view the computer not as a device for finding what we as readers have failed to notice, but rather as a means of focussing more closely on what we have already felt as readers, and of verifying hypotheses we have produced as researchers.
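The "simple computer tools" the article credits, text editors and concordances, can be sketched in a few lines. Below is a minimal keyword-in-context (KWIC) concordance in Python; the sample sentence and window width are illustrative, not drawn from the Céline corpus.

```python
def concordance(text, keyword, width=3):
    """Minimal keyword-in-context (KWIC) concordance: collect
    `width` words of context on either side of each hit."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower().strip('.,;:!?"') == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            hits.append((left, w, right))
    return hits

# Illustrative sentence, not from the Céline corpus.
sample = "The voyage ended. The voyage began again, and the voyage went on."
lines = concordance(sample, "voyage", width=2)
```

Even a tool this small supports the article's first point: aligned contexts make recurring syntactic and lexical patterns visible at a glance.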

5.
This experiment extended the Computers Are Social Actors (CASA) paradigm by examining how output modality (text plus cartoon character vs. synthetic speech), computer gender (male vs. female), and user gender (male vs. female) moderate the ways in which people respond to computers that flatter. Specifically, participants played a trivia game with a computer, which they knew might provide incorrect answers. Participants in the generic-comment condition received strictly factual feedback, whereas those in the flattery condition were given additional remarks praising their performance. Consistent with the study by Fogg and Nass [1997. Silicon sycophants: the effects of computers that flatter. International Journal of Human–Computer Studies 46, 551–561], flattery led to more positive overall impressions and performance evaluations of the computer, but such effects were found only in the text plus character condition and among women. In addition, flattery increased participants' suspicion about the validity of the computer's feedback and lowered conformity to the computer's suggestions. Participants conformed more to the male than female computers when computer gender was manifested in gendered cartoon characters in the text condition, with no corresponding effects in the speech condition. Results suggest that synthetic speech output might suppress social responses to computers, such as flattery effects and gender stereotyping.

6.
Zipf's first and second laws define two striking phenomena in literary text. The two laws have applications in various fields of computer science. Recently, the study of continuous speech recognition in artificial intelligence has called for the use of statistical models of text generation. A major issue is the lack of effective and objective evaluation of the models. In this paper, four leading statistical models of text generation are evaluated with respect to Zipf's laws and we identify the Simon-Yule model as a promising approach. A significant implication of the findings for text modeling is also discussed.
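Zipf's first law predicts that the frequency of the r-th most common word falls off roughly as 1/r^s with s close to 1. As an illustration of the kind of check such model evaluation rests on (a sketch, not the paper's procedure), the exponent can be estimated from a rank-frequency table by a log-log least-squares fit:

```python
import math
from collections import Counter

def zipf_exponent(text):
    """Estimate the exponent s of Zipf's first law, f(r) ~ C / r**s,
    by least-squares regression of log frequency on log rank."""
    freqs = sorted(Counter(text.split()).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return -cov / var  # negated slope: s in f(r) ~ 1/r**s

# Toy corpus built to be exactly Zipfian: word i occurs 60 // i times,
# and each 60 // i is an exact divisor relationship for i = 1..6.
toy = " ".join(f"w{i} " * (60 // i) for i in range(1, 7))
s = zipf_exponent(toy)
```

A generated text whose fitted s strays far from that of natural literary text is evidence against the generating model, which is the spirit of the evaluation described above.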

7.
This study investigated how Chinese undergraduate college students studying English as a foreign language learned new vocabulary with inference-based computer games embedded in eBooks. The investigators specifically examined (a) the effectiveness of computer games (using inferencing) in eBooks, compared with hardcopy booklets for vocabulary retention, and (b) the relationship between students' performance on computer games and performance on a vocabulary test. A database recorded students' game playing behaviors in the log file. Students were pre- and post-tested on new vocabulary words with the Vocabulary Knowledge Scale. Participants learned significantly more vocabulary (p < .0005) in the computer game condition (web-based text and computer games) than in the control condition (their usual study method, hardcopy text, lists of words and multiple-choice questions). Students' scores in the games correlated significantly with their vocabulary post-test scores (r = .515, p < .01).

8.
Conclusion
The objective of this work is not to replicate subjective impressions and certainly not to supplant them, but to explore means by which the second dimension of literary impact, qualities of emotional expression, can be objectively studied through the collection and display of measures made possible by the computer. With that goal, this paper has illustrated two approaches to the analysis and display of three fundamental emotional tone scores. The first is the production of a combined score, tension, which has been derived from previous studies of literary text and criterion passages. The second approach is the generation of transition graphs which identify the emotional state of passages of text according to the categories proposed in Mehrabian's theoretical system. Both of these approaches to the modeling of emotional tone scores generate meaningful displays of data which can be used in objective comparisons of different stories and which lead to fresh interpretations of the reasons for their impact on a reader. They can be applied to actual samples of the kind of literature that is spontaneously read for pleasure in addition to being of interest for analytic purposes.
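As a rough illustration of how passages might be assigned to emotional-state categories and a combined tension score, here is a sketch loosely based on the pleasure and arousal dimensions of Mehrabian's system. The quadrant labels and the tension formula are assumptions made for illustration; the paper's exact categories and derivation are not reproduced here.

```python
def emotional_state(pleasure, arousal):
    """Assign one of four illustrative emotional-tone labels from
    the signs of the pleasure and arousal scores (an assumed
    quadrant scheme, not the paper's exact categories)."""
    if pleasure >= 0:
        return "exuberant" if arousal >= 0 else "relaxed"
    return "anxious" if arousal >= 0 else "gloomy"

def tension(pleasure, arousal):
    """One plausible combined 'tension' score: arousal offset by
    pleasure, so aroused-and-unpleasant passages score highest.
    The paper's actual formula is not given in the abstract."""
    return arousal - pleasure

# Trace a story's emotional trajectory passage by passage; the
# edges between successive states form a transition graph.
trajectory = [emotional_state(p, a) for p, a in
              [(0.6, 0.2), (-0.4, 0.7), (-0.2, -0.5), (0.5, -0.1)]]
```

Plotting the sequence of states, or the counts of transitions between them, gives the kind of display the paper uses to compare stories objectively.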

9.
Building on literary theory and data from a field study of text in chemotherapy, this article introduces the concept of intertext and the associated concepts of corpus and intertextuality to CSCW. It shows that the ensemble of documents used and produced in practice can be said to form a corpus of written texts. On the basis of the corpus, or subsections thereof, the actors in cooperative work create intertext between relevant (complementary) texts in a particular situation, for a particular purpose. The intertext of a particular situation can be constituted by several kinds of intertextuality, including the complementary type, the intratextual type and the mediated type. In this manner the article aims to systematically conceptualise cooperative actors' engagement with text in text-laden practices. The approach is arguably novel and beneficial to CSCW. The article also contributes a discussion of how computers can enable the activity of creating intertext. This is a key concern for cooperative work, as intertext is central to text-centric work practices such as healthcare.

10.
Literary criticism places fictional work in historical, social and psychological contexts to offer insights about the way that texts are produced and consumed. Critical theory offers a range of strategies for analysing what a text says and, just as importantly, what it leaves unsaid. Literary analyses of scientific writing can also produce insights about how research agendas are framed and addressed. This paper provides three readings of a seminal ubiquitous computing scenario by Marc Weiser. Three approaches from literary and critical theory are demonstrated in deconstructive, psychoanalytic and feminist readings of the scenario. The deconstructive reading suggests that alongside the vision of convenient and efficient ubiquitous computing is a complex set of fears and anxieties that the text cannot quite subdue. A psychoanalytic reading considers what the scenario is asking us to desire and identifies the dream of surveillance without intrusion. A final feminist reading discusses gender and collapsing distinctions between public and private, office and home, family and work life. None of the readings is suggested as the final truth of what Weiser was "really" saying. Rather they articulate a set of issues and concerns that might frame design agendas differently. The scenario is then re-written in two pastiches that draw on source material with very different visions of ubiquitous computing. The Sal scenario is first rewritten in the style of Douglas Adams' Hitchhiker's Guide to the Galaxy. In this world, technology is broken, design is poor and users are flawed, fallible and vulnerable. The second rewrites the scenario in the style of Philip K. Dick's novel Ubik. This scenario serves to highlight what is absent in Weiser's scenario and indeed most design scenarios: money. The three readings and two pastiches underline the social conflict and struggle more often elided or ignored in the stories told in ubicomp literature.
It is argued that literary forms of reading and writing can be useful in both questioning and reframing scientific writing and design agendas.

11.
In this paper, we address the problem of determining literary writing style by comparing the randomness of two given texts. We attempt to determine whether these texts are generated from distinct probability sources, which would reveal a difference between the literary writing styles of the corresponding authors. We propose a new approach based on incorporating the known Friedman-Rafsky two-sample test into a multistage procedure with the aim of stabilizing the process. A sampling procedure constructed by applying the N-grams methodology simulates samples drawn from the pooled text in order to evaluate the null-hypothesis distribution that arises when the writing styles coincide. Next, samples from the different files are selected, and the p-values of the test statistics are calculated. The empirical distribution of these values is compared many times with the uniform distribution on the interval [0, 1], and the writing styles are recognized as different if the rejection fraction in this sequence of comparisons is significantly greater than 0.5. The proposed approach is language independent within the family of alphabetic languages and does not rely on linguistic analysis. Unlike most existing methods, it does not involve determining any authorship attributes. The text itself (more precisely, the distribution of sequential text templates and their mutual occurrences) essentially identifies the style. Experiments demonstrate the strong capability of the proposed method.
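The multistage logic (resample from the pooled text, test, count rejections against the 0.5 threshold) can be sketched as follows. Note the stand-in: a permutation test on character-bigram distributions replaces the Friedman-Rafsky minimal-spanning-tree statistic, which is substantially more involved; everything else follows the description above.

```python
import random
from collections import Counter

def bigram_dist(text):
    """Character-bigram frequency table (a crude N-gram template)."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def l1_distance(a, b):
    """L1 distance between two normalized frequency tables."""
    na, nb = sum(a.values()), sum(b.values())
    return sum(abs(a[k] / na - b[k] / nb) for k in set(a) | set(b))

def styles_differ(text_a, text_b, rounds=20, trials=10, alpha=0.05, seed=0):
    """Multistage sketch: estimate the null distribution of the
    distance by re-splitting the shuffled pooled text; compute a
    p-value per round; declare the styles different if the
    rejection fraction exceeds 0.5.  The bigram permutation test
    stands in for the Friedman-Rafsky statistic (an assumption)."""
    rng = random.Random(seed)
    observed = l1_distance(bigram_dist(text_a), bigram_dist(text_b))
    pooled = list(text_a + text_b)
    rejections = 0
    for _ in range(rounds):
        null = []
        for _ in range(trials):
            rng.shuffle(pooled)
            half = len(pooled) // 2
            null.append(l1_distance(
                bigram_dist("".join(pooled[:half])),
                bigram_dist("".join(pooled[half:]))))
        p = sum(d >= observed for d in null) / len(null)
        rejections += (p <= alpha)
    return rejections / rounds > 0.5
```

With two texts over disjoint alphabets the observed distance is maximal and every round rejects; with identical texts the observed distance is zero and no round rejects, matching the rejection-fraction criterion above.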

12.
Evaluating OS/2     
Abstract

Every once in a while the technologists get out of hand and make changes that affect existing programs — that is, affect them so that they don't work any more. By the time you read this, the newest of such changes, IBM's OS/2 and its Microsoft version, should be available from your local computer store.

13.
Currently most literary critics reject the use of science and technology to gain information about texts, while most computer text-analysts have become absorbed in science and technology and forgotten they were seeking information about literature. Whether these two trends will continue into the 1990s remains to be seen; that they explain a good deal about the world we work in now can, I think, be demonstrated. This essay looks at the questions of what literary computing could offer to literary critics, why computer users get lost in scientific jargon, what happens when text becomes input and, most importantly, what happens when text becomes output; it closes with a discussion of why the synthesis will be so difficult. Rosanne G. Potter is associate professor in the English Department at Iowa State University, Ames, Iowa 50011.

14.
The focus of this study is first, the qualitative changes within the human agent as a result of extensive computer tool use (over 5 years), also described as the effect of tool use [Pea, R. D. (1985). Beyond amplification: using the computer to reorganize mental functioning. Educational Psychologist, 20(4), 167–182; Salomon, G. (1990). Cognitive effects with and of computer technology. Communication Research, 17(1), 26–44], and second, the "quantitative changes in accomplishment" of the human agent in the presence of computer tools, also described as effect with-tools [Pea (1985, p. 57); Salomon (1990)]. This research used ill-structured problem solving as the task and experts with more than 6 years of domain and tool experience to document the changes in their knowledge structures. The study also compared the differences between the ill-structured problem solving with and without the computer tool to identify differences that may be a result of the computer's presence.

15.
A brief, problem‐oriented phase such as an inventing activity is one potential instructional method for preparing learners not only cognitively but also motivationally for learning. Student teachers often need to overcome motivational barriers in order to use computer‐based learning opportunities. In a preliminary experiment, we found that student teachers who were given paper‐based course material spent more time on follow‐up coursework than teachers who were given a well‐developed computer‐based learning environment (CBLE), leading to higher learning outcomes. Thus, we tested inventing as an instructional method that may help overcome motivational barriers of teachers' use of computer‐supported tools or learning environments in our main experimental study (N = 44). As a computer‐based environment, we used the ‘Assessment of Learning strategies in Learning journals’. The inventing group produced ideas about criteria to evaluate learning strategies based on student cases prior to the learning phase. The control condition read a text containing possible answers to the inventing problem. The inventing activity enhanced motivation prior to the learning phase and assessment skills as assessed by transfer problems. Hence, the inventing activity prepared student teachers to learn from a CBLE in a motivational as well as cognitive way.

16.
Identifying a person's likeness is made more intricate by cultural factors. In the upper Middle Ages, a literary account (the Hebrew Chronicle of Ahimaaz, Italy, 1054) would have a father recognize his son even when the bewitched son's likeness is, alas, asinine, the enabling factor being (as in Early Modern narratives) the call of the blood. Even those who dislike the field of "literature and law" making its entry at schools of law will hopefully find merit in this article, as it shows how an intricate story of action and epistemic states can be set in formulae. My formalism is for a sample case from fiction, namely, a medieval literary text concerning a person being identified based on a portrait, notwithstanding that person's claim of a different identity. The Middle English Kyng Alisaunder relates that while in India, Alexander the Great passed himself off as somebody else (General Antigonus) when a local prince visited him to seek his support, yet Queen Candace exposed the trick on the evidence of a portrait. Representational features involved in the formulae include: beliefs; seeing to it that something happens; setting and achieving a goal; taking an assumed identity; perceptions by various sensorial modes; communicating a proposition or giving an order; giving testimony about perceptual evidence or about one's own past actions, beliefs, and goals; as well as having a portrait made by somebody else so that a third party could be recognized on its evidence.

17.
Abstract

In the present study, text was horizontally advanced in jumps of five character spaces at a time along a single line of 20 character spaces on a computer display. Forty-eight subjects read the text thus presented over four consecutive days, and the text display rate was under either subject or experimenter control. In general, the results showed that the subjects' reading performance increased over the time of the study, indicating that effects of practice existed in reading computer-displayed moving text. On the last day, when the display rate was held constant, giving subjects control resulted in worse comprehension performance than when such control was not given. Implications of these results for reading computer-displayed moving text are discussed.

18.
Bach's cantatas are particularly rich in text imagery, and they typically employ chromatic melodies to accentuate the more piquant literary images, especially in recitatives. Heretofore theories about the intentionality of Bach's compositional choices in this regard have necessarily remained conjectural. In the following study, an objective measurement of pitch diversity in the vocal lines of Bach's church cantata recitatives in relation to literary themes was made possible with specially designed computer software allowing pertinent information to be entered efficiently into a relational database. Because the software tracked not only the 90,000 pitches constituting the vocal lines of these movements but also other attributes (e.g., overall length, presence or absence of accompaniment, opening and closing keys, chronological position, among others), interrelationships among the various attributes could be examined. Findings demonstrate clear correlation between pitch diversity and the degree of affective tension implied by particular textual subjects. While the findings do not prove exclusive causation (other factors such as tonal and structural considerations, social occasion, and evolution of style can also play a role), they do link the two elements, especially in light of Bach's method of composition as documented by Robert Marshall. This study is important for its systematic and comprehensive approach, its findings giving definition and clarity to commonly held generalizations about the relationships between melodic chromaticism (of which pitch diversity is an important aspect and indicator) and textual content. Furthermore, the software holds promise for additional studies of Bach's pitch materials and for studies in other stylistic contexts.
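The study's "pitch diversity" measure is not defined in the abstract; one common stand-in is the Shannon entropy of a melody's pitch-class distribution, sketched below. The MIDI note numbers and the entropy choice are illustrative assumptions, not the study's actual formula.

```python
import math
from collections import Counter

def pitch_diversity(pitches):
    """Shannon entropy (in bits) of the pitch-class distribution of
    a melodic line; pitches are MIDI note numbers, octave-folded to
    12 classes.  An illustrative stand-in for the study's measure."""
    counts = Counter(p % 12 for p in pitches)
    n = len(pitches)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

diatonic = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale
chromatic = list(range(60, 72))              # all 12 semitones
d_major, d_chrom = pitch_diversity(diatonic), pitch_diversity(chromatic)
```

A fully chromatic line maximizes the measure (log2 of 12, about 3.58 bits), while a diatonic line scores lower, capturing the intuition that chromaticism raises pitch diversity.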

19.
A large-scale, cluster-randomized controlled field trial (47 classrooms, 1,013 students) assessed the impact of a digital text-to-speech reading material that supported 8-year-olds' decoding and reading comprehension. An active control group used the most prevalent Danish learning material with a research-based systematic, explicit phonics approach supporting primarily decoding. The digital tool allows children to read unfamiliar text for meaning. Students are supported in mapping between orthography and phonology by three levels of text-to-speech support and in identifying spelling patterns. The risk of students overusing text-to-speech was countered by postponing access to having words read aloud by directing students towards identifying and training relevant orthographic patterns before activating text-to-speech. Results showed no statistically significant difference in decoding, but treatment improved reading comprehension. The study demonstrates how digital tools can facilitate strengthening students' decoding skills as efficiently as a traditional phonics-based programme while students are reading text of relatively high orthographic complexity for meaning.

20.
The rapid growth of biomedical literature has prompted the biomedical text-mining community to explore methods for accessing and managing this ever-increasing body of knowledge. One important text-mining task in biomedical literature is gene mention normalization, which recognizes biomedical entities in texts and maps each gene mention discussed in the text to a unique database identifier. In this work, we employ an information-retrieval-based method that extracts a gene mention's semantic profile from PubMed abstracts for gene mention disambiguation. This disambiguation method focuses on generating a more comprehensive representation of the gene mention rather than relying on clues, such as the Gene Ontology, that have fewer co-occurrences with the gene mention. Furthermore, we use an existing biomedical resource as another disambiguation method. We then extract features from the gene mention detection system's output to build a false-positive filter based on documents retrieved from Wikipedia. Our system achieved an F-measure of 83.1% on the BioCreative II GN test data.
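The retrieval-based disambiguation idea (compare a mention's surrounding context against a "semantic profile" built from abstracts) can be sketched with bag-of-words cosine similarity. The gene identifiers and profile words below are invented for the sketch; the actual system derives profiles from PubMed abstracts and uses far richer features.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context, profiles):
    """Return the identifier whose semantic profile is closest to
    the mention's surrounding context (bag-of-words cosine)."""
    ctx = Counter(context.lower().split())
    return max(profiles, key=lambda gid: cosine(ctx, profiles[gid]))

# Hypothetical identifiers and profile words, invented for the sketch.
profiles = {
    "GENE:001": Counter("kinase phosphorylation signalling tumor".split()),
    "GENE:002": Counter("membrane transport ion channel neuron".split()),
}
best = disambiguate("the kinase drives tumor cell signalling", profiles)
```

Because the profile is assembled from whole abstracts rather than a single ontology entry, it co-occurs with the mention's context far more often, which is the comprehensiveness argument the abstract makes.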


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号