Similar Documents
20 similar documents found
1.
This study reports on a computer-aided investigation of salient differences in the essay idiolects of the Mexican writers Octavio Paz and Rosario Castellanos and suggests that some of them may be linked to gender. It describes the use of ready-made software and computational strategies that require no tagging and only minimal ocular scanning. It suggests some parameters that can be searched and, in most cases, quantified to explore characteristics posited by linguistic and literary scholars, taking into consideration the particular language and culture of the authors.

Estelle Irizarry, professor of Spanish at Georgetown University, is the author of twenty books on Hispanic literature.
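The abstract names no specific tools; as a minimal sketch of the kind of tagging-free stylometric parameters it describes, assuming plain-text essay samples and only the Python standard library (the two short quotations stand in for full essay texts), one might write:

```python
# A minimal sketch of tagging-free stylometry: compare two authors on
# average sentence length, average word length, and type-token ratio.
import re

def stylometric_profile(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

# Placeholder samples; a real study would load complete essays.
paz = "La poesía es conocimiento, salvación, poder, abandono."
castellanos = "Debe haber otro modo de ser humano y libre. Otro modo de ser."

for name, text in [("Paz", paz), ("Castellanos", castellanos)]:
    print(name, stylometric_profile(text))
```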

2.
Statistical information on a substantial corpus of representative Spanish texts is needed in order to determine the significance of data about individual authors or texts by means of comparison. This study describes the organization and analysis of a 150,000-word corpus of 30 well-known twentieth-century Spanish authors. Tables show the computational results of analyses involving sentences, segments, quotations, and word length.

The article explains the considerations that guided content, selection, and sample size, and describes the special editing needed for the input of Spanish text. Separate sections highlight and comment upon some of the findings. The corpus and the tables provide objective data for studies of homogeneity and heterogeneity. The format of the tables permits others to add to the original 30 authors, organize the results by categories, or use the cumulative results for normative comparisons.

Estelle Irizarry is Professor of Spanish at Georgetown University and author of 20 books and annotated editions dealing with Hispanic literature, art, and hoaxes. Her latest book, an edition of Infortunios de Alonso Ramirez, treats the disputed authorship of Spanish America's first novel. She is Courseware Editor of CHum.

3.
This article describes results from a simulation model of rural land use, focusing on how the relative advantages of imitative and nonimitative approaches to land use selection change under different circumstances. It is shown that the success of "imitation" depends in quite complex ways on the type of imitation used, the strategies of other agents with which the imitator is interacting, and aspects of the heterogeneity of the environment.
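The abstract does not specify the model's mechanics; as a purely illustrative toy (the payoff function, parameters, and neighbourhood structure below are all invented), comparing imitative and nonimitative land-use choice on a heterogeneous landscape could look like this:

```python
# Toy model: agents on a ring choose one of two land uses. Imitators copy
# their best-earning neighbour; experimenters keep random trials that pay.
import random

random.seed(1)
N = 100
suitability = [random.random() for _ in range(N)]  # environmental heterogeneity

def payoff(agent: int, use: int) -> float:
    # Land use 0 pays off on high-suitability patches, use 1 on low ones.
    s = suitability[agent]
    return s if use == 0 else 1.0 - s

def run(strategy: str, steps: int = 5000) -> float:
    uses = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        agent = random.randrange(N)
        if strategy == "imitate":
            neighbours = [(agent - 1) % N, (agent + 1) % N]
            best = max(neighbours, key=lambda n: payoff(n, uses[n]))
            if payoff(best, uses[best]) > payoff(agent, uses[agent]):
                uses[agent] = uses[best]   # copying may misfire off-patch
        else:  # nonimitative trial-and-error
            trial = random.randint(0, 1)
            if payoff(agent, trial) > payoff(agent, uses[agent]):
                uses[agent] = trial
    return sum(payoff(i, uses[i]) for i in range(N)) / N

print("imitation:  ", round(run("imitate"), 3))
print("experiment: ", round(run("experiment"), 3))
```

Even this toy shows the abstract's point: copying a neighbour's choice can hurt when the environment differs patch to patch.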

4.
This paper describes a taxonomy for a ubiquitous computing software stack called UbiqStack. Through the lens of the UbiqStack taxonomy, we survey a variety of subsystems designed to be the building blocks from which sophisticated infrastructures for ubiquitous computing are assembled. Our experience shows that many of these building blocks fit neatly into one of the five UbiqStack categories, each containing functionally equivalent components. Effectively identifying the best-fit “Lego pieces”, which in turn determines the composite functionality of the resulting infrastructure, is critical. The selection process, however, is impeded by the lack of convention for labeling these classes of building blocks. The resulting lack of clarity about which ready-made subsystems are available within each class often results in naive re-implementation of ready-made components, in monolithic and clumsy implementations, and in implementations that impose non-standard interfaces on the applications above. This paper describes the UbiqStack classes of subsystems and explores each in light of the experience gained over two years of active development of both ubiquitous computing applications and software infrastructures for their deployment.

5.
This article first briefly discusses the use of the computer in three fields of historical research in Norway: text retrieval in medieval documents, roll-call analysis, and the study of social history and historical demography. The treatment of highly structured source material such as censuses is then explored more fully, especially the coding of information about family status, occupation, and birthplace. In order to standardize this information, historians have developed several coding schemes and sophisticated software for the combined use of the full-text and encoded versions.

Gunnar Thorvaldsen is Manager of Research at the Norwegian Historical Data Center, the University of Tromsø. His main research interests are migration and record linkage. He has published several articles on historical computing, e.g. "The Preservation of Computer Readable Records in the Nordic Countries", History and Computing, 4 (1992).

6.
Imitation is a powerful mechanism for efficient learning of novel behaviors that both supports and takes advantage of sociality. A fundamental problem for imitation is to create an appropriate (partial) mapping between the body of the system being imitated and that of the imitator. By considering for each of these two systems an associated automaton (respectively, transformation semigroup) structure, attempts at such a mapping can be considered (partial) relational homomorphisms. This article shows how mathematical techniques can be applied to characterize how far a behavior is from a successful imitation and how to evaluate attempts at imitation arising from a particular correspondence between the imitator and the model. For the imitator and the imitated, affordances in the agent-environment structural coupling are likely to be different, all the more so in the case of dissimilar embodiment. We argue that the use of what is afforded to the imitator to attain corresponding effects or, as in dance, sequences of effects, is necessary and sufficient for successful imitation. However, the judged degree of success or failure of an attempted behavioral match depends on criteria on the effects of the attempted imitative behavior (including effects attained successively as well as final effects) that are externally imposed or, in the case of autonomous agents, internally determined. These criteria correspond to metrics, i.e., measures of difference, which can guide the evaluation of a correspondence, the learning of a correspondence, or learning how to apply one. Metrics on states and sequences of action events in the system-environment coupling allow judgment of similarity for observer-dependent purposes. This allows one to formally define successful imitation with respect to such criteria. The resulting measures can be used to compare various candidate mappings (e.g., body plan or perception-action correspondences). Additionally, this may be applied in the automated construction and learning of mappings to be used in imitation for artificial, hardware, and software systems.
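The article works at the level of automata and semigroups; as a much-simplified concrete stand-in for its metrics on sequences of action events (the action labels below are invented), an edit-distance score between a model's and an imitator's action sequences might be sketched as:

```python
# Sketch: score an attempted imitation by the edit (Levenshtein) distance
# between the model's and the imitator's sequences of discrete actions.
def edit_distance(a: list, b: list) -> int:
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

model    = ["raise_arm", "step_left", "turn", "lower_arm"]
imitator = ["raise_arm", "turn", "lower_arm"]

d = edit_distance(model, imitator)
print(f"distance {d}; similarity {1 - d / max(len(model), len(imitator)):.2f}")
```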

7.
In 1975, Academician Glushkov proposed the concept of conveyor (assembly-line) production of software from ready-made programs. This paper presents new theoretical results and analyzes the development of this concept on the basis of examples of earlier and current software factories. The analysis makes it possible to identify the emergence of two basic production concepts: the interface as a stub for transmitting and transforming given software, and the interface as an integrated environment for assembling various ready-made products in some programming languages. Over the years, these concepts have been continually improved and have become the basis of the modern software factory, including an assembly infrastructure that uses human, technological, and tool resources for assembling ready-made programs. Within the framework of such a system, new interface tools will be developed for heterogeneous programs to convert standard data types to those available in many programming languages.

8.
CAI Software for X-Ray Diffraction Analysis
朱敏  朱彦 《计算机应用》1998,18(10):26-28
This paper describes the design concepts and design methods of computer-aided instruction (CAI) software for X-ray diffraction analysis.
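The abstract gives no technical detail, but the physics such courseware teaches centers on Bragg's law, nλ = 2d sin θ; a minimal sketch of the kind of calculation a CAI exercise might automate (the sample wavelength and d-spacing are illustrative) is:

```python
# Sketch: solve Bragg's law (n*lambda = 2*d*sin(theta)) for the
# diffraction angle, as an X-ray diffraction teaching aid might.
import math

def bragg_angle_deg(wavelength_nm: float, d_spacing_nm: float, order: int = 1) -> float:
    s = order * wavelength_nm / (2.0 * d_spacing_nm)
    if s > 1.0:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Cu K-alpha radiation (~0.15406 nm) on the Al (111) plane (d ~ 0.2338 nm),
# first order; the expected peak lies near 2-theta = 38.5 degrees.
print(f"2-theta = {2 * bragg_angle_deg(0.15406, 0.2338):.2f} degrees")
```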

9.
In our work, we review and empirically evaluate five different raw methods of text representation that allow automatic processing of Wikipedia articles. The main contribution of the article, an evaluation of approaches to text representation for machine learning tasks, indicates that text representation is fundamental for achieving good categorization results. The analysis of the representation methods creates a baseline that cannot be compensated for even by sophisticated machine learning algorithms. It confirms the thesis that proper data representation is a prerequisite for achieving high-quality results of data analysis. Evaluation of the text representations was performed within the Wikipedia repository by examination of classification parameters observed during automatic reconstruction of human-made categories. For that purpose, we use a classifier based on a support vector machines method, extended with multilabel and multiclass functionalities. During classifier construction we observed parameters such as learning time, representation size, and classification quality that allow us to draw conclusions about text representations. For the experiments presented in the article, we use data sets created from Wikipedia dumps. We describe our software, called Matrix’u, which allows a user to build computational representations of Wikipedia articles. The software is the second contribution of our research, because it is a universal tool for converting Wikipedia from a human-readable form to a form that can be processed by a machine. Results generated using Matrix’u can be used in a wide range of applications that involve usage of Wikipedia data.
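The paper's own tool is Matrix’u and it does not name an SVM implementation; as a generic illustration of the setup it describes (a text representation fed to a multilabel SVM reconstructing categories), a sketch using scikit-learn, which is an assumption here, with a toy three-document corpus might be:

```python
# Sketch: TF-IDF text representation + one-vs-rest linear SVM for
# multilabel category reconstruction, in the spirit of the experiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

docs = [
    "The Riemann hypothesis concerns the zeros of the zeta function.",
    "The midfielder scored twice in the final match of the season.",
    "Number theory and football statistics both rely on mathematics.",
]
labels = [{"mathematics"}, {"sport"}, {"mathematics", "sport"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)          # binary indicator matrix per label

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)     # one representation; the paper compares five

clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
pred = clf.predict(vectorizer.transform(["A theorem about prime numbers."]))
print(mlb.inverse_transform(pred))
```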

10.
Language can be a tool to marginalize certain groups, because it may reflect a negative mentality caused by mental barriers or historical backwardness. To prevent such misuse of language, various organizations have carried out campaigns against discriminatory language, criticizing the use of certain terms and phrases. However, there is an important gap in detecting discriminatory text in documents, because language is very flexible and usually contains hidden features or relations. Furthermore, adapting the approaches and methodologies proposed in the literature for text analysis is complex, because those proposals are too rigid to be adapted to purposes other than the ones for which they were intended. This article presents a methodology for building text analyzers. Its main novelty is the use of ontologies to implement the rules used by the developed text analyzer, which provides great flexibility in developing text analyzers and exploits the ontologies' ability to infer knowledge. A set of rules for detecting discriminatory language relating to gender and to people with disabilities is also presented, to show how the text analyzer's functionality can be extended to different areas of discriminatory text.
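The paper's rules live in an ontology; as a deliberately simplified stand-in (the patterns and the suggested alternatives below are invented for illustration, and an ontology would replace the plain rule list), a rule-driven detector for discriminatory phrasing could look like:

```python
# Sketch: flag discriminatory phrasing with externalized rules, so the
# rule set can be swapped or extended without touching the analyzer.
import re

# Each rule: a regex pattern plus a suggested inclusive alternative.
RULES = [
    (r"\bchairman\b", "chairperson"),
    (r"\bmanpower\b", "workforce"),
    (r"\bthe disabled\b", "people with disabilities"),
]

def analyze(text: str) -> list[tuple[str, str]]:
    findings = []
    for pattern, suggestion in RULES:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

doc = "The chairman asked whether the disabled need extra manpower."
for found, suggestion in analyze(doc):
    print(f"flagged {found!r}; consider {suggestion!r}")
```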

11.
Because the communication processes of computer network protocols cannot be observed directly, this paper analyzes the characteristics of network protocols and then uses the free NS-2 network simulator to implement example simulations of upper-layer protocols. Applying NS-2 in teaching helps students understand the principles behind network protocols more intuitively and improves teaching effectiveness.

12.
The intent of this article is to provide the reader with a historical perspective on software vulnerability assessment. This historical overview examines the lessons learned from the period when formal approaches were applied to system certification and validation; the period when ‘simplistic’ tools were introduced to perform the tasks of vulnerability assessment; a macroscopic approach to studying the fundamental output of the complex nonlinear system known as software development; and finally the present, where state-of-the-art tools and methodologies are beginning to apply the principles of formal methods to the evaluation of software. The events and lessons learned from each of these periods will become evident to the reader, and the article concludes with a requirement set and an outline for moving vulnerability analysis into the future.

13.
《Knowledge》1999,12(3):113-127
The article introduces argumentation theory, gives examples of computational models of argumentation and of their applications, and outlines the significance and difficulties of the field. It then describes a system that used rhetorical reasoning rules, such as fairness, reciprocity, and deterrence, to simulate the text of a debate. The text was modelled using modern argumentation theory, and this model was used to build the system. The article discusses the system with regard to several aspects: its ability to deal with meaningful contradiction, such as claim X supporting claim Y while claim Y attacks claim X; recursive arguments; inconsistency maintenance; expressiveness; encapsulation; the use of definitions as the basis for rules; and making generalisations using taxonomies. The article concludes with a discussion of domain dependence and rule plausibility, and some comparisons with formal logic.
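As a toy illustration of the support/attack structure mentioned above (the system itself is not described at this level of detail, and the claims here are invented), an argument graph containing the X-supports-Y, Y-attacks-X pattern can be represented and queried like this:

```python
# Sketch: a tiny argument graph with typed edges, capturing the
# "X supports Y, yet Y attacks X" form of meaningful contradiction.
SUPPORTS, ATTACKS = "supports", "attacks"

edges = [
    ("X: sanctions pressure the regime", SUPPORTS, "Y: the regime negotiates"),
    ("Y: the regime negotiates",         ATTACKS,  "X: sanctions pressure the regime"),
]

def relations(claim: str):
    outgoing = [(rel, dst) for src, rel, dst in edges if src == claim]
    incoming = [(rel, src) for src, rel, dst in edges if dst == claim]
    return outgoing, incoming

claims = {src for src, _, _ in edges} | {dst for _, _, dst in edges}
for claim in sorted(claims):
    outgoing, incoming = relations(claim)
    # True when this claim supports another claim that attacks it back.
    mutual = any(rel == SUPPORTS
                 for rel, other in outgoing
                 for rel2, other2 in incoming
                 if rel2 == ATTACKS and other == other2)
    print(f"{claim!r}: supports-but-is-attacked-back = {mutual}")
```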

14.
Accurate estimation of software development effort is critical in software engineering. Underestimates lead to time pressures that may compromise full functional development and thorough testing of software. In contrast, overestimates can result in noncompetitive contract bids and/or over-allocation of development resources and personnel. As a result, many models for estimating software development effort have been proposed. This article describes two methods of machine learning, which we use to build estimators of software development effort from historical data. Our experiments indicate that these techniques are competitive with traditional estimators on one dataset, but also illustrate that these methods are sensitive to the data on which they are trained. This cautionary note applies to any model-construction strategy that relies on historical data. All such models for software effort estimation should be evaluated by exploring model sensitivity on a variety of historical data.
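The abstract does not name the two learning methods; as a generic illustration of learning an effort estimator from historical project records (the toy dataset and the choice of a scikit-learn regression tree are assumptions), one might write:

```python
# Sketch: fit a regression tree to (size, team experience) -> effort
# records from past projects, then estimate a new project.
from sklearn.tree import DecisionTreeRegressor

# Toy historical data: [KLOC, mean team experience in years]
X = [[10, 2], [25, 3], [40, 5], [60, 4], [80, 6], [120, 5]]
effort_pm = [24, 55, 80, 140, 175, 290]   # person-months actually spent

model = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, effort_pm)
print(f"estimated effort: {model.predict([[50, 4]])[0]:.0f} person-months")

# The article's caution applies here too: the fitted tree is only as
# good as the historical data, so re-evaluate it on other datasets.
```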

15.
16.
Materials and process specifications are complex semi-structured documents containing numeric data, text, and images. This article describes a coarse-grained extraction technique to automatically reorganize and summarize specification content. Specifically, a semantic-markup strategy for capturing content within a semantic ontology, suited to semi-automatic extraction, has been developed and experimented with. The working prototypes were built in the context of Cohesia's existing software infrastructure and use techniques from information extraction, XML technology, etc.
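The article's actual markup scheme is not reproduced in the abstract; as a minimal sketch of the general idea (the element names and the sample sentence are invented), wrapping values extracted from spec prose in semantic XML might look like:

```python
# Sketch: mark up a numeric requirement found in free text as semantic
# XML so downstream tools can extract it without re-parsing prose.
import re
import xml.etree.ElementTree as ET

sentence = "Heat treat at 520 C for 45 min before quenching."

spec = ET.Element("specClause")
ET.SubElement(spec, "text").text = sentence

m = re.search(r"(\d+)\s*C\b", sentence)
if m:
    temp = ET.SubElement(spec, "temperature", unit="C")
    temp.text = m.group(1)

m = re.search(r"(\d+)\s*min\b", sentence)
if m:
    dur = ET.SubElement(spec, "duration", unit="min")
    dur.text = m.group(1)

print(ET.tostring(spec, encoding="unicode"))
```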

17.
We describe the reconstruction of a phylogeny for a set of taxa, with a character-based cladistics approach, in a declarative knowledge representation formalism, and show how to use computational methods of answer set programming to generate conjectures about the evolution of the given taxa. We have applied this computational method in two domains: historical analysis of languages and historical analysis of parasite-host systems. In particular, using this method, we have computed some plausible phylogenies for Chinese dialects, for Indo-European language groups, and for Alcataenia species. Some of these plausible phylogenies are different from the ones computed by other software. Using this method, we can easily describe domain-specific information (e.g., temporal and geographical constraints), and thus prevent the reconstruction of some phylogenies that are not plausible. This paper is a revised and extended version of [3].
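The authors' method is answer set programming, which the abstract does not detail; as a far simpler, purely illustrative fragment of character-based phylogenetics (not their algorithm), the classic four-gamete compatibility test for binary characters can be written as:

```python
# Sketch: four-gamete test. Two binary characters are compatible with a
# common (perfect) phylogeny iff their state pairs across taxa do not
# include all four combinations 00, 01, 10, 11.
def compatible(char_a: list[int], char_b: list[int]) -> bool:
    return len(set(zip(char_a, char_b))) < 4

# Rows = characters, columns = taxa (toy data).
c1 = [0, 0, 1, 1]
c2 = [0, 1, 1, 1]
c3 = [0, 1, 0, 1]

print(compatible(c1, c2))  # True: only pairs 00, 01, 11 occur
print(compatible(c1, c3))  # False: all of 00, 01, 10, 11 occur
```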

18.
An increasing number of people are becoming users of unfamiliar software. They may be genuinely "new" computer users, or part of a growing group who are transferring skills and knowledge from a familiar product, such as a word processor, to a functionally similar but different, unfamiliar one. The problem for users in this position is that they do not have access to training courses to teach them how to use such software and are usually forced to rely on text-based documentation. LIY is a method for producing computer-based tutorials to teach the user of a software product. This paper describes how LIY is, in turn: (1) a method for application system design that recognizes the need for tutorial design (a task analysis and user interface specification provide information structures that are passed to the tutorial designer); (2) a support environment for the tutorial designer (in addition to prompting for courseware for nodes in the task analysis, LIY provides a ready-made rule base for constraining the degree of learner control available while the tutorial is in use; the designer is able to tailor this rule base for a specific tutorial); and (3) a tutorial delivery environment (the tutorial adapts to individual learners and offers a degree of learner control).

19.
This paper describes a case study in the use of the COCOMO cost estimation model as a tool to provide an independent prognosis and validation of the schedule of a software project at IBM UK Laboratories Ltd, Hursley. Clearly, case studies run the risk of being anecdotal; however, software engineers often work in situations where sufficient historical data is not available to calibrate models to the local environment, and it is often necessary to use such tools on individual projects to justify their further use. This case study describes how we began to use COCOMO and concentrates on some of the problems and benefits encountered when trying to use it in a live development environment.

The paper begins by discussing some problems in mapping the COCOMO phases onto the IBM development process. The practical aspects of gathering the development parameters of the model are described, and the results of the work are presented in comparison with a schedule assessment using other prognosis techniques and with the planned schedule at other milestones in the project's history. Some difficulties experienced in interpreting the data output from the model are discussed, followed by a brief comparison with other schedule analysis techniques used in quality assurance. We hope this case study shows that, despite the problems of trying to use models such as COCOMO, there are significant benefits, and that it helps the user understand what is required to use such tools more effectively to improve software development cost estimates in the future.
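The study's own calibration details are not in the abstract, but the basic COCOMO equations are standard; a sketch using the published organic-mode coefficients (effort = 2.4 · KLOC^1.05 person-months, schedule = 2.5 · effort^0.38 months), for illustration only since a real project would need its own mode and cost drivers, is:

```python
# Sketch: basic COCOMO (organic mode) effort and schedule estimates.
def cocomo_organic(kloc: float) -> tuple[float, float]:
    effort = 2.4 * kloc ** 1.05       # person-months
    schedule = 2.5 * effort ** 0.38   # calendar months
    return effort, schedule

effort, months = cocomo_organic(32)   # a hypothetical 32 KLOC project
print(f"effort   {effort:.1f} person-months")
print(f"schedule {months:.1f} months")
```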

20.
