Similar Documents
Found 20 similar documents (search time: 62 ms)
1.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts. David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

2.
One recurring theme in the TEI project has been the need to represent non-hierarchical information in a natural way — or at least in a way that is acceptable to those who must use it — using a technical tool that assumes a single hierarchical representation. This paper proposes solutions to a variety of such problems: the encoding of segments which do not reflect a document's primary hierarchy; relationships among non-adjacent segments of texts; ambiguous content; overlapping structures; parallel structures; cross-references; vague locations. David T. Barnard is Professor of Computing and Information Science at Queen's University. His research interests are in structured text processing and the compilation of programming languages. His recent publications include Tree-to-tree Correction for Document Trees, Queen's Technical Report, and Error Handling in a Parallel LR Substring Parser, Computer Languages, 19, 4 (1993), 247–59. Lou Burnard is Director of the Oxford Text Archive at Oxford University Computing Services, with interests in electronic text and database technology. He is European Editor of the Text Encoding Initiative's Guidelines. Jean-Pierre Gaspart is with Associated Consultants and Software Engineers. Lynne A. Price (Ph.D., computer sciences, University of Wisconsin-Madison) is a senior software engineer at Frame Technology Corp. Her main area of research has been representing text structure for automatic processing. She has served on both the US and international SGML standards committees for several years and is the editor of International Standard ISO/IEC 13673 on Conformance Testing for Standard Generalized Markup Language (SGML) Systems. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup.
Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative. Giovanni Battista Varile works for the Commission of the European Communities. This paper is derived from a working paper of the Metalanguage Committee entitled Notes on SGML Solutions to Markup Problems, which was produced following a meeting of the committee in Luxembourg. The co-authors all participated in that meeting and provided input to this paper. Others serving on the committee at other times included David Durand (Boston University), Nancy Ide (Vassar College) and Frank Tompa (University of Waterloo).

3.
This paper discusses the basic design of the encoding scheme described by the Text Encoding Initiative's Guidelines for Electronic Text Encoding and Interchange (TEI document number TEI P3, hereafter simply P3 or the Guidelines). It first reviews the basic design goals of the TEI project and their development during the course of the project. Next, it outlines some basic notions relevant for the design of any markup language and uses those notions to describe the basic structure of the TEI encoding scheme. It also describes briefly the core tag set defined in chapter 6 of P3, and the default text structure defined in chapter 7 of that work. The final section of the paper attempts an evaluation of P3 in the light of its original design goals, and outlines areas in which further work is still needed. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup. Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative. Lou Burnard is Director of the Oxford Text Archive at Oxford University Computing Services, with interests in electronic text and database technology. He is European Editor of the Text Encoding Initiative's Guidelines.

4.
Sets with small generalized Kolmogorov complexity
Summary. We study the class of sets with small generalized Kolmogorov complexity. The following results are established: 1. A set has small generalized Kolmogorov complexity if and only if it is semi-isomorphic to a tally set. 2. The class of sets with small generalized Kolmogorov complexity is properly included in the class of self-p-printable sets. 3. The class of self-p-printable sets is properly included in the class of sets with self-producible circuits. 4. A set S has self-producible circuits if and only if there is a tally set T such that P(T) = P(S). 5. If a set S has self-producible circuits, then NP(S) = NP_B(S), where NP_B(·) is the restriction of NP(·) studied by Book, Long, and Selman [4]. 6. If a set S is such that NP(S) = NP_B(S), then NP(S) ⊆ P(S ⊕ SAT).

5.
Similarity searching in metric spaces has a vast number of applications in several fields like multimedia databases, text retrieval, computational biology, and pattern recognition. In this context, one of the most important similarity queries is the k nearest neighbor (k-NN) search. The standard best-first k-NN algorithm uses a lower bound on the distance to prune objects during the search. Although optimal in several aspects, the disadvantage of this method is that its space requirements for the priority queue that stores unprocessed clusters can be linear in the database size. Most of the optimizations used in spatial access methods (for example, pruning using MinMaxDist) cannot be applied in metric spaces, due to the lack of geometric properties. We propose a new k-NN algorithm that uses distance estimators, aiming to reduce the storage requirements of the search algorithm. The method stays optimal, yet it can significantly prune the priority queue without altering the output of the query. Experimental results with synthetic and real datasets confirm the reduction in storage space of our proposed algorithm, showing savings of up to 80% of the original space requirement.
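A minimal Python sketch of the best-first k-NN strategy described above; the cluster-tree representation (dicts with `center`, `radius`, `children`, and `points` fields) and the triangle-inequality lower bound are illustrative assumptions, not the paper's data structures:

```python
import heapq

def knn_best_first(root, query, k, dist):
    """Best-first k-NN over a cluster tree. The priority queue orders
    clusters by a lower bound on the distance from the query to any
    object inside them; clusters that cannot improve the current
    k-th nearest distance are pruned."""
    pq = [(0.0, 0, root)]   # (lower_bound, tie-breaker, node)
    counter = 1
    result = []             # max-heap of (-distance, object): current k best

    while pq:
        lb, _, node = heapq.heappop(pq)
        # prune: nothing in the queue can beat the current k-th distance
        if len(result) == k and lb >= -result[0][0]:
            break
        for p in node.get('points', []):
            d = dist(query, p)
            if len(result) < k:
                heapq.heappush(result, (-d, p))
            elif d < -result[0][0]:
                heapq.heapreplace(result, (-d, p))
        for child in node.get('children', []):
            # lower bound via the triangle inequality
            clb = max(dist(query, child['center']) - child['radius'], 0.0)
            heapq.heappush(pq, (clb, counter, child))
            counter += 1
    return sorted((-nd, p) for nd, p in result)
```

Pruning stops the search as soon as no queued cluster can improve on the current k-th nearest distance; the distance estimators proposed in the paper additionally aim to keep this queue small.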

Benjamin Bustos is an assistant professor in the Department of Computer Science at the University of Chile. He is also a researcher at the Millennium Nucleus Center for Web Research. His research interests are similarity searching and multimedia information retrieval. He has a doctoral degree in natural sciences from the University of Konstanz, Germany. Contact him at bebustos@dcc.uchile.cl. Gonzalo Navarro earned his PhD in Computer Science at the University of Chile in 1998, where he is now Full Professor. His research interests include similarity searching, text databases, compression, and algorithms and data structures in general. He has coauthored a book on string matching and around 200 international papers. He has (co)chaired the international conferences SPIRE 2001, SCCC 2004, SPIRE 2005, SIGIR Posters 2005, IFIP TCS 2006, and the ENC 2007 Scalable Pattern Recognition track, and belongs to the Editorial Board of the Information Retrieval Journal. He is currently Head of the Department of Computer Science at the University of Chile, and Head of the Millennium Nucleus Center for Web Research, the largest Chilean project in Computer Science research.

6.
Despite a century of research, statistical and computational methods for authorship attribution are neither reliable, well-regarded, widely used, nor well-understood. This article presents a survey of the current state of the art as well as a framework for uniform and unified development of a tool to apply the state of the art, despite the wide variety of methods and techniques used. The usefulness of the framework is confirmed by the development of a tool using that framework that can be applied to authorship analysis by researchers without a computing specialization. Using this tool, it may be possible both to expand the pool of available researchers and to enhance the quality of the overall solutions, for example by incorporating improved algorithms as discovered through empirical analysis (Juola, P. (2004a). Ad-hoc Authorship Attribution Competition. In Proceedings 2004 Joint International Conference of the Association for Literary and Linguistic Computing and the Association for Computers and the Humanities (ALLC/ACH 2004), Göteborg, Sweden).
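As a toy illustration of the kind of algorithm such a tool might wrap (not the framework of the article itself), nearest-profile attribution over a small set of function words can be sketched in a few lines of Python; the marker-word list and Euclidean distance are illustrative choices:

```python
from collections import Counter
import math

# hypothetical marker words; real systems use much larger feature sets
MARKERS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def profile(text):
    """Relative frequencies of the marker words in a text."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in MARKERS)
    n = len(words)
    return [counts[m] / n for m in MARKERS]

def attribute(disputed, candidates):
    """Return the candidate author whose marker-word profile is
    closest (Euclidean distance) to the disputed text's profile."""
    dp = profile(disputed)
    best, best_d = None, math.inf
    for author, text in candidates.items():
        d = math.dist(dp, profile(text))
        if d < best_d:
            best, best_d = author, d
    return best
```

Even this crude scheme conveys the common shape of attribution methods: extract style features, build per-author profiles, and assign the disputed text to the nearest profile.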

7.
The study of the history of new words in the New OED described in this paper was undertaken in 1986–87, and is based on the material then available. Since then, the New OED has been finished, and PAT, the inquiry system developed at the University of Waterloo for the investigation of the New OED database, has been much altered and improved. Nevertheless, this report should prove useful in indicating the potential for analyzing the computerized New OED and some of the problems. This project is a study of the ways in which new words are created in English at various periods of time. A chronological dictionary is created listing words introduced into the language over 50-year increments. These words are then classified by the processes used in forming them to show, in proportional terms, whether certain processes are more common at some times than at others. H. M. Logan, Associate Professor, Department of English, University of Waterloo, Waterloo, Ontario, Canada, has written The Dialect of the Middle English Life of St. Katherine (Mouton, 1973), making use of the computer in a study of medieval dialectology. He has also written articles on computer stylistics and literary analysis in CHum, ALLC Journal, Language and Style, and College Literature, and on the dictionary in Dictionaries.

8.
The Shakespeare Clinic has developed 51 computer tests of Shakespeare play authorship and 14 of poem authorship, and applied them to 37 claimed true Shakespeares, to 27 plays of the Shakespeare Apocrypha, and to several poems of unknown or disputed authorship. No claimant, and none of the apocryphal plays or poems, matched Shakespeare. Two plays and one poem from the Shakespeare Canon, Titus Andronicus, Henry VI, Part 3, and A Lover's Complaint, do not match the others. Ward Elliott is the Burnet C. Wohlford Professor of American Political Institutions at Claremont McKenna College. He is interested in, and has published in, almost everything, including politics, pollution, transportation, smog and Shakespeare. Robert J. Valenza is W.M. Keck Professor of Mathematics and Computer Science at Claremont McKenna College. He has written research articles in mathematics and metaphysics, as well as stylometrics. He is author of Linear Algebra: An Introduction to Abstract Mathematics (Springer-Verlag, 1993).

9.
Knowledge tracing: Modeling the acquisition of procedural knowledge
This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production-rule cognitive model of programming knowledge, called the ideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process called knowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has mastered each rule. The programming tutor, cognitive model, and learning and performance assumptions are described. A series of studies examining the empirical validity of knowledge tracing is reviewed; these studies have led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.
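The per-rule probability update at the heart of knowledge tracing can be sketched as a two-step Bayesian computation: condition the mastery estimate on the observed response, then apply the probability of learning at the practice opportunity. The guess, slip, and learning parameters below are illustrative values, not those fitted in the paper:

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.3):
    """One knowledge-tracing step for a single rule."""
    if correct:
        num = p_know * (1 - p_slip)          # knew the rule and did not slip
        den = num + (1 - p_know) * p_guess   # ... or guessed correctly
    else:
        num = p_know * p_slip                # knew the rule but slipped
        den = num + (1 - p_know) * (1 - p_guess)
    p_given_obs = num / den                  # posterior given the response
    # chance the rule was learned at this practice opportunity
    return p_given_obs + (1 - p_given_obs) * p_learn
```

In a tutor of this kind, a rule is typically declared mastered once the estimate exceeds a threshold such as 0.95, at which point exercises targeting that rule stop being selected.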

10.
This paper traces the history of the Text Encoding Initiative, through the Vassar Conference and the Poughkeepsie Principles to the publication, in May 1994, of the Guidelines for Electronic Text Encoding and Interchange. The authors explain the types of questions that were raised, the attempts made to resolve them, the TEI project's aims, and the general organization of the TEI committees, and they discuss the project's future. Nancy Ide is Associate Professor and chair of Computer Science at Vassar College, and Visiting Researcher at CNRS. She is president of the Association for Computers and the Humanities and chair of the Steering Committee of the Text Encoding Initiative. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup. Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative.

11.
This study reports on a statistical approach to Francophone African literature, addressing the issues of discourse bias and the specificity of female writing as against male. The research is based on a comparison of all the characters present in 20 novels written by male and female African authors, under the headings of importance, power and attitude. It suggests that a number of significant differences characterize the make-up of novels written by African female and male authors. Beverley Ormerod, from Jamaica, is currently associate professor of French at the University of Western Australia. Her publications concerning the literature of the French Caribbean and Francophone Africa include An Introduction to the French Caribbean Novel (London: Heinemann, 1985) and Romancières africaines d'expression française, with J.-M. Volet (Paris: L'Harmattan, 1994). Jean-Marie Volet, born in Switzerland and currently Honorary Research Fellow in the Department of French Studies at the University of Western Australia, is pursuing research in the fields of African literature and women's writing. His publications include La Parole aux Africaines (Amsterdam: Rodopi, 1993) and Romancières africaines d'expression française, with B. Ormerod (Paris: L'Harmattan, 1994). Hélène Jaccomard, a tutor in the Department of French Studies at the University of Western Australia and a translator, has research interests in autobiography and literary theory. She has published a book on Le Lecteur et la lecture dans l'autobiographie française contemporaine (Geneva: Droz, 1994) and articles, including Françoise d'Eaubonne: Accuser (la) Réception, The French Review, 68, 3 (1994).

12.
We construct an algorithm to split an image into a sum u + v of a bounded variation component and a component containing the textures and the noise. This decomposition is inspired by a recent work of Y. Meyer. We find this decomposition by minimizing a convex functional which depends on the two variables u and v, alternately in each variable. Each minimization is based on a projection algorithm to minimize the total variation. We carry out the mathematical study of our method. We present some numerical results. In particular, we show how the u component can be used in non-textured SAR image restoration. Jean-François Aujol graduated from the École Normale Supérieure de Cachan in 2001. He was a PhD student in Mathematics at the University of Nice-Sophia Antipolis (France). He was a member of the J.A. Dieudonné Laboratory at Nice, and also a member of the Ariana research group (CNRS/INRIA/UNSA) at Sophia Antipolis (France). His research interests are calculus of variations, nonlinear partial differential equations, numerical analysis and mathematical image processing (in particular classification, texture, decomposition models, restoration). He is Assistant Researcher at UCLA (Math Department). Gilles Aubert received the Thèse d'État ès Sciences Mathématiques from the University of Paris 6, France, in 1986. He is currently professor of mathematics at the University of Nice-Sophia Antipolis and member of the J.A. Dieudonné Laboratory at Nice, France. His research interests are calculus of variations, nonlinear partial differential equations and numerical analysis; fields of application include image processing and, in particular, restoration, segmentation, optical flow and reconstruction in medical imaging. Laure Blanc-Féraud received the Ph.D. degree in image restoration in 1989 and the Habilitation à Diriger des Recherches on inverse problems in image processing in 2000, from the University of Nice-Sophia Antipolis, France.
She is currently director of research at CNRS in Sophia Antipolis. Her research interests are inverse problems in image processing by a deterministic approach using calculus of variations and PDEs. She is also interested in stochastic models for parameter estimation and their relationship with the deterministic approach. She is currently working in the Ariana research group (I3S/INRIA), which is focussed on Earth observation. Antonin Chambolle studied mathematics and physics at the École Normale Supérieure in Paris and received the Ph.D. degree in applied mathematics from the Université de Paris-Dauphine in 1993. Since then he has been a CNRS researcher at the CEREMADE, Université de Paris-Dauphine, and, for a short period, a researcher at the SISSA, Trieste, Italy. His research interests include calculus of variations, with applications to shape optimization, mechanics and image processing.
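A crude one-dimensional sketch of the u + v splitting in Python: here u minimizes a smoothed ROF-type energy by plain gradient descent, standing in for the projection algorithm used in the paper, and v is the residual carrying oscillations; all parameter values are illustrative:

```python
def tv_denoise_1d(f, lam=0.5, eps=1e-2, steps=2000, lr=0.01):
    """Gradient descent on the smoothed ROF energy
    0.5 * sum (u_i - f_i)^2 + lam * sum sqrt((u_{i+1} - u_i)^2 + eps).
    This is a crude stand-in for the projection algorithm of the paper."""
    u = list(f)
    n = len(u)
    for _ in range(steps):
        g = [u[i] - f[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = lam * d / (d * d + eps) ** 0.5     # smoothed-TV gradient
            g[i] -= w
            g[i + 1] += w
        u = [u[i] - lr * g[i] for i in range(n)]
    return u

def decompose(f, lam=0.5):
    """Split a 1-D signal into structure u and oscillatory residual v."""
    u = tv_denoise_1d(f, lam)
    v = [fi - ui for fi, ui in zip(f, u)]
    return u, v
```

The structure component u has smaller total variation than the input, while small-scale bumps (texture and noise in the image setting) are pushed into v.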

13.
Several recent papers have adapted notions of geometric topology to the emerging field of digital topology. An important notion is that of digital homotopy. In this paper, we study a variety of digitally-continuous functions that preserve homotopy types or homotopy-related properties such as the digital fundamental group. Laurence Boxer is Professor of Computer and Information Sciences at Niagara University, and Research Professor of Computer Science and Engineering at the State University of New York at Buffalo. He received his Ph.D. in Mathematics from the University of Illinois at Urbana-Champaign. His research interests are computational geometry, parallel algorithms, and digital topology. Dr. Boxer is co-author, with Russ Miller, of Algorithms Sequential and Parallel: A Unified Approach, a recent textbook published by Prentice Hall.

14.
Applying the method of discourse structure analysis described by Grosz and Sidner to lyric poetry, one views the poet as the Initiating Conversational Participant, and the reader as the Other Conversational Participant as she recreates the poem upon reading it. In poetry the linguistic and intentional structures function in counterpoint to the metrical and stanzaic structures, respectively, producing the effects that define poetry. Analysis of attentional state can reveal the dynamics of the focussing process in a poem, providing a unique perspective on its operation. More research is needed to extend the theory to adequately handle lyric poetry. Mary Dee Harris, Ph.D., is currently a consultant in Natural Language Processing and Artificial Intelligence in the Washington, DC, area. Her research interests are: interaction of metaphor and discourse structure, knowledge-based natural language processing, and cognitive linguistic approaches to natural language processing. Her publications include: Introduction to Natural Language Processing (Prentice-Hall, 1985); Dylan Thomas the Craftsman: Computer Analysis of the Composition of a Poem, ALLC Bulletin, 7, 3 (1979), 295–300; and Poetry vs the Computer, in Festschrift in Honor of Roberto Busa, S.J., edited by Antonio Zampolli and Laura Cignoni, University of Pisa, Fall 1987.

15.
Humanities Computing is an emergent field. The activities described as ‘Humanities Computing’ continue to expand in number and sophistication, yet no concrete definition of the field exists, and there are few academic departments that specialize in this area. Most introspection regarding the role, meaning, and focus of ‘Humanities Computing’ has come from a practical and pragmatic perspective from scholars and educators within the field itself. This article provides an alternative, externalized viewpoint of the focus of Humanities Computing, by analysing the discipline through its community, research, curriculum, teaching programmes, and the message they deliver, either consciously or unconsciously, about the scope of the discipline. It engages with Educational Theory to provide a means to analyse, measure, and define the field, and focuses specifically on the ACH/ALLC 2005 Conference to identify and analyse those who are involved with the humanities computing community.

16.
We discuss the following problem, which arises in computer animation and robot motion planning: given are N positions or keyframes Σ(t_i) of a moving body Σ ⊂ ℝ³ at time instances t_i. Compute a smooth rigid body motion Σ(t) that interpolates or approximates the given positions Σ(t_i) such that chosen feature points of the moving system run on smooth paths. We present an algorithm that can be considered as a transfer principle from curve design algorithms to motion design. The algorithm relies on known curve design algorithms and on registration techniques from computer vision. We prove that the motion generated in this way is of the same smoothness as the curve design algorithm employed. Supplementary material is available in the online version of this article at http://dx.doi.org/10.1007/s00371-003-0221-3. To the memory of Peter Steiner.

17.
Traditionally, direct marketing companies have relied on pre-testing to select the best offers to send to their audience. Companies systematically dispatch the offers under consideration to a limited sample of potential buyers, rank them with respect to their performance and, based on this ranking, decide which offers to send to the wider population. Though this pre-testing process is simple and widely used, recently the industry has been under increased pressure to further optimize learning, in particular when facing severe time and learning space constraints. The main contribution of the present work is to demonstrate that direct marketing firms can exploit the information on visual content to optimize the learning phase. This paper proposes a two-phase learning strategy based on a cascade of regression methods that takes advantage of the visual and text features to improve and accelerate the learning process. Experiments in the domain of a commercial Multimedia Messaging Service (MMS) show the effectiveness of the proposed methods and a significant improvement over traditional learning techniques. The proposed approach can be used in any multimedia direct marketing domain in which offers comprise both a visual and text component.

Sebastiano Battiato was born in Catania, Italy, in 1972. He received the degree in Computer Science (summa cum laude) in 1995 and his Ph.D. in Computer Science and Applied Mathematics in 1999. From 1999 to 2003 he led the “Imaging” team at STMicroelectronics in Catania. Since 2004 he has worked as a Researcher at the Department of Mathematics and Computer Science of the University of Catania. His research interests include image enhancement and processing, image coding and camera imaging technology. He has published more than 90 papers in international journals, conference proceedings and book chapters, and is co-inventor of about 15 international patents. He is a reviewer for several international journals and has regularly been a member of numerous international conference committees. He has participated in many international and national research projects. He is an Associate Editor of the SPIE Journal of Electronic Imaging (specialty: digital photography and image compression). He is director of ICVSS (International Computer Vision Summer School) and a Senior Member of the IEEE. Giovanni Maria Farinella is currently a contract researcher at the Dipartimento di Matematica e Informatica, University of Catania, Italy (IPLAB research group). He has also been an associate member of the Computer Vision and Robotics Research Group at the University of Cambridge since 2006. His research interests lie in the fields of computer vision, pattern recognition and machine learning. In 2004 he received his degree in Computer Science (egregia cum laude) from the University of Catania. He was awarded a Ph.D. (Computer Vision) from the University of Catania in 2008. He has co-authored several papers in international journals and conference proceedings. He also serves as a reviewer for numerous international journals and conferences. He is currently the co-director of the International Computer Vision Summer School (ICVSS). Giovanni Giuffrida is an assistant professor at the University of Catania, Italy.
He received a degree in Computer Science from the University of Pisa, Italy, in 1988 (summa cum laude), a Master of Science in Computer Science from the University of Houston, Texas, in 1992, and a Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2001. He has extensive experience in both the industrial and academic worlds. He served as CTO and CEO in industry and as a consultant for various organizations. His research interest is in optimizing content delivery on new media such as the Internet, mobile phones, and digital TV. He has published several papers on data mining and its applications. He is a member of the ACM and the IEEE. Catarina Sismeiro is a senior lecturer at Imperial College Business School, Imperial College London. She received her Ph.D. in Marketing from the University of California, Los Angeles, and her Licenciatura in Management from the University of Porto, Portugal. Before joining Imperial College, Catarina was an assistant professor at the Marshall School of Business, University of Southern California. Her primary research interests include studying pharmaceutical markets, modeling consumer behavior in interactive environments, and modeling spatial dependencies. Other areas of interest are decision theory, econometric methods, and the use of image and text features to predict the effectiveness of marketing communications tools. Catarina's work has appeared in numerous marketing and management science conferences. Her research has also been published in the Journal of Marketing Research, Management Science, Marketing Letters, Journal of Interactive Marketing, and International Journal of Research in Marketing. She received the 2003 Paul Green Award and was a finalist for the 2007 and 2008 O'Dell Awards. Catarina was also a 2007 Marketing Science Institute Young Scholar, and she received the D. Antonia Adelaide Ferreira award and the ADMES/MARKTEST award for scientific excellence.
Catarina is currently on the editorial boards of the Marketing Science journal and the International Journal of Research in Marketing. Giuseppe Tribulato was born in Messina, Italy, in 1979. He received the degree in Computer Science (summa cum laude) in 2004 and his Ph.D. in Computer Science in 2008. Since 2005 he has led the research team at Neodata Group. His research interests include data mining techniques, recommendation systems and customer targeting.

18.
A control problem for a random process depending additively on fractional Brownian motion with Hurst parameter H ∈ (1/2, 1) is analyzed.

19.
We give an optimality analysis for computations of complex square roots in real arithmetic by certain computation trees that use real square root operations. Improving on standard elementary geometric constructions, Schönhage suggests better methods, which will be shown to be unimprovable. The iteration of such a procedure for 2^k-th roots is however improvable, and an improved version of it can also be shown to be unimprovable. In particular, repeated use of an optimal square root procedure does not yield an optimal one for 2^k-th roots. To answer this kind of question about resolution by real radicals we apply methods of real algebra which lead into the theory of real field, ring, and integral ring extensions.
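For context, the standard elementary construction computes a complex square root using two real square root operations; this Python sketch shows that classical formula (the baseline being improved upon, not Schönhage's method):

```python
import math

def csqrt(a, b):
    """Principal square root of a + bi using only real operations:
    r = |a + bi|, then real part sqrt((r + a)/2) and imaginary part
    sign(b) * sqrt((r - a)/2)."""
    r = math.hypot(a, b)            # modulus |a + bi|
    x = math.sqrt((r + a) / 2.0)    # real part
    y = math.sqrt((r - a) / 2.0)    # magnitude of imaginary part
    if b < 0:
        y = -y                      # choose the principal branch
    return x, y
```

For example, csqrt(3.0, 4.0) returns the components of 2 + i, since (2 + i)² = 3 + 4i. Counting such real square root operations is exactly the cost measure in which the paper's optimality results are stated.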

20.
When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). 57 participants reported their preferences regarding structured abstracts: 13 (23%) had no preference; 40 (70%) preferred structured abstracts; four preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability. Although care must be taken to develop appropriate structures for different types of article, we recommend that Software Engineering journals and conferences adopt structured abstracts.
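As a simplified illustration of the comparison reported above (the actual analysis was a regression adjusting for participant, abstract, presentation order, and other covariates), the within-participant score difference can be summarized in Python:

```python
import statistics

def paired_effect(structured, unstructured):
    """Mean within-participant score difference and its standard error.
    A simplified stand-in for the paper's regression adjustment, which
    also controlled for abstract, order, role, and preference."""
    diffs = [s - u for s, u in zip(structured, unstructured)]
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / len(diffs) ** 0.5
    return mean, se
```

Because each participant scores one abstract of each type, differencing removes between-participant variation, which is also what the participant term in the paper's regression accomplishes.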

David Budgen   is a Professor of Software Engineering and Chairman of the Department of Computer Science at Durham University in the UK. His research interests include software design, design environments, healthcare computing and evidence-based software engineering. He was awarded a BSc(Hons) in Physics and a PhD in Theoretical Physics from Durham University, following which he worked as a research scientist for the Admiralty and then held academic positions at Stirling University and Keele University before moving to his present post at Durham University in 2005. He is a member of the IEEE Computer Society, the ACM and the Institution of Engineering & Technology (IET). Barbara A. Kitchenham   is Professor of Quantitative Software Engineering at Keele University in the UK. From 2004–2007, she was a Senior Principal Researcher at National ICT Australia. She has worked in software engineering for nearly 30 years both in industry and academia. Her main research interest is software measurement and its application to project management, quality control, risk management and evaluation of software technologies. Her most recent research has focused on the application of evidence-based practice to software engineering. She is a Chartered Mathematician and Fellow of the Institute of Mathematics and Its Applications, a Fellow of the Royal Statistical Society and a member of the IEEE Computer Society. Stuart M. Charters   is a Lecturer of Software and Information Technology in the Applied Computing Group, Lincoln University, NZ. Stuart received his BSc(Hons) in Computer Science and PhD in Computer Science from Durham University UK. His research interests include evidence-based software engineering, software visualisation and grid computing. Mark Turner   is a Lecturer in the School of Computing and Mathematics at Keele University, UK. His research interests include evidence-based software engineering, service-based software engineering and dynamic access control. 
Turner received a PhD in computer science from Keele University. He is a member of the IEEE Computer Society and the British Computer Society. Pearl Brereton is Professor of Software Engineering in the School of Computing and Mathematics at Keele University. She was awarded a BSc degree (first class honours) in Applied Mathematics and Computer Science from Sheffield University and a PhD in Computer Science from Keele University. Her research focuses on evidence-based software engineering and service-oriented systems. She is a member of the IEEE Computer Society, the ACM, and the British Computer Society. Stephen G. Linkman is a Senior Lecturer in the School of Computing and Mathematics at Keele University and holds an MSc from the University of Leicester. His main research interests lie in the fields of software metrics and their application to project management, quality control, risk management and the evaluation of software systems and processes. He is a visiting Professor at the University of Sao Paulo in Brazil.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号