Similar Documents
1.
The extraordinary impact of Thomas Paine's Common Sense has often been attributed to its style: to the simplicity and forcefulness with which Paine expressed ideas that many others before him had expressed. Comparative analysis of Common Sense and other pre-Revolutionary pamphlets suggests that Common Sense was indeed stylistically unique; no other pamphleteer came close to matching Paine's combination of simplicity and forcefulness.

Lee Sigelman is Professor of Political Science at The George Washington University. His research interests range widely throughout the social sciences, including research methods, mass communication, political behavior, and political culture. He has recently published articles in Computers and the Humanities analyzing the work of Raymond Chandler and Edith Wharton. Colin Martindale is Professor of Psychology at the University of Maine. He is the author of a number of articles and books on content analysis, literary history, and other topics. A recent book is The Clockwork Muse: The Predictability of Artistic Change (New York: Basic Books). He is Executive Editor of Empirical Studies of the Arts. Dean McKenzie is Professional Officer/Statistician for Psychological Medicine, Monash University, Melbourne, Australia. He is the author of several articles concerned with machine learning and artificial intelligence.

2.
Reviewers are sharply divided about the success with which Marion Mainwaring completed Edith Wharton's unfinished novel The Buccaneers. To gauge the seamlessness of the fit between Wharton's portion of the novel and the chapters that Mainwaring added, the present study presents a chapter-by-chapter analysis of the ratio of new types (i.e., words that did not appear in previous chapters) to tokens. Analysis of Wharton's classic novels The House of Mirth, Ethan Frome, and The Age of Innocence indicates that the ratio of new types to tokens followed a standard progression in her work. Analysis of Wharton's twenty-nine chapters of The Buccaneers indicates that here, too, she was following the same course. However, analysis of the completed version of The Buccaneers reveals that the substitution of Mainwaring for Wharton as author caused a decisive break from the well-established pattern.

Lee Sigelman is Professor and Chair of Political Science at The George Washington University. His research interests range widely throughout the social sciences, including research methods, mass communication, political behavior, and popular culture. With Ernest Yanarella, he co-edited Political Mythology and Popular Fiction, and has published several articles on political themes in popular literature.
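The new-types-to-tokens progression used in this study is straightforward to compute. A minimal sketch, with two invented "chapters" standing in for Wharton's text and a deliberately crude tokenizer:

```python
import re

def new_type_token_ratios(chapters):
    """For each chapter, compute (types not seen in any earlier chapter) / (tokens in chapter)."""
    seen = set()
    ratios = []
    for text in chapters:
        tokens = re.findall(r"[a-z']+", text.lower())
        new_types = {t for t in tokens if t not in seen}
        ratios.append(len(new_types) / len(tokens))
        seen.update(tokens)
    return ratios

chapters = [
    "the house stood on the hill",
    "the hill was green and the house was old",
]
print(new_type_token_ratios(chapters))  # [5/6, 4/9]: new vocabulary thins out
```

Plotting this ratio chapter by chapter for the three classic novels would reproduce the kind of progression curve the study relies on.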

3.
4.
Neural networks have recently been the subject of extensive research and growing popularity. Their application has increased considerably in areas where we are presented with a large amount of data and must identify an underlying pattern. This paper looks at their application to stylometry. We believe that statistical methods of attributing authorship can be coupled effectively with neural networks to produce a very powerful classification tool. We illustrate this with a famous case of disputed authorship, The Federalist Papers. Our method assigns the disputed papers to Madison, a result consistent with previous work on the subject.

Fiona J. Tweedie is a research student and tutor at the University of the West of England, Bristol, currently working on the provenance of De Doctrina Christiana, attributed to John Milton. She has presented papers at the ACH/ALLC conference in 1995 and has forthcoming papers in Forensic Linguistics and Revue. Sameer Singh is a research student and tutor at the University of the West of England, Bristol, working on the application of artificially intelligent methods and statistics to quantifying language disorders. His main research interests include neural networks, fuzzy logic, expert systems and linguistic computing. David I. Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol. He has published several papers on the statistical analysis of literary style in journals including the Journal of the Royal Statistical Society and History and Computing. He has presented papers at ACH/ALLC conferences in 1991, 1993 and 1995.
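A minimal illustration of coupling stylometric statistics with a neural classifier, in the spirit of (but far simpler than) the paper's method: a one-layer network, i.e. logistic regression, trained on function-word frequencies. The marker words, the cluster means, and the "disputed" feature vector are all invented for illustration; they are not the paper's data.

```python
import numpy as np

# Synthetic stand-in data: rows are per-paper relative frequencies of a few
# function words ("upon", "whilst", "on"); labels 0 = Hamilton, 1 = Madison.
# Real studies use dozens of such marker words drawn from the actual texts.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([3.0, 0.1, 2.0], 0.3, (20, 3)),   # Hamilton-like
               rng.normal([0.2, 0.5, 4.0], 0.3, (20, 3))])  # Madison-like
y = np.array([0] * 20 + [1] * 20)

# One-layer network (logistic regression) trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

disputed = np.array([[0.3, 0.4, 3.8]])        # hypothetical disputed paper
p = 1.0 / (1.0 + np.exp(-(disputed @ w + b)))
print("P(Madison) =", float(p[0]))
```

A deeper network would replace the single weight vector with hidden layers, but the coupling of frequency features to a trained classifier is the same.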

5.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts.

David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

6.
In this paper, we identify a new task for studying the outlying degree (OD) of high-dimensional data: finding the subspaces (subsets of features) in which the given points are outliers, which we call their outlying subspaces. Since state-of-the-art outlier detection techniques fail to handle this new problem, we propose a novel detection algorithm, called High-Dimension Outlying subspace Detection (HighDOD), to detect the outlying subspaces of high-dimensional data efficiently. The intuitive idea of HighDOD is to measure the OD of a point using the sum of the distances between this point and its k nearest neighbors. Two heuristic pruning strategies are proposed to realize fast pruning in the subspace search, and an efficient dynamic subspace search method with a sample-based learning process has been implemented. Experimental results show that HighDOD is efficient and outperforms search alternatives such as the naive top-down, bottom-up and random search methods, and that existing outlier detection methods cannot fulfill this new task effectively.

Ji Zhang received his BS from the Department of Information Systems and Information Management at Southeast University, Nanjing, China, in 2000 and his MSc from the Department of Computer Science at the National University of Singapore in 2002. He worked as a researcher in the Center for Information Mining and Extraction (CHIME) at the National University of Singapore from 2002 to 2003 and in the Department of Computer Science at the University of Toronto from 2003 to 2005. He is currently with the Faculty of Computer Science at Dalhousie University, Canada. His research interests include knowledge discovery and data mining, XML, and data cleaning. He has published papers in the Journal of Intelligent Information Systems (JIIS), the Journal of Database Management (JDM), and major international conferences such as VLDB, WWW, DEXA, DaWaK, and SDM. Hai Wang is an assistant professor in the Department of Finance and Management Science at the Sobey School of Business of Saint Mary's University, Canada. He received his BSc in Computer Science from the University of New Brunswick, and his MSc and PhD in Computer Science from the University of Toronto. His research interests are in the areas of database management, data mining, e-commerce, and performance evaluation. His papers have been published in the International Journal of Mobile Communications, Data & Knowledge Engineering, ACM SIGMETRICS Performance Evaluation Review, Knowledge and Information Systems, Performance Evaluation, and others.
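The core OD measure of HighDOD, the sum of distances to the k nearest neighbors evaluated in a candidate subspace, can be sketched as follows. The data are random and illustrative; the pruning and dynamic subspace search strategies of the actual algorithm are omitted:

```python
import numpy as np

def outlying_degree(points, idx, subspace, k=3):
    """OD of points[idx]: sum of distances to its k nearest neighbors,
    measured only on the feature subset `subspace`."""
    P = points[:, subspace]
    d = np.linalg.norm(P - P[idx], axis=1)
    d[idx] = np.inf                      # exclude the point itself
    return np.sort(d)[:k].sum()

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, (50, 4))
data[0, :2] = [8.0, 8.0]                 # outlying in subspace (0, 1) only

print(outlying_degree(data, 0, (0, 1)))  # large: point 0 is an outlier here
print(outlying_degree(data, 0, (2, 3)))  # small: unremarkable in this subspace
```

The search problem the paper addresses is then: over all feature subsets, find those in which a point's OD is anomalously large, without enumerating every subspace naively.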

7.
Feasibility tests for hard real-time systems provide information about the schedulability of the task set. However, this information is only a yes-or-no answer: whether or not the task set passes the test. From the real-time system design point of view, more information would be useful: for example, how much the computation time can vary without jeopardising system feasibility. This work provides methods to determine off-line how much a task can increase its computation time while maintaining system feasibility under dynamic-priority scheduling. The extra time can be determined not only for all task activations, but for n invocations out of a window of m. This is what we call a window-constrained execution time system. The results presented in this work can be used in all kinds of real-time systems: fault tolerance management, imprecise computation, overrun handling, control applications, etc.

Patricia Balbastre is an assistant professor of Computer Engineering. She graduated in Electronic Engineering from the Technical University of Valencia, Spain, in 1998, and received the Ph.D. degree in Computer Science from the same university in 2002. Her main research interests include real-time operating systems, dynamic scheduling algorithms and real-time control. Ismael Ripoll received the B.S. degree from the Polytechnic University of Valencia, Spain, in 1992, and the Ph.D. degree in Computer Science from the same university in 1996. Currently he is Professor in the DISCA Department of the same university. His research interests include embedded and real-time operating systems. Alfons Crespo is Professor in the Department of Computer Engineering of the Technical University of Valencia. He received the PhD in Computer Science from the Technical University of Valencia, Spain, in 1984. He became an Associate Professor in 1986 and a full Professor in 1991. He leads the Industrial Informatics group and has been responsible for several European and Spanish research projects. His main research interests include different aspects of real-time systems (scheduling, hardware support, scheduling and control integration, etc.). He has published more than 60 papers in specialised journals and conferences in the area of real-time systems.
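For intuition, in the simplest setting (periodic tasks with implicit deadlines under EDF, with the extra time allowed in every activation rather than in n of m invocations), the off-line slack computation reduces to a utilization bound. This sketch is illustrative only and is not the paper's window-constrained analysis:

```python
def edf_feasible(tasks):
    """tasks: list of (C, T) pairs (worst-case computation time, period).
    EDF with implicit deadlines is feasible iff total utilization <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

def max_extra_wcet(tasks, i):
    """Largest increase of C_i that keeps utilization <= 1
    when the increase applies to every activation of task i."""
    u = sum(c / t for c, t in tasks)
    return tasks[i][1] * (1.0 - u)

tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]   # U = 0.25 + 0.25 + 0.1 = 0.6
print(edf_feasible(tasks))          # True
print(max_extra_wcet(tasks, 0))     # 4 * (1 - 0.6) = 1.6
```

The paper's contribution is precisely that the admissible extra time is larger when the overrun is restricted to n invocations in a window of m, which this simple bound cannot capture.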

8.
Active schedules are one of the most basic and popular concepts in production scheduling research. For identical parallel machine scheduling with dynamic job arrivals, this paper gives tight performance bounds for active schedules under four popular objectives. The analysis method and conclusions can be generalized to static identical parallel machine and single machine scheduling problems.

9.
De novo sequencing is one of the most promising proteomics techniques for the identification of protein post-translational modifications (PTMs) in studying protein regulation and function. We have developed a computer tool, PRIME, for the identification of b and y ions in tandem mass spectra, a key challenge in de novo sequencing. PRIME utilizes the fact that ions of the same type and of different types follow different mass-difference distributions to separate b from y ions correctly. We have formulated the problem as a graph partition problem, and a linear integer-programming algorithm has been implemented to solve it rigorously and efficiently. The performance of PRIME has been demonstrated on a large number of simulated tandem mass spectra derived from the yeast genome, and its power of detecting PTMs has been tested on 216 simulated phosphopeptides.

10.
Electronic commerce is an important application domain that has evolved significantly in recent years. However, electronic commerce systems are complex and difficult to design correctly. Guaranteeing the correctness of an e-commerce system is not an easy task due to the great number of scenarios in which errors can occur, many of them very subtle. In this work we present a methodology that uses formal-method techniques, specifically symbolic model checking, to design electronic commerce applications and to verify them automatically. In addition, a model checking pattern hierarchy has been developed; it specifies patterns for constructing and verifying the formal model of e-commerce systems. We consider this research the first step toward the development of a framework that will integrate the methodology, an e-commerce specification language based on business rules, and a model checker.

Adriano Pereira received the B.S. and M.S. degrees in computer science in 2000 and 2002, respectively, and is currently pursuing the Ph.D. degree in computer science at the Federal University of Minas Gerais, Belo Horizonte, Brazil. His current interests are in performance analysis and modeling of e-business and distributed systems, and formal methods. Mark Song received the B.S., M.S. and Ph.D. degrees in computer science from the Federal University of Minas Gerais, Belo Horizonte, Brazil. His current interests are in distributed systems and formal methods, especially bounded model checking (BMC). Gustavo Franco received the B.S. and M.S. degrees in computer science in 2001 and 2004, respectively, from the Federal University of Minas Gerais, Belo Horizonte, Brazil. His research was on modeling the user behavior of e-business and distributed systems, and formal methods. His current interests are in software engineering and the management of IT projects.
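The kind of safety property such a methodology verifies can be illustrated with a toy explicit-state reachability check (real symbolic model checkers represent state sets symbolically, e.g. with BDDs or SAT encodings). The protocol states and transitions below are invented for illustration:

```python
from collections import deque

# Hypothetical e-commerce purchase protocol: states and transitions.
transitions = {
    "browsing": ["cart"],
    "cart":     ["payment", "browsing"],
    "payment":  ["paid", "cart"],
    "paid":     ["shipped"],
    "shipped":  [],
}

def check_safety(init, bad, trans):
    """Explicit-state reachability via BFS: return a counterexample path
    reaching `bad`, or None if the safety property holds."""
    queue, seen = deque([[init]]), {init}
    while queue:
        path = queue.popleft()
        if path[-1] == bad:
            return path
        for nxt in trans.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Property: "shipped" must not be reachable without passing through "paid".
# Model it by deleting "paid" and asking whether "shipped" is still reachable.
pruned = {s: [t for t in ts if t != "paid"] for s, ts in transitions.items()}
print(check_safety("browsing", "shipped", pruned))  # None: property holds
```

When the property fails, the returned path is exactly the kind of counterexample trace a model checker reports to the designer.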

11.
This article describes some approaches to imitation analysis and the use of ready-made software for this task. Devising computer-assisted techniques for exploring the conscious literary imitation of style is an application of particular relevance to contemporary Hispanic narrative, and one that can be handled with a microcomputer and readily accessible software. As a case study, the article assesses the effectiveness of the stylistic imitation of eighteenth-century historical chronicle in La renuncia del héroe Baltasar (The Renunciation of the Hero Baltasar), by the Puerto Rican novelist Rodríguez Juliá. Even when employing familiar procedures of computer-assisted text analysis, comparing a fictional text with a varied and diverse corpus of authentic historical documents requires somewhat unique assumptions and hypotheses, since neither authorship, nor influence, nor authenticity is in question.

Estelle Irizarry is Professor of Spanish at Georgetown University, Washington, D.C., and author of 18 books and annotated editions dealing with modern Hispanic literature, art, and hoaxes.

12.
This paper reports on methodological considerations and the results of the Information Retrieval (IR) projects PADOK I and II. PADOK has been carried out by the Linguistic Information Science Group of the University of Regensburg (LIR) since November 1984 and has been sponsored by the German Ministry for Research and Technology. The long-term objective is to integrate artificial intelligence topics and the methods of information retrieval research without neglecting traditional IR methodology. In PADOK we consider a type of mass-data IR system which indexes its documents rather shallowly (freetext or morphological components) and adds an intelligent information retrieval component to this kernel system. On the basis of two large-scale retrieval tests of the German Patent Information System, we have obtained results which show how the linguistically based functions of an indexing system contribute to its performance, and which indicate the most reasonable basic content analysis program for a German Patent Information System. This paper focuses on the general principles and aims of PADOK I and PADOK II and on the statistical evaluation of the retrieval tests.

Christa Womser-Hacker has a Ph.D. in Linguistic Information Science. From 1985 until 1990 she was involved in several LIR projects concerning text processing, evaluation of the German Patent Information System, man-machine interaction, and intelligent interfaces for databases. Since May 1990 she has been an LIR staff member. She is interested in information retrieval, (statistical) evaluation methods of man-machine interaction, and intelligent interfaces. She has published Der PADOK-Retrieval-test (1989) and Die statistische Auswertung des Retrievaltests (1990). Jürgen Krause is professor of Linguistic Information Science at the University of Regensburg. He is a member of the editorial boards of the periodicals Computers and the Humanities and GLDV-Forum, and co-editor of Sprache und Computer. His research interests include office automation, artificial intelligence help systems, information retrieval, and the evaluation of natural language systems. He is co-editor (with Christa Womser-Hacker) of Das Deutsche Patentinformationssystem: Entwicklungstendenzen, Retrievaltests und Bewertungen (1990) and co-editor of Computer Talk (1991).

13.
Many supervised machine learning tasks can be cast as multi-class classification problems. Support vector machines (SVMs) excel at binary classification, but the elegant theory behind large-margin hyperplanes cannot be easily extended to the multi-class case. On the other hand, it has been shown that the decision hyperplanes obtained by SVMs for binary classification are equivalent to the solutions obtained by Fisher's linear discriminant on the set of support vectors. Discriminant analysis approaches are well known in the statistical pattern recognition literature for learning discriminative feature transformations and can be easily extended to multi-class cases. The use of discriminant analysis, however, has not been fully explored in the data mining literature. In this paper, we explore the use of discriminant analysis for multi-class classification problems. We evaluate the performance of discriminant analysis on a large collection of benchmark datasets and investigate its usage in text categorization. Our experiments suggest that discriminant analysis provides a fast, efficient, yet accurate alternative for general multi-class classification problems.

Tao Li is currently an assistant professor in the School of Computer Science at Florida International University. He received his Ph.D. degree in Computer Science from the University of Rochester in 2004. His primary research interests are data mining, machine learning, bioinformatics, and music information retrieval. Shenghuo Zhu is currently a researcher at NEC Laboratories America, Inc. He received his B.E. from Zhejiang University in 1994, B.E. from Tsinghua University in 1997, and Ph.D. degree in Computer Science from the University of Rochester in 2003. His primary research interests include information retrieval, machine learning, and data mining. Mitsunori Ogihara received a Ph.D. in Information Sciences from the Tokyo Institute of Technology in 1993. He is currently Professor and Chair of the Department of Computer Science at the University of Rochester. His primary research interests are data mining, computational complexity, and molecular computation.
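A compact sketch of the multi-class Fisher discriminant analysis the paper evaluates: within-class and between-class scatter matrices, projection onto the leading discriminant directions, and nearest-class-mean classification. The data here are synthetic, not the paper's benchmarks:

```python
import numpy as np

def lda_fit(X, y):
    """Multi-class Fisher discriminant: directions maximizing
    between-class scatter relative to within-class scatter."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Leading eigenvectors of Sw^-1 Sb give at most (#classes - 1) directions.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(np.real(vals))[::-1][:len(classes) - 1]
    W = np.real(vecs[:, order])
    means = {c: (X[y == c] @ W).mean(axis=0) for c in classes}
    return W, means

def lda_predict(X, W, means):
    """Classify by nearest class mean in the discriminant subspace."""
    Z = X @ W
    cs = list(means)
    D = np.stack([np.linalg.norm(Z - means[c], axis=1) for c in cs])
    return np.array(cs)[D.argmin(axis=0)]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.4, (30, 3))
               for m in ([0, 0, 0], [3, 0, 0], [0, 3, 0])])
y = np.repeat([0, 1, 2], 30)
W, means = lda_fit(X, y)
print("training accuracy:", (lda_predict(X, W, means) == y).mean())
```

Unlike one-vs-rest SVM constructions, the multi-class extension here is direct: the scatter matrices are defined over all classes at once.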

14.
Graphs are increasingly becoming a vital source of information in which a great deal of semantics is embedded. As the size of available graphs increases, arriving at the embedded semantics becomes a much more complicated task. One important form of hidden semantics is that embedded in the edges of directed graphs; citation graphs serve as a good example in this context. This paper attempts to understand temporal aspects of publication trends through citation graphs, by identifying patterns in the subject matters of scientific publications using an efficient, vertical association rule mining model. Such patterns can (a) indicate subject-matter evolutionary history, (b) highlight subject-matter future extensions, and (c) give insights into the potential effects of current research on future research. We highlight our major differences with previous work in the areas of graph mining, citation mining, and Web-structure mining; propose an efficient vertical data representation model; introduce a new subjective interestingness measure for evaluating patterns, with a special focus on those patterns that signify strong associations between properties of cited papers and citing papers; and present an efficient algorithm for discovering rules of interest, followed by a detailed experimental analysis.

Imad Rahal is a newly appointed assistant professor in the Department of Computer Science at the College of Saint Benedict ∣ Saint John's University, Collegeville, MN, and a Ph.D. candidate at North Dakota State University, Fargo, ND. In August 2003, he earned his master's degree in computer science from North Dakota State University. Prior to that, he graduated summa cum laude from the Lebanese American University, Beirut, Lebanon, in February 2001 with a bachelor's degree in computer science. Currently, he is completing the final requirements for his Ph.D.
degree in computer science on an NSF ND-EPSCoR doctoral dissertation assistantship with August of 2005 as a projected completion date. He is very active in research, proposal writing, and publications; his research interests are largely in the broad areas of data mining, machine learning, databases, artificial intelligence, and bioinformatics. Dongmei Ren is working for the Database Technology Institute for z/OS, IBM Silicon Valley Lab, San Jose, CA, as a staff software engineer. She holds a Ph.D. degree from North Dakota State University, Fargo, ND, and master's and bachelor's degrees from TianJin University, TianJin, China. She has been a software engineer at DaTang Telecommunications, Beijing, China. Her areas of expertise are outlier analysis, data mining and knowledge discovery, database systems, machine learning, intelligent systems, wireless networks and bioinformatics. She has been awarded the Siemens Scholarship research enhancement for excellent performance in study and research. She is a member of ACM, IEEE. Weihua Wu is a network monitoring & managed services analyst at Hewlett-Packard Co. in Canada. He holds a master's degree from North Dakota State University and a bachelor's degree from Nanjing University, both in computer science. His research areas of interest include data mining, knowledge discovery, data warehousing, information technology, network security, and bioinformatics. He has participated in various projects supported by NSF, DARPA, NASA, USDA, and GSA grants. Anne Denton is an assistant professor in computer science at North Dakota State University. Her research interests are in data mining, knowledge discovery in scientific data, and bioinformatics. Specific interests include data mining of diverse data, in which objects are characterized by a variety of properties such as numerical and categorical attributes, graphs, sequences, time-dependent attributes, and others. She received her Ph.D. 
in physics from the University of Mainz, Germany, and her M.S. in computer science from North Dakota State University, Fargo, ND. Christopher Besemann received his M.Sc. in computer science from North Dakota State University in Fargo, ND, in 2005. Currently, he works as a research assistant on data mining research topics including association mining and relational data mining, with recent work in model integration. He has been accepted under a fellowship program for Ph.D. study at North Dakota State University. William Perrizo is a professor of computer science at North Dakota State University. He holds a Ph.D. degree from the University of Minnesota, a master's degree from the University of Wisconsin and a bachelor's degree from St. John's University. He has been a research scientist at the IBM Advanced Business Systems Division and the U.S. Air Force Electronic Systems Division. His areas of expertise are data mining, knowledge discovery, database systems, distributed database systems, high speed computer and communications networks, precision agriculture and bioinformatics. He is a member of ISCA, ACM, IEEE, IAAA, and AAAS.
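The vertical data representation at the heart of the proposed model can be illustrated in a few lines: each item keeps the set of transaction ids that contain it, and the support of an itemset is then a tid-set intersection rather than a scan over transactions. The "subject matter" items below are invented for illustration:

```python
# Vertical (tid-set) representation: each item maps to the set of
# transactions containing it; itemset support is a set intersection.
transactions = [
    {"neural-nets", "stylometry"},
    {"neural-nets", "stylometry", "svm"},
    {"svm", "kernels"},
    {"neural-nets", "stylometry", "kernels"},
]

tidsets = {}
for tid, items in enumerate(transactions):
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    tids = set.intersection(*(tidsets[i] for i in itemset))
    return len(tids) / len(transactions)

print(support({"neural-nets", "stylometry"}))  # 0.75
print(support({"svm", "kernels"}))             # 0.25
```

The appeal of the vertical format is that candidate itemsets are evaluated by intersecting precomputed tid-sets, which composes cheaply as itemsets grow.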

15.
In the field of computer vision and pattern recognition, data processing and data analysis tasks are often implemented as a consecutive or parallel application of more-or-less complex operations. In the following, we present DocXS, a computing environment for the design and the distributed, parallel execution of such tasks. Algorithms can be programmed using an Eclipse-based user interface, and the resulting Matlab and Java operators can be visually connected into graphs representing complex data processing workflows. DocXS is platform independent due to its implementation in Java, is freely available for noncommercial research, and can be installed on standard office computers. One advantage of DocXS is that it automatically takes care of task execution and does not require its users to deal with code distribution or parallelization. Experiments with DocXS show that it scales very well with only a small overhead. The text was submitted by the authors in English.

Steffen Wachenfeld received B.Sc. and M.Sc. (honors) degrees in Information Systems in 2003 and 2005 from the University of Muenster, Germany, and an M.Sc. (honors) degree in Computer Science in 2003 from the University of Muenster. He is currently a research fellow and PhD student at the Department of Computer Science, University of Muenster. His research interests include low-resolution text recognition, computer vision on mobile devices, and systems/system architectures for computer vision and image analysis. He is author or coauthor of more than ten scientific papers and a member of IAPR. Tobias Lohe received an M.Sc. degree in Computer Science in 2007 from the University of Muenster, Germany, and is currently a research associate and PhD student in Computer Science at the Institute for Robotics and Cognitive Systems, University of Luebeck, Germany. His research interests include medical imaging, signal processing, and robotics for minimally invasive surgery. Michael Fieseler is currently a student of Computer Science at the University of Muenster, Germany. He has participated in research in the fields of computer vision and medical imaging. Currently he is working on his Master thesis on depth-based image rendering (DBIR). Xiaoyi Jiang studied Computer Science at Peking University, China, and received his PhD and Venia Docendi (Habilitation) degrees in Computer Science from the University of Bern, Switzerland. In 2002 he became an associate professor at the Technical University of Berlin, Germany. Since October 2002 he has been a full professor at the University of Münster, Germany. He has coauthored and coedited two books published by Springer and has served as co-guest-editor of two special issues of international journals. Currently, he is the Co-editor-in-Chief of the International Journal of Pattern Recognition and Artificial Intelligence. In addition, he serves on the editorial advisory board of the International Journal of Neural Systems and the editorial boards of IEEE Transactions on Systems, Man, and Cybernetics—Part B, the International Journal of Image and Graphics, Electronic Letters on Computer Vision and Image Analysis, and Pattern Recognition. His research interests include medical image analysis, vision-based man-machine interfaces, 3D image analysis, structural pattern recognition, and mobile multimedia. He is a member of IEEE and a Fellow of IAPR.

16.
Robust Control for Steer-by-Wire Vehicles
The design and analysis of steer-by-wire systems at the actuation and operational levels is explored. At the actuation level, robust force feedback control using an inverse disturbance observer structure and an active observer algorithm is applied to enhance robustness against non-modelled dynamics and uncertain driver bio-impedance. At the operational level, robustness against parameter uncertainties in vehicle dynamics and driver bio-impedance is addressed, and for a given target coupling dynamics between driver and vehicle the design task is converted into a model-matching problem. H∞ techniques and active observer algorithms are used to design the steer-by-wire controller. Robustness issues at both levels are covered by mapping stability bounds in the space of physical uncertain parameters.

Naim Bajçinca has been working as a researcher at the German Aerospace Center (DLR) in Oberpfaffenhofen since 1998. He graduated in Physics (1994) and Electrical Engineering (1995) at the University of Prishtina. His main interests include methods of robust and nonlinear control, model-reference control, and uncertain time-delay systems, with applications to haptics, active vehicle steering and master-slave systems. Rui Cortesão received the B.Sc. in Electrical Engineering, M.Sc. in Systems and Automation, and Ph.D. in Control and Instrumentation from the University of Coimbra in 1994, 1997, and 2003, respectively. He was a visiting researcher at DLR for more than two years (1998-2003), working on compliant motion control, data fusion and steer-by-wire. In 2002 he was a visiting researcher at Stanford University, working on haptic manipulation. He has been an Assistant Professor at the Electrical Engineering Department of the University of Coimbra since 2003 and a researcher of the Institute for Systems and Robotics (ISR-Coimbra) since 1994. His research interests include data fusion, control, fuzzy systems, neural networks and robotics. Markus Hauschild received his Diploma in Mechatronics from Munich University of Applied Sciences in 2002. He joined the DLR Institute of Robotics and Mechatronics in 2001, working on control strategies for Harmonic Drive gears. His research interests include human-machine interfaces, x-by-wire, and the modeling of biomedical systems. In 2003 he was a visiting researcher at National Yunlin University of Science and Technology, Taiwan, developing iterative learning control for the compensation of periodic disturbances. In the European ENACTIVE network of excellence he is the DLR coordinator for the "actuators and sensors for haptic interfaces" activities. Presently he is a PhD student at the University of Southern California in the Department of Biomedical Engineering.

17.
Centroidal Voronoi tessellations (CVT's) are special Voronoi tessellations for which the generators of the tessellation are also the centers of mass (or means) of the Voronoi cells or clusters. CVT's have been found to be useful in many disparate and diverse settings. In this paper, CVT-based algorithms are developed for image compression, image segmenation, and multichannel image restoration applications. In the image processing context and in its simplest form, the CVT-based methodology reduces to the well-known k-means clustering technique. However, by viewing the latter within the CVT context, very useful generalizations and improvements can be easily made. Several such generalizations are exploited in this paper including the incorporation of cluster dependent weights, the incorporation of averaging techniques to treat noisy images, extensions to treat multichannel data, and combinations of the aforementioned. In each case, examples are provided to illustrate the efficiency, flexibility, and effectiveness of CVT-based image processing methodologies. Qiang Du is a Professor of Mathematics at the Pennsylvania State University. He received his Ph.D. from the Carnegie Mellon University in 1988. Since then, he has held academic positions at several institutions such as the University of Chicago and the Hong Kong University of Science and Technology. He has published over 100 papers on numerical algorithms and their various applications. His recent research works include studies of bio-membranes, complex fluids, quantized vortices, micro-structure evolution, image and data analysis, mesh generation and optimization, and approximations of partial differential equations. Max Guzburger is the Frances Eppes Professor of Computational Science and Mathematics at Florida State University. He received his Ph.D. degree from New York University in 1969 and has held positions at the University of Tennessee, Carnegie Mellon University, Virginia Tech, and Iowa State University. 
He is the author of five books and over 225 papers. His research interests include computational methods for partial differential equations, control of complex systems, superconductivity, data mining, computational geometry, image processing, uncertainty quantification, and numerical analysis. Lili Ju is an Assistant Professor of Mathematics at the University of South Carolina, Columbia. He received a B.S. degree in Mathematics from Wuhan University in China in 1995, an M.S. degree in Computational Mathematics from the Chinese Academy of Sciences in 1998, and a Ph.D. in Applied Mathematics from Iowa State University in 2002. From 2002 to 2004, he was an Industrial Postdoctoral Researcher at the Institute for Mathematics and its Applications at the University of Minnesota. His research interests include numerical analysis, scientific computation, parallel computing, and medical image processing. Xiaoqiang Wang is a graduate student in mathematics at the Pennsylvania State University, working under the supervision of Qiang Du. Starting in September 2005, he will be an Industrial Postdoctoral Researcher at the Institute for Mathematics and its Applications at the University of Minnesota. His research interests are in the fields of applied mathematics and scientific computation. His work involves numerical simulation and analysis, algorithms for image processing and data mining, parallel algorithms, and high-performance computing.
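In its simplest form, the CVT construction described above coincides with Lloyd-style k-means: alternate assigning points to their nearest generator (a Voronoi clustering) with moving each generator to the mean of its cell. The following is a minimal sketch of that iteration applied to toy color quantization; the function name and the random pixel data are illustrative, not the authors' implementation.

```python
import numpy as np

def cvt_kmeans(points, k, iters=20, seed=0):
    """Lloyd-style iteration: alternate nearest-generator (Voronoi)
    assignment with centroid updates, so the generators converge toward
    the centers of mass of their cells -- a CVT of the data."""
    rng = np.random.default_rng(seed)
    generators = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest generator (Voronoi clustering step)
        dists = np.linalg.norm(points[:, None, :] - generators[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each generator to the mean of its cell (centroid step)
        for j in range(k):
            cell = points[labels == j]
            if len(cell) > 0:
                generators[j] = cell.mean(axis=0)
    return generators, labels

# toy "compression": quantize random RGB pixels to k representative colors
rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=(500, 3)).astype(float)
gens, labels = cvt_kmeans(pixels, k=4)
compressed = gens[labels]  # each pixel replaced by its cell's mean color
```

The paper's generalizations (cluster-dependent weights, averaging for noisy images, multichannel extensions) would modify the distance and centroid computations in this same loop.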

18.
The present paper summarizes the major methods and results of the multi-dimensional approach to genre variation. The approach combines the resources of computational tools, large text corpora, and multivariate statistical techniques (such as factor analysis and cluster analysis). It has been used to address issues such as the relations among spoken and written genres in English and the historical development of genres and styles. The approach has also been applied to other languages; in this regard, it has been used to address broader theoretical issues, such as the extent to which genre and style variation are comparable cross-linguistically, and the linguistic consequences of literacy. Douglas Biber is an associate professor in the Department of English, Applied Linguistics Program, Northern Arizona University, Flagstaff, AZ. His research deals with linguistic variation among registers and the diachronic evolution of registers, addressing both theoretical concerns and methodological issues relating to the design and use of computer-based text corpora.
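The core statistical step in this approach extracts underlying dimensions of variation from a texts-by-features count matrix. The sketch below uses PCA via SVD as a rough stand-in for the rotated factor analysis used in multi-dimensional studies; the function name and the toy feature counts are hypothetical.

```python
import numpy as np

def text_dimensions(X, n_dims=2):
    """Project a texts-by-features count matrix onto its main dimensions
    of variation via PCA (SVD) -- a simplified stand-in for the factor
    analysis used in multi-dimensional genre studies."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize each feature
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_dims].T   # each text's position on each dimension
    loadings = Vt[:n_dims]       # how strongly each feature defines a dimension
    return scores, loadings

# hypothetical per-text counts of four linguistic features
# (e.g. pronouns, contractions, nominalizations, passives)
X = np.array([[30, 12,  2,  1],   # conversation-like texts
              [28, 10,  3,  2],
              [25, 11,  2,  1],
              [ 4,  1, 20, 15],   # academic-prose-like texts
              [ 5,  2, 18, 14],
              [ 3,  1, 22, 16]], dtype=float)
scores, loadings = text_dimensions(X, n_dims=2)
```

On data like this, the first dimension separates the involved, conversation-like texts from the informational, academic-prose-like texts, which is the kind of interpretation the multi-dimensional approach assigns to its factors.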

19.
This paper describes a novel method for tracking complex non-rigid motions by learning the intrinsic object structure. The approach builds on and extends studies of non-linear dimensionality reduction for object representation, object dynamics modeling, and particle-filter-style tracking. First, a dimensionality reduction and density estimation algorithm is derived for unsupervised learning of the intrinsic object representation; the non-rigid part of the object state can be reduced to as few as 2-3 dimensions. Second, a dynamical model is derived and trained on this intrinsic representation. Third, the learned intrinsic object structure is integrated into a particle-filter-style tracker. It is shown that the intrinsic object representation has several useful properties, and that the dynamical model built on it makes the particle-filter-style tracker more robust and reliable. Extensive experiments are conducted on the tracking of challenging non-rigid motions such as fish twisting with self-occlusion, large inter-frame lip motion, and facial expressions with global head rotation. Quantitative results compare the newly proposed tracker with existing trackers. The proposed method also has the potential to solve other types of tracking problems.

20.
Recently, mining from data streams has become an important and challenging task for many real-world applications such as credit card fraud protection and sensor networking. One popular solution is to separate stream data into chunks, learn a base classifier from each chunk, and then integrate all base classifiers for effective classification. In this paper, we propose a new dynamic classifier selection (DCS) mechanism to integrate base classifiers for effective mining from data streams. The proposed algorithm dynamically selects a single “best” classifier to classify each test instance at run time. Our scheme uses statistical information from attribute values and uses each attribute to partition the evaluation set into disjoint subsets, followed by a procedure that evaluates the classification accuracy of each base classifier on these subsets. Given a test instance, its attribute values determine the evaluation subsets that contain similar instances, and the classifier with the highest classification accuracy on those subsets is selected to classify the test instance. Experimental results and comparative studies demonstrate the efficiency and efficacy of our method. Such a DCS scheme appears promising for mining data streams with dramatic concept drifting or with a significant amount of noise, where the base classifiers are likely to be conflicting or to have low confidence. A preliminary version of this paper was published in the Proceedings of the 4th IEEE International Conference on Data Mining, pp. 305–312, Brighton, UK. Xingquan Zhu received his Ph.D. degree in Computer Science from Fudan University, Shanghai, China, in 2001. He spent four months with Microsoft Research Asia, Beijing, China, where he worked on content-based image retrieval with relevance feedback. From 2001 to 2002, he was a Postdoctoral Associate in the Department of Computer Science, Purdue University, West Lafayette, IN. 
He is currently a Research Assistant Professor in the Department of Computer Science, University of Vermont, Burlington, VT. His research interests include data mining, machine learning, data quality, multimedia computing, and information retrieval. Since 2000, Dr. Zhu has published extensively, including over 40 refereed papers in various journals and conference proceedings. Xindong Wu is a Professor and the Chair of the Department of Computer Science at the University of Vermont. He holds a Ph.D. in Artificial Intelligence from the University of Edinburgh, Britain. His research interests include data mining, knowledge-based systems, and Web information exploration. He has published extensively in these areas in various journals and conferences, including IEEE TKDE, TPAMI, ACM TOIS, IJCAI, ICML, KDD, ICDM, and WWW, as well as 11 books and conference proceedings. Dr. Wu is the Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (by the IEEE Computer Society), the founder and current Steering Committee Chair of the IEEE International Conference on Data Mining (ICDM), an Honorary Editor-in-Chief of Knowledge and Information Systems (by Springer), and a Series Editor of the Springer Book Series on Advanced Information and Knowledge Processing (AI&KP). He is the 2004 ACM SIGKDD Service Award winner. Ying Yang received her Ph.D. in Computer Science from Monash University, Australia, in 2003. Following academic appointments at the University of Vermont, USA, she is currently a Research Fellow at Monash University, Australia. Dr. Yang is recognized for contributions in the fields of machine learning and data mining. She has published many scientific papers and book chapters on adaptive learning, proactive mining, noise cleansing, and discretization. Contact her at yyang@mail.csse.monash.edu.au.
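The DCS idea in the abstract above can be sketched compactly: score each chunk-trained base classifier by its accuracy on the evaluation subsets sharing the test instance's attribute values, then classify with the best-scoring one. The class and function names, the toy majority-per-attribute-value base learner, and the simplified selection rule below are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

class MajorityStump:
    """Toy base classifier: majority label per value of one attribute
    (a hypothetical stand-in for the chunk-trained base classifiers)."""
    def __init__(self, attr):
        self.attr = attr
        self.table = {}
        self.default = 0
    def fit(self, X, y):
        vals, counts = np.unique(y, return_counts=True)
        self.default = vals[counts.argmax()]
        for v in np.unique(X[:, self.attr]):
            labels, c = np.unique(y[X[:, self.attr] == v], return_counts=True)
            self.table[v] = labels[c.argmax()]
        return self
    def predict(self, X):
        return np.array([self.table.get(v, self.default) for v in X[:, self.attr]])

def dcs_predict(classifiers, X_eval, y_eval, x):
    """Score each base classifier by its accuracy on the evaluation subsets
    that share the test instance's attribute values, then classify with
    the best-scoring one (simplified DCS selection rule)."""
    scores = []
    for clf in classifiers:
        accs = []
        for a in range(len(x)):
            mask = X_eval[:, a] == x[a]  # evaluation subset sharing attribute a's value
            if mask.any():
                accs.append((clf.predict(X_eval[mask]) == y_eval[mask]).mean())
        scores.append(np.mean(accs) if accs else 0.0)
    best = classifiers[int(np.argmax(scores))]
    return best.predict(x[None, :])[0]

# toy "stream chunk" where the label equals attribute 0
X_chunk = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_chunk = X_chunk[:, 0]
base = [MajorityStump(0).fit(X_chunk, y_chunk), MajorityStump(1).fit(X_chunk, y_chunk)]
pred = dcs_predict(base, X_chunk, y_chunk, np.array([1, 0]))
```

Because selection happens per test instance, a classifier trained on an old chunk is only consulted where it still performs well, which is how such schemes cope with concept drift.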
