Similar Literature
 20 similar articles retrieved (search time: 187 ms)
1.
Traditional multimedia systems deal with only a few basic media: text, graphics, audio and video. However, many other types of media, such as ultrasound, infrared and RF signals, can be represented by streams of data samples and processed within multimedia applications. In this paper, we introduce some of these new media domains and identify interesting opportunities enabled by their software-based processing. We also describe our SpectrumWare testbed for experimenting with these new media types and report on our experience to date. We believe that the time has come to broaden the scope of multimedia to include any form of sampled information. Advances in processor and analog-to-digital conversion technology have brought raw sample streams within the grasp of desktop computers and media processing systems. Coupling these new media types with software-based processing allows the construction of virtual devices that can handle different sample sources, modify their behavior based on information extracted from the media, and transform information between media domains.

2.
This paper provides a background to the somewhat nebulous field of computing known as software agent technology. It gives both an overview of some of the key issues faced by the field, and illustrates the context for the papers contained in the rest of the special issue. The paper begins with a brief introduction to the field and proceeds to survey existing work, showing where overlaps exist between agent technology research and interrelated fields such as Human-Computer Interaction (HCI) and Distributed Artificial Intelligence (DAI). The paper then alters focus to concentrate on applications to the personalisation of systems and services to individual users, and techniques which offer opportunities in this area. The other papers in the Special Issue then form the basis for a review of the current state of the art in the personalisation of systems using agent technology. The paper concludes by offering some suggestions for future development of the technologies mentioned.

3.
Neural networks have recently been a matter of extensive research and popularity. Their application has increased considerably in areas in which we are presented with a large amount of data and have to identify an underlying pattern. This paper looks at their application to stylometry. We believe that statistical methods of attributing authorship can be coupled effectively with neural networks to produce a very powerful classification tool. We illustrate this with a famous case of disputed authorship, The Federalist Papers. Our method assigns the disputed papers to Madison, a result consistent with previous work on the subject.

Fiona J. Tweedie is a research student and tutor at the University of the West of England, Bristol, currently working on the provenance of De Doctrina Christiana, attributed to John Milton. She has presented papers at the ACH/ALLC conference in 1995 and has forthcoming papers in Forensic Linguistics and Revue.

Sameer Singh is a research student and tutor at the University of the West of England, Bristol, working on the application of artificially intelligent methods and statistics to quantifying language disorders. His main research interests include neural networks, fuzzy logic, expert systems and linguistic computing.

David I. Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol. He has published several papers on the statistical analysis of literary style in journals including the Journal of the Royal Statistical Society and History and Computing. He has presented papers at ACH/ALLC conferences in 1991, 1993 and 1995.
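The coupling described, statistical stylometry features feeding a neural classifier, can be sketched as follows. This is a toy illustration only, not the authors' actual network: the function-word rates and the single-neuron model are invented for the example.

```python
import math

# Toy stylometry sketch: classify authorship from function-word rates.
# The feature values below are invented; real studies use rates of many
# function words ("by", "upon", "whilst", ...) per 1000 words of text.
hamilton = [(3.0, 0.1), (2.8, 0.2), (3.2, 0.15)]   # (rate of word A, rate of word B)
madison = [(0.2, 1.1), (0.3, 0.9), (0.1, 1.2)]

data = [(x, 0) for x in hamilton] + [(x, 1) for x in madison]

# Single-neuron "network": logistic regression trained by gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(1000):
    for (f1, f2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * f1 + w[1] * f2 + b)))
        err = p - y
        w[0] -= lr * err * f1
        w[1] -= lr * err * f2
        b -= lr * err

def predict(features):
    f1, f2 = features
    p = 1.0 / (1.0 + math.exp(-(w[0] * f1 + w[1] * f2 + b)))
    return "Madison" if p > 0.5 else "Hamilton"

# Classify a "disputed paper" whose profile resembles Madison's training data.
print(predict((0.25, 1.0)))
```

A real study would use many more features, a multi-layer network, and cross-validation against papers of known authorship.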

4.
An effective fuzzy-nets training scheme for monitoring tool breakage
Recent research results show that fuzzy logic and neural network systems are very effective in detecting the breakage of cutting tools during machining processes. In the present study, a fuzzy-nets training procedure was developed to build the rule banks needed to meet the dynamic requirements of machining processes. The system is capable of responding and adapting in real time, either shutting down the machine when a tool fracture occurs or tuning the process parameters on-line. The training procedure was validated on a truck backing-up problem. Furthermore, two fuzzy-nets systems were combined to serve as a tool breakage detection system for an end milling operation. When the system was evaluated for end milling, the adaptive capability of the fuzzy-nets system enabled on-line detection of tool breakage on a near real-time basis.

5.
In the context of technology development and systems engineering, knowledge is typically treated as a complex information structure. In this view, knowledge can be stored in highly sophisticated data systems and processed by explicitly intelligent, software-based technologies. This paper argues that the current emphasis upon knowledge as information (or even data) is based upon a form of rationalism which is inappropriate for any comprehensive treatment of knowledge in the context of human-centred systems thinking. A human-centred perspective requires us to treat knowledge in human terms, and the paper sets out the particular importance of tacit knowledge in this regard. It then presents two case studies which reveal the critical importance of a careful treatment of tacit knowledge for success in two complex, technology-driven projects.

6.
Summary. High performance distributed computing systems require high performance communication systems. F-channels and Hierarchical F-channels address this need by permitting a high level of concurrency, like non-FIFO channels, while retaining the simplicity of FIFO channels critical to the design and proof of many distributed algorithms. In this paper, we present counter-based implementations of F-channels and Hierarchical F-channels using message augmentation, i.e. appending control information to a message. These implementations guarantee that no messages are unnecessarily delayed at the receiving end.

Keith Shafer received the B.A. degree in computer science and mathematics in 1986 from Mount Vernon Nazarene College, Mount Vernon, Ohio, USA, and the M.S. and Ph.D. degrees in computer science from The Ohio State University, Columbus, Ohio, USA, in 1988 and 1992, respectively. He is currently a Senior Research Scientist at OCLC Online Computer Library Center Inc., Dublin, OH, USA. His research interests include tools for comparing logical channels and methods for automatically constructing corpus grammars from tagged documents as an aid for database preparation and document conversion. Dr. Shafer is a member of the IEEE Computer Society.

Mohan Ahuja received the M.A. degree in 1983 and the Ph.D. degree in 1985, both in computer science, from the University of Texas at Austin. He is currently with the Department of Computer Science and Engineering, University of California, San Diego. His recent research contributions include Global Flushing, message receipt in Receive-Phases, Incremental Publication of a Partial Order, design of Highways (a high-performance distributed programming system) and, in collaboration with others, Passive-space and Time View, performance evaluation of F-channels, and Units of Computation in Fault-Tolerant Distributed Systems. His current research interests are in high-performance distributed communication and computing architectures, building high-performance systems, distributed operating systems, distributed algorithms, fault tolerance, and performance evaluation.

Parts of this paper appeared in two conference papers: (1) "Distributed Modeling and Implementation of High Performance Communication Architectures", in Proceedings of the Thirteenth IEEE International Conference on Distributed Computing Systems, pages 56-65, 1993, and (2) "Process-Channelagem-Process model of asynchronous distributed communication", in Proceedings of the Twelfth IEEE International Conference on Distributed Computing Systems, pages 4-11, 1992.
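The message-augmentation idea, appending control information such as a counter to each message, can be sketched with a receiver that restores sender order over a non-FIFO network. This is only a strict-FIFO illustration of the augmentation mechanism; actual F-channels have a more permissive delivery predicate that allows greater concurrency.

```python
import heapq

# Message augmentation sketch: the sender appends a counter to each message;
# the receiver uses it to restore sender order over a non-FIFO network.
# (Real F-channels deliver more permissively than this strict FIFO sketch.)
class Sender:
    def __init__(self):
        self.counter = 0

    def send(self, payload):
        self.counter += 1
        return (self.counter, payload)   # augmented message

class Receiver:
    def __init__(self):
        self.expected = 1
        self.buffer = []                 # min-heap of out-of-order messages
        self.delivered = []

    def receive(self, msg):
        heapq.heappush(self.buffer, msg)
        # Deliver every buffered message whose counter is next in sequence.
        while self.buffer and self.buffer[0][0] == self.expected:
            _, payload = heapq.heappop(self.buffer)
            self.delivered.append(payload)
            self.expected += 1

s, r = Sender(), Receiver()
msgs = [s.send(p) for p in ["m1", "m2", "m3"]]
for m in [msgs[2], msgs[0], msgs[1]]:    # network delivers out of order
    r.receive(m)
print(r.delivered)                        # sender order restored
```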

7.

Context

Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped.

Objective

This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research.

Method

A systematic mapping study was conducted to find as much literature as possible, and the 64 papers found were classified with respect to focus, research type and contribution type.

Results

A majority of the papers (64%) are of the proposal research type. With respect to research focus, system testing is the largest group (40%), followed by management (23%). Method contributions are in the majority.

Conclusions

More validation and evaluation research is needed to provide a better foundation for SPL testing.

8.
9.
Modular Control and Coordination of Discrete-Event Systems
In the supervisory control of discrete-event systems based on controllable languages, a standard way to handle state explosion in large systems is by modular supervision: either horizontal (decentralized) or vertical (hierarchical). However, unless all the relevant languages are prefix-closed, a well-known potential hazard with modularity is that of conflict. In decentralized control, modular supervisors that are individually nonblocking for the plant may nevertheless produce blocking, or even deadlock, when operating on-line concurrently. Similarly, a high-level hierarchical supervisor that predicts nonblocking at its aggregated level of abstraction may inadvertently admit blocking in a low-level implementation. In two previous papers, the authors showed that nonblocking hierarchical control can be guaranteed provided high-level aggregation is sufficiently fine; the appropriate conditions were formalized in terms of control structures and observers. In this paper we apply the same technique to decentralized control, when specifications are imposed on local models of the global process; in this way we remove the restriction in some earlier work that the plant and specification (marked) languages be prefix-closed. We then solve a more general problem of coordination: namely, how to determine a high-level coordinator that forestalls conflict in a decentralized architecture when it potentially arises, but is otherwise minimally intrusive on low-level control action. Coordination thus combines both vertical and horizontal modularity. The example of a simple production process is provided as a practical illustration. We conclude with an appraisal of the computational effort involved.
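The conflict hazard can be made concrete with a small check: two automata that are each nonblocking in isolation may block when run concurrently. The two automata below are invented for illustration; blocking is detected by testing whether every reachable state of their fully synchronized product can still reach a marked state.

```python
# Each automaton: (initial state, transitions {state: {event: state}}, marked set).
# After the shared event "a", A insists on "b" while B insists on "c": deadlock.
A = (0, {0: {"a": 1}, 1: {"b": 2}}, {2})
B = (0, {0: {"a": 1}, 1: {"c": 2}}, {2})

def nonblocking(aut):
    """True iff every reachable state can reach a marked state."""
    init, trans, marked = aut
    reach = {init}
    frontier = [init]
    while frontier:                      # forward reachability
        s = frontier.pop()
        for t in trans.get(s, {}).values():
            if t not in reach:
                reach.add(t)
                frontier.append(t)
    coreach = {m for m in marked if m in reach}
    changed = True
    while changed:                       # backward pass to marked states
        changed = False
        for s in reach:
            if s not in coreach and any(t in coreach for t in trans.get(s, {}).values()):
                coreach.add(s)
                changed = True
    return reach <= coreach

def product(a1, a2):
    """Fully synchronized product: both automata must agree on every event."""
    (i1, t1, m1), (i2, t2, m2) = a1, a2
    trans = {}
    for s1 in set(t1) | {i1} | m1:
        for s2 in set(t2) | {i2} | m2:
            moves = {}
            for e, n1 in t1.get(s1, {}).items():
                if e in t2.get(s2, {}):
                    moves[e] = (n1, t2[s2][e])
            trans[(s1, s2)] = moves
    return ((i1, i2), trans, {(s1, s2) for s1 in m1 for s2 in m2})

print(nonblocking(A), nonblocking(B))   # each component nonblocking alone
print(nonblocking(product(A, B)))       # but joint behavior blocks at (1, 1)
```

A coordinator in the paper's sense would restrict low-level behavior just enough to steer the product away from such conflict states.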

10.
Computer aided design (CAD) and computer aided manufacturing (CAM) systems are now indispensable tools for all stages of product development. The flexibility and ease of use of these systems have dramatically increased productivity and product quality while reducing lead times. These advances have been largely achieved by automating individual tasks. At present, these islands of automation are poorly linked. One reason for this is that current computer systems are unable to extract automatically from solid models the geometric and topological information that is relevant to the downstream application; in other words, feature information.

The objective of the research reported in this paper was to develop a more generic methodology than heretofore for finding the generic protrusion and depression features of a CAD model. The approach taken relies on a more human type of analysis, one that is viewer-centered as opposed to the object-centered approach of most previous research in this area. The viewer-centered approach to feature recognition described here is based on a novel geometric probing, or tomographic, methodology. A five-step algorithm is described and then applied to a number of components by way of illustration.

11.
The condition-based approach studies restrictions on the inputs to a distributed problem, called conditions, that facilitate its solution. Previous work considered mostly the asynchronous model of computation. This paper studies conditions for consensus in a synchronous system where processes can fail by crashing. It describes a full classification of conditions for consensus, establishing a continuum between the asynchronous and synchronous models, with a hierarchy whose largest class includes all conditions (and in particular the trivial one made up of all possible input vectors). For a condition C at level d of this hierarchy, we have:
–  For values of d < 0, consensus is solvable in an asynchronous system with t failures, and we obtain the known hierarchy of conditions that allows solving asynchronous consensus with more and more efficient protocols as we go from d = 0 to d = −t.
–  For values of d > 0, consensus is known not to be solvable in an asynchronous system with t failures, but we obtain a hierarchy of conditions that allows solving synchronous consensus with protocols that can take more and more rounds, as we go from d = 0 to d = t.
–  d = 0 is the borderline case where consensus can be solved in an asynchronous system with t failures, and can be solved optimally in a synchronous system.
After having established the complete hierarchy, the paper concentrates on the last two items. The main result is that the necessary and sufficient number of rounds needed to solve uniform consensus for such a condition is d + 1. In more detail, the paper presents a generic synchronous early-deciding uniform consensus protocol with the following properties. Let f be the number of actual crashes, I the input vector, and C the condition the protocol is instantiated with. Depending on f and on whether I belongs to C, the protocol terminates in one, two, or at most d + 1 rounds. Moreover, whether or not I belongs to C, no process requires more than min(f + 2, t + 1) rounds to decide. The paper then proves a corresponding lower bound stating that at least d + 1 rounds are necessary to get a decision in the worst case. This paper is based on the DISC '03 and DISC '04 conference versions [MRR03, MRR04].

A. Mostefaoui is currently Associate Professor at the Computer Science Department of the University of Rennes, France. He received his Engineer Degree in Computer Science in 1990 from the University of Algiers, and a Ph.D. in Computer Science in 1994 from the University of Rennes, France. His research interests include fault-tolerance and synchronization in distributed systems, group communication, data consistency and distributed checkpointing. Achour Mostefaoui has published more than 70 scientific publications and served as a reviewer for more than 20 major journals and conferences. Moreover, he heads the software engineering degree of the University of Rennes.

S. Rajsbaum received a degree in Computer Engineering from the National Autonomous University of Mexico (UNAM) in 1985, and a Ph.D. in Computer Science from the Technion, Israel, in 1991. Since then he has been a member of the Institute of Mathematics at UNAM, where he is now a Full Professor with tenure. He has been a regular visiting scientist at the Laboratory for Computer Science of MIT. Also, he was a member of the Cambridge Research Laboratory of HP from 2000 to 2002. He was chair of the program committee for Latin American Theoretical Informatics LATIN 2002 and for ACM Principles of Distributed Computing PODC 2003, and a member of the program committee of various international conferences such as ADHOC, DISC, ICDCS, IPDPS, LADC, PODC, and SIROCCO. His research interests are in the theory of distributed computing, especially issues related to coordination, complexity and computability, and fault-tolerance. He has also published in graph theory and algorithms. Overall, he has published over fifty papers in journals and international conferences. He runs the Distributed Computing Column of SIGACT News, the newsletter of the ACM Special Interest Group on Algorithms and Computation Theory. He has been editor of several special journal issues, such as the Special 20th PODC Anniversary Issue of the Distributed Computing Journal (with H. Attiya) and a Computer Networks journal special issue on algorithms.

M. Raynal has been a professor of computer science since 1981. At IRISA (CNRS-INRIA-University joint computing research laboratory located in Rennes), he founded a research group on Distributed Algorithms in 1983. His research interests include distributed algorithms, distributed computing systems, networks and dependability. His main interest lies in the fundamental principles that underlie the design and construction of distributed computing systems. He has been Principal Investigator of a number of research grants in these areas, and has been invited by many universities all over the world to give lectures on distributed algorithms and distributed computing. He belongs to the editorial board of several international journals. Professor Michel Raynal has published more than 90 papers in journals (JACM, Acta Informatica, Distributed Computing, Communications of the ACM, Information and Computation, Journal of Computer and System Sciences, JPDC, IEEE Transactions on Computers, IEEE Transactions on SE, IEEE Transactions on KDE, IEEE Transactions on TPDS, IEEE Computer, IEEE Software, IPL, PPL, Theoretical Computer Science, Real-Time Systems Journal, The Computer Journal, etc.) and more than 190 papers in conferences (ACM STOC, ACM PODC, ACM SPAA, IEEE ICDCS, IEEE DSN, DISC, IEEE IPDPS, Europar, FST&TCS, IEEE SRDS, etc.). He has also written seven books devoted to parallelism, distributed algorithms and systems (MIT Press and Wiley). Michel Raynal has served on program committees for more than 70 international conferences (including ACM PODC, DISC, ICDCS, IPDPS, DSN, LADC, SRDS, SIROCCO, etc.) and chaired the program committee of more than 15 international conferences (including DISC twice, ICDCS, SIROCCO and ISORC). He served as chair of the steering committee leading the DISC symposium series in 2002-2004. Michel Raynal received the IEEE ICDCS Best Paper Award three times in a row: 1999, 2000 and 2001. He is a general co-chair of the IEEE ICDCS conference to be held in Lisbon in 2006.
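For context, the baseline that condition-based protocols improve upon is classical flooding consensus, which always needs t + 1 synchronous rounds under t crash failures. A toy simulation (the crash pattern is invented for the example) shows why stopping earlier can violate agreement:

```python
# Classical (t+1)-round flooding consensus sketch under crash failures:
# each round, every alive process broadcasts the set of values it has seen;
# after the final round, survivors decide the smallest value seen.
def flood(inputs, crashes, rounds):
    """crashes: {proc: (crash_round, receivers reached in that round)}."""
    n = len(inputs)
    seen = [{v} for v in inputs]
    for rnd in range(rounds):
        incoming = [set() for _ in range(n)]
        for p in range(n):
            if p in crashes:
                cr, reached = crashes[p]
                if rnd > cr:
                    continue                      # already crashed, sends nothing
                targets = reached if rnd == cr else range(n)
            else:
                targets = range(n)
            for q in targets:
                incoming[q] |= seen[p]
        for q in range(n):
            seen[q] |= incoming[q]
    return [min(seen[q]) for q in range(n) if q not in crashes]

# Process 0 crashes in round 0 after reaching only process 1 (so t = 1).
print(flood([1, 2, 3], {0: (0, {1})}, rounds=1))  # disagreement: [1, 2]
print(flood([1, 2, 3], {0: (0, {1})}, rounds=2))  # agreement:    [1, 1]
```

With a single round, the value 1 reaches process 1 but not process 2, so the extra round (t + 1 = 2 in total) is what restores agreement; condition-based protocols can decide faster only when the input vector belongs to the condition.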

12.
Summary. Methodological design of distributed programs is necessary if one is to master the complexity of parallelism. The class of control programs, whose purpose is to observe or detect properties of an underlying program, plays an important role in distributed computing. The detection of a property generally rests upon consistent evaluations of a predicate; such a predicate can be global, i.e. involve states of several processes and channels of the observed program. Unfortunately, in a distributed system, the consistency of an evaluation cannot be trivially obtained. This is a central problem in distributed evaluations. This paper addresses the problem of distributed evaluation, used as a basic tool for the solution of general distributed detection problems. A new evaluation paradigm is put forward, and a general distributed detection program is designed, introducing the iterative scheme of guarded waves sequence. The case of distributed termination detection is then taken to illustrate the proposed methodological design.

Jean-Michel Hélary is currently professor of Computer Science at the University of Rennes, France. He received a first Ph.D. degree in Numerical Analysis in 1968, then another Ph.D. degree in Computer Science in 1988. His research interests include distributed algorithms and protocols, especially their methodological aspects. He is a member of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Jean-Michel Hélary has published several papers on these subjects, and is co-author of a book with Michel Raynal. He serves as a PC member for international conferences.

Michel Raynal is currently professor of Computer Science at the University of Rennes, France. He received the Ph.D. degree in 1981. His research interests include distributed algorithms, operating systems, protocols and parallelism. He is the head of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Michel Raynal has organized several international conferences and has served as a PC member in many international workshops, conferences and symposia. Over the past 9 years, he has written 7 books that constitute an introduction to distributed algorithms and distributed systems (among them: Algorithms for Mutual Exclusion, The MIT Press, 1986, and Synchronization and Control of Distributed Programs, Wiley, 1990, co-authored with J.-M. Hélary). He is currently involved in two European Esprit projects devoted to large-scale distributed systems.

This work was supported by the French Research Program C3 on Parallelism and Distributed Computing. An extended abstract was presented at ISTCS '92 [12].

13.
A knowledge-based system for reactive scheduling decision-making in FMS
This paper describes research into the development of an intelligent simulation environment. The environment was used to analyze reactive scheduling scenarios in a specific flexible manufacturing system (FMS) configuration. Using data from a real FMS, simulation models were created to study the reactive scheduling problem, and this work led to the concept of capturing instantaneous FMS status data as snapshot data for analysis. Various intelligent systems were developed and tested to assess their decision-making capabilities. The concepts of history logging and expert system learning are proposed, and these ideas are implemented in the environment to provide decision-making and control across an FMS schedule lifetime. This research proposes an approach for the analysis of reactive scheduling in an FMS. The approach and system subsequently developed were based on the principle of automated intelligent decision-making via knowledge elicitation from FMS status data, together with knowledge-base augmentation to facilitate a learning ability based on past experiences.

14.

Background

Many papers are published on the topic of software metrics but it is difficult to assess the current status of metrics research.

Aim

This paper aims to identify trends in influential software metrics papers and assess the possibility of using secondary studies to integrate research results.

Method

Search facilities in the SCOPUS tool were used to identify the most cited papers from the years 2000-2005 inclusive. Less cited papers were also selected from 2005. The selected papers were classified according to factors such as main topic, goal and type (empirical, theoretical or mixed). Papers classified as “Evaluation studies” were assessed to investigate the extent to which their results could be synthesized.

Results

Compared with less cited papers, the most cited papers were more frequently journal papers, and more frequently empirical validation or data analysis studies. However, there were problems with some empirical validation studies. For example, they sometimes attempted to evaluate theoretically invalid metrics, and failed to appreciate the importance of the context in which data are collected.

Conclusions

This paper, together with other similar papers, confirms that there is a large body of research related to software metrics. However, software metrics researchers may need to refine their empirical methodology before they can answer useful empirical questions.

15.
As the second part of a special issue on Neural Networks and Structured Knowledge, the contributions collected here concentrate on the extraction of knowledge, particularly in the form of rules, from neural networks, and on applications relying on the representation and processing of structured knowledge by neural networks. The transformation of the low-level internal representation in a neural network into higher-level knowledge or information that can be interpreted more easily by humans and integrated with symbol-oriented mechanisms is the subject of the first group of papers. The second group of papers uses specific applications as a starting point, and describes approaches based on neural networks for the knowledge representation required to solve crucial tasks in the respective application. The companion first part of the special issue [1] contains papers dealing with representation and reasoning issues on the basis of neural networks.

16.
Micro powder injection molding (PIM) is a promising process for low-cost fabrication of three-dimensional microstructures. PIM can be used with a wide range of metal and ceramic materials, combined with the potential for mass production. In this paper, an initial study of the molding of 316L stainless steel microstructures is presented. Three different micro-cavity shapes were used. A fine powder with a mean size of 4 μm was used with two multi-component binder systems. Microstructures with dimensions as small as 35 μm could be injection molded. For successful molding, the binder system must provide high green strength to withstand ejection from the mold, and suitable molding parameters must be used; for example, a high mold temperature is required and the ejection speed must be reduced. The cross-sections of the microstructures are precisely replicated. The general shape in the depth direction is also replicated, although not as well as the cross-section. More work has to be conducted to realize the full potential of the process.

The authors would like to thank the Nanyang Technological University for awarding a research grant to conduct this research and Adeka Fine Chemicals (Tokyo) for the supply of PAN 250 binder.

17.
Property preserving abstractions for the verification of concurrent systems
We study property-preserving transformations for reactive systems. The main idea is the use of simulations parameterized by Galois connections (α, γ) relating the lattices of properties of two systems. We propose and study a notion of preservation of properties expressed by formulas of a logic, by a function mapping sets of states of a system S into sets of states of a system S'. We give results on the preservation of properties expressed in sublanguages of the branching-time μ-calculus when two systems S and S' are related via (α, γ)-simulations. They can be used to verify a property for a system by verifying the same property on a simpler system which is an abstraction of it. We also show under which conditions the abstraction of concurrent systems can be computed from the abstraction of their components. This allows a compositional application of the proposed verification method.

This is a revised version of the papers [2] and [16]; the results are fully developed in [28]. This work was partially supported by ESPRIT Basic Research Action REACT. Verimag is a joint laboratory of CNRS, Institut National Polytechnique de Grenoble, Université J. Fourier and Verilog SA, associated with IMAG.
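The preservation idea can be illustrated on sets of states: α abstracts concrete state sets, γ concretizes abstract ones, and an invariant established on the abstract system transfers to the concrete one. The four-state system and the parity abstraction below are invented for illustration; the paper's framework is far more general.

```python
# Concrete system: a counter mod 4 that steps by +2, so it stays even.
concrete_trans = {s: {(s + 2) % 4} for s in range(4)}
concrete_init = 0

def alpha(states):
    """Abstraction: sets of integers -> parity symbols."""
    return {"even" if s % 2 == 0 else "odd" for s in states}

def gamma(abstract):
    """Concretization: parity symbols -> sets of integers (universe 0..3)."""
    out = set()
    if "even" in abstract:
        out |= {0, 2}
    if "odd" in abstract:
        out |= {1, 3}
    return out

def abstract_succ(a):
    """Existential lifting: abstract successors via gamma, step, alpha."""
    return alpha({t for s in gamma({a}) for t in concrete_trans[s]})

# Reachable abstract states starting from alpha({initial state}).
reach = set(alpha({concrete_init}))
frontier = list(reach)
while frontier:
    a = frontier.pop()
    for b in abstract_succ(a):
        if b not in reach:
            reach.add(b)
            frontier.append(b)

# An invariant verified on the abstraction transfers to the concrete system:
# concrete reachable states are contained in gamma(abstract reachable states).
print(reach)         # the abstraction never reaches "odd"
print(gamma(reach))  # hence concrete states stay within {0, 2}
```

Because the abstract transition relation over-approximates the concrete one, anything true of all abstract reachable states (a universal property) is inherited by the concrete system, which is the preservation result the abstract describes.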

18.
Despite a large body of multidisciplinary research on helpful and user-oriented interface design, the help facilities found in most commercial software are so ill-conceived that they are often unhelpful. Drawing on a wide spectrum of disciplines and software tools, we present an extensive review of related work, identifying its limitations as well as its most promising aspects. Using this material, we attempt to recapitulate the necessary requirements for useful help systems.

19.
The current security infrastructure can be summarized as follows: (1) security systems act locally and do not cooperate in an effective manner, (2) very valuable assets are protected inadequately by antiquated technology systems, and (3) security systems rely on intensive human concentration to detect and assess threats. In this paper we present DETER (Detection of Events for Threat Evaluation and Recognition), a research and development (R&D) project aimed at developing a high-end automated security system. DETER can be seen as an attempt to bridge the gap between current systems, which report isolated events, and an automated cooperating network capable of inferring and reporting threats, a function currently performed by humans. The prototype DETER system is installed at the parking lot of Honeywell Laboratories (HL) in Minneapolis. The computer vision module of DETER reliably tracks pedestrians and vehicles and reports their annotated trajectories to the threat assessment module for evaluation. DETER features a systematic optical and system design that sets it apart from toy surveillance systems. It employs a powerful Normal mixture model at the pixel level supported by an expectation-maximization (EM) initialization, the Jeffreys divergence measure, and the method of moments. It also features a practical and accurate multicamera calibration method. The threat assessment module utilizes the computer vision information and can provide alerts for behaviors as complicated as the hopping of potential vehicle thieves from parking spot to parking spot. Extensive experimental results measured during actual field operations support DETER's exceptional characteristics. DETER has recently been successfully productized; the product-grade version of DETER monitors movements across the length of a new oil pipeline.

Received: 6 July 2001, Accepted: 12 November 2002, Published online: 23 July 2003. Correspondence to: V. Morellas
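The pixel-level modeling can be sketched with a single-Gaussian running estimate per pixel. DETER itself fits a full Normal mixture with EM initialization and the Jeffreys divergence; that machinery is omitted here, and all numbers below are invented for the example.

```python
# Running per-pixel Gaussian background model: a pixel is foreground when
# its value lies more than k standard deviations from the learned mean.
def make_model(width, height, init=0.0):
    return {"mean": [[init] * width for _ in range(height)],
            "var": [[25.0] * width for _ in range(height)]}

def update_and_detect(model, frame, rho=0.05, k=3.0):
    fg = []
    for y, row in enumerate(frame):
        out = []
        for x, v in enumerate(row):
            m = model["mean"][y][x]
            var = model["var"][y][x]
            d2 = (v - m) ** 2
            is_fg = d2 > (k * k) * var
            if not is_fg:  # adapt the background only on matching pixels
                model["mean"][y][x] = (1 - rho) * m + rho * v
                model["var"][y][x] = (1 - rho) * var + rho * d2
            out.append(is_fg)
        fg.append(out)
    return fg

model = make_model(4, 1, init=100.0)
quiet = [[100, 101, 99, 100]]   # background-like frame
burst = [[100, 100, 200, 100]]  # a bright object enters pixel 2
print(update_and_detect(model, quiet))  # all pixels classified background
print(update_and_detect(model, burst))  # pixel 2 flagged as foreground
```

A mixture of several such Gaussians per pixel, as in DETER, additionally copes with multi-modal backgrounds such as swaying trees or flickering lights.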

20.

Context

Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline.

Objective

The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice.

Method

A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering.

Results

As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis.

Conclusion

We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号