Similar Literature
20 similar articles found.
1.
Parallel manipulators: state of the art and perspectives   (Cited: 1; self-citations: 0; by others: 1)
Advanced Robotics, 2013, 27(6): 589–596
Parallel manipulators have been increasingly developed over the last few years, from a theoretical viewpoint as well as for practical applications. In this paper, recent advances are summarized and various applications of this kind of manipulator are illustrated.

2.
To provide a deep, detailed, and comprehensive study of object-oriented software metrics, this paper surveys more than forty years of work, taking the concept of software metrics proposed by Rubey et al. in 1968 as its starting point. The survey covers four aspects: the definition of metric methods, theoretical validation, empirical validation, and supporting tools. It also summarizes representative empirical validations in terms of internal software attributes, external software attributes, the nature of the data sources, the development languages of the data sources, experimental methods, and experimental conclusions. Finally, it points out the existing problems and identifies...

3.
Thermodynamics is the science of the state of a system, whether stable, metastable, or unstable, when it interacts with its surroundings. The combined law of thermodynamics derived by Gibbs about 150 years ago laid the foundation of the field. In Gibbs' combined law, the entropy production due to internal processes was not included; the 2nd law was thus effectively removed, so the law applies only to systems in equilibrium and is commonly termed equilibrium or Gibbs thermodynamics. Gibbs further derived classical statistical thermodynamics in terms of the probability of configurations in a system in the late 1800s and early 1900s. With quantum mechanics (QM) developed in the 1920s, QM-based statistical thermodynamics was established and, as Landau showed in the 1940s, connects to classical statistical thermodynamics at the classical limit. In the 1960s, the development of density functional theory (DFT) by Kohn and co-workers enabled the QM prediction of properties of the ground state of a system. Separately, the entropy production due to internal processes in non-equilibrium systems was studied by Onsager in the 1930s and by Prigogine and co-workers in the 1950s. In the 1960s and 1970s, the digitization of thermodynamics was developed by Kaufman in the framework of the CALculation of PHAse Diagrams (CALPHAD) modeling of individual phases with internal degrees of freedom. CALPHAD modeling of thermodynamic and atomic-transport properties has enabled computational design of complex materials over the last 50 years. Our recently termed zentropy theory integrates DFT and statistical mechanics by replacing the internal energy of each individual configuration with its DFT-predicted free energy. The zentropy theory can accurately predict the free energy of individual phases, transition temperatures, and properties of magnetic and ferroelectric materials, with the free energies of individual configurations obtained solely from DFT-based calculations and without fitting parameters; it is being tested on other phenomena including superconductivity, quantum criticality, and black holes. Those predictions include the singularity at critical points with divergence of physical properties, negative thermal expansion, and strongly correlated physics. The individual configurations may thus be considered the genomic building blocks of individual phases in the spirit of the materials genome®. This has the potential to shift the paradigm of CALPHAD modeling from being heavily dependent on experimental inputs to becoming fully predictive, with inputs solely from DFT-based calculations and from machine learning models built on those calculations and existing experimental data through newly developed and future open-source tools. Furthermore, through the combined law of thermodynamics including the internal entropy production, it is shown that the kinetic coefficient matrix of independent internal processes is diagonal with respect to the conjugate potentials in the combined law, and that the cross phenomena described by the phenomenological Onsager fluxes and reciprocal relationships arise from the dependence of the conjugate potential of a molar quantity on nonconjugate molar quantities and other potentials, which can be predicted by the zentropy theory and CALPHAD modeling.
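As a reading aid, here is a compact restatement of the zentropy construction described above, in the notation common to the zentropy literature, where F_k and S_k are the DFT-predicted free energy and entropy of configuration k; this is a sketch of the published formulation, not a new result:

```latex
% Zentropy: the Gibbs factor of each configuration k is built from its
% DFT-predicted free energy F_k rather than its internal energy E_k.
Z = \sum_{k} \exp\!\left(-\frac{F_k}{k_B T}\right), \qquad
p_k = \frac{1}{Z}\,\exp\!\left(-\frac{F_k}{k_B T}\right), \qquad
F = -k_B T \ln Z
% The total entropy then contains the entropies of the individual
% configurations plus the configurational mixing term:
S = \sum_k p_k S_k \;-\; k_B \sum_k p_k \ln p_k
```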

4.
5.
Virtual reconstruction and representation of historical environments and objects have been of research interest for nearly two decades. Physically based and historically accurate illumination allows archaeologists and historians to authentically visualise a past environment and deduce new knowledge from it. This report reviews the current state of illuminating cultural heritage sites and objects using computer graphics for scientific, preservation and research purposes. We present the most noteworthy and up-to-date examples of reconstructions employing appropriate illumination models in object and image space, and in the visual perception domain. Finally, we also discuss the difficulties in rendering, documentation and validation, and identify probable research challenges for the future. The report is aimed at researchers new to cultural heritage reconstruction who wish to learn about methods to illuminate the past.
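To give a concrete flavor of "physically based" in this context, the sketch below computes direct illumination of a Lambertian surface patch by an isotropic point source (a crude stand-in for a candle flame), with inverse-square falloff. The intensity, albedo and geometry are illustrative assumptions, not values from the report:

```python
# Minimal sketch of physically based direct illumination: an isotropic
# point source (a crude stand-in for a candle flame) lighting a
# Lambertian surface patch. Intensity and geometry are illustrative.

import math

def lambert_point_light(surface_pos, normal, light_pos, intensity):
    """Reflected radiosity of a Lambertian patch lit by a point source."""
    d = [l - s for l, s in zip(light_pos, surface_pos)]
    dist2 = sum(c * c for c in d)
    d_hat = [c / math.sqrt(dist2) for c in d]
    cos_theta = max(0.0, sum(n * c for n, c in zip(normal, d_hat)))
    irradiance = intensity * cos_theta / dist2   # inverse-square falloff
    albedo = 0.4                                 # assumed surface reflectance
    return albedo * irradiance                   # Lambertian: view-independent

# A patch on the floor, lit by a "candle" 1 m above and 1 m to the side.
print(lambert_point_light((0, 0, 0), (0, 0, 1), (1, 0, 1), intensity=1.0))
```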

6.
OSIRIDE (OSI over Heterogeneous Italian Data Network) is the result of the efforts of a number of people, mainly from the Italian National Research Council and industry. The first phase of OSIRIDE (named OSIRIDE/I), dealing with the production of formal specifications for a number of protocols, was completed on schedule and in accordance with the initial objectives. The second phase of the project (named OSIRIDE/II), dealing with implementation, started at the beginning of 1984. Previous papers have already introduced OSIRIDE, its goals, participants, and technical choices. However, for the sake of understandability, a brief résumé of the whole project is given in the first sections; for more information the interested reader is referred to the papers cited.

7.
Protocol design: redefining the state of the art   (Cited: 1; self-citations: 0; by others: 1)
Holzmann, G.J. IEEE Software, 1992, 9(1): 17–22
The application of formal methods to high-level protocol design is addressed. A formal method is considered to be one that has the capability of rendering correctness proofs. The traditional and formal design processes are described and compared. The framework for proving logical correctness in protocol engineering is then discussed.
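To make "rendering correctness proofs" concrete: exhaustive-search verification explores every reachable state of a protocol model and checks a correctness claim in each. Holzmann's own work led to the SPIN model checker and its Promela language; the Python fragment below is only an illustrative stand-in, with a fabricated toy protocol:

```python
# Toy illustration of exhaustive-search protocol verification in the
# spirit of model checking (Holzmann's actual tool is SPIN, which uses
# Promela; this Python sketch is only a stand-in). We model a 1-slot
# channel and check a safety property over all reachable states of a
# naive stop-and-wait exchange.

from collections import deque

# State: (sender_state, receiver_state, channel); channel holds a message or None
INIT = ("ready", "waiting", None)

def successors(state):
    s, r, chan = state
    nxt = []
    if s == "ready" and chan is None:      # sender emits a message
        nxt.append(("sent", r, "msg"))
    if r == "waiting" and chan == "msg":   # receiver consumes it
        nxt.append((s, "got", None))
    if s == "sent" and r == "got":         # both sides reset
        nxt.append(("ready", "waiting", None))
    return nxt

def check(invariant):
    seen, frontier = {INIT}, deque([INIT])
    while frontier:                        # breadth-first reachability
        state = frontier.popleft()
        assert invariant(state), f"violation in {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

# Safety property: the receiver never reports "got" while a message
# is still undelivered in the channel.
n = check(lambda st: not (st[1] == "got" and st[2] == "msg"))
print(f"invariant holds over all {n} reachable states")
```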

8.
Advances in multimedia data acquisition and storage technology have led to the growth of very large multimedia databases. Analyzing this huge amount of multimedia data to discover useful knowledge is a challenging problem. This challenge has opened the opportunity for research in Multimedia Data Mining (MDM). Multimedia data mining can be defined as the process of finding interesting patterns from media data such as audio, video, image and text that are not ordinarily accessible by basic queries and associated results. The motivation for doing MDM is to use the discovered patterns to improve decision making. MDM has therefore attracted significant research efforts in developing methods and tools to organize, manage, search and perform domain-specific tasks for data from domains such as surveillance, meetings, broadcast news, sports, archives, movies, medical data, as well as personal and online media collections. This paper presents a survey on the problems and solutions in Multimedia Data Mining, approached from the following angles: feature extraction, transformation and representation techniques, data mining techniques, and current multimedia data mining systems in various application domains. We discuss the main aspects of feature extraction, transformation and representation techniques: the level of feature extraction, feature fusion, feature synchronization, feature correlation discovery and accurate representation of multimedia data. A comparison of MDM techniques with state-of-the-art video processing, audio processing and image processing techniques is also provided. Similarly, we compare MDM techniques with state-of-the-art data mining techniques involving clustering, classification, sequence pattern mining, association rule mining and visualization. We review current multimedia data mining systems in detail, grouping them according to problem formulations and approaches. The review includes supervised and unsupervised discovery of events and actions from one or more continuous sequences. We also provide a detailed analysis of what has been achieved and of the remaining gaps on which future research efforts could be focused. We then conclude this survey with a look at open research directions.
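As a toy illustration of the feature-extraction and mining stages surveyed above, the sketch below clusters synthetic "color histogram" features with plain k-means; the data and parameters are fabricated for illustration and do not come from any system reviewed in the paper:

```python
# Minimal sketch of two MDM stages: low-level feature extraction
# followed by unsupervised pattern discovery. Synthetic 8-bin
# "color histograms" stand in for real image features.

import numpy as np

rng = np.random.default_rng(0)

def extract_histogram(pixels, bins=8):
    """Low-level feature: normalized intensity histogram of one image."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# Two synthetic "genres": predominantly dark images and bright images.
images = [rng.beta(2, 5, 1024) for _ in range(20)] + \
         [rng.beta(5, 2, 1024) for _ in range(20)]
X = np.array([extract_histogram(img) for img in images])

def kmeans(X, k=2, iters=50):
    """Plain k-means: the simplest mining step over extracted features."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

print(kmeans(X))  # the two synthetic genres fall into separate clusters
```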

9.
10.
Moving target defense (MTD) has emerged as one of the game-changing themes to alter the asymmetric situation between attacks and defenses in cyber-security. Numerous related works involving several facets of MTD have been published. However, comprehensive analyses and research on MTD are still absent. In this paper, we present a survey on MTD technologies to scientifically and systematically introduce, categorize, and summarize the existing research works in this field. First, a new security model is introduced to describe the changes in the traditional defense paradigm and security model caused by the introduction of MTD. A function-and-movement model is provided to give a panoramic overview of the different perspectives for understanding existing MTD research works. Then a systematic interpretation of the published literature is presented to describe the state of the art of the three main areas in the MTD field, namely, MTD theory, MTD strategy, and MTD evaluation. Specifically, in the area of MTD strategy, the common characteristics shared by MTD strategies to improve system security and effectiveness are identified and extrapolated. Thereafter, the methods for implementing these characteristics are summarized. Moreover, the MTD strategies are classified into three types according to their specific goals, and the necessary and sufficient conditions for each type to create effective MTD strategies, typically one or more of the aforementioned characteristics, are then summarized. Finally, we provide a number of observations on future directions in this field, which can be helpful for subsequent researchers.
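For concreteness, one classic network-layer "movement" is time-based port hopping, in which the service port is re-derived each time slot from a shared secret so that an attacker's reconnaissance results go stale. A minimal sketch, assuming a pre-shared key and loosely synchronized clocks; the slot length and port range are illustrative:

```python
# Minimal sketch of a single MTD "movement": time-based port hopping.
# Both endpoints derive the current service port from a shared secret
# and a coarse time slot, so a scanned port number quickly goes stale.
# All parameters (slot length, port range) are illustrative assumptions.

import hmac, hashlib, time

SECRET = b"shared-secret"   # pre-shared between client and server
SLOT_SECONDS = 30           # how often the port moves
PORT_BASE, PORT_RANGE = 20000, 10000

def current_port(secret=SECRET, now=None):
    slot = int((now if now is not None else time.time()) // SLOT_SECONDS)
    digest = hmac.new(secret, str(slot).encode(), hashlib.sha256).digest()
    return PORT_BASE + int.from_bytes(digest[:4], "big") % PORT_RANGE

# Client and server compute the same port independently in each slot.
print(current_port(now=0), current_port(now=31))  # two different slots
```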

11.
IT Professional, 2004, 6(5): 38–44
In the last 20 years, software engineering standards have evolved from documenting processes in the military complex to supporting the total software life cycle. Standards that define software engineering processes are still progressing in terms of the breadth and depth of their coverage and the maturity of the standards themselves. In this regard, they are moving closer to standards in older engineering professions. The shift from many major producers to just a few has changed the industry. As we've moved to a global economy, vendors have put more faith in international standards as a means of selling their products, and reducing the number of major producers has made the remaining ones more powerful. In this article, we examine the state of the art of software engineering process standards and discuss challenges that the profession must address. Our focus is on all existing software process standards.

12.
Ontology is one of the fundamental cornerstones of the Semantic Web. The pervasive use of ontologies in information sharing and knowledge management calls for efficient and effective approaches to ontology development. Ontology learning, which seeks to discover ontological knowledge from various forms of data automatically or semi-automatically, can overcome the bottleneck of ontology acquisition in ontology development. Despite the significant progress in ontology learning research over the past decade, a number of open problems remain in this field. This paper provides a comprehensive review and discussion of the major issues, challenges, and opportunities in ontology learning. We propose a new learning-oriented model for ontology development and a framework for ontology learning. Moreover, we identify and discuss important dimensions for classifying ontology learning approaches and techniques. In light of the impact of domain on choosing ontology learning approaches, we summarize domain characteristics that can facilitate future ontology learning efforts. The paper offers a road map and a variety of insights into this fast-growing field.
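One concrete technique of the kind covered by such surveys is lexico-syntactic pattern matching (Hearst patterns) for discovering is-a relations from text. Below is a minimal regex-only sketch with a fabricated example sentence; production systems add parsing, term normalization, and statistical filtering:

```python
# Minimal sketch of one concrete ontology-learning technique: Hearst-
# pattern extraction of is-a (hypernym) relations from free text.
# This regex-only version is purely illustrative.

import re

HEARST = re.compile(
    r"(?P<hyper>\w+) (?:such as|including) "
    r"(?P<hypos>\w+(?:, \w+)*(?:,? (?:and|or) \w+)?)"
)

def extract_isa(text):
    pairs = []
    for m in HEARST.finditer(text):
        hyper = m.group("hyper").lower()
        for hypo in re.split(r"\s*(?:,|\band\b|\bor\b)\s*", m.group("hypos")):
            if hypo:
                pairs.append((hypo.lower(), hyper))
    return pairs

text = ("Instruments such as violins, cellos and flutes need care. "
        "Databases including MySQL and PostgreSQL store data.")
print(extract_isa(text))
# [('violins', 'instruments'), ('cellos', 'instruments'),
#  ('flutes', 'instruments'), ('mysql', 'databases'), ...]
```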

13.
The electromagnetic (EM)-simulator-based tuning process for rapid microwave design can combine EM accuracy with circuit-design speed. Our own approach is based on the intuitive engineering idea of “space mapping.” In this article, we explain the art of microwave design optimization through “tuning space mapping” procedures. We list various appropriate types of models (called “surrogates”). We demonstrate the implementation of these surrogates through a simple bandstop filter. We provide application examples using commercial simulation software. Our purpose is to help microwave engineers understand the tuning space mapping methodology and to inspire new implementations and applications. © 2012 Wiley Periodicals, Inc. Int J RF and Microwave CAE 22: 639–651, 2012.
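To convey the flavor of space mapping in the simplest terms: a cheap coarse model is optimized, and each expensive fine-model evaluation is used to extract the coarse-model input that reproduces it, with the resulting misalignment driving the next design update. The 1-D quadratic "models" below are illustrative stand-ins, not the authors' filter example or their exact tuning procedure:

```python
# Toy sketch of the "aggressive space mapping" idea: a cheap coarse
# model is repeatedly re-aligned with an expensive fine model via
# parameter extraction. The 1-D quadratic "models" are illustrative.

import numpy as np

fine   = lambda x: (x - 2.5) ** 2   # "EM simulation": expensive, true optimum 2.5
coarse = lambda x: (x - 2.0) ** 2   # "circuit model": cheap, optimum 2.0

grid = np.linspace(0.0, 5.0, 50001)
x_coarse_opt = grid[np.argmin(coarse(grid))]

x = x_coarse_opt                    # start at the coarse optimum
for k in range(10):
    # Parameter extraction: coarse input whose response matches fine(x)
    p = grid[np.argmin(np.abs(coarse(grid) - fine(x)))]
    error = p - x_coarse_opt        # misalignment of the two models
    if abs(error) < 1e-6:
        break
    x = x - error                   # unit-Jacobian ASM-style update
print(f"converged to x = {x:.4f} in {k + 1} iterations")
```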

14.
Conventional information retrieval systems usually involve searching by terms from controlled vocabularies or by individual words in the text. These systems have been commercially successful but are limited by several problems, including cumbersome interfaces and inconsistency with human indexing. Research on methods that automate indexing and retrieval has been performed to address these problems. The three major types of automated systems are vector-based, probabilistic, and linguistic. This article describes these systems and provides an overview of the field of information retrieval in medicine.
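For concreteness, here is a minimal sketch of the vector-based model named above: documents and query are represented as TF-IDF vectors and ranked by cosine similarity. The tiny "clinical note" corpus is fabricated for illustration:

```python
# Minimal sketch of vector-based retrieval: documents and query become
# TF-IDF vectors, ranked by cosine similarity. Corpus is fabricated.

import math
from collections import Counter

docs = [
    "patient presents with chest pain and shortness of breath",
    "mri of the knee shows a meniscal tear",
    "chest x ray clear no acute cardiopulmonary disease",
]

def tfidf(corpus):
    n = len(corpus)
    tokenized = [doc.split() for doc in corpus]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = [{t: c * idf[t] for t, c in Counter(toks).items()}
               for toks in tokenized]
    return vectors, idf

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vectors, idf = tfidf(docs)
query = {t: idf.get(t, 0.0) for t in "chest pain".split()}
ranked = sorted(range(len(docs)), key=lambda i: -cosine(query, vectors[i]))
print(ranked)  # document 0 ranks first for "chest pain"
```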

15.
16.
Visual information is highly advantageous for the evolutionary success of almost all animals. This information is likewise critical for many computing tasks, and visual computing has achieved tremendous successes in numerous applications over the last 60 years or so. In that time, the development of visual computing has repeatedly drawn inspiration from biological mechanisms. In particular, deep neural networks were inspired by the hierarchical processing mechanisms that exist in the visual cortex of primate brains (including ours), and have achieved huge breakthroughs in many domain-specific visual tasks. In order to better understand biologically inspired visual computing, we present a survey of the current work, and hope to offer some new avenues for rethinking visual computing and designing novel neural network architectures.
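Below is a minimal numpy sketch of the cortex-inspired motif mentioned above: alternating local filtering (convolution, loosely analogous to simple cells) and pooling (loosely analogous to complex cells), stacked so that receptive fields grow layer by layer. The filter and input are illustrative:

```python
# Minimal numpy sketch of the hierarchical convolution-pooling motif
# that deep networks borrow from the visual cortex. The edge filter
# and random image are illustrative, not from the paper.

import numpy as np

def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)            # ReLU-like nonlinearity

def pool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]                           # crop to even dimensions
    return np.maximum.reduce([x[0::2, 0::2], x[0::2, 1::2],
                              x[1::2, 0::2], x[1::2, 1::2]])

rng = np.random.default_rng(0)
image = rng.random((16, 16))
edge = np.array([[1.0, 0.0, -1.0]] * 3)     # crude vertical-edge detector

layer1 = pool2x2(conv2d(image, edge))       # local features, small receptive field
layer2 = pool2x2(conv2d(layer1, edge))      # composite features, larger field
print(image.shape, layer1.shape, layer2.shape)   # (16,16) -> (7,7) -> (2,2)
```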

17.
Model-free adaptive control: current status and prospects   (Cited: 15; self-citations: 4; by others: 15)
This paper gives a definition of model-free control and classifies the existing model-free control methods. The current status of and progress in the theory and methods of model-free adaptive control are reviewed. The main differences between model-free adaptive control and other control methods are discussed, two modular design schemes are proposed that combine the complementary advantages of model-free adaptive control methods and existing model-based control methods, and problems requiring further research are identified.
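For concreteness, here is a minimal sketch of a compact-form dynamic-linearization model-free adaptive control (CFDL-MFAC) loop of the kind such surveys cover, in which an estimated pseudo partial derivative replaces any plant model; the plant and all tuning constants are illustrative assumptions, and standard reset rules for the estimate are omitted:

```python
# Minimal sketch of compact-form dynamic-linearization model-free
# adaptive control (CFDL-MFAC): only measured I/O data y(k), u(k) are
# used; an estimated "pseudo partial derivative" phi replaces any plant
# model. Plant and tuning constants are illustrative assumptions.

def plant(y, u):
    """Unknown nonlinear plant, used here only to generate I/O data."""
    return 0.6 * y + 0.5 * u / (1.0 + y * y)

eta, mu = 1.0, 1.0      # PPD-estimator gains
rho, lam = 0.8, 0.1     # controller gains
y_ref = 1.0             # constant set point

y_prev, y = 0.0, 0.0
u_prev, u = 0.0, 0.0
phi = 1.0               # initial PPD estimate

for k in range(40):
    du, dy = u - u_prev, y - y_prev
    # Projection-type update of the pseudo partial derivative estimate
    if abs(du) > 1e-12:
        phi += eta * du * (dy - phi * du) / (mu + du * du)
    # One-step-ahead control law driven only by measured data
    u_next = u + rho * phi * (y_ref - y) / (lam + phi * phi)
    y_next = plant(y, u_next)
    u_prev, u = u, u_next
    y_prev, y = y, y_next

print(f"y after 40 steps: {y:.4f} (set point {y_ref})")
```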

18.
19.
SUOWA operators are a particular case of the Choquet integral that simultaneously generalize weighted means and OWA operators. Because they are constructed by using normalized capacities, they possess properties such as continuity, monotonicity, idempotency, compensativeness, and homogeneity of degree 1. Besides these, some articles published in recent years have shown that SUOWA operators also exhibit other interesting properties. We therefore think the time has come to summarize existing knowledge of these operators. The aim of this paper is to collect the main results obtained so far on SUOWA operators. Moreover, we also introduce some new results and illustrate the usefulness of SUOWA operators by using an example given by Beliakov (2018).
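For concreteness, the sketch below computes the discrete Choquet integral on which SUOWA operators are built, and shows how a weighted mean and an OWA operator arise from two particular capacities. The SUOWA capacity construction itself (built from the two weight vectors via semi-uninorms) is omitted; the weights and inputs are illustrative:

```python
# Minimal sketch of the discrete Choquet integral underlying SUOWA
# operators. Weighted means and OWA operators arise from the two
# special capacities below; the SUOWA capacity construction is omitted.

def choquet(x, capacity):
    """Discrete Choquet integral of x w.r.t. a set function `capacity`."""
    idx = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
    total, prev = 0.0, 0.0
    for r in range(len(x)):
        a = frozenset(idx[: r + 1])          # the r+1 largest arguments
        total += x[idx[r]] * (capacity(a) - prev)
        prev = capacity(a)
    return total

p = [0.5, 0.3, 0.2]   # weighted-mean weights (one per argument)
w = [0.6, 0.3, 0.1]   # OWA weights (one per rank)

weighted_mean_cap = lambda A: sum(p[i] for i in A)
owa_cap = lambda A: sum(w[: len(A)])

x = [0.4, 0.9, 0.1]
print(choquet(x, weighted_mean_cap))  # == sum(p[i]*x[i]) = 0.49
print(choquet(x, owa_cap))            # == 0.6*0.9 + 0.3*0.4 + 0.1*0.1 = 0.67
```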

20.
Computer-supported argumentation: A review of the state of the art   (Cited: 1; self-citations: 0; by others: 1)
Argumentation is an important skill to learn. It is valuable not only in many professional contexts, such as the law, science, politics, and business, but also in everyday life. However, not many people are good arguers. In response to this, researchers and practitioners over the past 15–20 years have developed software tools both to support and teach argumentation. Some of these tools are used in individual fashion, to present students with the “rules” of argumentation in a particular domain and give them an opportunity to practice, while other tools are used in collaborative fashion, to facilitate communication and argumentation between multiple, and perhaps distant, participants. In this paper, we review the extensive literature on argumentation systems, both individual and collaborative, and both supportive and educational, with an eye toward particular aspects of the past work. More specifically, we review the types of argument representations that have been used, the various types of interaction design and ontologies that have been employed, and the system architecture issues that have been addressed. In addition, we discuss intelligent and automated features that have been imbued in past systems, such as automatically analyzing the quality of arguments and providing intelligent feedback to support and/or tutor argumentation. We also discuss a variety of empirical studies that have been done with argumentation systems, including, among other aspects, studies that have evaluated the effect of argument diagrams (e.g., textual versus graphical), different representations, and adaptive feedback on learning argumentation. Finally, we conclude by summarizing the “lessons learned” from this large and impressive body of work, particularly focusing on lessons for the CSCL research community and its ongoing efforts to develop computer-mediated collaborative argumentation systems.

