20 similar documents found; search took 15 ms
1.
2.
Perception–action (PA) architectures are capable of solving a number of problems associated with artificial cognition, in particular, difficulties concerned with framing and symbol grounding. Existing PA algorithms tend to be ‘horizontal’ in the sense that learners maintain their prior percept–motor competences unchanged throughout learning. We here present a methodology for simultaneous ‘horizontal’ and ‘vertical’ perception–action learning in which there additionally exists the capability for incremental accumulation of novel percept–motor competences in a hierarchical fashion. The proposed learning mechanism commences with a set of primitive ‘innate’ capabilities and progressively modifies itself via recursive generalising of parametric spaces within the linked perceptual and motor domains so as to represent environmental affordances in a maximally compact manner. Efficient reparameterising of the percept domain is here accomplished by the exploratory elimination of dimensional redundancy and environmental context. Experimental results demonstrate that this approach exhibits an approximately linear increase in computational requirements when learning in a typical unconstrained environment, as compared with at least polynomially-increasing requirements for a classical perception–action system.
3.
For the human–machine intelligent system formed by service robots and special user groups such as elderly and disabled people, a human–machine coupled cooperative operation mechanism for indoor mobile service robots based on the ACT-R (Adaptive Control of Thought–Rational) cognitive architecture is proposed. The overall human–machine integrated cooperative operation system for indoor mobile service robots is designed on the basis of the ACT-R cognitive architecture; a human–machine coupling interface based on ACT-R is designed using simple and natural human–machine interaction channels; and a human–machine integrated cooperative decision-making mechanism for indoor mobile service robots is proposed and established through human–machine–environment spatial perception coupling. Finally, cooperative operation experiments with a mobile service robot in an indoor environment show that the system completes its tasks safely and efficiently, verifying the effectiveness of the mechanism.
4.
Simulating human cognitive processes is an important research area in artificial intelligence and computational cognitive science. Cognitive architectures modularise human cognition and simulate cognitive processes through the interaction of those modules. Adaptive Control of Thought–Rational (ACT-R) is a representative theory in cognitive-architecture research; as a hybrid cognitive architecture, ACT-R uses a symbolic system together with a sub-symbolic system to model human cognition. ACT-R research has been applied to intelligent tutoring, intelligent agents and other fields, and has attracted growing attention. This paper surveys the development of ACT-R and related research, covering the composition and module functions of the symbolic and sub-symbolic systems as well as applications of ACT-R.
5.
Lukasz G. Szafaryn Todd Gamblin Bronis R. de Supinski Kevin Skadron 《Journal of Parallel and Distributed Computing》2013
The increasing computational needs of parallel applications inevitably require portability across parallel architectures, which now include heterogeneous processing resources, such as CPUs and GPUs, and multiple SIMD/SIMT widths. However, the lack of a common parallel programming paradigm that provides predictable, near-optimal performance on each resource leads to the use of low-level frameworks with architecture-specific optimizations, which in turn cause the code base to diverge and makes porting difficult. Our experiences with parallel applications and frameworks lead us to the conclusion that achieving performance portability requires a common set of high-level directives and efficient mapping onto each architecture.
6.
Chrysostomos D. Stylios Voula C. Georgopoulos Georgia A. Malandraki Spyridoula Chouliara 《Applied Soft Computing》2008,8(3):1243-1251
Medical decision support systems can provide assistance in crucial clinical judgments, particularly for inexperienced medical professionals. Fuzzy cognitive maps (FCMs) are a soft computing technique for modeling complex systems, which follows an approach similar to human reasoning and the human decision-making process. FCMs can successfully represent knowledge and human experience, introducing concepts to represent the essential elements and the cause-and-effect relationships among the concepts to model the behavior of any system. Medical decision systems are complex systems that can be decomposed into related and non-related subsystems and elements, where many factors have to be taken into consideration that may be complementary, contradictory, and competitive; these factors influence each other and determine the overall clinical decision to a different degree. Thus, FCMs are suitable for medical decision support systems, and appropriate FCM architectures are proposed and developed, with corresponding examples described from two medical disciplines, i.e. speech and language pathology and obstetrics.
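The FCM inference this abstract describes iterates concept activations through a signed causal weight matrix until the state stabilises. A minimal sketch of that update rule (the three concepts and the weights below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

def fcm_step(state, weights):
    """One FCM update: each concept's next activation is a sigmoid-squashed
    weighted sum of the current activations along causal edges."""
    return 1.0 / (1.0 + np.exp(-(state @ weights)))

def fcm_infer(state, weights, iters=50, tol=1e-5):
    """Iterate the update until the activation vector stabilises."""
    for _ in range(iters):
        nxt = fcm_step(state, weights)
        if np.max(np.abs(nxt - state)) < tol:
            return nxt
        state = nxt
    return state

# Illustrative concepts: symptom, risk factor, clinical decision.
# W[i, j] is the causal influence of concept i on concept j.
W = np.array([[0.0, 0.0, 0.7],
              [0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0]])
final = fcm_infer(np.array([1.0, 0.5, 0.0]), W)
```

Here both antecedent concepts push the decision concept above its neutral activation of 0.5, which is the qualitative behaviour an FCM-based decision support system reads off the converged state.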
7.
Data warehouse workloads are crucial for the support of on-line analytical processing (OLAP). The strategy to cope with OLAP queries on such huge amounts of data calls for the use of large parallel computers. The trend today is to use cluster architectures that show a reasonable balance between cost and performance. In such cases, it is necessary to tune the applications in order to minimize the amount of I/O and communication, such that the global execution time is reduced as much as possible.
In this paper, we model and analyze the most up-to-date strategies for ad hoc star join query processing in a cluster of computers. We show that, for ad hoc query processing and assuming a limited amount of resources available, these strategies still have room for improvement both in terms of I/O and inter-node data traffic communication. Our analysis concludes with the proposal of a hybrid solution that improves these two aspects compared to the previous techniques, and shows near optimal results in a broad spectrum of cases.
8.
Engineering AgentSpeak(L): A Formal Computational Model
9.
Ambra Molesini Alessandro Garcia 《Journal of Systems and Software》2010,83(5):711-722
Design of stable software architectures has increasingly been a deep challenge to software developers due to the high volatility of their concerns and respective design decisions. Architecture stability is the ability of the high-level design units to sustain their modularity properties and not succumb to modifications. Architectural aspects are new modularity units aimed at improving design stability through the modularization of otherwise crosscutting concerns. However, there is no empirical knowledge about the positive and negative influences of aspectual decompositions on architecture stability. This paper presents an exploratory analysis of the influence exerted by aspect-oriented composition mechanisms in the stability of architectural modules addressing typical crosscutting concerns, such as error handling and security. Our investigation encompassed a comparative analysis of aspectual and non-aspectual decompositions based on different architectural styles applied to an evolving multi-agent software architecture. In particular, we assessed various facets of components’ and compositions’ stability through such alternative designs of the same multi-agent system using conventional quantitative indicators. We have also investigated the key characteristics of aspectual decompositions that led to (in)stabilities being observed in the target architectural options. The evaluation focused upon a number of architecturally-relevant changes that are typically performed through real-life maintenance tasks.
10.
11.
An approach to develop Enterprise Integration Programs to assist enterprises in their migration path towards integration is proposed. It is called IE—GIP (Enterprise Integration—Business Processes Integrated Management, acronyms in Spanish). The topic is very important in industrial engineering nowadays because of the growing need to improve existing industrial systems and to organise such complex systems faster, better, and in a more systematic way. The contribution to the field of Enterprise Integration is mostly a methodological one. More specifically, it is based on the integration of Purdue Enterprise Reference Architecture (PERA) and Open System Architecture for Computer Integrated Manufacturing (CIMOSA) principles to propose an integration approach for industrial enterprises. Starting from existing leading proposals (CIMOSA, PERA, GERAM), a methodology has been defined and some extension to an architecture and supporting computer tools are discussed. The proposal covers the life cycle of an Enterprise Integration Program in a top-down approach. The approach is centred on the business process concept, but is based on a vision/process/people/technology view of the enterprise. The methodology divides the work into two major phases before system construction: master planning and CIM programme development. The method adapts the system life cycle of PERA but uses, whenever possible, the CIMOSA architecture with its business process approach. New CIMOSA-like constructs are introduced to be used in activities along with the methodology when necessary. To support the modelling phases of the proposal and to provide guidance to users of the methodology, computer supported tools have been developed in the course of this work.
12.
Vassilios Peristeras Manuel Fradinho Deirdre Lee Wolfgang Prinz Rudolf Ruland Kashif Iqbal Stefan Decker 《Service Oriented Computing and Applications》2009,3(1):3-23
In this paper, we present the collaborative environment reference architecture (CERA) with the aim of supporting collaborative work environment (CWE) interoperability. The vision of CERA is to support users who are engaged in common collaborative spaces with similar work processes to work and collaborate seamlessly together, despite their use of proprietary CWE tools and systems. The underlying CERA concepts, design principles, and models are discussed, as well as the architectural decisions made as a result of the extended requirements analysis exercise. Furthermore, we present results from the Ecospace project as an example of a CERA instantiation which focuses on facilitating users collaborating across different CWE systems, namely BSCW, NetWeaver, and BC. We conclude with future research and implementation directions.
13.
Alan Jay Smith 《Performance Evaluation》1981,1(2):104-117
The file system, and the components of the computer system associated with it (disks, drums, channels, mass storage, tapes and tape drives, controllers, I/O drivers, etc.) comprise a very substantial fraction of most computer systems; substantial in several aspects including amount of operating system code, expense for components, physical size and effect on performance. In this paper we survey the state of the art in file and I/O system design and optimization as it applies to large data processing installations. In a companion paper, some research results applicable to both current and future system designs are summarized. Among the topics we discuss is the optimization of current file systems, where some material is provided regarding block size choice, data set placement, disk arm scheduling, rotational scheduling, compaction, fragmentation, I/O multipathing and file data structures. A set of references to the literature, especially to analytic I/O system models, is presented. The general tuning of file and I/O systems is also considered. Current and forthcoming disk architectures are the second topic. The count key data architecture of current disks (e.g. IBM 3350, 3380) and the fixed block architecture of new products (IBM 3310, 3370) are compared. The use of semiconductor drum replacements is considered and some commercially available systems are briefly described.
14.
Reengineering for service oriented architectures: A strategic decision model for integration versus migration
Amjad Umar Adalberto Zordan 《Journal of Systems and Software》2009,82(3):448-462
Service Oriented Architecture (SOA) is a popular paradigm at present because it provides a standards-based conceptual framework for flexible and adaptable enterprise-wide systems. This implies that most present systems need to be reengineered to become SOA compliant. However, SOA reengineering projects raise serious strategic as well as technical questions that require management oversight. This paper, based on practical experience with SOA projects, presents a decision model for SOA reengineering projects that combines strategic and technical factors with cost-benefit analysis for integration versus migration decisions. The paper identifies the key issues that need to be addressed in enterprise application reengineering projects for SOA, examines the strategic alternatives, explains how the alternatives can be evaluated based on architectural and cost-benefit considerations and illustrates the main ideas through a detailed case study.
15.
In this paper, we propose a method for integrating cognitive maps and neural networks to gain competitive advantage using qualitative information acquired from news information on the World Wide Web. We have developed the KBNMiner, which is designed to represent the knowledge of domain experts with cognitive maps, to search and retrieve news information on the Internet according to the knowledge and to apply the information to a neural network model. In addition, we investigate ways to train neural networks more effectively by separating the learning data into two groups on the basis of event information acquired from news information. To validate our proposed method, we applied 180,000 news articles to the KBNMiner. The experimental results are found to support our proposed method through tenfold cross‐validation.
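Tenfold cross-validation, used above to validate the KBNMiner results, partitions the data into ten folds and rotates the held-out fold so every example is tested exactly once. A minimal sketch of that partitioning with plain NumPy (the fold generator and data sizes are illustrative, not the paper's setup):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)            # shuffle once, then slice into folds
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Sanity check on 100 placeholder examples: folds are disjoint and
# every example lands in the test set exactly once across the 10 rounds.
seen = []
for train, test in kfold_indices(100, k=10):
    assert len(set(train) & set(test)) == 0
    seen.extend(test)
```

Each round, a model would be fitted on `train` and scored on `test`; averaging the ten scores gives the cross-validated estimate the abstract reports.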
16.
《Behaviour & Information Technology》2012,31(3):389-401
The management of system development knowledge (SDK) is currently sub-optimal regarding the system developer's learning and use of the knowledge due to its inherently complex and cumbersome nature. In this work, we have identified and categorised different approaches to the management of SDK generally having instrumental and technical subject matter. To complement the current literature in this field of study, our approach to the management of SDK has taken into account the system developer's cognitive processing concerns. As such, we have proposed and successfully tested a strategic method for SDK management in a real working situation. In this empirical study, the implementation of an artificial knowledge structure has been shown to be useful as a means of decreasing the system developer's cognitive processing load as regards SDK. The first of two implications is such that cognitive consideration in relation to SDK management has further developmental potential. The second implication is that the system development environment can provide cognitive support to the system developer.
17.
Lakshitha de Silva Dharini Balasubramaniam 《Journal of Systems and Software》2012,85(1):132-151
Software architectures capture the most significant properties and design constraints of software systems. Thus, modifications to a system that violate its architectural principles can degrade system performance and shorten its useful lifetime. As the potential frequency and scale of software adaptations increase to meet rapidly changing requirements and business conditions, controlling such architecture erosion becomes an important concern for software architects and developers. This paper presents a survey of techniques and technologies that have been proposed over the years either to prevent architecture erosion or to detect and restore architectures that have been eroded. These approaches, which include tools, techniques and processes, are primarily classified into three generic categories that attempt to minimise, prevent and repair architecture erosion. Within these broad categories, each approach is further broken down reflecting the high-level strategies adopted to tackle erosion. These are: process-oriented architecture conformance, architecture evolution management, architecture design enforcement, architecture to implementation linkage, self-adaptation and architecture restoration techniques consisting of recovery, discovery and reconciliation. Some of these strategies contain sub-categories under which survey results are presented. We discuss the merits and weaknesses of each strategy and argue that no single strategy can address the problem of erosion. Further, we explore the possibility of combining strategies and present a case for further work in developing a holistic framework for controlling architecture erosion.
18.
Rishad A. Shafik Bashir M. Al-Hashimi 《Microprocessors and Microsystems》2011,35(2):285-296
In this paper, we present a reliability analysis and comparison of two on-chip communication architectures, the dominant shared-bus AMBA and the emerging Network-on-Chip (NoC), in the presence of single-event upsets (SEUs), using an MPEG-2 video decoder as a case study. Employing SystemC-based fault simulations, reliability of the decoders is studied in terms of SEUs experienced in the computation cores and communication interconnects. We show that for a given soft error rate (SER), the NoC-based decoder experiences fewer SEUs than the AMBA-based decoder. Using peak signal-to-noise ratio (PSNR) and frame error ratio (FER) metrics to evaluate the impact of SEUs at application level, we show that the NoC-based decoder gives up to 4 dB higher PSNR, while AMBA experiences up to 3% lower FER. Furthermore, we investigate the impact of routing, application task mapping (distribution of tasks among computation cores) and architecture allocation (choice of number of computation cores) on the reliability of the decoders in the presence of SEUs.
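PSNR, the application-level metric used above to quantify SEU impact, compares a decoded frame against the fault-free reference via the mean squared error. A minimal sketch for 8-bit frames (the synthetic flat frame and single bit flip below are illustrative, not the paper's decoder data):

```python
import numpy as np

def psnr(reference, decoded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: a flat 8x8 frame corrupted by a single upset
# that flips the most significant bit of one pixel (128 -> 0).
ref = np.full((8, 8), 128, dtype=np.uint8)
bad = ref.copy()
bad[0, 0] ^= 0b1000_0000
print(round(psnr(ref, bad), 1))  # → 24.0
```

A higher PSNR means the upset left the frame closer to the reference, which is why the 4 dB advantage reported for the NoC-based decoder indicates better fault resilience at application level.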
19.
Cognitive radio networks are envisioned to drive the next generation wireless networks that can dynamically optimize spectrum use. However, the deployment of such networks is hindered by the vulnerabilities that these networks are exposed to. Securing communications while exploiting the flexibilities offered by cognitive radios still remains a daunting challenge. In this survey, we put forward the security concerns and the vulnerabilities that threaten to plague the deployment of cognitive radio networks. We classify various types of vulnerabilities and provide an overview of the research challenges. We also discuss the various techniques that have been devised and analyze the research developments accomplished in this area. Finally, we discuss the open research challenges that must be addressed if cognitive radio networks were to become a commercially viable technology.
20.