Similar Documents
20 similar documents retrieved.
1.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, in which the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved on a slow time-scale, capturing slow trends in process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the resulting trajectory in order to reject fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture can handle control systems whose complex models lead to a high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, a new rigorous optimization problem need not be solved at every sampling instant if the process dynamics are slow relative to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
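The following toy sketch illustrates the two-time-scale loop described in this abstract: an upper layer that recomputes the economically optimal trajectory at a slow rate, and a lower layer that applies a cheap first-order correction at every sampling instant. All models, gains and rates are illustrative placeholders, not the paper's formulation.

```python
import numpy as np

# Toy two-layer loop: the rigorous economic optimization is replaced by a
# trivial scalar problem and the neighboring-extremal controller by a
# first-order feedback correction. Purely illustrative.

def upper_layer(price):
    """Slow layer: recompute the economically optimal setpoint, the
    corresponding nominal input, and a feedback gain (standing in for the
    sensitivities used by a neighboring-extremal scheme)."""
    x_ref = 2.0 * price      # pretend optimum of the economic objective
    u_ref = 0.5 * x_ref      # nominal input that holds the plant at x_ref
    gain = -0.8              # correction gain around the optimal trajectory
    return x_ref, u_ref, gain

def lower_layer(x, x_ref, u_ref, gain):
    """Fast layer: cheap correction applied at every sampling instant."""
    return u_ref + gain * (x - x_ref)

x, price = 0.0, 1.0
for k in range(50):
    if k % 10 == 0:                      # upper layer runs ten times slower
        x_ref, u_ref, gain = upper_layer(price)
    u = lower_layer(x, x_ref, u_ref, gain)
    x = 0.9 * x + 0.2 * u + 0.01 * np.random.randn()   # simple plant + noise
print(round(x, 3), round(x_ref, 3))
```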

2.
A task-appropriate hybrid architecture for explanation

3.
Automatic synthesis of fuzzy controllers for commercial microprocessors is already available. For dedicated controllers, however, automatic synthesis is still under development. The problem is that when designers create dedicated architectures to solve specific problems, they do not consider the possibility of expanding or reducing the internal functional units. The absence of a flexible architecture that fits different applications without resorting to expensive solutions has been one of the barriers to making automatic synthesis feasible. In this article we present an architecture suitable for the automatic synthesis of digital fuzzy controllers. The main parameters that define the dimensions of the internal units are the number of input and output bits and the number of bits of the input and output membership functions. Our architecture imposes no limitation on the number of rules comprising the knowledge base.

4.
A new architecture for Multidisciplinary Design Optimization, called the Generic Parameter Penalty Architecture, is introduced. The architecture is configured through three parameters that manage the distribution of objective functions and consistency constraints between the main-level and sub-level optimizations. The parameters can take values between zero and one. Five different configurations of these parameters were tested on three problems (an analytical problem, Golinski's speed reducer, and the combustion of propane). In four of these configurations all three parameters take extreme values (either zero or one), while the fifth uses intermediate values. These settings make some of the configurations manage the main-level and sub-level optimizations in a manner similar to All at Once, Collaborative Optimization, and Analytical Target Cascading, which were studied for benchmarking purposes. The convergence and relative error of the solutions obtained by the new architecture were studied and compared to those of these widespread reference architectures. Results show that the performance of the new architecture depends on the values of its three parameters. It adopted behaviors similar to those of the reference architectures. Finally, its convergence and relative error, in contrast to the reference architectures, increased with the complexity of the problem.
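One plausible reading of the three parameters is as weights in [0, 1] that split an objective term and an interdisciplinary consistency penalty between the main-level and sub-level problems; the sketch below is only our hypothetical illustration of that idea, not the paper's actual equations.

```python
# Hypothetical illustration: weights w1, w2, w3 in [0, 1] distribute an
# objective term f and a consistency-penalty term c between the main-level
# and sub-level problems. Extreme settings (0 or 1) push all of f or c to
# one level, loosely mimicking AAO-, CO- or ATC-like configurations.

def main_level_objective(f, c, w1, w2):
    return w1 * f + (1.0 - w1) * w2 * c

def sub_level_objective(f, c, w3):
    return w3 * f + (1.0 - w3) * c

print(main_level_objective(f=3.0, c=0.4, w1=1.0, w2=0.0))  # objective only
print(sub_level_objective(f=3.0, c=0.4, w3=0.0))           # consistency only
```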

5.
As the most popular open-source instruction set architecture of recent years, RISC-V has been widely adopted in domain-specific microprocessors, particularly for modular customization in machine learning. However, existing RISC-V applications require traditional software or models to be recompiled or optimized for the RISC-V instruction set, so deploying, running, and testing machine learning frameworks quickly on the RISC-V architecture is a pressing technical problem. Virtualization can address cross-platform model deployment and execution, but traditional virtualization technologies such as virtual machines place high demands on the native system, consume many resources, and respond slowly, which makes them generally unsuitable for RISC-V application scenarios. This paper discusses the virtualization of reinforcement learning on resource-constrained RISC-V architectures. First, containerization is adopted to reduce the cost of virtualizing the upper-layer software stack: redundant middleware is removed and customized namespaces isolate specific processes, which effectively improves the resource utilization of learning tasks and enables fast execution of model training. Second, features of the RISC-V instruction set are exploited to further optimize the upper-layer neural network models and improve reinforcement learning efficiency. Finally, a system prototype of the overall optimization and containerization approach is implemented and evaluated on multiple benchmark suites. Compared with cross-compiling deep neural network models for a traditional RISC-V architecture, containerization incurs only a relatively small additional performance cost while enabling rapid deployment and execution of more, and more complex, deep learning frameworks; compared with a hypervisor-based virtual machine approach, the RISC-V-based models achieve similar deployment times with greatly reduced performance loss. Preliminary experimental results show that containerization, together with the optimizations built on it, is an effective way to achieve rapid deployment of software and learning models on the RISC-V architecture.

6.
The amount of digital data in the universe is growing at an exponential rate, doubling every two years, and changing how we live in the world. Information storage capacity and data requirements have crossed into the zettabyte range. With this level of bombardment of data, it becomes very difficult for machine learning techniques to carry out parallel computations. Deep learning is broadening its scope and gaining popularity in natural language processing, feature extraction and visualization, and almost every machine learning trend. The purpose of this study is to provide a brief review of deep learning architectures and how they work. Research papers and conference proceedings from authoritative sources (Institute of Electrical and Electronics Engineers, Wiley, Nature, and Elsevier) are studied and analyzed. Different architectures and their effectiveness at solving domain-specific problems are evaluated. Various limitations and open problems of current architectures are discussed to provide better insights and to help researchers and students resume research on these issues. One hundred and one articles were reviewed for this meta-analysis of deep learning. From this analysis, it is concluded that advanced deep learning architectures are combinations of a few conventional architectures. For example, the deep belief network and the convolutional neural network are combined to build the convolutional deep belief network, which has higher capabilities than its parent architectures. These combined architectures are more robust in exploring the problem space and thus may be the answer for building a general-purpose architecture.

7.
In this paper we present a behaviour-based mobile robot system for task execution. The behaviour model of this system consists of a number of motion behaviours, including reflexes and voluntary motion behaviours, and knowledge acquisition modules providing supporting information. Execution of a task is regarded as a problem-solving process. A blackboard model is introduced to overcome some shortcomings of the behaviour-based architectures, especially concerning modularity and task execution capability. The concept of attention is introduced in the behaviour control, which is more advantageous than the behaviour control mechanisms presented in the literature. Its introduction results in situation-dependent behaviour coordination. For efficient task execution, environment knowledge is maintained in a memory. Task-achieving behaviour is designed to make use of the memory when available.

8.
Recently, an ever-increasing number of e-Commerce tools have become available that help customers by generating suitable recommendations. Many of them are centralized, so they face problems related to efficiency and scalability. A few of them are distributed, but in this case the complexity of the e-Commerce process implies computation overhead on the client side, which is often unsuitable when customers use mobile devices. In this paper, we study how software distribution in recommender systems affects their performance, depending on the characteristics of the e-Commerce population. To this end, we present a distributed testbed architecture for e-Commerce recommender systems that uses a multi-tiered agent-based approach to generate effective recommendations without requiring an onerous amount of computation per client. We use this testbed to study the main advantages and limitations associated with distributing the computation of recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

9.
Algorithms for planning under uncertainty require accurate action models that explicitly capture the uncertainty of the environment. Unfortunately, obtaining these models is usually complex. In uncertain environments, actions may produce countless outcomes, and specifying them together with their probabilities is a hard task. As a consequence, when implementing agents with planning capabilities, practitioners frequently opt for architectures that interleave classical planning and execution monitoring following a replan-on-failure paradigm. Though this approach is more practical, it may produce fragile plans that need continuous replanning episodes or, even worse, that lead to execution dead-ends. In this paper, we propose a new architecture to alleviate these shortcomings. The architecture is based on the integration of a relational learning component with the traditional planning and execution monitoring components. The new component allows the architecture to learn probabilistic rules for the success of actions from the execution of plans and to automatically upgrade the planning model with these rules. The upgraded models can be used by any classical planner that handles metric functions or, alternatively, by any probabilistic planner. The architecture is designed to integrate off-the-shelf, interchangeable planning and learning components, so it can profit from the latest advances in both fields without modifying the architecture.
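The sketch below shows the feedback loop the abstract describes in its simplest possible form: estimating action success probabilities from execution outcomes and exposing them to a metric-aware classical planner as an additive cost. The relational rule learning in the paper is far richer; the names and the -log(p) cost transformation are our assumptions.

```python
from collections import defaultdict
import math

class SuccessModel:
    """Estimate action success probabilities from observed plan executions."""

    def __init__(self):
        self.stats = defaultdict(lambda: [0, 0])  # action -> [successes, trials]

    def record(self, action, succeeded):
        stat = self.stats[action]
        stat[1] += 1
        if succeeded:
            stat[0] += 1

    def probability(self, action, prior=0.5):
        successes, trials = self.stats[action]
        return (successes + prior) / (trials + 1)  # smoothed estimate

    def cost(self, action):
        """Metric for a classical planner: unlikely actions become expensive."""
        return -math.log(self.probability(action))

model = SuccessModel()
for outcome in [True, True, False, True]:
    model.record("pickup(blockA)", outcome)
print(model.probability("pickup(blockA)"), round(model.cost("pickup(blockA)"), 3))
```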

10.
Autonomous agent architectures are design methodologies—collections of knowledge and strategies which are applied to the problem of creating situated intelligence. This article attempts to integrate this knowledge across several architectural traditions, paying particular attention to features which have tended to be selected under the pressure of extensive use in real-world systems. We determine that the following strategies provide significant assistance in the design of autonomous intelligent agents: (i) modularity, which simplifies both design and control; (ii) hierarchically organized action selection, which focusses attention and provides prioritization when different modules conflict; and (iii) parallel environment monitoring, which allows a system to be responsive and opportunistic by allowing attention to shift and priorities to be re-evaluated. We offer a review of four architectural paradigms: behaviour-based AI; two- and three-layered systems; belief, desire and intention architectures (particularly PRS); and Soar/ACT-R. By documenting trends within each of these communities towards establishing the components above, we argue that this convergent evolution is strong evidence for the components' utility. We then use this information to recommend specific strategies for researchers working under each paradigm to further exploit the knowledge and experience of the field as a whole.

11.
In this paper, we propose a new framework for architecture prototyping based on the concept of 3D collage, that is, a combination of geometrically transformed components segmented from multiple source architectures. Our framework offers three desired capabilities. First, during the construction process, two components can be snapped together through their best-matching faces. Second, deformation can be performed arbitrarily on any face of the resulting mesh. Third, color and texture attributes can be assigned to the resulting mesh intuitively and flexibly in a user-friendly manner. With its simple user interface, our system strikes a good balance between efficiency and expressiveness, making it particularly appropriate for rapid architecture prototyping. Results demonstrate that our framework is not only good at simulating a wide variety of existing architectures, but also capable of creating even avant-garde architectural styles.

12.
Service Oriented Architectures, which allow for the integration of different subsystems and of applications running on different devices and platforms, may be very suitable to solve the problem of service personalization in large smart environments like cities, where the number of potential users and potentially available services is rapidly growing. Taking this into account, we have designed an agent-based service oriented architecture for smart spaces. Two of the greatest challenges in the design of such a solution are providing effective device, service and context federation and composition mechanisms and handling user mobility. For the first challenge, we have designed a hierarchical architecture and developed a set of inheritance, aggregation and access mechanisms for devices, services and context. To handle user mobility, we have followed three different strategies, using stationary, mobile or nomadic agents. In this paper, we describe the main aspects of our architecture and perform an experimental evaluation to determine the advantages and drawbacks of the different strategies.

13.
Predominantly, control theory deals with centralized (unrestricted) architectures. In practice, however, decentralized architectures are often preferred. The reasons for this preference are manifold and include ease of understanding, maintainability, cabling issues and others. The aim of the current paper is to gain insight into the fundamental performance limitations that arise from the use of a decentralized architecture. These fundamental limitations can guide the design of decentralized controllers and offer insight into the performance loss incurred by the use of a restricted architecture. An interesting feature of the results is that they depend, inter alia, on the relative gain array (RGA). This gives new insight into this standard tool for assessing input–output pairings in decentralized control architectures.
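For reference, the relative gain array of a square steady-state gain matrix G is the elementwise (Hadamard) product of G with the transpose of its inverse. A short NumPy sketch (the example matrix and names are ours):

```python
import numpy as np

def relative_gain_array(G):
    """Bristol's RGA: elementwise product of G and the transpose of G^-1."""
    return G * np.linalg.inv(G).T

# Example 2x2 steady-state gain matrix; each row of the RGA sums to 1.
G = np.array([[2.0, 1.5],
              [1.0, 2.0]])
print(relative_gain_array(G))
```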

14.
Architectures depict design principles: paradigms that can be understood by all, allow thinking on a higher plane and avoid low-level mistakes. They provide means for ensuring correctness by construction by enforcing global properties characterizing the coordination between components. An architecture can be considered as an operator \(A\) that, applied to a set of components \(\mathcal{B}\), builds a composite component \(A(\mathcal{B})\) meeting a characteristic property \(\Phi\). Architecture composability is a basic and common problem faced by system designers. In this paper, we propose a formal and general framework for architecture composability based on an associative, commutative and idempotent architecture composition operator \(\oplus\). The main result is that if two architectures \(A_1\) and \(A_2\) enforce respectively safety properties \(\Phi_1\) and \(\Phi_2\), the architecture \(A_1 \oplus A_2\) enforces the property \(\Phi_1 \land \Phi_2\), that is, both properties are preserved by architecture composition. We also establish preservation of liveness properties by architecture composition. The presented results are illustrated by a running example and a case study.
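Writing \(A \models \Phi\) for "architecture \(A\) enforces property \(\Phi\)" (our shorthand, not necessarily the paper's notation), the quoted safety-preservation result can be stated compactly as
\[
  A_1 \models \Phi_1 \;\text{and}\; A_2 \models \Phi_2
  \;\Longrightarrow\;
  (A_1 \oplus A_2) \models \Phi_1 \land \Phi_2 ,
\]
with \(\oplus\) associative, commutative and idempotent.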

15.
Simulation is indispensable in computer architecture research. Researchers increasingly resort to detailed architecture simulators to identify performance bottlenecks, analyze interactions among different hardware and software components, and measure the impact of new design ideas on system performance. However, the slow speed of conventional execution-driven architecture simulators is a serious impediment to obtaining desirable research productivity. This paper describes a novel fast multicore processor architecture simulation framework called Two-Phase Trace-driven Simulation (TPTS), which splits detailed timing simulation into a trace generation phase and a trace simulation phase. Much of the simulation overhead caused by uninteresting architectural events is incurred only once, during the cycle-accurate simulation-based trace generation phase, and can be omitted in the repeated trace-driven simulations. We report our experiences with tsim, an event-driven multicore processor architecture simulator that models a detailed memory hierarchy, interconnect, and coherence protocol based on the TPTS framework. By applying aggressive event filtering, tsim achieves an impressive simulation speed of 146 million simulated instructions per second when running 16-thread parallel applications. Copyright © 2010 John Wiley & Sons, Ltd.
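A minimal, self-contained sketch of the two-phase idea described above: one detailed pass produces a filtered trace, and repeated fast passes replay that trace under a simple timing model. The event kinds, the filtering policy and the latency model are illustrative, not the tsim implementation.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Event:
    kind: str       # e.g. "mem_access", "sync", "alu"
    address: int
    cycle: int

def generate_trace(events: Iterable[Event]) -> List[Event]:
    """Phase 1: during one detailed cycle-accurate run, keep only the events
    that later timing studies need (aggressive event filtering)."""
    interesting = {"mem_access", "sync"}
    return [e for e in events if e.kind in interesting]

def replay_trace(trace: List[Event], mem_latency: int, link_latency: int) -> int:
    """Phase 2: a repeated, fast trace-driven pass; here a deliberately
    trivial timing model charges a fixed latency per memory access."""
    return sum(mem_latency + link_latency
               for e in trace if e.kind == "mem_access")

# Events as they might be emitted by a detailed simulator.
raw = [Event("mem_access", 0x1000, 1), Event("alu", 0, 2), Event("sync", 0, 3)]
trace = generate_trace(raw)
print(replay_trace(trace, mem_latency=100, link_latency=20))
```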

16.
The 3DP (3-Dimensional Processor), a parallel-computing architecture that targets problems with a 3-D numerical structure requiring numerous calculations on 3-D vectors, is described. The 3DP architecture differs from traditional scalar architectures in that it operates directly on vectors. It differs from general parallel architectures in that it can solve problems that predict the behavior of highly coupled systems, and it differs from vector architectures in that it runs efficiently on length-3 vectors. Object-oriented programming on the 3DP and programming the 3DP in C++ are discussed. 3DP performance is reviewed, and the current implementation of the 3DP architecture, as an attached processor that plugs directly into a Sun host's VMEbus, is described.

17.
18.
Software architectures capture the most significant properties and design constraints of software systems. Thus, modifications to a system that violate its architectural principles can degrade system performance and shorten its useful lifetime. As the potential frequency and scale of software adaptations increase to meet rapidly changing requirements and business conditions, controlling such architecture erosion becomes an important concern for software architects and developers. This paper presents a survey of techniques and technologies that have been proposed over the years either to prevent architecture erosion or to detect and restore architectures that have been eroded. These approaches, which include tools, techniques and processes, are primarily classified into three generic categories that attempt to minimise, prevent and repair architecture erosion. Within these broad categories, each approach is further broken down reflecting the high-level strategies adopted to tackle erosion. These are: process-oriented architecture conformance, architecture evolution management, architecture design enforcement, architecture to implementation linkage, self-adaptation and architecture restoration techniques consisting of recovery, discovery and reconciliation. Some of these strategies contain sub-categories under which survey results are presented. We discuss the merits and weaknesses of each strategy and argue that no single strategy can address the problem of erosion. Further, we explore the possibility of combining strategies and present a case for further work in developing a holistic framework for controlling architecture erosion.

19.
The increased transistor count resulting from ever-decreasing feature sizes has enabled the design of architectures containing many small but efficient processing units (cores). At the same time, many new applications have evolved with varying performance requirements. The fixed architecture of multicore platforms often fails to accommodate the inherently diverse requirements of different applications. We present a dynamically reconfigurable multicore architecture that detects program phase changes at runtime and adapts to changing program behavior by reconfiguring itself. We introduce simple but efficient performance counters to monitor vital parameters of reconfigurable architectures. We also present static, dynamic and adaptive reconfiguration techniques for reconfiguring the architecture. Our evaluation of the proposed reconfigurable architecture using an adaptive reconfiguration technique shows an improvement of up to 23% for multi-threaded applications and up to 27% for multiprogrammed workloads over statically chosen architectures, and up to 41% over the baseline SMP configuration.

20.
A method is presented for the stepwise refinement of an abstract architecture into a relatively correct lower-level architecture intended to implement it. A refinement step involves the application of a predefined refinement pattern that provides a routine solution to a standard architectural design problem. A pattern contains an abstract architecture schema and a more detailed schema intended to implement it. The two schemas usually contain very different architectural concepts (from different architectural styles). Once a refinement pattern is proven correct, instances of it can be used without proof in developing specific architectures. Individual refinements are compositional, permitting incremental development and local reasoning. A special correctness criterion is defined for the domain of software architecture, together with an accompanying proof technique. A useful syntactic form of correct composition is defined. The main points are illustrated by means of familiar architectures for a compiler. A prototype implementation of the method has been used successfully in a real application.

