Similar Documents
20 similar documents found (search time: 15 ms)
1.
Software systems are increasingly seen as evolving systems. At design time, software is continually adapted by the build process itself; at runtime, it can be adapted in response to changing conditions in the execution environment, such as location or available resources. Adaptation is generally difficult to specify because of its cross-cutting impact on software. This article introduces an approach, based on Aspect-Oriented Modeling, that unifies adaptation at design time and at runtime. Our approach proposes a unified aspect metamodel and a platform that realizes two different weaving processes to achieve design-time and runtime adaptation. The approach is used in a Dynamic Software Product Line that derives products which can be configured at design time and adapted at runtime to fit new requirements or resource changes dynamically. Such products are implemented using the Service Component Architecture and Java. Finally, we illustrate the approach with an adaptive e-shopping scenario. The main advantages of this unification are a clear separation of concerns, a self-contained aspect model that can be woven at both design time and execution time, and platform independence guaranteed by the two types of weaving.

3.

Large models exemplified by ChatGPT excel at tasks such as text generation and semantic understanding, and have attracted broad attention from both industry and academia. The parameter counts of large models have grown by a factor of tens of thousands within three years and continue to grow. This paper first analyzes the storage challenges of large-model training, noting that such training has massive storage requirements along with distinctive computation patterns, memory-access patterns, and data characteristics; as a result, traditional storage techniques designed for internet and big-data workloads are inefficient for large-model training and incur high fault-tolerance overhead. It then describes three classes of storage-acceleration techniques and two classes of storage fault-tolerance techniques for large-model training. The acceleration techniques are: (1) distributed GPU-memory management based on the computation patterns of large models, which designs partitioning, storage, and transfer strategies for model data across a distributed cluster according to how computation tasks are partitioned and the dependencies among them; (2) memory-access-aware heterogeneous storage, which exploits the predictability of memory-access patterns during training to design data prefetching and transfer strategies across heterogeneous devices; (3) data reduction, which shrinks the data produced during training based on the characteristics of large-model data. The fault-tolerance techniques are: (1) parameter checkpointing, which persists model parameters to durable storage; (2) redundant computation, which computes the same version of the parameters repeatedly on multiple GPUs. The paper concludes with a summary and outlook.

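The parameter-checkpointing technique described above (persisting model parameters to durable storage so training can resume after a failure) can be sketched as follows. This is a minimal single-process illustration, not the paper's implementation; the write-then-rename atomicity pattern, the checkpoint interval, and all names are illustrative assumptions.

```python
import os
import pickle
import tempfile

def save_checkpoint(params, step, path):
    """Atomically persist model parameters: write to a temp file, then
    rename, so a crash mid-write never corrupts the last good checkpoint."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "params": params}, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_checkpoint(path):
    """Restore the most recently persisted parameters after a failure."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["step"], state["params"]

# Simulated training loop with periodic checkpointing.
ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")
params = {"w": [0.0, 0.0]}
for step in range(1, 11):
    params["w"] = [v + 0.1 for v in params["w"]]  # stand-in for a real update
    if step % 5 == 0:                             # checkpoint interval
        save_checkpoint(params, step, ckpt)

step, restored = load_checkpoint(ckpt)
print(step)  # 10
```

Real systems trade off checkpoint frequency against the time lost replaying work since the last checkpoint; the redundant-computation technique avoids that replay at the cost of extra GPUs.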

4.
Dynamic Takagi-Sugeno fuzzy models are not always easy to interpret, in particular when they are identified from experimental data. It is shown that there is a close relationship between dynamic Takagi-Sugeno fuzzy models and dynamic linearization when affine local model structures are used, which suggests that a solution to the multiobjective identification problem exists. However, it is also shown that the affine local model structure is a highly sensitive parametrization when applied in transient operating regimes. Because of the multiobjective nature of the identification problem studied here, special considerations must be made during model structure selection, experiment design, and identification in order to meet both objectives. Some guidelines for experiment design are suggested and some robust nonlinear identification algorithms are studied, including constrained and regularized identification and locally weighted identification. Their usefulness in the present context is illustrated by examples.

5.
Business processes are a key aspect of modern organizations. In recent years, business process management and optimization have been applied to cross-cutting concerns such as security, compliance, and Green IT. Based on the ecological characteristics of a business process, suitable environmentally sustainable adaptation strategies can be chosen to improve its total environmental impact. We use ecologically sustainable adaptation strategies described as green business process patterns. Applying such a green business process pattern, however, affects the business process layer, the application component layer, and the infrastructure layer, so changes in the application infrastructure also need to be considered. Hence, we use best practices of cloud application architectures described as Cloud patterns. To guide developers through the adaptation process, we propose a pattern-based approach. We correlate Cloud patterns relevant for sustainable business processes with green business process patterns and organize them in a classification. To provide concrete implementation support, we further annotate these Cloud patterns to application component models described with the Topology and Orchestration Specification for Cloud Applications (TOSCA). Using these annotations, we describe a method to optimize business processes based on green business process patterns by adapting the implementation of application components with concrete TOSCA implementation models.

6.
The contribution of this paper is twofold. First, we exploit copula methodology, with two threshold GARCH models as marginals, to construct a bivariate copula-threshold-GARCH model that flexibly captures both asymmetric nonlinear behaviour in univariate spot and futures stock returns and their bivariate dependency. Two elliptical copulas (Gaussian and Student's t) and three Archimedean copulas (Clayton, Gumbel, and a mixture of Clayton and Gumbel) are utilized. Second, we employ the presented models to investigate hedging performance for five East Asian spot and futures stock markets: Hong Kong, Japan, Korea, Singapore, and Taiwan. Compared with conventional hedging strategies, including Engle's dynamic conditional correlation GARCH model, the results show that hedge ratios constructed with a Gaussian or mixture copula perform best in variance reduction for all markets except Japan and Singapore, and provide close to the best returns on a hedging portfolio over the sample period.
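For contrast with the time-varying copula-based hedge ratios above, the conventional static minimum-variance hedge ratio that such models are typically benchmarked against is h* = Cov(s, f) / Var(f). A minimal sketch on synthetic return series (the data and all names are illustrative assumptions, not from the paper):

```python
import random

def min_variance_hedge_ratio(spot_returns, futures_returns):
    """Static minimum-variance hedge ratio h* = Cov(s, f) / Var(f):
    the number of futures contracts shorted per unit of spot exposure
    that minimizes the variance of the hedged portfolio. The paper's
    copula-threshold-GARCH models make this ratio time-varying instead."""
    n = len(spot_returns)
    ms = sum(spot_returns) / n
    mf = sum(futures_returns) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_returns, futures_returns)) / n
    var = sum((f - mf) ** 2 for f in futures_returns) / n
    return cov / var

# Synthetic example: spot returns track futures with slope ~0.9 plus noise.
random.seed(0)
f = [random.gauss(0, 0.01) for _ in range(500)]
s = [0.9 * fi + random.gauss(0, 0.003) for fi in f]
h = min_variance_hedge_ratio(s, f)
print(round(h, 2))  # close to the true slope of 0.9
```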

7.
Ensuring that service-oriented systems can adapt quickly and effectively to changes in service quality, business needs, and their runtime environment is an increasingly important research problem. However, while considerable research has focused on developing runtime adaptation frameworks for service-oriented systems, there has been little work on assessing how effective the adaptations are. Effective adaptation ensures that the system remains relevant in a changing environment and accurately reflects user expectations. One way to address the problem is through validation. Validation allows us to assess how well a recommended adaptation addresses the concerns for which the system is reconfigured, and provides insights into the nature of the problems for which different adaptations are suited. However, the dynamic nature of runtime adaptation and the changeable contexts in which service-oriented systems operate make it difficult to specify appropriate validation mechanisms in advance. This paper describes a novel consumer-centered approach that uses machine learning, through model-based clustering and deep learning, to continuously validate and refine runtime adaptation in service-oriented systems.

8.
We present an approach to dynamically adapt the language models (LMs) used by the speech recognizer of a spoken dialogue system. We have developed a grammar generation strategy that automatically adapts the LMs using the semantic information the user provides (represented as dialogue concepts) together with information about the speaker's intentions (inferred by the dialogue manager and represented as dialogue goals). The adaptation is a linear interpolation between a background LM and one or more of the LMs associated with the dialogue elements (concepts or goals) addressed by the user. The interpolation weights between those models are automatically estimated on each dialogue turn, using measures such as the posterior probabilities of concepts and goals, estimated as part of the inference procedure that determines the actions to be carried out. We propose two approaches to handle the LMs related to concepts and goals: in the first, we estimate an LM for each of them; in the second, we apply several clustering strategies to group elements that share common properties and estimate an LM for each cluster. Our evaluation shows how the system estimates a dynamic model adapted to each dialogue turn, significantly improving speech recognition performance and, in turn, both language understanding and dialogue management.
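The linear interpolation at the core of this approach, P(w) = w_bg · P_bg(w) + Σ_i w_i · P_i(w), can be sketched as follows. In the paper the weights come from concept/goal posteriors estimated on each turn; here they are fixed by hand, and the toy unigram tables are illustrative assumptions.

```python
def interpolate_lms(background, concept_lms, weights):
    """Linearly interpolate a background LM with concept/goal LMs:
    P(w) = weights[0]*P_bg(w) + sum_i weights[i]*P_i(w).
    Each LM is a dict mapping words to probabilities; the weights
    must sum to 1 so the result is again a distribution."""
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set(background)
    for lm in concept_lms:
        vocab |= set(lm)
    merged = {}
    for w in vocab:
        p = weights[0] * background.get(w, 0.0)
        for wt, lm in zip(weights[1:], concept_lms):
            p += wt * lm.get(w, 0.0)
        merged[w] = p
    return merged

# Background LM vs. a "date" concept LM; the current turn addresses the
# concept, so its (here hand-picked) posterior-derived weight dominates.
bg    = {"hello": 0.5, "monday": 0.25, "tuesday": 0.25}
dates = {"monday": 0.5, "tuesday": 0.5}
lm = interpolate_lms(bg, [dates], [0.3, 0.7])
print(round(lm["monday"], 3))  # 0.425
```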

9.
10.
11.
Context: As the use of Domain-Specific Modeling Languages (DSMLs) continues to gain popularity, new ways to execute DSML models have been developed. The most popular approach is to execute code resulting from a model-to-code transformation. An alternative is to execute these models directly using a semantics-rich execution engine, a Domain-Specific Virtual Machine (DSVM). The DSVM includes a middleware layer responsible for the delivery of services in a given domain. Objective: We investigate an approach that dynamically combines constructs in the middleware layer of DSVMs to support the delivery of domain-specific services. This middleware should provide: (a) a model of execution (MoE) that dynamically integrates decoupled domain-specific knowledge (DSK) for service delivery, (b) runtime adaptability based on context and available resources, and (c) the same level of operational assurance as any DSVM middleware. Method: Our approach involves (1) defining a framework that supports the dynamic combination of MoE and DSK and (2) demonstrating the applicability of our framework in the DSVM middleware for user-centric communication. We measure the overhead of our approach and provide a cost-benefit analysis factoring in its runtime adaptability, using appropriate experimentation. Results: Our experiments show that combining the DSK and MoE for a DSVM middleware allows us to realize efficient specialization while maintaining the required operability. We also show that the overhead introduced by adaptation is not necessarily deleterious to overall performance in a domain, as it may result in more efficient operation selection. Conclusion: The approach defined for the DSVM middleware allows for greater flexibility in service delivery while reducing the complexity of application development for the user. These benefits come at the expense of increased execution times; however, this increase may be negligible depending on the domain.

12.
13.
Computational radio frequency identification (CRFID) sensors present a new frontier for pervasive sensing and computing. They exploit ambient light or radio frequency (RF) for energy and use backscatter communication with an RFID reader for data transfer. Unlike conventional RFID tags that only transmit identifiers, CRFID sensors need to transfer potentially large amounts of data to a reader during each contact. The existing EPC Gen2 protocol is inefficient when a small number of CRFID sensors transfer large amounts of buffered data to the RFID reader, and it has no specific provision for adapting to dynamic energy-harvesting and channel conditions. In this article, we propose to adopt dynamic frame length and charging time for CRFID backscatter communication, aiming to adapt to changing energy-harvesting and channel conditions and improve system goodput. First, the optimal frame length and charging time that maximize goodput are obtained by solving the formulated goodput optimization problem. We then propose a dynamic frame length and charging time adaptation scheme (DFCA) that increases or decreases the frame length and charging time at runtime based on goodput measurements. Simulations show that the proposed DFCA scheme outperforms the current fixed-frame-length approach and converges to the theoretical optimum under different energy-harvesting and channel conditions.
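The runtime adaptation idea behind DFCA can be sketched as a simple hill-climbing loop (an assumed simplification, not the authors' actual scheme, and shown for the frame-length dimension only): keep moving the frame length in the current direction while measured goodput improves, and reverse direction when it degrades.

```python
def adapt_frame_length(frame_len, goodput, state):
    """One DFCA-style adaptation step (sketch): compare the new goodput
    measurement against the previous one; if the last move hurt goodput,
    reverse direction. The real scheme adapts charging time the same way;
    the unit step size here is an illustrative constant."""
    if goodput < state["last_goodput"]:
        state["direction"] *= -1          # last move hurt goodput: reverse
    state["last_goodput"] = goodput
    return max(1, frame_len + state["direction"])

# Toy goodput curve peaking at frame length 8; it stands in for the
# energy-harvesting and channel conditions the real scheme measures online.
def measured_goodput(frame_len):
    return -(frame_len - 8) ** 2

state = {"last_goodput": float("-inf"), "direction": 1}
frame_len = 3
for _ in range(40):
    frame_len = adapt_frame_length(frame_len, measured_goodput(frame_len), state)
print(frame_len)  # settles near the optimum of 8
```

Such measurement-driven search is why the scheme needs no explicit channel model: it only compares successive goodput samples.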

14.
In transactional systems, quality-of-service objectives are often specified by Service Level Objectives (SLOs) that stipulate a response time to be achieved for a percentile of the transactions. Usually, there are different client classes with different SLOs. In this paper, we extend a technique that enforces the fulfilment of SLOs using admission control. The admission control of new user sessions is based on a response-time model. The technique proposed here dynamically adapts the model to changes in workload characteristics and system configuration, so that the system can work autonomically, without human intervention. The technique requires no knowledge of the internals of the system; it is therefore easy to use and applicable to many systems. Its utility is demonstrated by a set of experiments on a system that implements the TPC-App benchmark. The experiments show that model adaptation works correctly in very different situations, including large and small changes in response times, increasing and decreasing response times, and different patterns of workload injection. In all these scenarios, the technique updates the model progressively until it adjusts to the new situation, and in intermediate states the model never exhibits abnormal behaviour that could lead to a failure in the admission control component.
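The admission-control idea described above can be sketched as follows: admit a new session only if a response-time model predicts the SLO still holds at the resulting load, and adapt the model online from observed response times. The linear model and the exponential-smoothing update are illustrative assumptions, not the paper's actual model.

```python
class AdmissionController:
    """Model-based admission control (sketch): a response-time model
    predicts the effect of one more session; sessions are admitted only
    while the predicted response time stays within the SLO."""

    def __init__(self, slo_ms):
        self.slo_ms = slo_ms
        self.ms_per_session = 1.0   # model parameter, adapted online

    def predict_ms(self, sessions):
        return self.ms_per_session * sessions

    def admit(self, current_sessions):
        return self.predict_ms(current_sessions + 1) <= self.slo_ms

    def observe(self, sessions, measured_ms, alpha=0.2):
        """Adapt the model toward observed behaviour (exponential smoothing),
        so workload/configuration changes are tracked without human input."""
        self.ms_per_session += alpha * (measured_ms / sessions
                                        - self.ms_per_session)

# The initial model underestimates the cost (1 ms/session vs. the system's
# actual 2 ms/session); online adaptation corrects it before the SLO breaks.
ac = AdmissionController(slo_ms=100)
sessions = 0
while ac.admit(sessions):
    sessions += 1
    ac.observe(sessions, measured_ms=2.0 * sessions)
print(sessions)  # 50
```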

15.
This paper presents a time stamp algorithm for runtime parallelization of general DOACROSS loops that have indirect access patterns. The algorithm follows the inspector/executor scheme and exploits parallelism at a fine-grained memory-reference level. It features a parallel inspector and improves upon previous algorithms of the same generality by exploiting parallelism among consecutive reads of the same memory element. Two variants of the algorithm are considered: one allows partially concurrent reads (PCR) and the other allows fully concurrent reads (FCR). Analyses of their time complexities derive a necessary condition, with respect to the iteration workload, for profitable runtime parallelization. Experimental results for a Gaussian elimination loop, as well as an extensive set of synthetic loops on a 12-way SMP server, show that the time stamp algorithms outperform iteration-level parallelization techniques in most test cases and gain speedups over sequential execution for loops with heavy iteration workloads. The PCR algorithm performs best because it makes a better trade-off between maximizing parallelism and minimizing analysis overhead. For loops with light or unknown iteration workloads, an alternative speculative runtime parallelization technique is preferred.
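A minimal sequential inspector in the spirit of the concurrent-reads optimization (an assumed simplification, not the paper's parallel algorithm) can be sketched as follows: each iteration is assigned a wavefront stage from its indirect accesses, and consecutive reads of the same element may share a stage, while a write must wait for all earlier reads and writes of that element.

```python
def inspector(accesses):
    """Assign each loop iteration a parallel stage (wavefront).
    `accesses[i]` is a list of ("r"|"w", element) pairs describing
    iteration i's indirect accesses. Reads between two writes of the
    same element can share a stage; writes serialize behind them."""
    last_write = {}   # element -> stage of its latest write
    last_read = {}    # element -> latest stage of a read since that write
    stages = []
    for ops in accesses:
        stage = 0
        for op, e in ops:
            if op == "r":
                stage = max(stage, last_write.get(e, -1) + 1)
            else:  # write: after every prior read and write of e
                stage = max(stage, last_write.get(e, -1) + 1,
                            last_read.get(e, -1) + 1)
        for op, e in ops:
            if op == "r":
                last_read[e] = max(last_read.get(e, -1), stage)
            else:
                last_write[e] = stage
                last_read[e] = stage
        stages.append(stage)
    return stages

# Four iterations touching element "a": the two middle iterations only
# read it, so they land in the same stage and may run concurrently.
acc = [[("w", "a")], [("r", "a")], [("r", "a")], [("w", "a")]]
print(inspector(acc))  # [0, 1, 1, 2]
```

The executor would then run all iterations of stage 0 in parallel, barrier, run stage 1, and so on.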

16.
Recent advances in virtualization provide grid computing with a new way to encapsulate resources. Their encapsulation, isolation, and security effectively mask the heterogeneity of underlying resources, allow execution environments to be customized to application requirements, and better accommodate the complexity of grid environments and the diversity of applications. To meet the evolving requirements of service grids, this paper studies a virtual-environment deployment and runtime management system built on new virtual machine technology. The system offers users a visual, easy-to-use facility for remotely deploying and managing virtual environments, and implements a standard grid service. Integrated with the CROWN service grid platform, this service can dynamically and transparently deploy virtual execution environments according to the specific requirements of user applications and adaptively schedule user tasks based on resource status. Experimental evaluation of the system confirms its good usability and runtime performance.

17.

The design of gas turbines is a challenging area of cyber-physical systems where complex model-based simulations across multiple disciplines (e.g., performance, aerothermal) drive the design process. As a result, a continuously increasing amount of data is derived during system design. Finding new insights in such data by exploiting various machine learning (ML) techniques is a promising industrial trend since better predictions based on real data result in substantial product quality improvements and cost reduction. This paper presents a method that generates data from multi-paradigm simulation tools, develops and trains ML models for prediction, and deploys such prediction models into an active control system operating at runtime with limited computational power. We explore the replacement of existing traditional prediction modules with ML counterparts with different architectures. We validate the effectiveness of various ML models in the context of three (real) gas turbine bearings using over 150,000 data points for training, validation, and testing. We introduce code generation techniques for automated deployment of neural network models to industrial off-the-shelf programmable logic controllers.


18.
Robust object tracking via online dynamic spatial bias appearance models (cited by 1)
This paper presents a robust object tracking method via a spatial bias appearance model learned dynamically from video. Motivated by the attention shifting among local regions in the human visual system during object tracking, we propose to partition an object into regions with different confidences and track the object using a dynamic spatial bias appearance model (DSBAM) estimated from region confidences. The confidence of a region is estimated to reflect the discriminative power of the region in a feature space and the probability of occlusion. We propose a novel hierarchical Monte Carlo (HAMC) algorithm to learn region confidences dynamically in every frame. The algorithm consists of two levels of Monte Carlo processes, implemented using a particle filtering procedure at each level, and can efficiently extract high-confidence regions through video frames by exploiting the temporal consistency of region confidences. A dynamic spatial bias map is then generated from the high-confidence regions and is employed to adapt the appearance model of the object and to guide a tracking algorithm in searching for correspondences in adjacent frames. We demonstrate the feasibility of the proposed method in video surveillance applications; it can be combined with many existing tracking systems to enhance their robustness.

19.
As is known, many of the attributes of intelligent control in a biological process are due to the interactions of billions of neurons. Changing the weights of neurons alters the behavior of the entire neural network. Learning in a neural network is accomplished by adjusting the weights, typically to minimize some objective function, and storing these weights as the actual strengths of the interconnections. The authors believe, therefore, that a control technique designed on the principles of neural networks will exhibit a learn-while-performing capability. In this paper such a neuro-controller, called Inverse-Dynamics Adaptive Control (IDAC), is presented for a class of unknown linear plants with structural perturbations. Algorithms necessary to implement the IDAC technique are derived in detail. Simulation results show that the IDAC scheme exhibits dynamic learning and adaptation capabilities in the control of unknown complex systems. No a priori knowledge of the process to be controlled is necessary for the implementation of this scheme. Furthermore, plant parameter variations due to structural or environmental perturbations may be investigated by studying the IDAC parameter trajectories.

20.
The Multi-Agent Distributed Goal Satisfaction (MADGS) system facilitates distributed mission planning and execution in complex dynamic environments, with a focus on distributed goal planning and satisfaction and mixed-initiative interaction with the human user. By understanding the fundamental technical challenges faced by our commanders on and off the battlefield, we can help ease the burden of decision-making. MADGS lays the foundations for retrieving, analyzing, synthesizing, and disseminating information to commanders. In this paper, we present an overview of the MADGS architecture and discuss the key components that formed our initial prototype and testbed. Eugene Santos, Jr. received the B.S. degree in mathematics and computer science and the M.S. degree in mathematics (specializing in numerical analysis) from Youngstown State University, Youngstown, OH, in 1985 and 1986, respectively, and the Sc.M. and Ph.D. degrees in computer science from Brown University, Providence, RI, in 1988 and 1992, respectively. He is currently a Professor of Engineering at the Thayer School of Engineering, Dartmouth College, Hanover, NH, and Director of the Distributed Information and Intelligence Analysis Group (DI2AG). Previously, he was on the faculty of the Air Force Institute of Technology, Wright-Patterson AFB, and the University of Connecticut, Storrs, CT. He has over 130 refereed technical publications and specializes in modern statistical and probabilistic methods with applications to intelligent systems, multi-agent systems, uncertain reasoning, planning and optimization, and decision science. Most recently, he has pioneered new research on user and adversarial behavioral modeling. He is an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics: Part B and the International Journal of Image and Graphics. Scott DeLoach is currently an Associate Professor in the Department of Computing and Information Sciences at Kansas State University.
His current research interests include autonomous cooperative robotics, adaptive multiagent systems, and agent-oriented software engineering. Prior to coming to Kansas State, Dr. DeLoach spent 20 years in the US Air Force, his last assignment being as an Assistant Professor of Computer Science and Engineering at the Air Force Institute of Technology. Dr. DeLoach received his BS in Computer Engineering from Iowa State University in 1982 and his MS and PhD in Computer Engineering from the Air Force Institute of Technology in 1987 and 1996. Michael T. Cox is a senior scientist in the Intelligent Distributed Computing Department of BBN Technologies, Cambridge, MA. Before this position, Dr. Cox was an assistant professor in the Department of Computer Science & Engineering at Wright State University, Dayton, Ohio, where he directed Wright State's Collaboration and Cognition Laboratory. He received his Ph.D. in Computer Science from the Georgia Institute of Technology, Atlanta, in 1996 and his undergraduate degree from the same institution in 1986. From 1996 to 1998, he was a postdoctoral fellow in the Computer Science Department at Carnegie Mellon University in Pittsburgh, working on the PRODIGY project. His research interests include case-based reasoning, collaborative mixed-initiative planning, intelligent agents, understanding (situation assessment), introspection, and learning. More specifically, he is interested in how goals interact with and influence these broader cognitive processes. His approach to research follows both artificial intelligence and cognitive science directions.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). ICP license: 京ICP备09084417号