Similar Literature
20 similar documents found
1.
Advanced application domains such as computer-aided design, computer-aided software engineering, and office automation are characterized by their need to store, retrieve, and manage large quantities of data having complex structures. A number of object-oriented database management systems (OODBMS) are currently available that can effectively capture and process the complex data. The existing implementations of OODBMS outperform relational systems by maintaining and querying cross-references among related objects. However, the existing OODBMS still do not meet the efficiency requirements of advanced applications that require the execution of complex queries involving the retrieval of a large number of data objects and relationships among them. Parallel execution can significantly improve the performance of complex OO queries. In this paper, we analyze the performance of parallel OO query processing algorithms for various benchmark application domains. The application domains are characterized by specific mixes of queries of different semantic complexities. The performance of the application domains has been analyzed for various system and data parameters by running parallel programs on a 32-node transputer-based parallel machine developed at the IBM Research Center at Yorktown Heights. The parallel processing algorithms, data routing techniques, and query management and control strategies have been implemented to obtain accurate estimates of controlling and processing overheads. However, generating large complex databases for the study was impractical; hence, the data used in the simulation have been parameterized. The parallel OO query processing algorithms analyzed in this study are based on a query graph approach rather than the traditional query tree approach. Using the query graph approach, a query is processed by simultaneously initiating execution at several object classes, thereby improving parallelism. During processing, the algorithms avoid executing time-consuming join operations by making use of the object references among the objects. Further, the algorithms do not generate any temporary data, thereby reducing disk accesses. This is accomplished by marking the selected objects and by employing a two-phase query processing strategy.
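
As a purely illustrative aid (not the authors' implementation), the following Python sketch shows the mark-then-collect idea behind the two-phase, query-graph strategy described above: predicates are evaluated at each class, qualifying objects are marked by following inter-object references instead of performing joins, and results are assembled in a second phase. All classes, fields, and data here are hypothetical.

```python
# Illustrative sketch only: a two-phase, mark-based evaluation of a simple
# object query ("parts whose supplier is located in 'NY'"), mimicking the idea
# of following object references instead of performing joins.
# All classes, fields, and data are hypothetical.

class Obj:
    def __init__(self, oid, **attrs):
        self.oid = oid
        self.attrs = attrs
        self.marked = False

# Hypothetical object classes with cross-references (supplier_ref holds an OID).
suppliers = {1: Obj(1, city="NY"), 2: Obj(2, city="LA")}
parts = {10: Obj(10, name="bolt", supplier_ref=1),
         11: Obj(11, name="nut", supplier_ref=2)}

# Phase 1: evaluate local predicates at each class (in parallel in the real
# setting) and mark qualifying objects; no temporary join results are built.
for s in suppliers.values():
    s.marked = (s.attrs["city"] == "NY")
for p in parts.values():
    p.marked = suppliers[p.attrs["supplier_ref"]].marked

# Phase 2: collect the marked objects as the query result.
result = [p.attrs["name"] for p in parts.values() if p.marked]
print(result)  # -> ['bolt']
```

Because only marks are set on existing objects, no temporary join results need to be written to disk, which is the property the abstract highlights.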

2.
Dynamic system modeling tools can dynamically simulate controller behavior at a configured simulation step size, and can also model, within the simulation environment, the system architecture in which the controller resides and its dynamic data interactions. Traditional satellite control system design is therefore usually carried out within a single modeling framework, in which the corresponding control algorithms are also designed. However, because a dynamic system modeling tool has its own clock step and data-flow processing logic, it cannot fully reproduce the target machine's internal ALU logic or the behavior of real peripheral devices. The simulated system may therefore deviate from what the real physical environment requires, introducing a degree of distortion with respect to the target CPU processing system that hosts the satellite controller functions and degrading the fidelity of the simulation. This paper proposes a virtualization-based trusted verification technique for the software component library of a satellite control system. Virtualization is used to fully emulate the functions of the real physical target machine; non-intrusive software runtime monitoring is applied to obtain trusted development and application evidence; and co-simulation components, together with the control algorithm models from the satellite control system design, are used to perform dynamic, synchronized simulation-based verification of each software component.

3.
This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an ‘expert system’ that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.

4.
Data processing complexity, partitionability, locality and provenance play a crucial role in the effectiveness of distributed data processing. Dynamics in data processing necessitates effective modeling that allows the fluidity of data processing to be understood and reasoned about. Through virtualization, resources have become scattered, heterogeneous, and dynamic in performance and networking. In this paper, we propose a new distributed data processing model based on automata where data processing is modeled as state transformations. This approach falls within the category of declarative concurrent paradigms, which are fundamentally different from imperative approaches in that communication and function order are not explicitly modeled. This allows concurrency to be abstracted away and is thus well suited to distributed systems. Automata give us a way to formally describe data processing independently of the underlying processes while also providing routing information to route data, based on its current state, in a P2P fashion around networks of distributed processing nodes. Through an implementation of the model, named Pumpkin, we capture the automata schema and routing table in a data processing protocol and show how globally distributed resources can be brought together in a collaborative way to form a processing plane on which data objects are self-routable.
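
As a rough illustration of modeling data processing as state transformations (the paper's Pumpkin protocol and schema are not reproduced here), the sketch below uses a hypothetical automaton whose entries map a data object's current state to a transformation and the node that should receive the object next.

```python
# Minimal sketch: data objects carry a processing state; an automaton schema
# maps each state to (transformation, next_node). Node names, states, and
# transformations are hypothetical, not the Pumpkin protocol itself.

def clean(record):   return {k: v.strip() for k, v in record.items()}
def enrich(record):  return {**record, "length": len(record["text"])}
def publish(record): print("publishing", record); return record

AUTOMATON = {
    "raw":      (clean,   "node-B"),   # after cleaning, route to node-B
    "cleaned":  (enrich,  "node-C"),   # after enrichment, route to node-C
    "enriched": (publish, None),       # terminal state
}
NEXT_STATE = {"raw": "cleaned", "cleaned": "enriched", "enriched": "done"}

def route(record, state):
    """Apply the transformation for the current state and return
    (new_record, new_state, next_node); a real system would ship the
    record to next_node over the network."""
    transform, next_node = AUTOMATON[state]
    return transform(record), NEXT_STATE[state], next_node

record, state = {"text": "  hello  "}, "raw"
while state != "done":
    record, state, node = route(record, state)
    print("state:", state, "-> next node:", node)
```

In a real deployment the `next_node` entry would drive P2P routing; here it is only printed.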

5.
An approach for the modeling and evaluation of reliability and availability of systems using the knowledge of the reliability growth of their components is presented. Detailed models of reliability and availability for single-component systems are derived under much weaker assumptions than are usually made. These models, termed knowledge models, enable phenomena to be precisely characterized and a number of properties to be deduced. Since the knowledge models are too complex to be applied in real life for performing predictions, simplified models for practical purposes (action models) are discussed. The hyperexponential model is presented and applied to field data of software and hardware failures. This model is shown to be comparable to other models as far as the reliability of single-component systems is concerned; in addition, it enables estimating and predicting the reliability of multicomponent systems, as well as their availability. The transformation approach enables classical Markov models to be transformed into other Markov models that account for reliability growth. The application of the transformation to multicomponent systems is described.
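
For readers unfamiliar with the hyperexponential model named above, one commonly cited single-component form (shown here only as a reminder; the paper's notation and parameterization may differ) mixes two exponentials so that the resulting failure rate decreases over time, which is how reliability growth is captured:

```latex
% One commonly cited form of the hyperexponential reliability-growth model
% (notation may differ from the paper's):
R(t) = \omega\, e^{-\zeta_{\mathrm{sup}} t} + (1-\omega)\, e^{-\zeta_{\mathrm{inf}} t},
\qquad 0 \le \omega \le 1,\quad \zeta_{\mathrm{inf}} \le \zeta_{\mathrm{sup}},
\\[4pt]
\lambda(t) = -\frac{R'(t)}{R(t)}
  = \frac{\omega\,\zeta_{\mathrm{sup}}\, e^{-\zeta_{\mathrm{sup}} t}
          + (1-\omega)\,\zeta_{\mathrm{inf}}\, e^{-\zeta_{\mathrm{inf}} t}}
         {\omega\, e^{-\zeta_{\mathrm{sup}} t} + (1-\omega)\, e^{-\zeta_{\mathrm{inf}} t}} .
```

The failure rate λ(t) then decreases monotonically from its initial value toward ζ_inf as t grows, so reliability growth is obtained with only three parameters.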

6.
Networked Control Systems (NCS) are becoming increasingly ubiquitous in a growing number of applications, such as groups of unmanned aerial vehicles and industrial control systems. The evaluation of NCS properties such as stability and performance is very important given that these systems are typically deployed in critical settings. This paper presents the Networked Control Systems Wind Tunnel (NCSWT), an integrated modeling and simulation tool for the evaluation of NCS. NCSWT integrates Matlab/Simulink and ns-2 for modeling and simulation of NCS using the High Level Architecture (HLA) standard. The tool is composed of two parts: the design-time models and the run-time components. The design-time models use Model Integrated Computing (MIC) to define HLA-based model constructs such as federates, representing the simulators, and interactions, representing the communication between the simulators. MIC techniques facilitate the modeling and design of complex systems by using abstractions defined in domain-specific modeling languages (DSMLs) to describe the systems. The design-time models represent the control system dynamics and networking system behaviors in order to facilitate the run-time simulation of an NCS. The run-time components represent the main software components and interfaces for the actual realization of an NCS simulation using the HLA framework. Our implementation of the NCSWT based on HLA guarantees accurate time synchronization and data communication. Two case studies are presented to demonstrate the capabilities of the tool and to evaluate the impact of network effects on NCS.

7.
For more than 20 years, researchers have designed models to describe swarm intelligence and apply the resulting techniques to complex problems. However, there is still a gap between these models and current MAS methodologies. The goal of this paper is to propose a principled and methodological approach for the engineering of systems based upon swarm intelligence. The constraints are, on the one hand, to enable the analysis, design, and implementation of such systems and, on the other hand, to formally analyze and verify properties of the resulting systems. The principles of the approach are based, on the one hand, on requirement-driven activities that produce goals to be fulfilled by the system of interest and, on the other hand, on an ontological modeling of the problem domain. This ontological modeling conceptualizes the phenomenon one seeks to imitate and thus makes it understandable. The resulting ontology is refined through the methodology's activities down to organizational models.

8.
To enable effective and safe operation of autonomous robots in environments with unknowns and unpredictability, a key practical problem is how to test the functionality and assess the performance of real-time motion planning systems. This is a challenge because the underlying algorithms are real-time, sensing-based, and often non-deterministic. These systems' performance depends on task environments, which can vary in countless ways. Existing testing techniques rely heavily on testers' experience and hardly provide good coverage of possible test scenarios. This paper introduces a systematic model-based testing (MBT) approach to evaluate the functionality and performance of a real-time adaptive motion planning (RAMP) system. The MBT approach uses the formal communicating extended finite state machine model to model RAMP's concurrent components and leverages graph traversal algorithms to systematically generate behavioral test cases. First, component integration is considered by modeling the RAMP components and their interactions. Next, system-level testing is considered by modeling mobile obstacles with unpredictable motion behavior. The behavior models are leveraged to generate Abstract Behavioral Test Cases, which are transformed by test data into executable test cases. The test results demonstrate the effectiveness of applying the systematic MBT approach to the evaluation of real-time robotic systems.
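
To make the generation of behavioral test cases from a state-machine model more concrete, the toy Python sketch below enumerates depth-bounded event sequences from a small finite state machine; the states, events, and coverage criterion are hypothetical and much simpler than the communicating extended finite state machine models used in the paper.

```python
# Toy sketch: generate abstract behavioral test cases by traversing a small
# finite state machine. States, events, and the depth bound are illustrative
# only, not the RAMP/CEFSM models from the paper.

TRANSITIONS = {
    ("Idle",       "goal_received"):     "Planning",
    ("Planning",   "path_found"):        "Moving",
    ("Planning",   "no_path"):           "Idle",
    ("Moving",     "obstacle_detected"): "Replanning",
    ("Replanning", "path_found"):        "Moving",
    ("Moving",     "goal_reached"):      "Idle",
}

def abstract_test_cases(start="Idle", max_depth=4):
    """Depth-bounded traversal that yields event sequences (abstract test
    cases); test data would later turn these into executable test cases."""
    cases = []

    def walk(state, path):
        if len(path) == max_depth:
            cases.append(path)
            return
        extended = False
        for (src, event), dst in TRANSITIONS.items():
            if src == state:
                extended = True
                walk(dst, path + [event])
        if not extended and path:
            cases.append(path)

    walk(start, [])
    return cases

for case in abstract_test_cases():
    print(" -> ".join(case))
```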

9.
The standard approach to stochastic control is dynamic programming. In this paper, we introduce an alternative approach based on direct comparison of the performance of any two policies. This is achieved by modeling the state process as a continuous-time, continuous-state Markov process and applying the same ideas as in the discrete-time, discrete-state case. The approach is simple and intuitively clear; it applies in the same way to different problems with finite and infinite horizons, discounted and long-run average performance, and continuous and jump diffusions. Discounting is not needed when dealing with long-run average performance. The approach provides a unified framework for stochastic control and other optimization theories and methodologies, including Markov decision processes, perturbation analysis, and reinforcement learning.
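
As a hedged reminder of what direct comparison of two policies looks like in the simpler discrete-time, discrete-state setting (the paper develops the continuous-time, continuous-state analogue), the long-run average performance of two policies can be related by a performance-difference formula of roughly this form, where primes denote the second policy and g is the performance potential of the first:

```latex
% A standard discrete-time, discrete-state performance-difference formula;
% the continuous-time version replaces P with the generator of the process.
\eta' - \eta \;=\; \pi' \big[\,(f' + P' g) - (f + P g)\,\big],
\qquad (I - P)\, g + \eta\,\mathbf{1} = f,\quad \eta = \pi f,
```

with π, P, f the steady-state distribution, transition matrix, and reward function of the first policy. Roughly speaking, choosing actions that make the bracketed term nonnegative improves the average performance, without solving a dynamic programming recursion.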

10.
This paper presents an automated approach for processing the Soil Survey Geographic (SSURGO) Database and the National Land Cover Database (NLCD) and deriving gridded a priori parameters for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model from these data sets. Our approach considerably extends methods previously used in the NWS and offers automated and geographically invariant ways of extracting soil information, interpreting soil texture, and spatially aggregating SAC-SMA parameters. The methodology is composed of four components. The first and second components are SSURGO and NLCD preprocessors. The third component is a parameter generator producing SAC-SMA parameters for each soil survey area on an approximately 30-m grid mesh. The last component is a postprocessor creating parameters for user-specified areas of interest on the Hydrologic Rainfall Analysis Project (HRAP) grid. Implemented in open-source software, this approach was used to create a set of SAC-SMA parameter and related soil property grids spanning 25 states, where it was shown to greatly reduce the derivation time while yielding results comparable to those based on the State Soil Geographic Database (STATSGO). The broad applicability of the methodologies and associated intermediate products to hydrologic modeling is discussed.

11.
Response surface modeling is an essential technique for identifying the optimal input parameters of a process, especially when physical knowledge about the process is limited. It explores the relationships between the process input variables and the response variables through a sequence of designed experiments. Conventional response surface models typically rely on a large number of experiments to achieve reliable modeling performance, which can be cost-prohibitive and time-consuming. Furthermore, nonlinear input-output relationships in some processes may not be sufficiently accounted for by existing modeling methods. To address these challenges, this paper develops a new response surface modeling approach based on hybrid multi-task learning (H-MTL). This approach decomposes the variability in process responses into two components, a global trend and a residual term, which are estimated through self-learning and MTL of Gaussian processes (GP), respectively. MTL leverages the similarities between multiple similar-but-not-identical GPs, thus achieving superior modeling performance without increasing experimental cost. The effectiveness of the proposed method is demonstrated by a case study using experimental data collected from real-world ultrasonic metal welding processes with different material combinations. In addition, hyperparameter selection, the effect of the number of tasks, and the determination of the stopping criterion are discussed in detail.
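
The trend-plus-residual decomposition can be pictured with the small scikit-learn sketch below. It is a single-task illustration on synthetic data only; the paper's hybrid multi-task learning across similar-but-not-identical processes is not reproduced, and the kernels and length scales are arbitrary choices.

```python
# Illustrative only: decompose a response surface into a smooth global trend
# plus a residual, each modeled by a Gaussian process (single task, synthetic
# data). The hybrid multi-task learning of the paper is not reproduced here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 1))             # process input (e.g., weld energy)
y = np.sin(6 * X[:, 0]) + 0.3 * X[:, 0] + 0.05 * rng.standard_normal(30)

# Global trend: a GP with a long, fixed length scale captures slow variation.
trend_gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(0.01),
                                    optimizer=None)
trend_gp.fit(X, y)
trend = trend_gp.predict(X)

# Residual: a second GP with a short length scale models what the trend misses.
resid_gp = GaussianProcessRegressor(RBF(length_scale=0.1) + WhiteKernel(0.01),
                                    optimizer=None)
resid_gp.fit(X, y - trend)

X_new = np.array([[0.5]])
y_hat = trend_gp.predict(X_new) + resid_gp.predict(X_new)
print("predicted response at 0.5:", float(y_hat[0]))
```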

12.
13.
This paper discusses methodologies for the specification and analysis of performance-related properties of components and assemblies of components, and outlines an original approach, called CB-SPE, for component-based software performance engineering. The proposed approach relies on the concepts and steps of the SPE technology, adapting them to a CB framework, and uses the standard RT-UML profile for modeling, reshaped according to the CB principles.

14.
Data-intensive applications process large volumes of data using a parallel processing method. MapReduce is a programming model designed for data-intensive applications on massive data sets and an execution framework for large-scale data processing on clusters of commodity servers. Fault tolerance, an easy programming structure, and high scalability are considered strong points of MapReduce; however, its configuration parameters must be fine-tuned to the specific deployment, which makes configuration and performance tuning complex. This paper explains the tuning of the Hadoop configuration parameters that directly affect the performance of a MapReduce job workflow under various conditions, in order to achieve maximum performance. On the basis of the empirical data we collected, it became apparent that three main methodologies can affect the execution time of MapReduce running on cluster systems. Therefore, in this paper, we present a model that consists of three main modules: (1) extending a data redistribution technique in order to find the high-performance nodes, (2) utilizing the number of map/reduce slots in order to make execution more efficient in terms of execution time, and (3) developing a new hybrid routing schedule for the shuffle phase in order to define the scheduler task while reducing the memory-management load.
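
Purely as a hedged illustration of this kind of parameter tuning (not the model proposed in the paper), the sketch below derives a few Hadoop/YARN configuration values from coarse cluster figures; the keys are standard Hadoop 2.x parameter names, but the sizing heuristics are hypothetical.

```python
# Hedged sketch: derive a few Hadoop configuration values from coarse cluster
# figures. The parameter names are standard Hadoop/YARN keys, but the sizing
# rules below are illustrative heuristics, not the tuning model of the paper.

def suggest_config(nodes, node_memory_mb, container_mb=2048):
    containers_per_node = node_memory_mb // container_mb
    return {
        # Common rule of thumb: slightly fewer reducers than total containers,
        # so one "wave" of reduce tasks fits the cluster.
        "mapreduce.job.reduces": int(0.95 * nodes * containers_per_node),
        "mapreduce.map.memory.mb": container_mb,
        "mapreduce.reduce.memory.mb": 2 * container_mb,
        "yarn.nodemanager.resource.memory-mb": containers_per_node * container_mb,
    }

print(suggest_config(nodes=8, node_memory_mb=16384))
```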

15.
Application developers utilizing event-based middleware have sought to leverage domain-specific modeling for the advantages of intuitive specification, code synthesis, and support for design evolution. For legacy and cyber-physical systems, the use of event-based middleware may mean that changes in the computational platform can result in anomalous system behavior, due to the presence of implicit temporal dependencies. These anomalies are a function not of the component implementation, but of the model of computation employed for supporting system composition. In order to address these behavioral anomalies, the paper presents an approach in which time-based blocks are inserted into the system to account for the temporal dependencies. An advantage of capturing the system composition in a domain-specific modeling language is the ability to efficiently refactor an application to include time-triggered, event-based schedulers. This paper describes how an existing event-based component topology can be modified to permit a time-triggered model of computation, with no changes to the existing component software. Further, the time-triggered components can be deployed alongside standard publish/subscribe methodologies. This strategy benefits the maintenance of existing legacy systems upon upgrade, since the current operational mode can be maintained with minimal changes to the legacy software even under changes to the target platform that alter execution speed. These time-triggered layers are discussed in three permutations: fully triggered, start triggered, and release triggered. A discussion is provided regarding the limitations of each approach, and a brief example is given. The example shows how to apply these triggering approaches without modifying the existing components, by inserting triggered buffers between legacy components.
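
To make the idea of inserting triggered buffers between unmodified legacy components more concrete, the Python sketch below shows a release-triggered buffer that absorbs published events and forwards only the most recent one at fixed scheduler ticks; the interfaces are hypothetical and not the middleware APIs discussed in the paper.

```python
# Illustrative only: a "release-triggered" buffer placed between an unmodified
# publisher and subscriber. The buffer absorbs events as they arrive and
# releases only the most recent one at fixed time ticks, decoupling the
# subscriber from platform-dependent publication timing. Interfaces are
# hypothetical.

class ReleaseTriggeredBuffer:
    def __init__(self, subscriber):
        self.subscriber = subscriber
        self.latest = None

    def on_event(self, event):          # called by the legacy publisher
        self.latest = event             # buffer the event, do not forward yet

    def on_tick(self):                  # called by a time-triggered scheduler
        if self.latest is not None:
            self.subscriber(self.latest)
            self.latest = None

received = []
buf = ReleaseTriggeredBuffer(subscriber=received.append)

# The publisher emits faster than the tick rate; only the freshest value survives.
for value in (1, 2, 3):
    buf.on_event(value)
buf.on_tick()
buf.on_event(4)
buf.on_tick()
print(received)  # -> [3, 4]
```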

16.
The recent data deluge to be processed represents one of the major challenges in the computational field. This fact has led to the growth of specially designed applications known as data-intensive applications. In general, in order to ease the parallel execution of data-intensive applications, input data is divided into smaller data chunks that can be processed separately. However, in many cases these applications show severe performance problems, mainly due to load imbalance, inefficient use of available resources, and improper data partition policies. In addition, the impact of these performance problems can depend on the dynamic behavior of the application. This work proposes a methodology to dynamically improve the performance of data-intensive applications based on: (i) adapting the size and the number of data partitions to reduce the overall execution time; and (ii) adapting the number of processing nodes to achieve an efficient execution. We propose to monitor the application behavior for each exploration (query) and use the gathered data to dynamically tune the performance of the application. The methodology assumes that a single execution includes multiple related queries on the same partitioned workload. The adaptation of the workload partition factor is addressed through the definition of the initial size of the data chunks; the modification of the scheduling policy to send the data chunks with large processing times first; the division of the data chunks with the largest associated computation times; and the joining of data chunks with small computation times. The criteria for dividing or joining chunks are based on the chunks' associated execution times (average and standard deviation) and the number of processing elements being used. Additionally, resource utilization is addressed through the dynamic evaluation of the application performance and the estimation and modification of the number of processing nodes that can be used efficiently. We have evaluated our strategy using a real and a synthetic data-intensive application as case studies. Analytical expressions have been analyzed through simulation. Applying our methodology, we have obtained encouraging results, reducing total execution times and making efficient use of resources.
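
As a rough sketch of the divide/join criterion described above (the thresholds and the merge rule here are hypothetical simplifications; the paper's rules also account for the number of processing elements), the Python code below splits chunks whose measured time is well above the mean and merges consecutive fast chunks.

```python
# Illustrative sketch of adaptive chunk repartitioning: split chunks whose
# measured processing time is well above the mean, merge consecutive fast
# chunks. The thresholds (mean + stdev, half the mean) are hypothetical
# simplifications of the criteria described in the abstract.
from statistics import mean, stdev

def repartition(chunks):
    """chunks: list of (size_in_records, measured_time_seconds)."""
    times = [t for _, t in chunks]
    mu, sigma = mean(times), stdev(times)
    result, pending = [], None
    for size, t in chunks:
        if t > mu + sigma:                       # slow chunk: split into halves
            result += [(size // 2, t / 2), (size - size // 2, t / 2)]
        elif t < mu / 2:                         # fast chunk: merge with the next fast one
            if pending is None:
                pending = (size, t)
            else:
                result.append((pending[0] + size, pending[1] + t))
                pending = None
        else:
            result.append((size, t))
    if pending is not None:
        result.append(pending)
    return result

print(repartition([(100, 2.0), (100, 9.0), (100, 2.5), (100, 0.2), (100, 0.3)]))
# -> [(100, 2.0), (50, 4.5), (50, 4.5), (100, 2.5), (200, 0.5)]
```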

17.
Information Systems, 1987, 12(3): 243-254
The organizational control systems model (OCSM) is presented and its value as a tool for the analysis of user-defined information needs is discussed. The model is based on the view of goal-oriented organizations as adaptive systems with a hierarchical control structure. With the use of this model, the structure of an organization's control system is characterized in terms of functional and control hierarchies. The logical information processing network (LIPN) model is then used in the identification and specification of the information processing structure underlying the organization's control system. The interplay of these two models in the analysis and specification of information system requirements is also discussed in the context of the abstraction-synthesis methodology of information systems development. This methodology aims at the development of information systems that support the organization's control system with due regard to the modes of interaction required by the information system users. The approach presented here is also discussed in relation to current methodologies of information systems development.

18.
Today many enterprises are forced to produce products that fulfill individual customer requirements. This situation results in an exploding number of variant parts of a product, which in turn results in a huge volume of product data. To meet these new requirements, many enterprises have introduced powerful CIM systems. Normally, this has led to heterogeneous EDP concepts characterized by autonomous CIM components, each with its own database. This paper illustrates how rule-based management of product data, on the one hand, significantly reduces the amount of product data and, on the other hand, allows data exchange between several CIM components to be automated. One important consequence of this approach is that the integration of business-oriented and technical components within a CIM system is significantly improved.

19.
The paper examines two existing approaches to software reliability analysis, time-domain reliability growth modeling and input-domain reliability analysis, and presents a new approach that combines some of their individual strengths. An analysis method called tree-based modeling is used to build models based on the combined measurement data. This new approach can be used to assess the reliability of software systems, to track reliability change over time, and to identify problematic subparts characterized by certain input states or time periods. The results can also be used to guide various remedial actions aimed at reliability improvement. This approach has been demonstrated to be applicable and effective in the testing of several large commercial software systems developed in the IBM Software Solutions Toronto Laboratory.
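
As a small, hedged illustration of fitting a tree model to combined time-domain and input-domain measurements (synthetic data, and not the tree-based modeling algorithm of the paper), one can regress observed failure indicators on features describing the test period and the input state and read off the high-failure regions from the printed rules:

```python
# Illustrative only: fit a decision tree on combined time-domain and
# input-domain measurements to locate problematic subparts. The features,
# data, and tree settings are synthetic and hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
n = 400
test_week   = rng.integers(1, 11, n)            # time-domain feature
input_class = rng.integers(0, 4, n)             # input-domain feature (coded)

# Synthetic ground truth: early weeks and input class 2 fail more often.
p_fail = 0.02 + 0.10 * (test_week <= 3) + 0.15 * (input_class == 2)
failed = rng.random(n) < p_fail

X = np.column_stack([test_week, input_class])
tree = DecisionTreeRegressor(max_depth=2).fit(X, failed.astype(float))

# The printed rules point to high-failure regions (subparts/periods to fix).
print(export_text(tree, feature_names=["test_week", "input_class"]))
```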

20.
The recent technological advances in the fields of computing and communication have led to distributed database (DDB) architectures based on the Client/Server paradigm. Available DDB performance prediction methodologies are not adequate, being too expensive both in model definition and in model analysis because of the structural complexity and large dimension of such systems.

This paper concentrates on a new approach to performance modeling of DDB systems, called the “independent modelling approach”. This approach separates the DDB software model from the DDB machinery model, and models from solution techniques. In this way, the aspects related to data contention can be analysed as a pure software characteristic, separated from resource contention, which is considered a machinery characteristic. Furthermore, it is possible to model some system aspects analytically (e.g., the software only, or part of it, without involving the machinery model) and to apply combined analytical/simulation methods to the remaining system aspects. This results in higher flexibility at lower computational cost in the performance analysis of complex DDB architectures.

A case study is developed and experimental results are discussed to show the effectiveness of the approach.

