Similar Documents
20 similar documents found (search time: 15 ms).
1.
Discrete event simulations (DES) provide a powerful means for modeling complex systems and analyzing their behavior. DES capture all possible interactions between the entities they manage, which makes them highly expressive but also compute-intensive. These computational requirements often impose limitations on the breadth and/or depth of research that can be conducted with a discrete event simulation. This work describes our approach for leveraging the vast quantity of computing and storage resources available in both private organizations and public clouds to enable real-time exploration of discrete event simulations. Rather than directly targeting simulation execution speeds, we autonomously generate and execute novel scenario variants to explore a representative subset of the simulation parameter space. The corresponding outputs are analyzed and used by our framework to produce models that accurately forecast simulation outcomes in real time, providing interactive feedback and facilitating exploratory research. Our framework distributes the workloads associated with generating and executing scenario variants across a range of commodity hardware, including public and private cloud resources. Once the models have been created, we evaluate their performance and improve prediction accuracy by employing dimensionality reduction techniques and ensemble methods. To make these models highly accessible, we provide a user-friendly interface that allows modelers and epidemiologists to modify simulation parameters and see projected outcomes in real time.
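The core idea (run expensive simulations offline over a sampled parameter space, then answer what-if queries in real time from a cheap fitted model) can be sketched in a few lines. The `toy_simulation` function and the linear least-squares fit below are hypothetical stand-ins for the paper's DES and forecasting models:

```python
def toy_simulation(contact_rate):
    """Stand-in for an expensive epidemic DES run (hypothetical model)."""
    return 100 * contact_rate + 5  # deterministic toy outcome

def fit_linear_surrogate(xs, ys):
    """Ordinary least squares for y = a*x + b over sampled scenario variants."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x, a=a, b=my - a * mx: a * x + b

# Offline: explore a subset of the parameter space with real simulation runs.
xs = [0.1 * i for i in range(1, 11)]
ys = [toy_simulation(x) for x in xs]
surrogate = fit_linear_surrogate(xs, ys)

# Online: the surrogate answers what-if queries in real time.
print(round(surrogate(0.55), 2))  # → 60.0
```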

2.
This article describes and reflects upon practical experiences in using simulations to increase the crisis preparedness of policy makers and government agencies in the Netherlands. Basic issues in crisis simulation design are discussed in view of the various functions that simulations may perform in crisis management planning. A concrete example of simulation is provided to explain its design and modus operandi. The article concludes with a number of practical recommendations.

3.
It is shown that any one-dimensional cellular automata system can be simulated in real time by a propagating context-dependent Lindenmayer system, PD(m, n)L. Further, any PD(m, n)L system can be simulated by a one-dimensional cellular system. Although real-time simulations exist, there are PD(m, n)L systems for which a linear-time simulation is impossible.
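For reference, the kind of one-dimensional cellular automaton discussed above can be simulated in a few lines. This sketch implements a standard elementary CA with wraparound boundaries; the Wolfram rule-number encoding is a generic convention, not anything specific to the paper:

```python
def ca_step(cells, rule):
    """One synchronous update of an elementary 1-D cellular automaton.
    `rule` is the Wolfram rule number (0-255); boundaries wrap around."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # output for each neighborhood
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]
state = ca_step(state, 90)  # rule 90: XOR of the two neighbours
print(state)  # → [0, 0, 1, 0, 1, 0, 0]
```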

4.
Reproduction of the exact structure of real turbulent flows is crucial in many applications. Four-dimensional variational data assimilation (4D-Var) is widely used in numerical weather forecasting, but it requires enormous computational power to repeatedly solve the flow dynamics and its adjoint, and is therefore unsuitable for real-time flow reproduction problems such as feedback flow control. The Kalman filter and the observer, in which the numerical solution converges asymptotically to the real state by means of a feedback signal proportional to the difference between the calculated state and the real state, require much less computation than the variational method and are potential candidates for solving the problem. Compared with the Kalman filter, the observer has a simpler structure while retaining the essential part of the state estimation. This study deals with a special type of observer, the measurement-integrated simulation (MI simulation), in which a SIMPLER-based flow solver is used as the mathematical model of the system in place of the approximate low-dimensional linear differential equations usually used in observers. Reproduction of the exact structure of a turbulent flow was investigated by MI simulation. A numerical experiment was performed for fully developed turbulent flow in a pipe with a square cross section. The MI simulation was performed with feedback from the standard solution in the flow domain using: (1) all velocity components at all grid points, (2) partial velocity components at all grid points, or (3) all velocity components at partial grid points. Convergence of the MI simulation to the standard solution was investigated using the steady error norm for the convergent state and the time constant for the transient state. The MI simulation using all the velocity information converges exponentially to the standard solution, with a steady-state error reduced from that of the ordinary simulation over a range of feedback gains.
Decreasing the feedback gain reduces the effect of the feedback, while a feedback gain that is too large destabilizes the closed-loop system, resulting in large error. The time constant decreases almost in inverse proportion to the feedback gain as long as the feedback system is stable. For MI simulation with feedback using limited information, feedback using two velocity components (omitting one transverse velocity component) gave good results, whereas the other variants were not satisfactory. For MI simulation with feedback at limited grid points, applying the feedback at the grid points on every 20th plane in the x1 direction performed almost the same as using all grid points at some feedback gains, while feedback on planes skipped in the x2 direction required 10 times more planes to achieve the same reduction rate.
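The observer principle described above, a feedback term proportional to the difference between measured and simulated state, can be illustrated with a minimal scalar sketch. The first-order dynamics and the gain value are illustrative assumptions, not the paper's SIMPLER-based solver:

```python
def step_real(x, dt=0.01):
    """'Real' system: dx/dt = -x (stands in for the actual flow)."""
    return x + dt * (-x)

def step_mi(x_sim, y_meas, gain, dt=0.01):
    """Measurement-integrated step: the solver's own update plus a feedback
    term proportional to (measurement - simulated state)."""
    return x_sim + dt * (-x_sim) + dt * gain * (y_meas - x_sim)

x_real, x_plain, x_mi = 1.0, 0.0, 0.0   # simulations start from a wrong state
for _ in range(1000):
    x_real = step_real(x_real)
    x_plain = step_real(x_plain)        # ordinary simulation: no feedback
    x_mi = step_mi(x_mi, x_real, gain=50.0)

# The MI simulation tracks the real state far more closely.
print(abs(x_real - x_mi) < abs(x_real - x_plain))  # → True
```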

5.
Computer simulation of real system behaviour is increasingly used in research and development. As simulation models become more reliable, they often also become more complex in order to capture the progressive complexity of the real system. Calculation time can be a limiting factor for using simulation models in optimisation studies, for example, which generally require multiple simulations. Instead of using these time-consuming simulation models, metamodels can be considered. A metamodel approximates the original simulation model with high confidence via a simplified mathematical model; a series of simulations then takes only a fraction of the original simulation time, allowing significant computational savings. In this paper, a strategy that is both reliable and time-efficient is provided to guide users in their metamodelling problems. Furthermore, polynomial regression (PR), multivariate adaptive regression splines (MARS), kriging (KR), radial basis function networks (RBF), and neural networks (NN) are compared on a building energy simulation problem. We find that, for the outputs of this example and based on root mean squared error (RMSE), the coefficient of determination (R2), and maximal absolute error (MAE), KR and NN are the best techniques overall. Although MARS performs slightly worse than KR and NN, it is preferred because of its simplicity. For different applications, other techniques might be optimal.
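A minimal polynomial-regression metamodel, the simplest of the techniques compared above, might look like the following. The quadratic "building energy" response is a made-up example, and the fit solves the normal equations directly rather than using any library:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial metamodel via the normal equations
    (Gaussian elimination with partial pivoting on a small dense system)."""
    m = degree + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            ata[r] = [a - f * b for a, b in zip(ata[r], ata[col])]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = s / ata[r][r]
    return lambda x: sum(c * x ** i for i, c in enumerate(coeffs))

def rmse(model, xs, ys):
    return (sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Toy "energy simulation": quadratic response to one design parameter.
xs = [0.5 * i for i in range(9)]
ys = [2 + 3 * x + 0.5 * x * x for x in xs]
print(rmse(polyfit(xs, ys, 1), xs, ys) > rmse(polyfit(xs, ys, 2), xs, ys))  # → True
```

A degree-2 metamodel reproduces the quadratic response almost exactly, while the degree-1 model leaves visible residual error, illustrating the accuracy metrics used in the comparison.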

6.
《Artificial Intelligence》2006,170(14-15):1101-1122
To successfully embed statistical machine learning models in real-world applications, two post-deployment capabilities must be provided: (1) the ability to solicit user corrections and (2) the ability to update the model from these corrections. We refer to the former capability as corrective feedback and the latter as persistent learning. While these capabilities have a natural implementation for simple classification tasks such as spam filtering, we argue that a more careful design is required for structured classification tasks. One example of a structured classification task is information extraction, in which raw text is analyzed to automatically populate a database. In this work, we augment a probabilistic information extraction system with corrective feedback and persistent learning components to assist the user in building, correcting, and updating the extraction model. We describe methods for guiding the user to incorrect predictions, suggesting the most informative fields to correct, and incorporating corrections into the inference algorithm. We also present an active learning framework that minimizes not only how many examples a user must label, but also how difficult each example is to label. We empirically validate each of the technical components in simulation and quantify the user effort saved. We conclude that more efficient corrective feedback mechanisms lead to more effective persistent learning.
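The "most informative field to correct" idea resembles uncertainty sampling from active learning: ask the user about the prediction the model is least sure of. A minimal sketch, with hypothetical record names and confidence scores:

```python
def most_informative(examples, predict_proba):
    """Uncertainty sampling: pick the example whose predicted probability
    is closest to 0.5, i.e. where the model is least confident."""
    return min(examples, key=lambda e: abs(predict_proba(e) - 0.5))

# Hypothetical confidence scores from an extraction model.
scores = {"rec1": 0.97, "rec2": 0.52, "rec3": 0.10}
print(most_informative(list(scores), scores.get))  # → rec2
```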

7.
8.
This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language Processing to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an off-line step, which analyses the source text, and an on-line step, which executes when a user connects to the system with a web browser, at which point the contents and hyperlinks are generated. The framework has been implemented as the Welkin system, which has been used to build three adaptive on-line information sites quickly and easily. Controlled experiments performed with real users have provided positive feedback on the implementation of the system.

9.
Multimedia content adaptation strategies are becoming increasingly important for effective video streaming over today's heterogeneous networks. Evaluation frameworks for adaptive video therefore play an important role in the design and deployment of adaptive multimedia streaming systems. This paper describes a novel simulation framework for rate-adaptive video transmission using the Scalable Video Coding standard (H.264/SVC). Our approach uses feedback information about the available bandwidth to allow the video source to select the most suitable combination of SVC layers for the transmission of a video sequence. The proposed solution has been integrated into the network simulator NS-2 to support realistic network simulations. To demonstrate its usefulness, we performed a simulation study in which a video sequence was transmitted over three network scenarios. The experimental results show that the adaptive SVC scheme implemented in our framework provides an efficient alternative that helps avoid increased congestion in resource-constrained networks. Improvements in video quality, in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), are also obtained.
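The layer-selection step driven by bandwidth feedback might be sketched as a greedy prefix choice over SVC layers: always keep the base layer first, then add enhancement layers while the cumulative bitrate fits. The bitrates below are hypothetical, and the framework's actual logic inside NS-2 is surely richer:

```python
def select_layers(layer_bitrates_kbps, available_bw_kbps):
    """Pick the largest prefix of SVC layers (base layer first) whose
    cumulative bitrate fits the bandwidth reported by the feedback channel."""
    chosen, used = [], 0
    for i, rate in enumerate(layer_bitrates_kbps):
        if used + rate > available_bw_kbps:
            break
        chosen.append(i)
        used += rate
    return chosen

# Hypothetical bitrates: base layer plus two enhancement layers.
print(select_layers([400, 300, 500], 800))   # → [0, 1]
print(select_layers([400, 300, 500], 1500))  # → [0, 1, 2]
```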

10.

To enhance the performance and dynamics of a direct-current (DC) motor drive, this paper proposes a new alternative based on the recently introduced symbiotic organisms search (SOS) algorithm for tuning proportional–integral (PI) parameters. SOS imitates the symbiotic behavior seen among organisms in an ecosystem; it requires no tuning parameters of its own, and its three-phase procedure is easy to implement. After the optimized Kp–Ki pair is obtained within carefully prepared simulation software, the gains are used in real time. With the DC-motor speed-control system managed by a TMS320F28335 DSP, several simulation and experimental results confirming the performance of the proposal are presented, along with comparisons against particle swarm optimization (PSO), the genetic algorithm (GA), and the Ziegler–Nichols (Z–N) tuning method. The results explicitly show that SOS yields the best tracking performance and load-disturbance rejection capability for the drive system, followed by PSO, GA, and the Z–N method, respectively, because the gains obtained by SOS outperform those obtained by the other methods.
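A discrete PI speed-control loop, the controller whose Kp–Ki gains the SOS algorithm tunes, can be sketched as follows. The first-order motor model and the gain values are illustrative assumptions, not the tuned values from the paper:

```python
def pi_step(error, integ, kp, ki, dt):
    """One discrete PI update: u = Kp*e + Ki*integral(e)."""
    integ += error * dt
    return kp * error + ki * integ, integ

# Toy first-order DC-motor speed model: w' = (u - w) / tau  (hypothetical).
tau, dt, target = 0.5, 0.001, 100.0
w, integ = 0.0, 0.0
for _ in range(5000):
    u, integ = pi_step(target - w, integ, kp=2.0, ki=10.0, dt=dt)
    w += dt * (u - w) / tau

print(abs(target - w) < 1.0)  # speed settles near the set-point → True
```

With the integral term, the steady-state tracking error goes to zero; the quality of the transient (overshoot, settling time) is exactly what gain tuning optimizes.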


11.
《Control Engineering Practice》2003,11(10):1127-1142
In this paper, we apply a dynamic anti-windup scheme to improve the performance of a conventional proportional–integral (PI) controller for active queue management (AQM) supporting TCP flows. When a PI controller is used for AQM, windup of the integral action can degrade performance because the packet drop probability is limited between 0 and 1. We therefore propose a TCP/AQM model with a saturating actuator and apply a dynamic anti-windup method to improve the performance of the conventional PI AQM scheme. The proposed scheme not only provides graceful performance degradation, but also guarantees the stability of the overall system with the linearized TCP model. We verify the performance of the proposed scheme through ns-2 simulations. The simulation results show that our scheme outperforms the conventional PI controller when the traffic load is not stationary, which is always the case in real network environments.
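One common dynamic anti-windup construction, back-calculation, feeds the difference between the saturated and raw control signals back into the integrator so the integral state stays bounded while the actuator is pinned at its limit. This is a minimal sketch under assumed gains, not the paper's exact scheme or parameters:

```python
def antiwindup_pi_step(error, integ, kp, ki, kaw, dt):
    """PI step with back-calculation anti-windup: when the drop probability
    saturates at [0, 1], the excess (u_sat - u_raw) is fed back to the
    integrator through gain kaw."""
    u_raw = kp * error + ki * integ
    u_sat = min(1.0, max(0.0, u_raw))          # AQM actuator limits
    integ += dt * (error + kaw * (u_sat - u_raw))
    return u_sat, integ

# A long burst drives the raw command far above 1; back-calculation keeps
# the integral state bounded so recovery is fast once the burst ends.
integ = 0.0
for _ in range(1000):
    _, integ = antiwindup_pi_step(error=5.0, integ=integ,
                                  kp=0.4, ki=2.0, kaw=10.0, dt=0.01)
print(integ < 10.0)  # → True (a plain integrator would have reached ~50)
```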

12.
With more and more real deployments of wireless sensor network applications, their success ultimately depends on whether the sensor networks can provide a high-quality stream of data over a long period. In this paper, we propose a consistency-driven data quality management framework called Orchis that integrates data quality into an energy-efficient sensor system design. Orchis consists of four components: data consistency models, adaptive data sampling and processing protocols, consistency-driven cross-layer protocols, and flexible APIs for managing data quality, all supporting the goals of high data quality and energy efficiency. We first formally define a consistency model that includes not only temporal and numerical consistency but also application-specific data requirements and data dynamics in the sensing field. Next, we propose an adaptive, lazy, energy-efficient data collection protocol that adapts the sampling rate to the data dynamics in the sensing field and stays lazy while data consistency is maintained. Finally, we conduct a comprehensive evaluation of the proposed protocol based on both TOSSIM-based simulation and a real prototype implementation using MICA2 motes. The results from both the simulation and the prototype show that our protocol reduces the number of delivered messages, improves the quality of collected data, and in turn extends the lifetime of the whole network. Our analysis also implies that the tradeoff between data consistency requirements and energy saving should be set carefully according to the specific requirements of different applications.
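The adapt-and-stay-lazy sampling policy can be sketched as a simple rule: back off while consecutive readings stay within the consistency tolerance, and speed up when they do not. The thresholds and bounds below are illustrative assumptions, not Orchis's actual parameters:

```python
def next_interval(last, current, interval, tol=0.5, min_s=1.0, max_s=60.0):
    """Lazy adaptive sampling: lengthen the interval while consecutive
    readings stay within the numerical-consistency tolerance, shorten it
    otherwise."""
    if abs(current - last) <= tol:
        return min(max_s, interval * 2)   # field is quiet: sample less often
    return max(min_s, interval / 4)       # field is dynamic: sample more often

interval = 4.0
interval = next_interval(20.0, 20.2, interval)
print(interval)  # → 8.0 (reading barely changed, so stay lazy)
interval = next_interval(20.2, 25.0, interval)
print(interval)  # → 2.0 (big jump, so sample faster)
```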

13.
A matrix framework based on Petri net (PN) theory and standard industrial engineering (IE) techniques is adopted in this paper. Operational times are modeled and introduced into the matrix system model to activate its functioning. Different types of time, including times for parts input, for operations and processing, for resource arrival, and for finished-goods output, are developed for designing manufacturing systems. Accordingly, different time matrices Ts (i.e., Tu, Tv, Tr, Ty) are constructed. These time matrices are the key factors for integrating manufacturing systems so as to approach real-time manufacturing practice. The key procedure for developing them is to integrate the manufacturing matrix framework with the techniques of max-plus algebra. The contribution of this paper is to introduce and establish time in the matrix manufacturing system so as to approach real-world production situations. A system simulation is also provided to demonstrate the integration between the matrix formulation and the time matrices.
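The max-plus algebra underlying such time matrices replaces ordinary (+, ×) with (max, +), so a matrix product propagates worst-case completion times through the system. A minimal sketch with hypothetical operation times:

```python
NEG_INF = float("-inf")  # the max-plus "zero": no connection

def maxplus_matmul(a, b):
    """Max-plus product: (A (x) B)[i][j] = max_k (A[i][k] + B[k][j]).
    Entries are completion/transport times; -inf marks a missing path."""
    n, m, p = len(a), len(b), len(b[0])
    return [[max(a[i][k] + b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Hypothetical two-machine line: availability times combined with
# operation times give earliest completion times.
T = [[3.0, NEG_INF],
     [2.0, 4.0]]
x = [[0.0], [1.0]]            # parts available at t=0 and t=1
print(maxplus_matmul(T, x))   # → [[3.0], [5.0]]
```

Machine 2's completion time is max(2+0, 4+1) = 5: it must wait for whichever of its inputs finishes last, which is exactly the synchronization semantics the time matrices encode.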

14.
An RTX-Based Real-Time Simulation System for Terrain Following
To verify a terrain-following system through simulation and to support training, a real-time simulation system based on RTX was developed. The system adopts a Windows + RTX + Simulink real-time simulation architecture on a dual-core-CPU PC. On this architecture, a terrain-following real-time simulation system was designed and implemented; its structure and composition are presented, and key techniques are described in detail, including real-time code generation from the Simulink model, inter-process communication and data exchange, and real-time updating of terrain data. Test results and operating experience show that the system is reliable, convenient to use, and easy to develop and maintain, fully meeting the needs of real-time terrain-following simulation.

15.
Manual robot guidance is an intuitive approach to teaching robots with human skills in the loop. It is particularly useful to manufacturers because of its high flexibility and low programming effort. However, manual robot guidance requires compliance control, which is generally not available in position-controlled industrial robots. We address this issue with a simulation-driven approach. We systematically capture the interactive dynamic behavior of intelligent robot manipulators within physics-based virtual testbeds, regardless of the type of application. On this basis, we develop structures to equip simulated robots with motion control capabilities that include soft physical interaction control driven in real time by real external guidance forces. We then transfer the virtual compliant behavior of the simulated robots to their physical counterparts to enable manual guidance. The simulator assists operators through timely and insightful robot monitoring, as well as meaningful performance indexes, and the testbed allows us to swiftly assess guidance within numerous interaction scenarios. Experimental case studies illustrate the practical usefulness of the symbiotic transition between 3D simulation and reality, as pursued by the eRobotics framework, to address challenging issues in industrial automation.
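Compliance for manual guidance is often realized with an admittance law, in which the commanded velocity responds to the operator's hand force. This minimal scalar sketch uses a generic first-order admittance with assumed mass and damping values, not the paper's controller:

```python
def admittance_step(v, f_ext, mass, damping, dt):
    """One Euler step of a virtual admittance law M*dv/dt + D*v = F_ext:
    the guided axis moves in response to the external guidance force."""
    return v + dt * (f_ext - damping * v) / mass

v = 0.0
for _ in range(2000):              # constant 10 N push held for 2 s
    v = admittance_step(v, f_ext=10.0, mass=2.0, damping=20.0, dt=0.001)
print(round(v, 3))  # → 0.5 (steady-state velocity F/D)
```

Lowering the virtual damping makes the robot feel "lighter" under the operator's hand, which is the tunable compliant behavior transferred from simulation to the physical robot.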

16.
Recent advances in computers, networking, and telecommunications offer new opportunities for using simulation and gaming as methodological tools for improving crisis management. It has become easy to develop virtual environments to support games, to have players at distributed workstations interacting with each other, to have automated controllers supply exogenous events to the players, to enable players to query online data files during the game, and to prepare presentation graphics for use during the game and for post-game debriefings. Videos can be used to present scenario updates to players in “newscast” format and to present pre-taped briefings by experts to players. Organizations responsible for crisis management are already using such technologies in constructing crisis management systems (CMSs) to coordinate response to a crisis, provide decision support during a crisis, and support activities prior to the crisis and after the crisis. If designed with gaming in mind, those same CMSs could be easily used in a simulation mode to play a crisis management game. Such a use of the system would also provide personnel with opportunities to rehearse for real crises using the same tools they would have available to them in a real crisis. In this paper, we provide some background for the use of simulation and gaming in crisis management training, describe an architecture for simulation and gaming, and present a case study to illustrate how virtual environments can be used for crisis management training.

17.
Active queue management (AQM) is an effective method used in Internet routers for congestion avoidance and for achieving a tradeoff between link utilization and delay. The de facto standard, the random early detection (RED) AQM scheme, and most of its variants use the average queue length as a congestion indicator to trigger packet dropping. This paper proposes a novel packet-dropping scheme, called self-tuning proportional and integral RED (SPI-RED), as an extension of RED. SPI-RED is based on a self-tuning proportional and integral feedback controller that considers not only the average queue length at the current time point, but also the past queue lengths during a round-trip time, to smooth the impact of short-lived traffic dynamics. Furthermore, we give a theoretical analysis of system stability and guidelines for selecting feedback gains for the TCP/RED system that stabilize the average queue length at a desirable level. The proposed method can also be applied to other variants of RED. Extensive simulations have been conducted with ns-2. The simulation results demonstrate that the proposed SPI-RED algorithm outperforms existing AQM schemes in terms of drop probability and stability.
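A PI-style drop-probability computation over a round-trip-time window of queue samples, in the spirit of (but not identical to) SPI-RED, might be sketched as follows; the reference queue length and gains are illustrative assumptions:

```python
from collections import deque

def pi_drop_prob(queue_samples, q_ref, kp, ki, dt):
    """PI-style drop probability sketch: the proportional term uses the
    queue length averaged over the last round-trip time, and the integral
    term accumulates the error history. Clamped to a valid probability."""
    avg_q = sum(queue_samples) / len(queue_samples)
    err = avg_q - q_ref
    integral = sum((q - q_ref) * dt for q in queue_samples)
    return min(1.0, max(0.0, kp * err + ki * integral))

window = deque([120, 150, 180], maxlen=3)   # queue lengths over one RTT
p = pi_drop_prob(window, q_ref=100, kp=0.002, ki=0.001, dt=0.01)
print(round(p, 4))  # → 0.1015
```

Averaging over the window rather than reacting to the instantaneous queue is what smooths out short-lived traffic bursts.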

18.
This paper proposes AmISim, a general architecture for testing, validating and verifying Ambient Intelligence (AmI) environments. Developing AmI is a very complex task because the technology must often adapt to contextual information as well as to unpredictable behaviours and environmental features. The presented architecture addresses AmI applications so as to cover the different components of such systems: environment, users, context and adaptation. It is the first architecture able to cover all of these features, which a full AmI system requires. The paper shows that AmISim can cover a complete AmI system and provide a framework for testing scenarios that would be impossible to test in real environments or even with previous simulation approaches. Simulated and real elements coexist in AmISim for robust testing, validation and verification of AmI systems, enabling easier and less costly deployment.

19.
Transient simulation in circuit simulation tools such as SPICE and Xyce depends on scalable and robust sparse LU factorizations for efficient numerical simulation of circuits and power grids. As the need for simulations of very large circuits grows, the prevalence of multicore architectures enables us to use shared-memory parallel algorithms for such simulations. A parallel factorization is a critical component of such shared-memory parallel simulations. We develop a parallel sparse factorization algorithm that solves problems from circuit simulations efficiently and maps well to architectural features. The new algorithm exposes hierarchical parallelism to accommodate the irregular structures that arise in our target problems. It also uses a hierarchical two-dimensional data layout, which reduces synchronization costs and maps to the memory hierarchy found in multicore processors. We present an OpenMP-based implementation of the parallel algorithm in a new multithreaded solver called Basker in the Trilinos framework. We present performance evaluations of Basker on the Intel Sandy Bridge and Xeon Phi platforms using circuit and power grid matrices taken from the University of Florida sparse matrix collection and from Xyce circuit simulation. Basker achieves a geometric-mean speedup of 5.91× on CPU (16 cores) and 7.4× on Xeon Phi (32 cores) relative to the state-of-the-art solver KLU. Basker outperforms the Intel MKL Pardiso solver (PMKL) by as much as 30× on CPU (16 cores) and 7.5× on Xeon Phi (32 cores) for low fill-in circuit matrices. Furthermore, Basker provides a 5.4× speedup on a challenging matrix sequence taken from an actual Xyce simulation.
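For orientation, the factorization at the heart of such solvers computes A = LU so that linear systems can be solved by two triangular sweeps. This is a minimal dense Doolittle sketch with no pivoting, no sparsity handling, and none of Basker's hierarchical parallelism:

```python
def lu_factor(a):
    """Dense Doolittle LU (A = L*U, unit-diagonal L). Illustrative only:
    production circuit solvers work on sparse matrices with pivoting."""
    n = len(a)
    lower = [[float(i == j) for j in range(n)] for i in range(n)]
    upper = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            upper[i][j] = a[i][j] - sum(lower[i][k] * upper[k][j]
                                        for k in range(i))
        for j in range(i + 1, n):
            lower[j][i] = (a[j][i] - sum(lower[j][k] * upper[k][i]
                                         for k in range(i))) / upper[i][i]
    return lower, upper

L, U = lu_factor([[4.0, 3.0], [6.0, 3.0]])
print(L)  # → [[1.0, 0.0], [1.5, 1.0]]
print(U)  # → [[4.0, 3.0], [0.0, -1.5]]
```

Reusing L and U across the many right-hand sides of a transient simulation is what makes factorization speed, and hence its parallelization, so important.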

20.
Fireball is an ab initio technique for fast local-orbital simulations of nanotechnological, solid state, and biological systems. We have implemented a convenient interface for new users and software architects in the platform-independent Java language to access Fireball's unique and powerful capabilities. The graphical user interface can be run directly from a web server or from within a larger framework such as the Computational Science and Engineering Online (CSE-Online) environment or the Distributed Analysis of Neutron Scattering Experiments (DANSE) framework. We demonstrate its use for high-throughput electronic structure calculations and a multi-hundred-atom quantum molecular dynamics (MD) simulation.

Program summary

Program title: FireballUI
Catalogue identifier: AECF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 279 784
No. of bytes in distributed program, including test data, etc.: 12 836 145
Distribution format: tar.gz
Programming language: Java
Computer: PC and workstation
Operating system: The GUI will run under Windows, Mac and Linux. Executables for Mac and Linux are included in the package.
RAM: 512 MB
Word size: 32 or 64 bits
Classification: 4.14
Nature of problem: The set-up and running of many simulations (all of the same type) from the command line is a slow process, yet most research-quality codes, including the ab initio tight-binding code FIREBALL, are designed to run from the command line. The desire is to have a method for quickly and efficiently setting up and running a host of simulations.
Solution method: We have created a graphical user interface for use with the FIREBALL code. Once the user has created the files containing the atomic coordinates for each system to be simulated, the user can set up and start the computations of up to hundreds of simulations.
Running time: 3 to 5 minutes on a 2 GHz Pentium IV processor.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号