Similar literature
20 similar documents found (search time: 15 ms).
1.
In this paper, we present a tool for the simulation of fluid models of high-speed telecommunication networks. The aim of such a simulator is to evaluate measures which cannot be obtained through standard tools in reasonable time or through analytical approaches. We follow an event-driven approach in which events are associated with rate changes in fluid flows. We show that under some loose restrictions on the sources, this suffices to efficiently simulate the evolution in time of fairly complex models. Some examples illustrate the use of this approach and the gain that can be observed over standard simulation tools.
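To make the event-driven idea concrete, here is a minimal Python sketch (ours, not the authors' tool): the only events are rate changes of a single source feeding a buffer drained at constant capacity, and the buffer level is advanced linearly between events. All names and parameters are illustrative.

```python
import heapq

# Minimal sketch of event-driven fluid simulation: events are rate changes of a
# source feeding a buffer drained at constant capacity c; the buffer content
# evolves linearly between consecutive events (overflow/loss is ignored here).
def simulate(rate_changes, c, horizon):
    """rate_changes: list of (time, new_input_rate) tuples."""
    events = list(rate_changes)
    heapq.heapify(events)
    t, level, rate = 0.0, 0.0, 0.0
    trace = [(t, level)]
    while events:
        t_next, new_rate = heapq.heappop(events)
        t_next = min(t_next, horizon)
        # fluid level changes at slope (rate - c) between events
        level = max(0.0, level + (rate - c) * (t_next - t))
        t, rate = t_next, new_rate
        trace.append((t, level))
        if t >= horizon:
            break
    # advance from the last event to the end of the horizon
    level = max(0.0, level + (rate - c) * (horizon - t))
    trace.append((horizon, level))
    return trace

print(simulate([(0.0, 2.0), (1.0, 0.5), (3.0, 1.5)], c=1.0, horizon=5.0))
```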

2.
The adoption of the cloud paradigm to access IT resources and services has raised many security issues that need to be addressed. Security becomes an even bigger concern when services built on top of many commercial clouds have to interoperate. Among other factors, the value of the service delivered to end customers is strongly affected by the security of the network that providers are able to build in typical SOA contexts. Currently, every provider advertises its own security strategy by means of proprietary policies, which are sometimes ambiguous and very often address the security problem from a non-uniform perspective. Even policies expressed in standardized languages do not appear to fit a dynamic scenario like the SOA's, where services need to be sought and composed on the fly in a way that is compatible with end-to-end security requirements. We therefore propose an approach that leverages semantic technology to enrich standardized security policies with ad-hoc content. The semantic annotation of policies enables machine reasoning, which is then used for both the discovery and the composition of security-enabled services. In the presented approach, the semantic enrichment of policies is enforced by an automatic procedure. We further developed a semantic framework capable of matching providers' security capabilities against customers' security requirements in a smart way, and tested it on a use-case scenario.

3.
The Synthetic BattleBridge (SBB) gives users interface tools that help them navigate, analyze, and comprehend a complex, active, distributed virtual battlespace environment. A primary objective of the SBB project is to develop an observatory for real-time monitoring and assessment of the activities of intelligently behaving autonomous actors and manned simulators within a virtual environment (VE). The SBB therefore uses both environment and computation distribution to let users monitor and assess activity within a VE in real time and to provide cognitive support for situation analysis. The SBB also addresses the development and evaluation of advanced user interfaces, information aggregation techniques, and information presentation techniques. For the SBB to function as we require, it must present to the user the spatial orientation, type, motion, and distribution of actors in a VE. Key issues regarding these capabilities are updating and displaying the VE at interactive display rates and providing a very large scale environment containing a wide variety of actor types, sizes, and speeds. The SBB provides these capabilities by computing vehicle position, motion, and velocity data for all actors in the battlespace and presenting this information in real time using a 3D rendering of the battlespace and its contents. The user controls the SBB and its information presentation with an interface consisting of a combination of visual icons and text, which we describe.

4.
Bershad, B.N.; Levy, H.M. Computer, 1988, 21(5): 50-60
The authors describe THERE (the heterogeneous environment for remote execution), a general-purpose facility designed to simplify the adaptation of nonnetworked, nonheterogeneous applications to a distributed heterogeneous environment. Existing applications made accessible with THERE become heterogeneous network services, and users of those services become heterogeneous network clients. The authors discuss the problems of remote computation, provide an overview of THERE, and describe the building of THERE applications. They present TPL, the THERE programming language, which is tailored toward building execution environments for application programs (on the server side), defining client interfaces to remote services (on the client side), and providing system-independent communication between client and server interfaces.

5.
Verifying who someone is on the Internet is a prerequisite for online services that process sensitive or valuable information. While this has been solved with national or sectorial electronic identification (eID) schemes, general, cross-border solutions are rare. Cross-border eID difficulties have several origins: (i) incompatible national eID models; (ii) different legislations with incompatible objectives; (iii) lack of a common language and semantics; (iv) different common procedures, especially with regard to mandates and delegation; (v) different implementations of the same eID models. These have been addressed by STORK, a project that developed a federated cross-border eID system piloted in about twenty European Union Member States in service sectors as sensitive as eBanking and eHealth. STORK designed and implemented a large-scale interoperability framework, allowing different systems of different models to coexist, using a common language with common semantics and satisfying national privacy legislations. The experience gained from this large-scale pilot fed into EU policy-making; in particular, the recently enacted eIDAS Regulation requiring mutual recognition of eID by 2018 has been directly influenced by STORK and its lessons learned.

6.
One of the functions of a portfolio management system is to return an efficient frontier quickly. However, in the large-scale problems (1000 to 3000 securities) that are beginning to appear with greater frequency, the task of computing the mean-variance efficient frontier, even when all constraints are linear, can range from significant to prohibitive. For ease of reference, we call mean-variance problems with all linear constraints Markowitz problems. With little in the literature on the time required to compute the efficient frontier of a Markowitz problem, we conduct experiments that vary problem size, method employed, and optimizer used to present an overall picture of the situation and establish benchmarks in the large-scale arena. One conclusion of the experiments is the superiority of the class of techniques that fall under the title of parametric quadratic programming.
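For reference, a standard statement of such a Markowitz problem (the notation below is ours, not the paper's):

```latex
\min_{x}\; \tfrac{1}{2}\, x^{\top}\Sigma x \;-\; \lambda\, \mu^{\top} x
\qquad \text{s.t.} \qquad A x \le b, \quad \mathbf{1}^{\top} x = 1
```

Here $\Sigma$ is the covariance matrix of returns, $\mu$ the vector of expected returns, and $Ax \le b$ collects the remaining linear constraints. Sweeping the risk-tolerance parameter $\lambda \ge 0$ traces the efficient frontier; parametric quadratic programming computes the whole frontier in one pass by tracking the breakpoints of $\lambda$ at which the active constraint set changes, which is why it scales well to large problems.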

7.
This paper introduces Qsimulation, a tool for simulating quantum computation on a classical computer. The tool consists of four main components: an imperative quantum programming language, an interpreter for quantum computation, a graphical user interface for simulating the execution of quantum programs, and an error-handling module. It helps instructors and newcomers design and test simple quantum circuits and quantum programs.
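As a rough illustration of what the interpreter of such a tool does internally, the following Python sketch applies single-qubit gates to a state vector (the sketch is ours; Qsimulation's actual language and interpreter are not shown):

```python
import numpy as np

# Minimal state-vector simulation of a quantum circuit (illustrative only).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # Pauli-X gate

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state vector."""
    ops = [gate if q == target else np.eye(2) for q in range(n_qubits)]
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)               # build the full 2^n x 2^n operator
    return full @ state

state = np.zeros(4); state[0] = 1.0            # |00>
state = apply_gate(state, H, target=0, n_qubits=2)
print(np.abs(state) ** 2)                      # measurement probabilities
```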

8.
We introduce a new performance metric, called the load balancing factor (LBF), to assist programmers when evaluating different tuning alternatives. The LBF metric differs from traditional performance metrics in that it is intended to measure the performance implications of a specific tuning alternative rather than quantify where time is spent in the current version of the program. A second unique aspect of the metric is that it provides guidance about moving work within a distributed or parallel program rather than reducing it. A variation of the LBF metric can also be used to predict the performance impact of changing the underlying network. The LBF metric is computed incrementally and online during the execution of the program being tuned. We also present a case study showing that our metric can accurately predict the actual performance gains for a test suite of six programs.

9.
10.
In order to study the dynamics of protein and nucleic acid conformations, a molecular folding-unfolding system (FUS, written in Lisp) has been developed. Secondary-structure features of proteins and nucleic acids are graphically represented by cubes in a modified 'Blocks World' paradigm. Modeling of protein and nucleic acid unfolding (denaturation) and folding of their three-dimensional structure is possible through high-level 'block' operators that allow displacement of these structural features in space. Due to the flexible nature of this program, FUS is a useful tool for the rapid evaluation of user-defined rules governing conformational changes. The use of FUS to unfold three common proteins (prealbumin, flavodoxin and triose phosphate isomerase) and a tRNA is presented.

11.
12.
Optimization of data-parallel applications for modern HPC platforms requires partitioning the computations between the heterogeneous computing devices in proportion to their speed. Heterogeneous data partitioning algorithms are based on computation performance models of the executing platforms. Their implementation is not trivial, as it requires accurate and efficient benchmarking of computing devices, which may share resources and/or execute different codes; appropriate interpolation methods to predict performance; and advanced mathematical methods to solve the data partitioning problem. In this paper, we present FuPerMod, a software tool that addresses these implementation issues and automates the development of data partitioning code in data-parallel applications for heterogeneous HPC platforms.
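The core partitioning step can be illustrated with a short Python sketch of speed-proportional partitioning (a simplification; FuPerMod's functional performance models and API are more elaborate):

```python
# Speed-proportional data partitioning (simplified sketch, not FuPerMod's API).
def partition(total_items, speeds):
    """Split `total_items` among devices in proportion to their benchmarked speeds."""
    total_speed = sum(speeds)
    shares = [int(total_items * s / total_speed) for s in speeds]
    # assign any remainder left by rounding down to the fastest device
    shares[speeds.index(max(speeds))] += total_items - sum(shares)
    return shares

# e.g. benchmarked speeds (items/s) for a CPU, a GPU, and an accelerator (assumed values)
print(partition(10_000, speeds=[120.0, 60.0, 20.0]))
```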

13.
Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function owes its outstanding performance, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy and efficient combinatorial coding with unmatched chemical information processing mechanisms. The last decade has seen important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. The EU-funded project NEUROCHEM (Bio-ICT-FET-216916) developed novel computing paradigms and biologically motivated artefacts for chemical sensing, taking its inspiration from the biological olfactory pathway. To demonstrate this approach, a biomimetic demonstrator has been built that features a very large-scale sensor array (65,536 elements) using conducting-polymer technology, which mimics the olfactory receptor neuron layer. It implements derived computational-neuroscience algorithms in an embedded system that interfaces with the chemical sensors and processes their signals in real time. This embedded system integrates abstracted computational models of the main anatomical building blocks in the olfactory pathway: the olfactory bulb and olfactory cortex in vertebrates (respectively, the antennal lobe and mushroom bodies in insects). For implementation in the embedded processor, an abstraction phase has been carried out in which their processing capabilities are captured by algorithmic solutions implemented in software. Finally, the algorithmic models are tested in mixed chemical plumes with an odour robot having navigation capabilities.

14.
Hybrid predictive dynamics: a new approach to simulate human motion
A new methodology, called hybrid predictive dynamics (HPD), is introduced in this work to simulate human motion. HPD is defined as an optimization-based motion prediction approach in which the joint-angle control points are unknowns in the equations of motion. Some of these control points are bounded by the experimental data. The joint torques and ground reaction forces are calculated by an inverse algorithm within the optimization procedure. Therefore, the proposed method is able to incorporate motion capture data into the formulation to predict natural and subject-specific human motions. Hybrid predictive dynamics consists of three procedures, each a sub-optimization problem. First, the motion capture data are transferred from Cartesian space into joint space using an optimization-based inverse kinematics (IK) methodology. Second, the joint profiles obtained from IK are interpolated by B-spline control points using an error-minimization algorithm. Third, boundaries are built on the control points to represent specific joint profiles from experiments, and these boundaries are used to guide the predicted human motion. To predict motion more accurately, boundaries can also be built on the kinetic variables if the experimental data are available. The efficiency of the method is demonstrated by simulating a box-lifting motion. The proposed method exploits both prediction and tracking capabilities simultaneously, so HPD has broad applications in human motion prediction, especially towards clinical applications.
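The second procedure (fitting B-spline control points to IK joint profiles and bounding them) can be sketched in Python as follows; the joint-angle data, smoothing parameter, and bound width are hypothetical, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Sketch: fit B-spline control points to a joint-angle profile obtained from IK,
# then build box bounds around the control points to guide motion prediction.
t = np.linspace(0.0, 1.0, 50)                     # normalized time
theta = 0.6 * np.sin(2 * np.pi * t)               # hypothetical joint angles (rad)

tck = splrep(t, theta, k=3, s=1e-4)               # cubic B-spline fit
knots, coeffs, degree = tck                       # coeffs are the control points
delta = 0.05                                      # assumed bound width (rad)
lower, upper = coeffs - delta, coeffs + delta     # bounds used in the optimization

theta_fit = splev(t, tck)                         # reconstructed joint profile
print(float(np.max(np.abs(theta_fit - theta))))   # interpolation error
```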

15.
A modified J2-flow theory is proposed which mimics, through a nonassociative flow rule, the effects induced by the presence of a corner at the loading point on the yield surface, as assumed in J2-corner theories. In particular, the nonassociative part of the plastic strain rate is a nonlinear, rate-independent function of the total strain rate. Computationally, the proposed approach results in a simple modification of the classical radial return algorithm; hence it is well suited for large-scale inelastic calculations.
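For context, the classical associative radial-return step that the proposal modifies is sketched below in Python (textbook J2 plasticity with linear isotropic hardening; the paper's nonassociative correction is not shown):

```python
import numpy as np

# Classical radial-return mapping for J2 (von Mises) plasticity with
# linear isotropic hardening, acting on the deviatoric trial stress.
def radial_return(s_trial, sigma_y, eps_p_bar, G, H):
    """s_trial: deviatoric trial stress (3x3), G: shear modulus, H: hardening modulus."""
    norm_s = np.linalg.norm(s_trial)
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p_bar)
    if f_trial <= 0.0:
        return s_trial, eps_p_bar                   # elastic step: accept trial state
    dgamma = f_trial / (2.0 * G + 2.0 / 3.0 * H)    # plastic multiplier
    n = s_trial / norm_s                            # radial return direction
    s = s_trial - 2.0 * G * dgamma * n              # project back onto the yield surface
    return s, eps_p_bar + np.sqrt(2.0 / 3.0) * dgamma
```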

16.
The concept of green storage in cluster computing has recently attracted enormous interest among researchers. Consequently, several energy-efficient solutions, such as multi-speed disks and disk spin-down methods, have been proposed to conserve power in storage systems and improve disk access. Some researchers have assessed their proposed solutions via simulations, while others have used real-world experiments. Both methods have advantages and disadvantages. Simulations can more swiftly assess the benefits of energy-efficient solutions, but various measurement errors can arise from procedural shortcomings. For instance, many power simulation tools fail to consider how heat increases the power overhead of disk operations. Some researchers claim that their modeling methods reduce the measurement error to 5% in the single-disk model. However, the demand for large-scale storage systems is growing rapidly, and traditional power measurement using a single-disk model is unsuited to such systems because of their complex storage architecture and the unpredictability of numerous disks. Consequently, a number of studies have conducted real-machine experiments to assess the performance of their solutions in terms of power conservation, but such experiments are time consuming. To address this problem, this study proposes an efficient simulation tool called Benchmark Analysis Software for Energy-efficient Solution (BASE), which can accurately estimate disks' power consumption in large-scale storage systems. We evaluate the performance of BASE on real-world traces from Academia Sinica (Taiwan) and Florida International University. BASE incorporates an analytical method for assessing the reliability of energy-efficient solutions. The analytical results demonstrate that the measurement error of BASE is 2.5% lower than that of real-world energy-estimation experiments. Moreover, the results of simulations that assess solution reliability are identical to those obtained through real-world experiments. Copyright © 2015 John Wiley & Sons, Ltd.
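As an illustration of the kind of per-disk power model such simulators evaluate, here is a simplified Python sketch; the constants are assumed for illustration and are not BASE's calibrated parameters:

```python
# Simplified per-disk energy model (illustrative constants, not BASE's parameters).
POWER_W = {"active": 11.0, "idle": 7.5, "standby": 1.0}  # assumed watts per state
SPINUP_J = 135.0                                          # assumed energy per spin-up

def disk_energy(active_s, idle_s, standby_s, spinups):
    """Return joules consumed by one disk over a traced interval."""
    return (POWER_W["active"] * active_s
            + POWER_W["idle"] * idle_s
            + POWER_W["standby"] * standby_s
            + SPINUP_J * spinups)

# A spin-down policy only pays off if the energy saved in standby exceeds the spin-up cost.
print(disk_energy(active_s=600, idle_s=1800, standby_s=1200, spinups=3))
```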

17.
18.
The Journal of Supercomputing - The presentation of Table 2 was incorrect in the original article. The correct Table 2 is given below. The original article has been corrected.

19.
Security is increasingly critical for scientific workflows, which are big-data applications and typically take a considerable amount of time to execute on large-scale distributed infrastructures. A cloud computing platform is such an infrastructure, one that enables dynamic resource scaling on demand. Nevertheless, given the pay-per-use, hourly-based pricing model, users should pay attention to the cost incurred by renting virtual machines (VMs) from cloud data centers. Meanwhile, workflow tasks are generally heterogeneous and require different instance series (i.e., compute optimized, memory optimized, storage optimized, etc.). In this paper, we propose a security- and cost-aware scheduling (SCAS) algorithm for heterogeneous tasks of scientific workflows in clouds. Our proposed algorithm is based on a meta-heuristic optimization technique, particle swarm optimization (PSO), whose coding strategy is devised to minimize the total workflow execution cost while meeting the deadline and risk-rate constraints. Extensive experiments using three real-world scientific workflow applications, as well as the CloudSim simulation framework, demonstrate the effectiveness and practicality of our algorithm.
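A minimal PSO sketch of the general approach appears below (illustrative only: the task-to-VM encoding is simplified and SCAS's deadline and risk-rate constraint handling is omitted):

```python
import numpy as np

# Minimal PSO mapping workflow tasks to VM types to minimize rental cost.
rng = np.random.default_rng(0)
N_TASKS, N_VM_TYPES, N_PARTICLES, ITERS = 8, 4, 20, 100
VM_PRICE = np.array([0.05, 0.10, 0.20, 0.40])                    # $/hour per VM type (assumed)
TASK_HOURS = rng.uniform(0.5, 2.0, size=(N_TASKS, N_VM_TYPES))   # runtime of each task on each type

def cost(position):
    """Round the continuous particle position to a task-to-VM-type mapping and price it."""
    mapping = np.clip(position.round().astype(int), 0, N_VM_TYPES - 1)
    return float(np.sum(TASK_HOURS[np.arange(N_TASKS), mapping] * VM_PRICE[mapping]))

x = rng.uniform(0, N_VM_TYPES - 1, size=(N_PARTICLES, N_TASKS))  # particle positions
v = np.zeros_like(x)                                             # particle velocities
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, N_VM_TYPES - 1)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("cheapest mapping:", gbest.round().astype(int), "cost ($):", cost(gbest))
```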

20.