1.
Future-generation distributed multimedia applications are expected to be highly scalable to a wide variety of heterogeneous devices and highly adaptive across wide-area distributed environments. This demands multiple stages of run-time support in QoS-aware middleware architectures, particularly probing the performance of QoS parameters, instantiating the initial component configurations, and adapting to on-the-fly variations. However, few past efforts in related work have provided comprehensive run-time support in all of these stages; they typically design and build a middleware framework that focuses on only one of the run-time issues. In this paper, we argue that distributed multimedia applications need effective run-time middleware support in all of these stages to be highly scalable and adaptive across a wide variety of execution environments. Nevertheless, the design of such a middleware framework should be kept as streamlined and simple as possible, leading to a novel, integrated run-time middleware platform that unifies the probing, instantiation and adaptation stages. In addition, for each stage, the framework should enable the interaction of peer middleware components across host boundaries, so that the corresponding middleware function can be performed in a coordinated and coherent fashion. We present the design of such an integrated architecture, with a case study illustrating how it remains simple yet effective for monitoring and configuring complex multimedia applications.
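A minimal sketch, under assumed names, of how the three run-time stages described above (probing, instantiation, adaptation) could sit behind one integrated middleware interface; nothing here reproduces the paper's actual API.

```python
# Hypothetical sketch of a unified probe/instantiate/adapt runtime; all names
# are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class ComponentConfig:
    name: str
    quality: int           # higher = richer configuration (e.g., resolution)

class IntegratedRuntime:
    def __init__(self, configs):
        self.configs = configs
        self.active = None

    def probe(self, measure):
        """Stage 1: probe QoS parameters for each candidate configuration."""
        return {c.name: measure(c) for c in self.configs}

    def instantiate(self, probed, budget):
        """Stage 2: pick the best initial configuration that fits the probed budget."""
        feasible = [c for c in self.configs if probed[c.name] <= budget]
        self.active = max(feasible, key=lambda c: c.quality, default=None)
        return self.active

    def adapt(self, measure, budget):
        """Stage 3: on a run-time QoS variation, re-probe and re-instantiate."""
        return self.instantiate(self.probe(measure), budget)
```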
2.
Agrawal G. Sussman A. Saltz J. 《IEEE Transactions on Parallel and Distributed Systems》1995,6(7):747-754
In compiling applications for distributed memory machines, runtime analysis is required when data to be communicated cannot be determined at compile-time. One such class of applications requiring runtime analysis is block structured codes. These codes employ multiple structured meshes, which may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). In this paper, we present runtime and compile-time analysis for compiling such applications on distributed memory parallel machines in an efficient and machine-independent fashion. We have designed and implemented a runtime library which supports the runtime analysis required. The library is currently implemented on several different systems. We have also developed compiler analysis for determining data access patterns at compile time and inserting calls to the appropriate runtime routines. Our methods can be used by compilers for HPF-like parallel programming languages in compiling codes in which data distribution, loop bounds and/or strides are unknown at compile-time. To demonstrate the efficacy of our approach, we have implemented our compiler analysis in the Fortran 90D/HPF compiler developed at Syracuse University. We have experimented with a multi-block Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads and that the compiler-parallelized codes perform within 20% of the codes parallelized by manually inserting calls to the runtime library.
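A hedged sketch of the inspector/executor idea behind such a runtime library: at run time, overlaps between distributed index ranges are computed and turned into a communication schedule. The 1-D layout and all function names are assumptions for illustration, not the library's actual interface.

```python
# Illustrative inspector/executor-style runtime analysis (names hypothetical).
def inspect(owned, requested_by):
    """Runtime analysis: intersect the locally owned index range with the
    ranges other processors need, producing a send schedule."""
    lo, hi = owned
    schedule = []
    for proc, (rlo, rhi) in requested_by.items():
        olo, ohi = max(lo, rlo), min(hi, rhi)
        if olo <= ohi:
            schedule.append((proc, olo, ohi))   # send indices olo..ohi to proc
    return schedule

def execute(schedule, owned_lo, local_data, send):
    """Executor: carry out the communication the inspector planned."""
    for proc, olo, ohi in schedule:
        send(proc, local_data[olo - owned_lo: ohi - owned_lo + 1])

# e.g. this process owns indices 100..199; two neighbours need boundary slices
plan = inspect((100, 199), {1: (195, 210), 2: (90, 102)})
# plan == [(1, 195, 199), (2, 100, 102)]
```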
3.
Andrew W. Appel 《LISP and Symbolic Computation》1990,3(4):343-380
The runtime data structures of the Standard ML of New Jersey compiler are simple yet general. As a result, code generators are easy to implement, programs execute quickly, garbage collectors are easy to implement and work efficiently, and a variety of runtime facilities can be provided with ease. Supported in part by NSF Grants DCR-8603453 and CCR-880-6121.
4.
Andrew W. Appel 《Higher-Order and Symbolic Computation》1990,3(4):343-380
The runtime data structures of the Standard ML of New Jersey compiler are simple yet general. As a result, code generators are easy to implement, programs execute quickly, garbage collectors are easy to implement and work efficiently, and a variety of runtime facilities can be provided with ease.
5.
This paper describes an experimental message-driven programming system for fine-grain multicomputers. The initial target architecture is the J-machine designed at MIT. This machine combines a unique collection of architectural features that include fine-grain processes, on-chip associative memory, and hardware support for process synchronization. The programming system uses these mechanisms via a simple message-driven process model that blurs the distinction between processes and messages: messages correspond to processes that are executed elsewhere in the network. This model allows code and data to be distributed across the computers in the machine, and is supported at every stage of the program development cycle. The prototype system we have developed includes a basic set of programming tools to support the model; these include a compiler, linker, archiver, loader and microkernel. Although the concepts are language independent, our prototype system is based on GNU-C.
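A rough illustration of the message-driven model described above, in which an arriving message names the handler to run on the destination node. The classes and the dispatch loop are invented for this sketch; they are not the J-machine programming system itself.

```python
# Hypothetical message-driven dispatch: a message effectively *is* a process,
# naming the code to run on the destination node together with its arguments.
import queue

class Node:
    def __init__(self, node_id, handlers):
        self.node_id = node_id
        self.handlers = handlers          # handler name -> function
        self.inbox = queue.Queue()

    def send(self, target, handler_name, *args):
        target.inbox.put((handler_name, args))

    def run_once(self):
        name, args = self.inbox.get()     # message arrival drives execution
        self.handlers[name](self, *args)

def add_and_reply(node, sender, x, y):
    node.send(sender, "print_result", x + y)

def print_result(node, value):
    print(f"node {node.node_id} received {value}")

handlers = {"add_and_reply": add_and_reply, "print_result": print_result}
a, b = Node(0, handlers), Node(1, handlers)
a.send(b, "add_and_reply", a, 2, 3)   # ask node b to add and reply to node a
b.run_once()                          # b executes the message-as-process
a.run_once()                          # a prints 5
```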
6.
《Computer Networks》1999,31(11-16):1391-1401
Interactive Web services are increasingly replacing traditional static Web pages. Producing Web services seems to require a tremendous amount of laborious low-level coding due to the primitive nature of CGI programming. We present ideas for an improved runtime system for interactive Web services built on top of CGI running on virtually every combination of browser and HTTP/CGI server. The runtime system has been implemented and used extensively in <bigwig>, a tool for producing interactive Web services.
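A minimal sketch, under assumptions, of one ingredient such a runtime layer needs on top of stateless CGI: per-session state kept on the server between requests, keyed by an id that round-trips through the client. This is only an illustration, not <bigwig>'s actual session model; the storage location and helper names are invented.

```python
# Hypothetical per-session state store for a CGI-based interactive service.
import pickle, uuid, pathlib

STATE_DIR = pathlib.Path("/tmp/cgi-sessions")   # assumed location

def load_session(session_id=None):
    """Return (session_id, state); a missing id starts a fresh session."""
    if session_id is None:
        return str(uuid.uuid4()), {}
    path = STATE_DIR / session_id
    state = pickle.loads(path.read_bytes()) if path.exists() else {}
    return session_id, state

def save_session(session_id, state):
    """Persist state so the next CGI invocation can resume the dialogue."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)
    (STATE_DIR / session_id).write_bytes(pickle.dumps(state))

# one request/response turn of an interactive service
sid, state = load_session()                 # first hit: new session
state["step"] = state.get("step", 0) + 1    # remember progress across requests
save_session(sid, state)
```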
7.
Mauro Pezzé Jochen Wuttke 《International Journal on Software Tools for Technology Transfer (STTT)》2016,18(1):1-19
Creating runtime monitors for interesting properties is an important research problem. Existing approaches to runtime verification require specifications that not only define the property to monitor but also contain details of the implementation, sometimes even requiring the implementation to add special variables or methods for monitoring. Often, intuitive properties such as “event X should only happen when objects A and B agree” have to be translated by developers into complex specifications, for example pre- and post-conditions on several methods that only in concert express this simple property. In most specification languages, the results of this manual translation are specifications so strongly tailored to the program at hand and the objects involved that, even if the property occurs again in a similar program, the whole translation process has to be repeated to create a new specification. In this paper, we introduce the concept of property templates: pre-defined constraints that can be easily reused in specifications. They are part of a model-driven framework that translates high-level specifications into runtime monitors specialized to the problem at hand. The framework is extensible: developers can define property templates for constraints they often need and can specialize the code generation when the default implementation is not satisfactory. We demonstrate the use of the framework in several case studies, using a set of functional and structural constraints that we developed through an extensive study of existing software specifications. The approach we present offers three key innovations. First, the properties developed with this approach are reusable and apply to a wide range of software systems, rather than being ad hoc and tailored to one particular program. Second, the properties are defined at a relatively high level of abstraction, so that no detailed knowledge of the implementation is needed to decide whether a given property applies. Third, we separate the definition of precise assertions for properties from the use of those properties; that way, experts can determine which assertions are needed to assure the properties, and other developers can easily use these definitions to annotate their systems.
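A hedged illustration of the abstract's example property, “event X should only happen when objects A and B agree”, expanded into a runtime monitor. The class and its API are invented for this sketch and are not the framework's generated code.

```python
# Illustrative expansion of the "only when A and B agree" template (names invented).
class OnlyWhenAgree:
    """Template: the monitored event is permitted only while agree(a, b) holds."""
    def __init__(self, a, b, agree):
        self.a, self.b, self.agree = a, b, agree
        self.violations = []

    def on_event(self, event_name):
        if not self.agree(self.a, self.b):
            self.violations.append(event_name)   # record a property violation
            return False
        return True

# usage: a checkout event is only legal while cart and invoice totals agree
class Totals:
    def __init__(self, total):
        self.total = total

cart, invoice = Totals(42.0), Totals(42.0)
monitor = OnlyWhenAgree(cart, invoice, lambda a, b: a.total == b.total)
assert monitor.on_event("checkout")
invoice.total = 40.0
assert not monitor.on_event("checkout")   # now a violation is recorded
```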
8.
A Runtime Verification Approach for Embedded Operating Systems
As an effective complement to techniques used in the development stages, such as testing and model checking, runtime verification has attracted increasing attention. However, current runtime verification techniques are applied mainly to application software; little work targets operating systems specifically. This paper studies a runtime verification framework and its key techniques for embedded operating systems, and presents a design and implementation based on FreeRTOS, an open-source embedded operating system. It first proposes a runtime verification and feedback-adjustment framework for embedded operating systems, and then, for the key technical parts of the framework, completes the design of the specification language, the generation of a monitor with three-valued semantics, and the implementation of the relevant FreeRTOS interfaces.
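A hedged sketch of a three-valued-semantics monitor of the kind the abstract mentions, for an invented sample property ("no task runs before the scheduler is started"). The verdict lattice follows the usual three-valued runtime-verification style; nothing below is taken from the paper's specification language or its FreeRTOS implementation.

```python
# Hypothetical three-valued monitor: TRUE / FALSE / INCONCLUSIVE verdicts.
from enum import Enum

class Verdict(Enum):
    TRUE = 1           # satisfied on every possible continuation of the trace
    FALSE = 2          # violated on every possible continuation of the trace
    INCONCLUSIVE = 3   # the finite trace observed so far decides nothing

class StartBeforeRunMonitor:
    """Invented sample property: the scheduler must start before any task runs."""
    def __init__(self):
        self.verdict = Verdict.INCONCLUSIVE

    def step(self, event):
        if self.verdict is not Verdict.INCONCLUSIVE:
            return self.verdict                   # verdicts are final
        if event == "task_run":
            self.verdict = Verdict.FALSE          # a task ran too early
        elif event == "scheduler_start":
            self.verdict = Verdict.TRUE           # property can no longer fail
        return self.verdict

m = StartBeforeRunMonitor()
print(m.step("tick"), m.step("scheduler_start"), m.step("task_run"))
# Verdict.INCONCLUSIVE Verdict.TRUE Verdict.TRUE
```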
9.
Chuan-Jun Su Tien-Lung Sun Chang-Nien Wu Richard J. Mayer 《Journal of Intelligent Manufacturing》1995,6(5):277-290
Much of the knowledge that is applied in or communicated between design and manufacturing activities is primarily shape based or shape indexed. Previous attempts to acquire and organize shape knowledge have mostly concentrated on feature recognition from solid models, group technology (GT) coding schemes, and feature-based modeling. This paper presents the development of an efficient form-feature-based modeling system, and addresses the important issue of utilizing feature information for manufacturing, which has not been extensively discussed in previous work. We first present an Euler operator-based approach for efficient and effective form-feature encoding and manipulation in a feature-based design environment. Subsequently, a hybrid representation scheme called the enhanced CSG tree of feature (ECTOF), which integrates the feature model with the solid model in a tree structure, is discussed. A feature interference resolution methodology to maintain correct and consistent feature information in an ECTOF is also presented. Finally, we present a machinability-checking module, which employs global accessibility criteria to analyze a feature's machinability on a three-axis machining center. By developing feature interference resolving and machinability testing techniques and integrating them with an efficient feature-based design system, this research makes the development of an integrated feature-based design and manufacturing system possible.
10.
Decision rules for inventory control parameters are combined in a microcomputer-based information and decision system. The decision rules implemented in IDSIM cover a wide variety of models for deterministic and stochastic demand cases. The system consists of four modules. The determination of economic order and production quantities, and the evaluation of any arbitrary ordering rule in terms of carrying and ordering costs, are accomplished in the first module. The second module deals with aggregate-level decisions for deterministic demand systems, generates Total Cycle Stock curves, and addresses the problems of group replenishment and group discounts for deterministic systems. The third module calculates the optimum safety stock levels and optimum values of the control parameters for order-point/order-quantity as well as periodic-review/order-up-to-level policies in stochastic systems with normal and Laplace distributions. Other, more sophisticated algorithms, which use iterative procedures and yield near-optimal solutions, are incorporated in the fourth module for allocating the total safety stock so as to minimize either the expected number of stockouts or the total value of shortages.
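A hedged worked example of the kind of rule the first module evaluates: the classic economic order quantity and the annual carrying-plus-ordering cost of an arbitrary ordering rule. Whether IDSIM uses exactly this textbook form is an assumption, and the numbers are invented.

```python
# Textbook EOQ sketch: Q* = sqrt(2*D*S/H); cost(Q) = (D/Q)*S + (Q/2)*H.
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity for annual demand D, order cost S, holding cost H."""
    return sqrt(2 * demand * order_cost / holding_cost)

def annual_cost(q, demand, order_cost, holding_cost):
    """Ordering cost (D/Q)*S plus carrying cost (Q/2)*H for any ordering rule Q."""
    return (demand / q) * order_cost + (q / 2) * holding_cost

q_star = eoq(demand=12000, order_cost=50.0, holding_cost=2.0)
print(round(q_star, 1), round(annual_cost(q_star, 12000, 50.0, 2.0), 2))
# 774.6 1549.19
```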
11.
Searching social media for relevant books using both book publication information and user-generated social information has become a research hotspot for information retrieval systems. However, most information retrieval systems are built on a single retrieval method, and as user needs keep growing, such systems struggle to satisfy them. To address this problem, a book retrieval system based on re-ranking fusion is proposed. First, pseudo-relevance feedback is used to expand the user's query, and the retrieval results serve as the initial ranking. Second, the initial ranking is re-ranked using features from user-generated social information. Finally, a learning-to-rank model fuses the results of the various re-ranking strategies. Comparative experiments against other state-of-the-art retrieval systems were carried out on the public INEX 2012-2014 Social Book Search datasets; the results show that the system's performance (NDCG@10) is superior to that of book retrieval systems built with the other methods.
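An illustrative sketch of the final fusion step. The paper fuses the outputs of several re-ranking strategies with a learning-to-rank model; as a simpler stand-in, this sketch uses reciprocal-rank fusion. Function names and book ids are invented.

```python
# Reciprocal-rank fusion as a simple stand-in for the paper's learning-to-rank fusion.
def reciprocal_rank_fusion(rankings, k=60):
    """Each ranking is a list of book ids, best first."""
    scores = {}
    for ranking in rankings:
        for rank, book_id in enumerate(ranking, start=1):
            scores[book_id] = scores.get(book_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = reciprocal_rank_fusion([
    ["b3", "b1", "b2"],   # initial ranking from the expanded query
    ["b1", "b3", "b2"],   # re-ranked by user rating features
    ["b1", "b2", "b3"],   # re-ranked by tag/review features
])
print(fused)              # ['b1', 'b3', 'b2']
```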
12.
13.
Jeffery S. Horsburgh David G. Tarboton Michael Piasecki David R. Maidment Ilya Zaslavsky David Valentine Thomas Whitenack 《Environmental Modelling & Software》2009,24(8):879-888
Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history. The successful use of these data to achieve new scientific breakthroughs will depend on the ability to access, integrate, and analyze these large datasets. Robust data organization and publication methods are needed within the research community to enable data discovery and scientific analysis by researchers other than those who collected the data. We present a new method for publishing research datasets consisting of point observations that employs a standard observations data model, populated using controlled vocabularies for environmental and water resources data, along with web services for transmitting data to consumers. We describe how these components have reduced the syntactic and semantic heterogeneity in the data assembled within a national network of environmental observatory test beds, and how this data publication system has been used to create a federated network of consistent research data out of a set of geographically decentralized and autonomous test-bed databases.
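A hedged sketch of the general shape of a published point observation: a value plus controlled-vocabulary terms saying what was measured, where, when, and in which units. Field names and the vocabulary below are illustrative assumptions, not the paper's actual data-model schema.

```python
# Hypothetical point-observation record constrained by a controlled vocabulary.
from dataclasses import dataclass
from datetime import datetime

CONTROLLED_VARIABLES = {"water_temperature", "discharge", "dissolved_oxygen"}

@dataclass
class Observation:
    site_code: str      # identifier of the observatory test-bed site
    variable: str       # must be a controlled-vocabulary term
    value: float
    units: str
    timestamp: datetime

    def __post_init__(self):
        if self.variable not in CONTROLLED_VARIABLES:
            raise ValueError(f"variable not in controlled vocabulary: {self.variable}")

obs = Observation("SITE-01", "water_temperature", 14.2, "degC",
                  datetime(2008, 7, 1, 12, 0))
```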
14.
A. K. Wilson 《International Journal of Remote Sensing》2013,34(9):1889-1901
This paper outlines the rationale and design for an enhanced Airborne Remote Sensing (ARS) facility, being implemented by the Natural Environment Research Council (NERC) in the U.K. to support environmental research and monitoring. It is based on the provision of a suite of optical remote sensing instruments, available for year-round deployment, and the development of an integrated on-board data acquisition system and a ground-based data processing system. Acquisition of the remotely sensed data will be synchronized with attitude/position information from a global positioning system (GPS) to enable accurate geometric rectification to be performed. A strategy has been devised for the end-to-end ground processing of these data to enable the production of standard data products and their distribution in a sensor-independent data format. The facility has been fully operational since Spring 1997 and is ready to support the needs of environmental scientists and organizations requiring high-quality, immediately usable, airborne remotely sensed data products.
15.
Kenneth C. Hoffman 《Journal of Systems Integration》1995,5(2):91-105
The vitality of a nation or region is based on the effective use of material resources for public and private infrastructure. There is an abundance of technological options and policy choices that can be defined. A value-chain approach based on the Reference Material System, using state-of-the-art information systems, can provide an integrated framework for information on material resources and finished-materials markets to support planning and analysis of the physical infrastructure that is essential to social and economic development. This framework also provides a model for tracking annual flows and stock levels for the capital account of a region or nation.
16.
Most existing disaster assessment models are based on a single method, such as an expert system or one of the multi-criteria decision making (MCDM) methods. This paper proposes an efficient disaster assessment expert system that integrates fuzzy logic, survey questionnaires, the Delphi method and MCDM methods. Two simulation experiments, on a typhoon and an earthquake, are introduced to validate the integrated expert system. The satisfaction degrees of the proposed model in the two cases are 75% and 74.5%, respectively, which are close to the ideal rate (78%) of the proposed model. The experimental results show that the proposed expert system is not only efficient, fast and accurate, but also robust through self-adaptive learning, with strong adaptability to different environments.
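A hedged sketch of how a weighted fuzzy aggregation over questionnaire criteria could yield a single satisfaction degree, the kind of score the abstract reports (75% and 74.5% against an ideal 78%). The criteria, weights, and per-criterion scores below are invented.

```python
# Weighted fuzzy aggregation of questionnaire criteria into one satisfaction degree.
def satisfaction_degree(scores, weights):
    """Weighted average of per-criterion scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

weights = {"speed": 0.3, "accuracy": 0.4, "robustness": 0.3}   # e.g. from Delphi rounds
scores = {"speed": 0.80, "accuracy": 0.72, "robustness": 0.74}
print(f"{satisfaction_degree(scores, weights):.1%}")           # 75.0%
```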
17.
18.
Alan Duncan 《Computer Communications》1978,1(2):94-96
In 1974, the Barclays Bank computer network consisted of three separate computer centres, each serving 600-900 branches. In the event of a major catastrophe at any one centre, it would not have been possible to service the branches connected to it. The paper describes the Barclays Integrated Network System, which was developed to insure against such a situation. Further facilities that will be offered by the system when it is completed are also described.
19.
In many-task computing (MTC), applications such as scientific workflows or parameter sweeps communicate via intermediate files; application performance strongly depends on the file system in use. The state of the art uses runtime systems providing in-memory file storage that is designed for data locality: files are placed on those nodes that write or read them. With data locality, however, task distribution conflicts with data distribution, leading to application slowdown, and worse, to prohibitive storage imbalance. To overcome these limitations, we present MemFS, a fully symmetrical, in-memory runtime file system that stripes files across all compute nodes, based on a distributed hash function. Our cluster experiments with Montage and BLAST workflows, using up to 512 cores, show that MemFS has both better performance and better scalability than the state-of-the-art, locality-based file system, AMFS. Furthermore, our evaluation on a public commercial cloud validates our cluster results. On this platform MemFS shows excellent scalability up to 1024 cores and is able to saturate the 10G Ethernet bandwidth when running BLAST and Montage.
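An illustrative sketch of the symmetric, hash-based placement the abstract describes: files are cut into fixed-size stripes and each stripe's home node comes from a hash, independent of which node wrote it. The stripe size and hash choice are assumptions, not MemFS's actual parameters.

```python
# Hash-based stripe placement: every node can locate any stripe deterministically.
import hashlib

STRIPE_SIZE = 1 << 20   # assume 1 MiB stripes

def stripe_node(path, stripe_index, num_nodes):
    """Deterministically map (file, stripe) to a node id."""
    key = f"{path}:{stripe_index}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:8], "big") % num_nodes

def placement(path, file_size, num_nodes):
    """Node holding each stripe of the file, in order."""
    stripes = (file_size + STRIPE_SIZE - 1) // STRIPE_SIZE
    return [stripe_node(path, i, num_nodes) for i in range(stripes)]

# a 5 MiB intermediate workflow file spread over a 16-node cluster
print(placement("/workflow/tmp/tile_042.fits", 5 * (1 << 20), 16))
```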
20.
Although high-performance computing has always been about efficient application execution, both energy and power consumption have become critical concerns owing to their effect on the operating costs and failure rates of large-scale computing platforms. Modern processors provide techniques, such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (called throttling), to improve energy efficiency on the fly. Without careful application, however, DVFS and throttling may cause a significant performance loss due to system overhead. This paper proposes a novel runtime system that maximizes energy saving by selecting appropriate values for DVFS and throttling in parallel applications. Specifically, the system automatically predicts communication phases in parallel applications and applies frequency scaling considering both the CPU offload provided by the network-interface card and the architectural stalls during computation. Experiments, performed on the NAS parallel benchmarks as well as on real-world applications in molecular dynamics and linear system solution, demonstrate that the proposed runtime system obtains energy savings of as much as 14% with a low performance loss of about 2%.
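A rough sketch of the decision the abstract describes: scale the CPU frequency down when the predicted phase is communication that the network card can offload (or heavily stalled computation), and keep it high otherwise. The thresholds and frequency values are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical phase-aware DVFS decision rule.
LOW_FREQ_GHZ, HIGH_FREQ_GHZ = 1.2, 2.4   # assumed DVFS operating points

def choose_frequency(predicted_phase, offload_fraction, stall_fraction):
    if predicted_phase == "communication" and offload_fraction > 0.5:
        return LOW_FREQ_GHZ          # NIC does the work; the CPU mostly waits
    if predicted_phase == "computation" and stall_fraction > 0.6:
        return LOW_FREQ_GHZ          # architectural stalls dominate anyway
    return HIGH_FREQ_GHZ             # compute-bound phase: keep full speed

print(choose_frequency("communication", offload_fraction=0.8, stall_fraction=0.1))  # 1.2
print(choose_frequency("computation", offload_fraction=0.0, stall_fraction=0.2))    # 2.4
```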