Similar Documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Advances in wireless and mobile computing environments allow a mobile user to access a wide range of applications. For example, mobile users may want to retrieve data about unfamiliar places or local lifestyles related to their location. These queries are called location-dependent queries. Furthermore, a mobile user may be interested in getting the query results repeatedly, which is called location-dependent continuous querying. Such a continuous query emanating from a mobile user may retrieve information from a single zone (single-ZQ) or from multiple neighbouring zones (multiple-ZQ). We consider the problem of handling location-dependent continuous queries, with the main emphasis on reducing communication costs and ensuring that the user receives a correct, current query result. The key contributions of this paper include: (1) proposing a hierarchical database framework (tree architecture and supporting continuous-query algorithm) for handling location-dependent continuous queries; (2) analysing the flexibility of this framework for handling queries related to single-ZQ or multiple-ZQ and proposing intelligent selective placement of location-dependent databases; (3) proposing an intelligent selective replication algorithm to facilitate time- and space-efficient processing of location-dependent continuous queries retrieving single-ZQ information; and (4) demonstrating, using simulation, the significance of our intelligent selective placement and selective replication model in terms of communication cost and storage constraints, considering various types of queries. Manish Gupta received his B.E. degree in Electrical Engineering from Govindram Sakseria Institute of Technology & Sciences, India, in 1997 and his M.S. degree in Computer Science from the University of Texas at Dallas in 2002. He is currently working toward his Ph.D. degree in the Department of Computer Science at the University of Texas at Dallas. His current research focuses on AI-based software synthesis and testing. His other research interests include mobile computing, aspect-oriented programming and model checking. Manghui Tu received a Bachelor of Science degree from Wuhan University, P.R. China, in 1996, and a Master's degree in Computer Science from the University of Texas at Dallas in 2001. He is currently working toward the Ph.D. degree in the Department of Computer Science at the University of Texas at Dallas. Mr. Tu's research interests include distributed systems, wireless communications, mobile computing, and reliability and performance analysis. His Ph.D. research work focuses on dependable and secure data replication and placement issues in network-centric systems. Latifur R. Khan has been an Assistant Professor in the Computer Science Department at the University of Texas at Dallas since September 2000. He received his Ph.D. and M.S. degrees in Computer Science from the University of Southern California (USC) in August 2000 and December 1996, respectively. He obtained his B.Sc. degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in November 1993. Professor Khan is currently supported by grants from the National Science Foundation (NSF), Texas Instruments and Alcatel, USA, and has been awarded the Sun Equipment Grant. Dr. Khan has more than 50 articles, book chapters and conference papers focusing on the areas of database systems, multimedia information management, and data mining in bio-informatics and intrusion detection.
Professor Khan has also served as a referee for database journals and conferences (e.g., IEEE TKDE, KAIS, ADL, VLDB). He is currently serving as a program committee member for the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2005), the 14th ACM Conference on Information and Knowledge Management (CIKM 2005), the International Conference on Database and Expert Systems Applications (DEXA 2005) and the International Conference on Cooperative Information Systems (CoopIS 2005), and was program chair of the ACM SIGKDD International Workshop on Multimedia Data Mining, 2004. Farokh Bastani received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology, Bombay, and the M.S. and Ph.D. degrees in Computer Science from the University of California, Berkeley. He is currently a Professor of Computer Science at the University of Texas at Dallas. Dr. Bastani's research interests include various aspects of ultrahigh-dependability systems, especially automated software synthesis and testing, embedded real-time process-control and telecommunications systems, and high-assurance systems engineering. Dr. Bastani was the Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (IEEE-TKDE). He is currently an emeritus EIC of IEEE-TKDE and is on the editorial boards of the International Journal of Artificial Intelligence Tools, the International Journal of Knowledge and Information Systems and the Springer-Verlag series on Knowledge and Information Management. He was the program cochair of the 1997 IEEE Symposium on Reliable Distributed Systems, the 1998 IEEE International Symposium on Software Reliability Engineering, the 1999 IEEE Knowledge and Data Engineering Workshop and the 1999 International Symposium on Autonomous Decentralised Systems, and the program chair of the 1995 IEEE International Conference on Tools with Artificial Intelligence. He has been on the program and steering committees of several conferences and workshops and on the editorial boards of the IEEE Transactions on Software Engineering, the IEEE Transactions on Knowledge and Data Engineering and the Oxford University Press High Integrity Systems Journal. I-Ling Yen received her B.S. degree from Tsing-Hua University, Taiwan, and her M.S. and Ph.D. degrees in Computer Science from the University of Houston. She is currently an Associate Professor of Computer Science at the University of Texas at Dallas. Dr. Yen's research interests include fault-tolerant computing, security systems and algorithms, distributed systems, Internet technologies, E-commerce and self-stabilising systems. She has published over 100 technical papers in these research areas and received many research awards from NSF, DOD, NASA and several industrial companies. She has served as a program committee member for many conferences and as Program Chair/Cochair for the IEEE Symposium on Application-Specific Software and System Engineering & Technology, the IEEE High Assurance Systems Engineering Symposium, the IEEE International Computer Software and Applications Conference, and the IEEE International Symposium on Autonomous Decentralized Systems. She has also served as a guest editor for a theme issue of IEEE Computer devoted to high-assurance systems.
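To make the hierarchical-zone idea concrete, here is a minimal Python sketch of a zone tree serving a single-ZQ continuous query. The zone layout, the `resolve_zone` walk and the `replica` flag are invented for illustration and stand in for the authors' far more elaborate placement and replication algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    bbox: tuple                                    # (xmin, ymin, xmax, ymax)
    children: list = field(default_factory=list)
    local_db: dict = field(default_factory=dict)   # location-dependent data held by this zone
    replica: bool = False                          # set by a selective-replication policy

    def contains(self, x, y):
        xmin, ymin, xmax, ymax = self.bbox
        return xmin <= x <= xmax and ymin <= y <= ymax

def resolve_zone(root, x, y):
    """Walk down the zone hierarchy to the smallest zone covering (x, y)."""
    node = root
    while True:
        child = next((c for c in node.children if c.contains(x, y)), None)
        if child is None:
            return node
        node = child

def continuous_query(root, positions, key):
    """Re-evaluate a single-zone (single-ZQ) query as the user moves; only
    re-query on a zone handoff, which is where selective placement and
    replication of the zone databases save communication."""
    last_zone, results = None, []
    for (x, y) in positions:
        zone = resolve_zone(root, x, y)
        if zone is not last_zone:      # handoff: contact (or use a replica of) the new zone DB
            results.append((zone.name, zone.local_db.get(key)))
            last_zone = zone
    return results

if __name__ == "__main__":
    city = Zone("city", (0, 0, 100, 100), children=[
        Zone("north", (0, 50, 100, 100), local_db={"restaurants": ["N1", "N2"]}),
        Zone("south", (0, 0, 100, 50),  local_db={"restaurants": ["S1"]}),
    ])
    print(continuous_query(city, [(10, 80), (12, 79), (15, 20)], "restaurants"))
```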

2.
Component middleware provides dependable and efficient platforms that support key functional and quality-of-service (QoS) needs of distributed real-time embedded (DRE) systems. Component middleware, however, also introduces challenges for DRE system developers, such as evaluating the predictability of DRE system behavior and choosing the right design alternatives before committing to a specific platform or platform configuration. Model-based technologies help address these issues by enabling design-time analysis and providing the means to automate the development, deployment, configuration, and integration of component-based DRE systems. To this end, this paper applies model checking techniques to DRE design models using model transformations to verify key QoS properties of component-based DRE systems developed using Real-time CORBA. We introduce a formal semantic domain for a general class of DRE systems that enables the verification of distributed non-preemptive real-time scheduling. Our results show that model-based techniques enable design-time analysis of timed properties and can be applied to effectively predict, simulate, and verify the event-driven behavior of component-based DRE systems. This research was supported by NSF Grants CCR-0225610 and ACI-0204028. Gabor Madl is a Ph.D. student and a graduate student researcher at the Center for Embedded Computer Systems at the University of California, Irvine. His advisor is Nikil Dutt. His research interests include the formal verification, optimization, component-based composition, and QoS management of distributed real-time embedded systems. He received his M.S. in computer science from Vanderbilt University and in computer engineering from the Budapest University of Technology and Economics. Dr. Sherif Abdelwahed received his Ph.D. degree in Electrical and Computer Engineering from the University of Toronto, Canada, in 2001. During 2000–2001, he was a research scientist with the system diagnosis group at the Rockwell Scientific Company. Since 2001 he has been with the Department of Electrical Engineering and Computer Science at Vanderbilt University as a Research Assistant Professor. His research interests include verification and control of distributed real-time systems, and model-based diagnosis of discrete-event and hybrid systems. Dr. Douglas C. Schmidt is a Professor of Computer Science, Associate Chair of the Computer Science and Engineering program, and a Senior Researcher in the Institute for Software Integrated Systems (ISIS), all at Vanderbilt University. He has published over 300 technical papers and 6 books that cover a range of research topics, including patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and applications. Dr. Schmidt has served as a Deputy Office Director and a Program Manager at DARPA, where he led the national R&D effort on middleware for DRE systems. In addition to his academic research and government service, Dr. Schmidt has over fifteen years of experience leading the development of ACE, TAO, CIAO, and CoSMIC, which are widely used, open-source DRE middleware frameworks and model-driven tools that contain a rich set of components and domain-specific languages that implement patterns and product-line architectures for high-performance DRE systems.
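As a toy illustration of the kind of timed property being verified, the sketch below simulates non-preemptive execution of periodic tasks over one hyperperiod and reports deadline misses. The task set, the dispatch-by-release-time rule and the numbers are invented; this is no substitute for the paper's formal semantic domain and model transformations.

```python
import heapq
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def non_preemptive_check(tasks):
    """Toy check: run every job released in one hyperperiod to completion
    (no preemption), in order of release time, and report deadline misses.
    tasks: list of (name, period, wcet, relative_deadline)."""
    hyper = reduce(lcm, (p for _, p, _, _ in tasks))
    jobs = [(r, r + d, n, c)                 # (release, absolute deadline, name, wcet)
            for n, p, c, d in tasks
            for r in range(0, hyper, p)]
    heapq.heapify(jobs)
    time, misses = 0, []
    while jobs:
        release, deadline, name, wcet = heapq.heappop(jobs)
        time = max(time, release)            # idle until the job is released
        time += wcet                         # non-preemptive: run to completion
        if time > deadline:
            misses.append((name, release, time, deadline))
    return misses

if __name__ == "__main__":
    # (name, period, wcet, relative deadline) -- illustrative numbers only
    print(non_preemptive_check([("sensor", 10, 2, 10), ("control", 20, 8, 20)]))
```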

3.
The VLISP project showed how to produce a comprehensively verified implementation for a programming language, namely Scheme. This paper introduces two more detailed studies on VLISP [13, 21]. It summarizes the basic techniques that were used repeatedly throughout the effort. It presents scientific conclusions about the applicability of these techniques as well as engineering conclusions about the crucial choices that allowed the verification to succeed. The work reported here was carried out as part of The MITRE Corporation's Technology Program, under funding from Rome Laboratory, Electronic Systems Command, United States Air Force, through contract F19628-89-C-0001. Preparation of this paper was generously supported by The MITRE Corporation. Mitchell Wand's participation was partly supported by NSF and DARPA under NSF grants CCR-9002253 and CCR-9014603.

4.
The Configuration Toolkit (CTK) is a library for constructing configurable, object-based abstractions that are part of multiprocessor programs or operating systems. The library is unique in its exploration of runtime configuration for attaining performance improvements: 1) its programming model facilitates the expression and implementation of program configuration; and 2) its efficient runtime support enables performance improvements by the configuration of program components during their execution. Program configuration is attained without compromising the encapsulation or the reuse of software abstractions. CTK programs are configured using attributes associated with object classes, object instances, state variables, operations, and object invocations. At runtime, such attributes are interpreted by policy classes, which may be varied separately from the abstractions with which they are associated. Using policies and attributes, an object's runtime behavior may be varied by: 1) changing its performance or reliability while preserving the implementation of its functional behavior, or 2) changing the implementation of its internal computational strategy. CTK's multiprocessor implementation is layered on a Cthreads-compatible programming library, which makes it portable to a wide variety of uni- and multiprocessor machines, including the Kendall Square KSR-2 supercomputer, SGI machines and various Sun workstations, and it also runs as a native kernel on the BBN Butterfly GP1000 multiprocessor. The platforms evaluated in the paper are the KSR and SGI machines.
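The attribute/policy split can be illustrated in a few lines of Python (CTK itself is a Cthreads-based library, so this is only an analogy); the `LockingPolicy`, the `synchronized` attribute and the counter object are invented for illustration.

```python
import threading

class Policy:
    """Interprets the attributes attached to an invocation; can be swapped
    at runtime without touching the abstraction it configures."""
    def invoke(self, target, op, attrs, *args):
        return getattr(target, op)(*args)

class LockingPolicy(Policy):
    def __init__(self):
        self.lock = threading.Lock()
    def invoke(self, target, op, attrs, *args):
        if attrs.get("synchronized", True):     # attribute chosen per instance
            with self.lock:
                return getattr(target, op)(*args)
        return getattr(target, op)(*args)

class ConfigurableObject:
    """An abstraction whose runtime behaviour (here: locking) is configured
    through attributes and a policy, leaving its functional code untouched."""
    def __init__(self, policy, attrs=None):
        self._policy, self._attrs = policy, attrs or {}
        self._counter = 0
    def call(self, op, *args):
        return self._policy.invoke(self, op, self._attrs, *args)
    def increment(self):                        # functional behaviour, policy-agnostic
        self._counter += 1
        return self._counter

obj = ConfigurableObject(LockingPolicy(), attrs={"synchronized": True})
print(obj.call("increment"))   # runs under the locking policy -> 1
obj._policy = Policy()         # runtime reconfiguration: drop the locking overhead
print(obj.call("increment"))   # same functional behaviour -> 2
```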

5.
ABC: An Architecture-Based, Component-Oriented Software Development Method
Mei Hong, Chen Feng, Feng Yaodong, Yang Jie. Journal of Software (软件学报), 2003, 14(4): 721-732
Component-based software reuse and development is regarded as an effective way to improve software development efficiency and quality, and it has been widely applied in distributed systems. However, current software component technologies still focus mainly on component implementation models and runtime interoperation, and they lack a systematic method to guide the whole development process. In recent years, research on software architecture, which takes components as its basic units, has made considerable progress; by describing the overall structure and properties of a software system, it provides a top-down path for component-oriented software development. This paper introduces an architecture-guided, component-oriented software development method that attempts to provide an effective solution for component-based software reuse. The method introduces software architecture into every phase of software development as the blueprint of the system, narrows the gap between high-level design and implementation through tool-supported automatic transformation mechanisms, and finally achieves automatic system assembly and generation with the runtime support of a component platform.

6.
The granularity of shared data is one of the key factors affecting the performance of distributed shared memory (DSM) machines. Given that programs exhibit quite different sharing patterns, providing only one or two fixed granularities cannot result in an efficient use of resources. On the other hand, supporting arbitrary granularity sizes significantly increases not only hardware complexity but software overhead as well. Furthermore, the efficient use of arbitrary granularities puts the burden on users to provide information about program behavior to compilers and/or runtime systems. These kinds of requirements tend to restrict the programmability of the shared memory model. In this paper, we present a new communication scheme, called Adaptive Granularity (AG). Adaptive Granularity makes it possible to transparently integrate bulk transfer into the shared memory model by supporting variable-size granularity and memory replication. It consists of two protocols: one for small data and another for large data. For small data, the standard hardware DSM protocol is used and the granularity is fixed to the size of a cache line. For large array data, the protocol for bulk data is used instead, and the granularity varies depending on the runtime sharing behavior of the applications. Simulation results show that AG improves performance by up to 43% over the hardware implementation of DSM (e.g., DASH, Alewife). Compared with an equivalent architecture that supports fine-grain memory replication at the fixed granularity of a cache line (e.g., Typhoon), AG reduces execution time by up to 35%. This research was supported in part by NSF under grant CCR-9308981, by ARPA under Rome Laboratories Contract F30602-91-C-0146, and by the USC Zumberge Fund. Computing resources were provided in part by NSF infrastructure grant CDA-9216321.
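The two-protocol split can be sketched as a simple granularity-selection rule. The thresholds and the sharer-count heuristic below are invented placeholders: the paper derives the grain from the applications' runtime sharing behavior rather than from any fixed formula.

```python
CACHE_LINE = 64          # bytes; fixed granularity of the standard DSM protocol
MAX_BULK   = 64 * 1024   # upper bound on a bulk-transfer grain (illustrative)

def pick_granularity(size_bytes, recent_sharers):
    """Sketch of the two-protocol split: small objects use the cache-line DSM
    protocol; large arrays use bulk transfer whose grain shrinks when many
    nodes share the region (to limit false sharing) and grows when a single
    node streams through it."""
    if size_bytes <= CACHE_LINE:
        return ("hardware-DSM", CACHE_LINE)
    grain = min(MAX_BULK, size_bytes)
    grain = max(CACHE_LINE, grain // max(1, recent_sharers))
    return ("bulk-transfer", grain)

# A region streamed by one node gets a big grain; a hotly shared one a small grain.
print(pick_granularity(512 * 1024, recent_sharers=1))   # ('bulk-transfer', 65536)
print(pick_granularity(512 * 1024, recent_sharers=32))  # ('bulk-transfer', 2048)
print(pick_granularity(16, recent_sharers=4))           # ('hardware-DSM', 64)
```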

7.
Lehmann, J. R. Computer, 1978, 11(7): 10-13
Interestingly enough, much of the computer architecture research currently supported by the National Science Foundation is related to the theme of this issue. Below, John R. Lehmann, program director of NSF's Computer Systems Design Program, has provided a list of principal investigators and the titles of their research, as well as a brief statement on how to submit a research proposal.

8.
Research on Security Mechanisms for Mobile Agents
The security of mobile agents has become a bottleneck for the wide application of mobile agent systems on the Internet and in other modern network technologies. This paper analyses in detail the security threats and vulnerabilities present in mobile agent systems, covering data transmission and communication links, server system resources, the mobile agent execution environment, and the security of the mobile agents themselves. It then systematically studies protection schemes for mobile agents in transit, for server resources, and for mobile agents within the execution environment, and it applies JavaCard technology in a novel way to build a secure execution environment for mobile agents. On the basis of these results, a mobile agent security model system, SMMA2002, is designed and implemented.

9.
Complexity results for HTN planning
Most practical work on AI planning systems during the last fifteen years has been based on Hierarchical Task Network (HTN) decomposition, but until now, there has been very little analytical work on the properties of HTN planners. This paper describes how the complexity of HTN planning varies under various conditions on the task networks, and how it compares to STRIPS-style planning. This work was supported in part by NSF Grant NSFD CDR-88003012 to the Institute for Systems Research, and by NSF grant IRI9306580 and ONR grant N00014-91-J-1451 to the Computer Science Department.

10.
This paper concerns the exploitation of the user-transparent inherent parallelism of pure Prolog programs using program transformation. We describe a novel paradigm, enumerate-and-filter, for transforming generate-and-test programs for execution under the committed-choice model extended to incorporate multiple solutions based on set enumeration. The paradigm simulates OR-parallelism by stream AND-parallelism, integrating OR-parallelism, AND-parallelism, and stream parallelism. Generate-and-test programs are classified into three categories: simple generate-and-test, recursively embedded generate-and-test, and deeply intertwined generate-and-test. The intermediate programs are further transformed to reduce structure copying and metacalls. Algorithms are presented and demonstrated by transforming representative examples of the different classes of generate-and-test programs into Flat Concurrent Prolog equivalents. Statistics show that the techniques are efficient. Funded in part by the Cleveland Advanced Manufacturing Program through the State of Ohio as a part of its core research program grant to the Center for Automation and Intelligent Systems Research, Case Western Reserve University, and by NSF equipment grant CDA-8820390 to Kent State University.
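The paper's transformation targets Flat Concurrent Prolog, but the contrast between generate-and-test and an enumerate-and-filter style can be illustrated with lazy streams in Python (an analogy only, not the paper's algorithm); n-queens is used here as the stand-in generate-and-test problem.

```python
from itertools import permutations

# Generate-and-test (naive): build a complete candidate, then test it.
def gat_n_queens(n):
    for perm in permutations(range(n)):
        if all(abs(perm[i] - perm[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            yield perm

# Enumerate-and-filter flavour: interleave enumeration with filtering so the
# test consumes a stream of partial solutions (the lazy-stream analogue of
# turning OR-parallel search into stream AND-parallelism).
def eaf_n_queens(n, partial=()):
    if len(partial) == n:
        yield partial
        return
    row = len(partial)
    for col in range(n):
        if all(c != col and abs(c - col) != row - r
               for r, c in enumerate(partial)):
            yield from eaf_n_queens(n, partial + (col,))

print(next(gat_n_queens(6)))   # first solution found by exhaustive generation
print(next(eaf_n_queens(6)))   # first solution found by filtered enumeration
```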

11.
Constructing the Voronoi diagram of a set of line segments in parallel
In this paper we give a parallel algorithm for constructing the Voronoi diagram of a polygonal scene, i.e., a set of line segments in the plane such that no two segments intersect except possibly at their endpoints. Our algorithm runs in O(log² n) time using O(n) processors in the CREW PRAM model. The research of M. T. Goodrich was supported by NSF under Grants CCR-8810568 and CCR-9003299 and by NSF/DARPA under Grant CCR-8908092. C. K. Yap's research was supported in part by NSF Grants DCR-8401898 and CCR-9002819.

12.
For any controllable, linear system it is clear that the minimum control energy must increase unboundedly as the available time for exact control decreases to 0. This is made precise, obtaining asymptotically O(T^{-(K+1/2)}) behavior for the norm of the control operator, where K is the order of the “least controllable” modes (the minimal exponent for the rank condition). This research was partially supported under Grant AFOSR-82-0271. Portions of this work were done while the author was visiting the Systems Research Center (University of Maryland) with NSF support under CDR-85-00108 and the Centre for Mathematical Analysis (Australian National University).
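For orientation, the standard finite-dimensional setting behind a bound of this kind looks as follows; this is the textbook minimum-energy formulation in generic notation, not the paper's exact statement.

```latex
% Textbook minimum-energy exact control of  \dot{x} = Ax + Bu  on [0,T].
\[
  W_T \;=\; \int_0^T e^{At} B B^{*} e^{A^{*}t}\,dt, \qquad
  u^{*}(t) \;=\; -\,B^{*} e^{A^{*}(T-t)} W_T^{-1} e^{AT} x_0, \qquad
  \|u^{*}\|_{L^2}^2 \;=\; x_0^{*} e^{A^{*}T} W_T^{-1} e^{AT} x_0 .
\]
% If the ``least controllable'' modes satisfy the rank condition only at order K,
% the Gramian degenerates like T^{2K+1} in those directions as T -> 0, which is
% what produces the O(T^{-(K+1/2)}) growth of the control-operator norm:
\[
  \lambda_{\min}(W_T) \;\sim\; c\,T^{\,2K+1}
  \quad\Longrightarrow\quad
  \bigl\|W_T^{-1/2}\bigr\| \;=\; O\!\bigl(T^{-(K+1/2)}\bigr) \quad (T \to 0).
\]
```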

13.
We study strategies for converting randomized algorithms of the Las Vegas type into randomized algorithms with small tail probabilities. Supported by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (Project ALCOM). Supported by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (Project ALCOM). Research supported by NSF Grant No. CCR-9005448. Partially supported by a Wolfson Research Award administered by the Israel Academy of Sciences and Humanities.
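The classic way to obtain small tail probabilities from a Las Vegas algorithm is to impose a cutoff and restart with fresh random bits; if a single run exceeds the cutoff with probability p < 1, the chance that k consecutive runs all time out is p^k, so the tail of the total running time decays geometrically. The toy below illustrates only this generic restart idea, not the paper's specific strategies.

```python
import random

def las_vegas_with_restarts(step_fn, max_steps_per_run):
    """Run a Las Vegas algorithm repeatedly, restarting with fresh randomness
    whenever a run exceeds `max_steps_per_run`; returns (answer, #runs)."""
    runs = 0
    while True:
        runs += 1
        result = step_fn(max_steps_per_run)   # returns the answer, or None on timeout
        if result is not None:
            return result, runs

# Toy Las Vegas algorithm: guess uniformly until a marked element is hit.
def toy_search(budget, universe=1000, marked=7):
    for _ in range(budget):
        if random.randrange(universe) == marked:
            return marked
    return None   # run exceeded its budget; the caller will restart it

if __name__ == "__main__":
    answer, restarts = las_vegas_with_restarts(toy_search, max_steps_per_run=2000)
    print(f"found {answer} after {restarts} run(s)")
```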

14.
This paper designs and implements an interactive, interpreted language for systems biology research, the B language, together with its runtime environment. The modelling syntax of the B language is designed according to the standard definition of the Systems Biology Markup Language (SBML); the development of the B-language interpreter and runtime environment is described; and finally, simulation examples are used to compare the simulation approaches of the B language and Matlab. The results show that the B language obtains simulation results more quickly.

15.
Planners, policy-makers and their technicians have the difficult task of intervening in complex human-natural systems. It is not enough for them to focus on individual processes; rather, it is necessary to address the system as a complex, integral whole. In these circumstances, integrated models embedded in Policy Support Systems (PSS) can provide support. The MedAction PSS incorporates socio-economic and physical processes in a strongly coupled manner. It is implemented with the GEONAMICA® application framework and is intended to support planning and policy making in the fields of land degradation, desertification, water management and sustainable farming. The objective of this paper is to provide some insight into the individual models, the model integration achieved, and the actual use of the MedAction PSS; for the latter, an application example is developed. The paper also argues that technical and scientific aspects of Policy Support Systems are not the sole elements deciding on their use in practice, and it concludes with some lessons learned during the development and use of the MedAction PSS and similar systems.

16.
The error dynamics of the extended Kalman filter (EKF), employed as an observer for a general nonlinear, stochastic discrete-time system, are analyzed. Sufficient conditions for the boundedness of the errors of the EKF are determined. An expression for the bound on the errors is given in terms of the size of the nonlinearities of the system and the error covariance matrices used in the design of the EKF. The results are applied to the design of a stable EKF frequency tracker for a signal with time-varying frequency. This research was supported by the Co-operative Research Centre for Robust and Adaptive Systems ((CR)²ASys). The authors wish to acknowledge the funding of the activities of (CR)²ASys by the Australian Commonwealth Government under the Co-operative Research Centre Program.
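For reference, the discrete-time EKF recursion whose estimation error is being analyzed has the standard form below (generic notation, not reproduced from the paper); the boundedness conditions relate the neglected higher-order terms of f and h to the covariances P, Q and R appearing in this recursion.

```latex
% Standard discrete-time EKF for  x_{k+1} = f(x_k) + w_k,  y_k = h(x_k) + v_k,
% with process/measurement noise covariances Q_k and R_k.
\[
  F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k|k}}, \qquad
  H_{k+1} = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{k+1|k}}
\]
\[
  \hat{x}_{k+1|k} = f(\hat{x}_{k|k}), \qquad
  P_{k+1|k} = F_k P_{k|k} F_k^{\top} + Q_k
\]
\[
  K_{k+1} = P_{k+1|k} H_{k+1}^{\top}
            \bigl(H_{k+1} P_{k+1|k} H_{k+1}^{\top} + R_{k+1}\bigr)^{-1}
\]
\[
  \hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\bigl(y_{k+1} - h(\hat{x}_{k+1|k})\bigr),
  \qquad
  P_{k+1|k+1} = (I - K_{k+1} H_{k+1})\, P_{k+1|k}
\]
```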

17.
Operator scheduling in data stream systems
In many applications involving continuous data streams, data arrival is bursty and the data rate fluctuates over time. Systems that seek to give rapid or real-time query responses in such an environment must be prepared to deal gracefully with bursts in data arrival without compromising system performance. We discuss one strategy for processing bursty streams - adaptive, load-aware scheduling of query operators to minimize resource consumption during times of peak load. We show that the choice of an operator scheduling strategy can have a significant impact on runtime system memory usage as well as output latency. Our aim is to design a scheduling strategy that minimizes the maximum runtime system memory while maintaining the output latency within prespecified bounds. We first present Chain scheduling, an operator scheduling strategy for data stream systems that is near-optimal in minimizing runtime memory usage for any collection of single-stream queries involving selections, projections, and foreign-key joins with stored relations. Chain scheduling also performs well for queries with sliding-window joins over multiple streams and for multiple queries of the above types. However, during bursts in the input streams, when there is a buildup of unprocessed tuples, Chain scheduling may lead to high output latency. We study the online problem of minimizing maximum runtime memory, subject to a constraint on maximum latency. We present preliminary observations, negative results, and heuristics for this problem. A thorough experimental evaluation is provided in which we demonstrate the potential benefits of Chain scheduling and its different variants, compare it with competing scheduling strategies, and validate our analytical conclusions. Received: 18 October 2003. Accepted: 16 April 2004. Published online: 14 September 2004. Edited by J. Gehrke and J. Hellerstein. Brian Babcock: supported in part by a Rambus Corporation Stanford Graduate Fellowship and NSF Grant IIS-0118173. Shivnath Babu: supported in part by NSF Grants IIS-0118173 and IIS-9817799. Mayur Datar: supported in part by a Siebel Scholarship and NSF Grant IIS-0118173. Rajeev Motwani: supported in part by NSF Grant IIS-0118173, an Okawa Foundation Research Grant, an SNRC grant, and grants from Microsoft and Veritas. Dilys Thomas: supported by NSF Grant EIA-0137761 and NSF ITR Award Number 0331640.
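A deliberately simplified greedy variant of the idea is sketched below: among operators with queued input, run the one that frees memory fastest. This per-operator release-rate priority stands in for Chain's actual lower-envelope (progress-chart) construction, and the operators, costs and selectivities are invented.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Operator:
    name: str
    time_cost: float      # time to process one queued tuple
    selectivity: float    # output size per unit of input size
    queue: deque = field(default_factory=deque)

    def release_rate(self) -> float:
        # Memory freed per unit of processing time for one queued tuple.
        return (1.0 - self.selectivity) / self.time_cost

def schedule_step(operators):
    """Greedy step: among operators with queued input, run the one that frees
    memory fastest (a simplified stand-in for Chain's envelope priorities)."""
    ready = [op for op in operators if op.queue]
    if not ready:
        return None
    op = max(ready, key=lambda o: o.release_rate())
    size = op.queue.popleft()
    return op.name, size * op.selectivity   # size of the tuple it emits

if __name__ == "__main__":
    sel  = Operator("filter",  time_cost=1.0, selectivity=0.2, queue=deque([1.0, 1.0]))
    join = Operator("fk-join", time_cost=2.0, selectivity=0.9, queue=deque([1.0]))
    while (step := schedule_step([sel, join])) is not None:
        print(step)
```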

18.
In this paper, stability and disturbance attenuation issues for a class of Networked Control Systems (NCSs) under uncertain access delay and packet dropout effects are considered. Our aim is to find conditions on the delay and packet dropout rate under which the system stability and H∞ disturbance attenuation properties are preserved to a desired level. The basic idea in this paper is to formulate such a Networked Control System as a discrete-time switched system. The NCSs' stability and performance problems can then be reduced to the corresponding problems for switched systems, which have been studied for decades and for which a number of results are available in the literature. The techniques in this paper are based on recent progress on discrete-time switched systems and piecewise Lyapunov functions.
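In generic notation (illustrative, not the paper's exact matrices), the modelling step looks like this: each admissible delay/dropout pattern selects one mode of a discrete-time switched system, and a piecewise quadratic Lyapunov function certifies stability together with an H∞ attenuation level γ.

```latex
% Discrete-time switched-system model of the NCS (generic, illustrative notation).
\[
  x(k+1) = A_{\sigma(k)}\, x(k) + B_{\sigma(k)}\, w(k), \qquad
  z(k) = C_{\sigma(k)}\, x(k), \qquad
  \sigma(k) \in \{1,\dots,N\},
\]
% where each mode i collects the closed-loop dynamics under one access-delay /
% packet-dropout pattern.  A piecewise Lyapunov function
% V_i(x) = x^{\top} P_i x,  P_i \succ 0,  certifies stability with H_\infty
% level \gamma if, along all admissible switching sequences,
\[
  V_{\sigma(k+1)}\bigl(x(k+1)\bigr) - V_{\sigma(k)}\bigl(x(k)\bigr)
  + z(k)^{\top} z(k) - \gamma^{2}\, w(k)^{\top} w(k) \;<\; 0,
\]
% typically combined with a bound on how often the lossy/delayed modes occur
% (an admissible dropout rate).
```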

19.
In the late 1990s, the US Office of the Secretary of Defense (OSD) established a joint research program with the Semiconductor Industry Association (SIA) through the Semiconductor Research Corp. (SRC), known as the Focus Center Research Program. One of these FCRP centers is the Gigascale Systems Research Center (GSRC), whose focus is on the systems architecture and design aspects of electronics technology. The research of the GSRC is of great value to the US Department of Defense, since design remains an important challenge for DoD systems. This partnership is an excellent example of government and industry collaboration on precompetitive research of common interest and benefit.

20.
Research and Implementation of a Network Expert System Inference Engine Based on COM Components
The inference engine of a network expert system runs in a heterogeneous platform environment, so cross-platform capability must be considered at design time: the software has to mask differences in network hardware platforms as well as the heterogeneity of operating systems and network protocols. This paper studies this problem and proposes applying middleware technology, namely COM components, to the inference engine of a networked agricultural expert system.
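The abstract describes the engine only at the architectural level. Purely as an illustration (Python rather than COM, with invented rules), the core of such a rule-based inference engine is a forward-chaining loop hidden behind a narrow interface, which a COM/IDL component could then expose to clients on heterogeneous platforms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    conditions: frozenset   # facts that must all hold
    conclusion: str         # fact added when they do

class InferenceEngine:
    """Tiny forward-chaining engine: the kind of logic a networked expert
    system would wrap behind a component interface so that clients on
    heterogeneous platforms see only the interface, not the implementation."""
    def __init__(self, rules):
        self.rules = list(rules)

    def infer(self, facts):
        known = set(facts)
        changed = True
        while changed:                       # keep firing rules until a fixed point
            changed = False
            for rule in self.rules:
                if rule.conditions <= known and rule.conclusion not in known:
                    known.add(rule.conclusion)
                    changed = True
        return known

if __name__ == "__main__":
    # Invented agricultural-diagnosis rules, for illustration only.
    rules = [
        Rule(frozenset({"leaf_yellowing", "low_nitrogen"}), "diagnosis: nitrogen deficiency"),
        Rule(frozenset({"diagnosis: nitrogen deficiency"}), "advice: apply nitrogen fertilizer"),
    ]
    engine = InferenceEngine(rules)
    print(engine.infer({"leaf_yellowing", "low_nitrogen"}))
```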
