1.
Byzantine Agreement is important both in the theory and practice of distributed computing. However, protocols to reach Byzantine Agreement are usually expensive both in the time required as well as in the number of messages exchanged. In this paper, we present a self-adjusting approach to the problem. The Mostly Byzantine Agreement is proposed as a more restrictive agreement problem that requires that, in consecutive attempts to reach agreement, the number of disagreements (i.e., failures to reach Byzantine Agreement) is finite. For t faulty processes, we give an algorithm that has at most t disagreements for 4t or more processes. Another algorithm is given for n ≥ 3t+1 processes with the number of disagreements below t²/2. Both algorithms use O(n³) message bits for binary value agreement.
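The bound on disagreements rests on correct processes out-voting the faulty ones. As a minimal illustration (not the paper's algorithm, just the classic majority-vote building block such protocols rely on; all names are invented):

```python
# One majority-vote round for binary agreement: each correct process
# collects one bit from every process and adopts the majority value.
# With n >= 4t processes and at most t Byzantine senders, the t faulty
# bits cannot overturn a majority shared by the correct processes.

def majority_round(received_values):
    """Return the majority bit among the received values."""
    ones = sum(received_values)
    return 1 if ones * 2 > len(received_values) else 0

# Example: 7 processes, one Byzantine sender reports the opposite bit.
honest = [1] * 6
byzantine = [0]
print(majority_round(honest + byzantine))  # → 1
```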
Yi Zhao is currently working on his Ph.D. degree in Computer Science at University of Houston. His research interests include fault tolerance, distributed computing, parallel computation and neural networks. He obtained his M.S. from University of Houston in 1988 and B.S. from Beijing University of Aeronautics and Astronautics in 1984, both in computer science.
Farokh B. Bastani received the B. Tech. degree in electrical engineering from the Indian Institute of Technology, Bombay, India, and the M.S. and Ph.D. degrees in electrical engineering and computer science from the University of California, Berkeley. He joined the University of Houston in 1980, where he is currently an Associate Professor of Computer Science. His research interests include software design and validation techniques, distributed systems, and fault-tolerant systems. He is a member of the ACM and the IEEE and is on the editorial board of the IEEE Transactions on Software Engineering.
2.
Hachem Moussa Tong Gao I-Ling Yen Farokh Bastani Jun-Jang Jeng 《Service Oriented Computing and Applications》2010,4(1):17-31
Many application domains are increasingly leveraging service-oriented architecture (SOA) techniques to facilitate rapid system
deployment. Many of these applications are time-critical and, hence, real-time assurance is an essential step in the service
composition process. However, there are gaps in existing service composition techniques for real-time systems. First, admission
control is an essential technique to assure the time bound for service execution, but most of the service composition techniques
for real-time systems do not take admission control into account. A service may be selected for a workflow during the composition
phase, but then during the grounding phase, the concrete service may not be able to admit the workload. Thus, the entire composition
process may have to be repeated. Second, communication time is an important factor in real-time SOA, but most of the existing
works do not consider how to obtain the communication latencies between services during the composition phase. It is clear
that maintaining a full table of communication latencies for all pairs of services is infeasible. Obtaining communication
latencies between candidate services during the composition phase can also be costly, since many candidate services may not
be used for grounding. Thus, some mechanism is needed for estimating the communication latency for composite services. In
this paper, we propose a three-phase composition approach to address the above issues. In this approach, we first use a highly
efficient but moderately accurate algorithm to eliminate most of the candidate compositions based on estimated communication
latencies and assured service response latency. Then, a more accurate timing prediction is performed on a small number of
selected compositions in the second phase based on confirmed admission and actual communication latency. In the third phase,
specific concrete services are selected for grounding, and admissions are actually performed. The approach is scalable and
can effectively achieve service composition for satisfying real-time requirements. Experimental studies show that the three-phase
approach does improve the effectiveness and time for service composition in SOA real-time systems. In order to support the
new composition approach, it is necessary to effectively specify the needed information. In this paper, we also present the
specification model for timing-related information and the extension of OWL-S to support this specification model.
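The first phase described above can be sketched as a cheap filter: estimate each candidate composition's end-to-end latency from per-service execution estimates plus estimated communication latencies, and discard candidates that cannot meet the deadline. The service names, latency values, and estimator below are invented for illustration; they are not the paper's model.

```python
# Phase-1 pruning sketch: keep only candidate compositions whose
# estimated end-to-end latency meets the real-time deadline.

def estimated_latency(composition, exec_est, comm_est):
    """Sum estimated execution times plus estimated communication
    latency between consecutive services in the workflow (ms)."""
    total = sum(exec_est[s] for s in composition)
    total += sum(comm_est.get((a, b), 5.0)  # fallback estimate
                 for a, b in zip(composition, composition[1:]))
    return total

def phase_one_filter(candidates, exec_est, comm_est, deadline):
    """Eliminate compositions whose estimate exceeds the deadline."""
    return [c for c in candidates
            if estimated_latency(c, exec_est, comm_est) <= deadline]

exec_est = {"A": 10, "B": 20, "B'": 35, "C": 15}
comm_est = {("A", "B"): 3, ("A", "B'"): 3, ("B", "C"): 4, ("B'", "C"): 4}
candidates = [["A", "B", "C"], ["A", "B'", "C"]]
print(phase_one_filter(candidates, exec_est, comm_est, deadline=55))
# → [['A', 'B', 'C']]  (the B' variant is pruned: 67 ms > 55 ms)
```

The surviving candidates would then go to the more accurate second-phase timing prediction.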
3.
The use of abstractions enhances several aspects of a software system, especially its maintainability, reusability, and comprehensibility. However, it decreases the performance of the software. Context dependent transformations can effectively remove the performance loss of abstractions while preserving all their advantages. We state the conditions which the transformations should satisfy and develop four general transformation rules. Language mechanisms are proposed which permit the transformation directives to be embedded in the source code. This can be used to automate the transformations. It also facilitates an approach to incremental performance improvement.
4.
Manghui Tu Hui Ma Liangliang Xiao I.-Ling Yen Farokh Bastani Dianxiang Xu 《Journal of Grid Computing》2013,11(1):103-127
Data dependability is an important issue in data Grids. Replication schemes have been widely used in distributed systems to ensure availability and improve access performance. Alternatively, data partitioning schemes (secret sharing, erasure coding with encryption) can be used to provide availability and, in addition, to offer confidentiality protection. In peer-to-peer data Grids, such confidentiality protection is essential since the nodes hosting the data shares may not be trustworthy or may be compromised. However, difficulties in generating new shares and potential security concerns for share reallocation make a pure data partitioning scheme not easily adaptable to dynamic user access patterns. In this paper, we consider combining replication and data partitioning to assure data availability, confidentiality, load balance, and efficient access for data Grid applications. Data are partitioned and shares are dispersed. The shares may be replicated to achieve better performance, load balance, and availability. Models for assessing confidentiality, availability, load balance, and communication cost are developed and used as the metrics to guide placement decisions. Due to the nature of contradicting goals, we model the placement decision problem as a multi-objective problem and use a genetic algorithm to determine solutions that are approximate to the Pareto optimal placement solutions.
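The data-partitioning side mentioned above can be illustrated with classic Shamir secret sharing over a prime field: any t of the n dispersed shares reconstruct the data, while fewer reveal nothing. This is a generic sketch of the technique, not the paper's specific scheme; the parameters are toy values.

```python
# Shamir secret sharing sketch: split a secret into n shares with
# reconstruction threshold t, then recover it by Lagrange
# interpolation at x = 0 over the prime field GF(P).
import random

P = 2**31 - 1  # Mersenne prime used as the field modulus

def split(secret, n, t):
    """Create n shares of `secret`; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 modulo the prime P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456, n=5, t=3)
print(reconstruct(shares[:3]))  # → 123456 (any 3 of the 5 shares)
```

Replicating individual shares, as the paper proposes, then trades extra storage for availability and access performance without weakening the threshold property.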
5.
Manish Gupta Jicheng Fu Farokh B. Bastani Latifur R. Khan I-Ling Yen 《Software Quality Journal》2007,15(3):241-263
With the rapid growth in the development of sophisticated modern software applications, the complexity of the software development
process has increased enormously, posing an urgent need for the automation of some of the more time-consuming aspects of the
development process. One of the key stages in the software development process is system testing. In this paper, we evaluate
the potential application of AI planning techniques in automated software testing. The key contributions of this paper include
the following: (1) A formal model of software systems from the perspective of software testing that is applicable to important
classes of systems and is amenable to automation using AI planning methods. (2) The design of a framework for an automated
planning system (APS) for applying AI planning techniques for testing software systems. (3) Assessment of the test automation
framework and a specific AI planning algorithm, namely MEA-Graphplan (Means-Ends Analysis Graphplan), to automatically
generate test data. (4) A case study is presented to evaluate the proposed automated testing method and compare the performance
of MEA-Graphplan with that of Graphplan. The empirical results show that for software testing, the MEA-Graphplan algorithm
performs computationally more efficiently and effectively than the basic Graphplan algorithm.
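The core idea of planning-based test generation is to search for an action sequence that drives the system from its initial state to a goal state; that sequence is the test case. The toy forward-search planner below illustrates this with an invented state model and actions; it is not the paper's MEA-Graphplan.

```python
# Toy forward-search planner for test generation: states are frozensets
# of propositions; each action has preconditions, an add list, and a
# delete list. BFS returns the shortest action sequence to the goal.
from collections import deque

ACTIONS = {
    "login":     ({"at_home"},   {"logged_in"},    set()),
    "open_cart": ({"logged_in"}, {"cart_open"},    set()),
    "checkout":  ({"cart_open"}, {"order_placed"}, {"cart_open"}),
}

def plan(initial, goal):
    """Breadth-first search over states; returns an action list."""
    start = frozenset(initial)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if goal <= state:          # all goal propositions hold
            return path
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:       # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
    return None

print(plan({"at_home"}, {"order_placed"}))
# → ['login', 'open_cart', 'checkout']
```

Graphplan-style algorithms replace this blind search with a layered planning graph, which is what makes them attractive for larger test models.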
6.
A service-oriented architecture (SOA) is an ideal vehicle for achieving reconfigurable systems that can select and compose services statically or dynamically to satisfy changing system requirements. The authors have developed a rule-based parameterization technique to convert components into reconfigurable services.
7.
Manish Gupta Manghui Tu Latifur Khan Farokh Bastani I-Ling Yen 《Knowledge and Information Systems》2005,8(4):414-437
Advances in wireless and mobile computing environments allow a mobile user to access a wide range of applications. For example,
mobile users may want to retrieve data about unfamiliar places or local life styles related to their location. These queries
are called location-dependent queries. Furthermore, a mobile user may be interested in getting the query results repeatedly,
which is called location-dependent continuous querying. This continuous query emanating from a mobile user may retrieve information
from a single-zone (single-ZQ) or from multiple neighbouring zones (multiple-ZQ). We consider the problem of handling location-dependent
continuous queries with the main emphasis on reducing communication costs and making sure that the user gets correct current-query
result. The key contributions of this paper include: (1) Proposing a hierarchical database framework (tree architecture and
supporting continuous query algorithm) for handling location-dependent continuous queries. (2) Analysing the flexibility of
this framework for handling queries related to single-ZQ or multiple-ZQ and propose intelligent selective placement of location-dependent
databases. (3) Proposing an intelligent selective replication algorithm to facilitate time- and space-efficient processing
of location-dependent continuous queries retrieving single-ZQ information. (4) Demonstrating, using simulation, the significance
of our intelligent selective placement and selective replication model in terms of communication cost and storage constraints,
considering various types of queries.
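The hierarchical-database idea above can be sketched as a zone tree in which only selected zones host databases, and a location-dependent query is answered by the nearest database-hosting ancestor of the user's zone. The zone names and placement below are hypothetical, chosen only to illustrate selective placement.

```python
# Zone-tree sketch: walk up from the user's zone to the closest
# ancestor that hosts a location-dependent database.

PARENT = {"cell_1": "city_A", "cell_2": "city_A",
          "city_A": "region_X", "region_X": None}
HAS_DB = {"city_A", "region_X"}  # selective database placement

def serving_database(zone):
    """Return the nearest ancestor zone (including itself) with a DB."""
    while zone is not None:
        if zone in HAS_DB:
            return zone
        zone = PARENT[zone]
    return None

print(serving_database("cell_1"))  # → city_A
```

For a multiple-ZQ query spanning several cells, the same walk naturally leads to the lowest ancestor covering all of them, which is one motivation for the tree architecture.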
Manish Gupta received his B.E. degree in Electrical Engineering from Govindram Sakseria Institute of Technology & Sciences, India, in
1997 and his M.S. degree in Computer Science from University of Texas at Dallas in 2002. He is currently working toward his
Ph.D. degree in the Department of Computer Science at University of Texas at Dallas. His current research focuses on AI-based
software synthesis and testing. His other research interests include mobile computing, aspect-oriented programming and model
checking.
Manghui Tu received a Bachelor degree of Science from Wuhan University, P.R. China, in 1996, and a Master's Degree in Computer Science
from the University of Texas at Dallas 2001. He is currently working toward the Ph.D. degree in the Department of Computer
Science at the University of Texas at Dallas. Mr. Tu's research interests include distributed systems, wireless communications,
mobile computing, and reliability and performance analysis. His Ph.D. research work focuses on the dependent and secure data
replication and placement issues in network-centric systems.
Latifur R. Khan has been an Assistant Professor in the Computer Science Department at the University of Texas at Dallas since September 2000. He
received his Ph.D. and M.S. degrees in Computer Science from University of Southern California (USC) in August 2000 and December
1996, respectively. He obtained his B.Sc. degree in Computer Science and Engineering from Bangladesh University of Engineering
and Technology, Dhaka, Bangladesh, in November of 1993. Professor Khan is currently supported by grants from the National
Science Foundation (NSF), Texas Instruments, Alcatel, USA, and has been awarded the Sun Equipment Grant. Dr. Khan has more
than 50 articles, book chapters and conference papers in the areas of database systems, multimedia information management
and data mining in bio-informatics and intrusion detection. Professor Khan has also served as a referee for database journals,
conferences (e.g. IEEE TKDE, KAIS, ADL, VLDB) and he is currently serving as a program committee member for the 11th ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining (SIGKDD2005), ACM 14th Conference on Information and Knowledge
Management (CIKM 2005), International Conference on Database and Expert Systems Applications DEXA 2005 and International Conference
on Cooperative Information Systems (CoopIS 2005), and is program chair of ACM SIGKDD International Workshop on Multimedia
Data Mining, 2004.
Farokh Bastani received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology, Bombay, and the M.S. and Ph.D.
degrees in Computer Science from the University of California, Berkeley. He is currently a Professor of Computer Science at
the University of Texas at Dallas. Dr. Bastani's research interests include various aspects of ultrahigh-dependability systems,
especially automated software synthesis and testing, embedded real-time process-control and telecommunications systems and
high-assurance systems engineering.
Dr. Bastani was the Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (IEEE-TKDE). He is currently
an emeritus EIC of IEEE-TKDE and is on the editorial board of the International Journal of Artificial Intelligence Tools,
the International Journal of Knowledge and Information Systems and the Springer-Verlag series on Knowledge and Information
Management. He was the program cochair of the 1997 IEEE Symposium on Reliable Distributed Systems, 1998 IEEE International
Symposium on Software Reliability Engineering, 1999 IEEE Knowledge and Data Engineering Workshop, 1999 International Symposium
on Autonomous Decentralised Systems, and the program chair of the 1995 IEEE International Conference on Tools with Artificial
Intelligence. He has been on the program and steering committees of several conferences and workshops and on the editorial
boards of the IEEE Transactions on Software Engineering, IEEE Transactions on Knowledge and Data Engineering and the Oxford
University Press High Integrity Systems Journal.
I-Ling Yen received her B.S. degree from Tsing-Hua University, Taiwan, and her M.S. and Ph.D. degrees in Computer Science from the University
of Houston. She is currently an Associate Professor of Computer Science at University of Texas at Dallas. Dr. Yen's research
interests include fault-tolerant computing, security systems and algorithms, distributed systems, Internet technologies, E-commerce
and self-stabilising systems. She has published over 100 technical papers in these research areas and received many research
awards from NSF, DOD, NASA and several industry companies. She has served as Program Committee member for many conferences
and Program Chair/Cochair for the IEEE Symposium on Application-Specific Software and System Engineering & Technology, IEEE
High Assurance Systems Engineering Symposium, IEEE International Computer Software and Applications Conference, and IEEE International
Symposium on Autonomous Decentralized Systems. She has also served as a guest editor for a theme issue of IEEE Computer devoted
to high-assurance systems.
8.
In this paper, interpolation formulas are derived for the recovery of a two-dimensional (2-D) bandlimited signal from its isolated zeros. Different constellations of zero locations are considered. The interpolation formulas, based on Lagrange interpolation, are given both in Cartesian and in polar coordinates.
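The Lagrange form underlying these 2-D formulas is, along each coordinate, the classic one-dimensional interpolant through known sample points. A minimal one-dimensional sketch (the sample values are invented for illustration):

```python
# One-dimensional Lagrange interpolation: evaluate the unique
# degree-(n-1) interpolant through n (xi, yi) pairs at a point x.

def lagrange(samples, x):
    """Evaluate the Lagrange interpolant through `samples` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(samples):
        term = yi
        for j, (xj, _) in enumerate(samples):
            if i != j:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i
        total += term
    return total

# A degree-2 polynomial is recovered exactly from three samples.
samples = [(0, 1.0), (1, 2.0), (2, 5.0)]  # f(x) = x**2 + 1
print(lagrange(samples, 3))  # → 10.0
```

In the paper's setting the interpolation nodes are the signal's isolated zeros rather than arbitrary samples, and the construction is carried out in two dimensions.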
9.
Amiya Singh Poonam Singh Arash Amini Farokh Marvasti 《Wireless Communications and Mobile Computing》2016,16(17):3070-3088
Overloaded code division multiple access (CDMA), the only means of capacity extension for conventional CDMA, accommodates more signatures than the spreading gain. Recently, ternary Signature Matrices with Orthogonal Subsets (SMOS) have been proposed, achieving a capacity maximization of 200%. The accompanying matched-filter-based multi-user detector exploits the twin-tree hierarchy of correlation among the subsets to guarantee errorless recovery. In this paper, we present a non-ternary version of SMOS (i.e., 2k-ary SMOS) of the same capacity, in which the binary alphabets in all k constituent (orthogonal) subsets are unique. Unlike the ternary case, the tree hierarchy for 2k-ary SMOS is non-uniform; however, the errorless detection of the multi-user detector is preserved. For noisy transmission, simulation results show that the error performance of the right child of each subset of the 2k-ary scheme is significantly better than that of the left. The optimality of the right child of the largest (Hadamard) subset is also established. At higher loading, the 2k-ary scheme is superior on the larger subsets and the ternary scheme on the smaller ones, and the counter-intuitive deviations observed at lower loading are explained. Up to an overall capacity maximization of 150%, the 2k-ary scheme is superior, but beyond that its advantage becomes conditional. Copyright © 2016 John Wiley & Sons, Ltd.
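The building block the SMOS constructions extend is matched-filter detection with an orthogonal (Hadamard) signature subset: because the signatures are mutually orthogonal, correlating the superposed received chips with each signature separates the users exactly. The 4×4 Hadamard matrix and data bits below are illustrative only, not the paper's matrices.

```python
# Matched-filter detection sketch over a 4x4 Hadamard signature set
# (noiseless channel, antipodal bits in {+1, -1}).

H4 = [[1,  1,  1,  1],
      [1, -1,  1, -1],
      [1,  1, -1, -1],
      [1, -1, -1,  1]]  # rows are mutually orthogonal signatures

def transmit(bits):
    """Superpose each user's bit spread by its signature."""
    chips = [0] * 4
    for bit, sig in zip(bits, H4):
        for k in range(4):
            chips[k] += bit * sig[k]
    return chips

def matched_filter(chips):
    """Correlate with each signature; the correlation with signature j
    equals 4 * bit_j, so the sign recovers each user's bit."""
    return [1 if sum(c * s for c, s in zip(chips, sig)) > 0 else -1
            for sig in H4]

bits = [1, -1, -1, 1]
print(matched_filter(transmit(bits)))  # → [1, -1, -1, 1]
```

Overloaded schemes such as SMOS add further, non-orthogonal signatures on top of subsets like this one, which is why the correlation hierarchy among subsets matters for recovery.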
10.
Wei Hao Jicheng Fu Jiang He I-Ling Yen Farokh Bastani Ing-Ray Chen 《World Wide Web》2006,9(3):253-275
Proxy caching is an effective approach to reduce the response latency to client requests, web server load, and network traffic.
Recently there has been a major shift in the usage of the Web. Emerging web applications require an increasing amount of server-side
processing. Current proxy protocols do not support caching and execution of web processing units. In this paper, we present
a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from
web servers to proxy servers to perform required computation and provide faster responses. A weblet engine is developed to provide
the execution environment on proxy servers as well as web servers to facilitate uniform weblet execution. We have conducted
thorough experimental studies to investigate the performance of the weblet approach. We modify the industrial standard e-commerce
benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show
that the weblet environment significantly improves system performance in terms of client response latency, web server throughput,
and workload. Our prototype weblet system also demonstrates the feasibility of integrating weblet environment with current
web/proxy infrastructure.
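The migration idea can be sketched as a simple dispatch rule: a proxy executes a processing unit locally if it has been migrated there, and otherwise forwards the request to the origin server. All names and the request format below are invented for illustration; the actual weblet engine is far richer.

```python
# Toy weblet dispatch: serve at the proxy when the weblet has been
# migrated there, otherwise forward to the origin server.

SERVER_WEBLETS = {"price_quote": lambda q: f"server:{q * 2}"}
proxy_weblets = {}  # weblets migrated to this proxy

def migrate(name):
    """Copy a weblet from the origin server to the proxy."""
    proxy_weblets[name] = SERVER_WEBLETS[name]

def handle(name, request):
    """Serve at the proxy when possible; otherwise go to the server."""
    if name in proxy_weblets:
        result = proxy_weblets[name](request)
        return "proxy:" + result.split(":", 1)[1]  # same computation, local
    return SERVER_WEBLETS[name](request)

print(handle("price_quote", 10))  # before migration: server:20
migrate("price_quote")
print(handle("price_quote", 10))  # after migration:  proxy:20
```

The response latency gain in the paper comes precisely from the second case: the computation runs at the proxy, close to the client.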