Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
We address the design of architectures and protocols for providing mobile users with integrated Personal Information Services and Applications (PISA), such as personalized news and financial information, and mobile database access. We present a system architecture for delivery of PISA based on replicated distributed servers connected to users via a personal communications services (PCS) network. The PISA architecture partitions the geographical coverage area into service areas, analogous to PCS registration areas, each of which is served by a single local server. When a user moves from one service area to another, the service is provided by the new local server. This is accomplished by a service handoff, analogous to a PCS call handoff, which entails some context information transfer from the old to the new server. We focus on the mobile sales and inventory application as an example of a PISA with a well‐defined market segment. We design a database management protocol for supporting both mobile and stationary salespersons. Our design uses the site‐transaction escrow method, thus allowing faster responses to mobile clients, minimizing the amount of context information which must be transferred during a service handoff, and allowing mobile clients to operate in disconnected mode by escrowing items on their local disks. We develop a formal model for reasoning about site‐transaction escrow, and develop a scheme for performing dynamic resource reconfiguration which avoids the need for time‐consuming and costly database synchronization operations (i.e., a two‐phase commit) when the mobile sales transaction completes. A further refinement to the scheme avoids an n‐way two‐phase commit during resource reconfiguration operations, replacing it with several simpler two‐phase commits. This revised version was published online in July 2006 with corrections to the Cover Date.
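The escrow idea in this abstract can be illustrated with a minimal sketch (the `EscrowSite` API below is hypothetical, not the paper's actual protocol): each site holds pre-escrowed units it may sell without any global synchronization, and dynamic reconfiguration moves escrowed units between sites instead of running a two-phase commit per sale.

```python
# Minimal sketch of site-transaction escrow (hypothetical API, not the
# paper's protocol): each site pre-escrows a share of the inventory so
# that a sale commits locally without a two-phase commit.
class EscrowSite:
    def __init__(self, name, escrowed_units):
        self.name = name
        self.escrowed = escrowed_units   # units this site may sell alone

    def sell(self, units):
        """Commit a sale locally if escrowed stock covers it."""
        if units <= self.escrowed:
            self.escrowed -= units
            return True                  # no global synchronization needed
        return False                     # would require reconfiguration

def reconfigure(donor, receiver, units):
    # Dynamic resource reconfiguration: move escrowed units between sites.
    moved = min(units, donor.escrowed)
    donor.escrowed -= moved
    receiver.escrowed += moved
    return moved
```

A sale that fits within a site's escrowed stock never touches the other replicas, which is what lets disconnected mobile clients keep operating on locally escrowed items.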

2.
The k-out-of-n secret sharing schemes are effective, reliable, and secure methods to prevent a secret or secrets from being lost, stolen, or corrupted. The circular sequential k-out-of-n congestion (CSknC) system, based upon this type of secret sharing scheme, is presented for reconstructing secret(s) from any k servers among n servers in circular, sequential order. When a server is connected successfully, it will not be reconnected in later rounds until the CSknC system has k distinct, successfully connected servers. An optimal server arrangement in a CSknC system is determined where the n servers have known network connection probabilities for two states, i.e., congested and successful. In this paper, we present: i) a generalized access structure congestion (GΓC) system that is based upon the generalized secret sharing scheme, and ii) an efficient connection procedure for the GΓC system in terms of the minimal number of server connection attempts. The k-out-of-n secret sharing schemes are simple cases of the generalized secret sharing schemes, which implies that the GΓC system is more general than the CSknC system. We establish an iterative connection procedure for the new system, and simulation results demonstrate that the iterative connection procedure is more efficient in terms of minimizing the number of connection attempts.
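A Monte Carlo sketch of the CSknC connection procedure described above (illustrative only; the paper treats server arrangement analytically), assuming each attempt on server i succeeds independently with probability `probs[i]`:

```python
import random

def csknc_attempts(probs, k, rng=random.Random(0)):
    """Simulate one circular sequential k-out-of-n congestion (CSknC) run.

    probs[i] is server i's probability of a successful (non-congested)
    connection. Congested servers are retried in later rounds, while
    successfully connected servers are skipped. Returns the number of
    connection attempts needed to reach k distinct successes.
    """
    connected = set()
    attempts = 0
    while len(connected) < k:
        for i, p in enumerate(probs):   # one circular round
            if i in connected:
                continue                # already connected: never retried
            attempts += 1
            if rng.random() < p:
                connected.add(i)
                if len(connected) == k:
                    break
    return attempts
```

Averaging `csknc_attempts` over many seeded runs estimates the expected attempt count for a given server ordering, which is the quantity the optimal arrangement minimizes.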

3.
Proxy-assisted techniques for delivering continuous multimedia streams
We present a proxy-assisted video delivery architecture that can simultaneously reduce the resource requirements at the central server and the service latency experienced by clients (i.e., end users). Under the proposed video delivery architecture, we develop and analyze two novel proxy-assisted video streaming techniques for on-demand delivery of video objects to a large number of clients. By taking advantage of the resources available at the proxy servers, these techniques not only significantly reduce the central server and network resource requirements, but are also capable of providing near-instantaneous service to a large number of clients. We optimize the performance of our video streaming architecture by carefully selecting video delivery techniques for videos of various popularity and intelligently allocating resources between proxy servers and the central server. Through empirical studies, we demonstrate the efficacy of the proposed proxy-assisted video streaming techniques.

4.
Analysis and Design of Distributed Key Distribution Centers
A Key Distribution Center of a network is a server that generates and distributes secret keys to groups of users for secure communication. A Distributed Key Distribution Center is a set of servers that jointly realizes a Key Distribution Center. In this paper we describe, in terms of information theory, a model for Distributed Key Distribution Centers, and we present lower bounds holding in the model for the main resources needed to set up and manage a distributed center, i.e., memory storage, randomness, and bandwidth. Then we show that a previously proposed protocol which uses a bidimensional extension of Shamir’s secret sharing scheme meets the bounds and is, hence, optimal.
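The underlying building block, Shamir's (k, n) secret sharing, can be sketched as follows (textbook one-dimensional scheme over a prime field; the paper's protocol uses a bidimensional extension of it):

```python
import random

P = 2**61 - 1   # a Mersenne prime; any prime larger than the secret works

def share(secret, k, n, rng=random.Random(42)):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret, which is what lets a set of servers jointly act as a key distribution center without any single server knowing the keys.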

5.
Private information retrieval (PIR) is a strategy for protecting personal privacy on public network platforms by preventing the database from learning what a user queries. By studying previously proposed PIR schemes and analyzing their strengths and weaknesses, this paper seeks a feasible PIR model that takes multiple aspects of security into account. Using Shamir's secret sharing protocol, multiplicatively homomorphic encryption, and other related techniques, and considering both the complexity of the current network environment and possible malicious attacks on users' retrieval information, the paper designs a multi-server private information retrieval scheme.
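For illustration, the classic two-server XOR-based PIR protocol below shows the core idea of hiding the queried index from each individual server (a simpler information-theoretic scheme, not the Shamir-plus-homomorphic-encryption design of this paper):

```python
import secrets

def pir_query(db_size, index):
    """Client side: two random-looking masks whose XOR differs only at
    `index`. Neither server alone learns which record is wanted."""
    mask1 = [secrets.randbelow(2) for _ in range(db_size)]
    mask2 = mask1[:]
    mask2[index] ^= 1                 # flip only the queried position
    return mask1, mask2

def pir_answer(db, mask):
    """Server side: XOR of the records selected by the mask."""
    acc = 0
    for rec, bit in zip(db, mask):
        if bit:
            acc ^= rec
    return acc

def pir_recover(ans1, ans2):
    """Client side: the two answers cancel everywhere except `index`."""
    return ans1 ^ ans2
```

Each server sees a uniformly random mask, so as long as the two servers do not collude, the query index stays private.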

6.
Smart Dust is a set of a vast number of ultra-small fully autonomous computing and communication devices, with very restricted energy and computing capabilities, that co-operate to quickly and efficiently accomplish a large sensing task. Smart Dust can be very useful in practice, e.g., in the local detection of a remote crucial event and the propagation of data reporting its realization. In this work we study smart dust from an algorithmic point of view. We first provide a simple but realistic model for smart dust and present an interesting problem, which is how to efficiently propagate information about a locally detected event. Then we present various smart dust protocols for local detection and propagation that are simple enough to be implemented on real smart dust systems, and perform, under some simplifying assumptions, a rigorous average case analysis of their efficiency and energy consumption (and their interplay). This analysis leads to concrete results showing that our protocols are very efficient and robust. We also validate the analytical results by extensive experiments.

7.
Outsourcing computing allows users with resource-constrained devices to outsource their complex computation workloads to cloud servers that may not be honest. In this paper, we propose a new algorithm for securing the outsourcing of composite modular exponentiation, which is one of the most complex computing tasks in discrete-log based cryptographic protocols. Unlike algorithms based on two untrusted servers, we outsource the modular exponentiation operation to only a single server, which eliminates the potential for a collusion attack when using two servers. Moreover, our proposed algorithm can hide the base and exponent of the outsourced data, which prevents the exposure of sensitive information to cloud servers. In addition, compared with the state-of-the-art algorithms, our scheme has remarkably better checkability. The user can detect any misbehavior with a probability of one if the server returns a faulty result.
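A much-simplified blinding sketch conveys the flavor of single-server outsourcing (a hypothetical helper, not the paper's algorithm; it assumes `p` is prime, hides only the base, and its repeated-query check catches only inconsistent misbehavior, unlike the paper's probability-one checkability):

```python
import random

def outsourced_modexp(u, a, p, server, rng=random.Random(7)):
    """Compute u^a mod p, pushing the heavy exponentiations to an
    untrusted `server(base, exp, mod)`. The base u is hidden behind a
    random blind t; two independently blinded queries are compared as a
    (weak) consistency check."""
    def one_round():
        t = rng.randrange(2, p - 1)
        wa = server(u * t % p, a, p)      # server sees only the blinded base
        ta = server(t, a, p)
        return wa * pow(ta, -1, p) % p    # u^a = (u*t)^a * (t^a)^(-1)
    r1, r2 = one_round(), one_round()
    if r1 != r2:
        raise ValueError("server misbehaved")
    return r1
```

The unblinding step relies on p being prime so that t^a is invertible modulo p; hiding the exponent as well, as the paper does, requires additional machinery.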

8.
Utility computing provides a pay-as-you-go approach to information systems in which application providers (e.g., web sites) can better manage their costs by adding capacity in response to increased demands and shedding capacity when it is no longer needed. This paper addresses application providers who use clusters of servers. Our work develops a framework to determine the number of servers that minimizes the sum of quality-of-service (QoS) costs resulting from service level penalties and server holding costs for the server cluster. The server characteristics considered are service rate, failure rates, repair rates, and costs. The contributions of this paper are: 1) a model for the performance and availability of an e-Commerce system that is consistent with data from a multisystem testbed with an e-Commerce workload; 2) a business-oriented cost model for resource allocation for application providers; 3) a closed form approximation for the optimal allocation of servers for an application provider based on the performance model in 1) and the cost model in 2); and 4) a simple criteria for utility owners and server manufacturers to make tradeoffs between server characteristics.
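A toy version of the server-count tradeoff (a hypothetical cost model, not the paper's closed-form approximation): holding cost grows linearly in the number of servers n, while a pooled-queue delay penalty shrinks as total capacity n*mu exceeds the load lam.

```python
def optimal_servers(lam, mu, hold_cost, delay_cost, n_max=100):
    """Pick n minimizing hold_cost*n + delay_cost*W(n), where
    W(n) = 1/(n*mu - lam) crudely approximates the mean response time
    of a pooled cluster (illustrative model, not the paper's)."""
    best_n, best_cost = None, float("inf")
    for n in range(1, n_max + 1):
        if n * mu <= lam:
            continue                     # unstable: delay diverges
        cost = hold_cost * n + delay_cost / (n * mu - lam)
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n, best_cost
```

With lam = 10, mu = 1, unit holding cost and delay weight 100, the minimum lands at n = 20: below it the delay penalty dominates, above it idle servers do.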

9.
Mike, Tri Van, Alec 《Ad hoc Networks》 2007, 5(3): 313-323
Many ad hoc routing algorithms rely on broadcast flooding for location discovery or, more generally, for secure routing applications. Flooding is a robust algorithm but, because of its extreme redundancy, it is impractical in dense networks. Indeed, in large wireless networks, the use of flooding algorithms may lead to broadcast storms where the number of collisions is so large that it causes system failure. To prevent broadcast storms, many mechanisms have been proposed that reduce redundant transmissions, either deterministically or probabilistically. Gossip is a probabilistic algorithm in which packet retransmission is based on the outcome of coin tosses. The retransmission probability can be fixed, dynamic or adaptive. With dynamic gossip, local information is used to determine the retransmission probability. With adaptive gossip, the decision to relay is adjusted adaptively based on the outcome of coin tosses, the local network structure, and the local response to the flooding call. The goal of gossip is to minimize the number of retransmissions, while retaining the main benefits of flooding, e.g., universal coverage, minimal state retention, and path length preservation. In this paper we consider ways to reduce the number of redundant transmissions in flooding while guaranteeing security. We present several new gossip protocols that exploit local connectivity to adaptively correct propagation failures and protect against Byzantine attacks. A main contribution of this work is that we introduce a cell-grid approach that allows us to analytically prove performance and security protocol properties. The last two gossip protocols that we give are fully adaptive, i.e., they automatically correct all faults and guarantee delivery, the first such protocols to the best of our knowledge.
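The fixed-probability gossip variant described above can be sketched in a few lines (a simulation sketch under simplified assumptions: synchronous rounds, lossless links, and the source always transmitting):

```python
import random

def gossip_flood(adj, source, p, rng=random.Random(1)):
    """Fixed-probability gossip: every node that hears the packet for
    the first time retransmits it with probability p. `adj` maps each
    node to its neighbor list. Returns (nodes reached, transmissions)."""
    reached = {source}
    frontier = [source]          # nodes that will transmit this round
    transmissions = 0
    while frontier:
        nxt = []
        for u in frontier:
            transmissions += 1
            for v in adj[u]:
                if v not in reached:
                    reached.add(v)
                    if rng.random() < p:   # coin toss: relay or stay silent
                        nxt.append(v)
        frontier = nxt
    return reached, transmissions
```

With p = 1 this degenerates to plain flooding; lowering p trades coverage for fewer transmissions, and the adaptive protocols in the paper adjust that trade locally.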

10.
Authenticated key agreement protocols play an important role for network‐connected servers to authenticate remote users in Internet environment. In recent years, several authenticated key agreement protocols for single‐server environment have been developed based on chaotic maps. In modern societies, people usually have to access multiple websites or enterprise servers to accomplish their daily personal matters or duties on work; therefore, how to increase user's convenience by offering multi‐server authentication protocol becomes a practical research topic. In this study, a novel chaotic map‐based anonymous multi‐server authenticated key agreement protocol using smart card is proposed. In this protocol, a legal user can access multiple servers using only a single secret key obtained from a trusted third party, known as the registration center. Security analysis shows this protocol is secure against well‐known attacks. In addition, protocol efficiency analysis is conducted by comparing the proposed protocol with two recently proposed schemes in terms of computational cost during one authentication session. We have shown that the proposed protocol is twice faster than the one proposed by Khan and He while preserving the same security properties as their protocol has. Copyright © 2014 John Wiley & Sons, Ltd.  相似文献   

11.
Using name-based mappings to increase hit rates
Clusters of identical intermediate servers are often created to improve availability and robustness in many domains. The use of proxy servers for the World Wide Web (WWW) and of rendezvous points in multicast routing are two such situations. However, this approach can be inefficient if identical requests are received and processed by multiple servers. We present an analysis of this problem, and develop a method called the highest random weight (HRW) mapping that eliminates these difficulties. Given an object name and a set of servers, HRW maps a request to a server using the object name, rather than any a priori knowledge of server states. Since HRW always maps a given object name to the same server within a given cluster, it may be used locally at client sites to achieve consensus on object-server mappings. We present an analysis of HRW and validate it with simulation results showing that it gives faster service times than traditional request allocation schemes such as round-robin or least-loaded, and adapts well to changes in the set of servers. HRW is particularly applicable to domains in which there are a large number of requestable objects, there is a significant probability that a requested object will be requested again, and the CPU load due to any single object can be handled by a single server. HRW has now been adopted by the multicast routing protocols PIMv2 and CBTv2 as their mechanism for routers to identify rendezvous points/cores.
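A minimal HRW implementation (SHA-256 is chosen here for illustration; the abstract does not mandate a particular hash function):

```python
import hashlib

def hrw_server(object_name, servers):
    """Highest random weight (HRW) mapping: hash each (object, server)
    pair and pick the server with the largest weight. Every client
    computes the same mapping with no shared state, and removing a
    server only remaps the objects that were assigned to it."""
    def weight(s):
        h = hashlib.sha256(f"{object_name}|{s}".encode()).hexdigest()
        return int(h, 16)
    return max(servers, key=weight)
```

Because the winner is the maximum over per-server weights, deleting a losing server never changes an object's assignment, which is the stability property that makes HRW attractive for caches and rendezvous-point selection.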

12.
Distributed servers architecture offers storage and streaming scalabilities for video services. In this letter, we propose and analyze an on-demand scheme in which the local servers store the beginning portion (i.e., the “prefix”) of videos and deliver it by means of unicast streams to the clients. The clients are able to merge onto on-going multicast streams delivered from the repository by means of their set-top buffers. Given a certain limited repository (and therefore multicast) bandwidth, we investigate how the total cost of the local servers can be minimized. We show that if the local storage is the main cost, the size of the prefixes is likely to increase with the video popularity. On the other hand, if the server cost mainly comes from streaming capacity, the size of the prefixes is likely to decrease asymptotically with the video popularity.
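The prefix/merge interplay can be illustrated with a toy helper (a hypothetical function, not the letter's cost model): a client plays the unicast prefix from the local server while buffering the most recent ongoing multicast, so the prefix must cover exactly the portion of the video the client missed.

```python
def min_prefix_length(arrival, multicast_starts):
    """Minimal prefix duration (in seconds) a local server must unicast
    so that a client arriving at `arrival` can merge onto the latest
    ongoing multicast stream via its set-top buffer."""
    started = [t for t in multicast_starts if t <= arrival]
    if not started:
        return None                      # no ongoing stream to merge onto
    return arrival - max(started)        # seconds of the video missed
```

Popular videos get frequent multicasts, so clients miss less and a shorter prefix suffices when streaming capacity dominates cost, matching the asymptotic behavior the letter derives.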

13.
The explosive growth of the Web requires servers to be extensible and configurable. This article describes our adaptive Web server, OpenWebServer, which uses the Reflection architectural pattern. The server supports the dynamic adoption of functionality, such as introducing additional protocols, modifying execution policies, and tuning system performance. This is achieved by specifying and coordinating metaobjects that represent various aspects within the Web server. We present a Java version of OpenWebServer, and describe its design using Reflection and other design patterns: Singleton, Bridge, Mediator, Observer, and Decorator. These patterns provide a better-factored design and allow the Web server to evolve continually beyond static and monolithic servers.

14.
This paper evaluates techniques for improving operating system and network protocol software support for high-performance World Wide Web servers. We study approaches in three categories: new socket functions, per-byte optimizations, and per-connection optimizations. We examine two proposed socket functions, acceptex() and send-file(), comparing send-file()'s effectiveness with a combination of mmap() and writev(). We show how send-file() provides the necessary semantic support to eliminate copies and checksums in the kernel, and quantify the benefit of the function's header and close options. We also present mechanisms to reduce the number of packets exchanged in an HTTP transaction, both increasing server performance and reducing network utilization, without compromising interoperability. Results using WebStone show that our combination of mechanisms can improve server throughput by up to 64%, and can eliminate up to 33% of the packets in an HTTP exchange. Results with SURGE show an aggregate increase in server throughput of 25%.

15.
A ternary state circular sequential k-out-of-n congestion (TSCSknC) system is presented. The system is an extension of the circular sequential k-out-of-n congestion (CSknC) system, which considers two connection states: a) congestion (server busy), and b) successful. In contrast, a TSCSknC system considers three connection states: i) congestion, ii) break down, and iii) successful. It finds applications in some reliable systems to prevent single-point failures, such as the ones used in (k,n) secret key sharing systems. The system further assumes that each of the n servers has known connection probabilities for the congestion, break-down, and successful states. These n servers are arranged in a circle, and connection attempts are made sequentially, round after round. If a server is not congested, the connection can be either successful or a failure. Previously attempted servers are blocked from reconnecting if they ended in state ii) or iii). Congested servers are attempted repeatedly until k servers are connected successfully, or (n-k+1) servers have a break-down status. In other words, the system works when k servers are successfully connected, but fails when (n-k+1) servers are in the break-down state. In this paper, we present the recursive and marginal formulas for the system success probability, the system failure probability, and the average stop length, i.e., the number of connection attempts needed to bring the system to a successful or failed state, together with their computational complexity.
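A Monte Carlo sketch of one TSCSknC run (illustrative only; the paper derives recursive closed-form formulas for these quantities):

```python
import random

def tscsknc_run(p_success, p_break, k, rng):
    """Simulate the ternary-state CSknC system: servers are attempted in
    circular order; congested servers are retried in later rounds, while
    broken-down or connected servers are never re-attempted. Returns
    (outcome, attempts): True once k servers succeed, False once
    n-k+1 servers have broken down."""
    n = len(p_success)
    state = ["pending"] * n
    succ = brk = attempts = 0
    while True:
        for i in range(n):
            if state[i] != "pending":
                continue
            attempts += 1
            r = rng.random()
            if r < p_success[i]:
                state[i] = "ok"; succ += 1
            elif r < p_success[i] + p_break[i]:
                state[i] = "down"; brk += 1
            # otherwise congested: stays pending for the next round
            if succ == k:
                return True, attempts
            if brk == n - k + 1:
                return False, attempts
```

Averaging the attempt counts over many runs estimates the average stop length that the paper computes recursively.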

16.
As different types of wireless networks are converging into an all-IP network, i.e., the Internet, it can be expected that in the near future video-on-demand (VoD) will be widely applied to many interesting services, and users can access these services using heterogeneous terminals via heterogeneous wired/wireless access networks. Many periodic broadcasting protocols have been proposed to reduce the implementation cost of VoD systems. However, most of the protocols assumed homogeneity for user terminals, while in practice, user terminals are usually quite different in their processing power, buffer space, and power. To address this problem, a few periodic broadcasting protocols providing the same video quality for all heterogeneous clients have been proposed recently. In this paper, we proposed a novel heterogeneous VoD broadcasting technique called Catch and Rest (CAR) to accommodate bandwidth heterogeneity without sacrificing user video quality. Then, we provide mathematic analysis to calculate the client bandwidth and buffer space requirements of CAR. Finally, we present our performance evaluation results for CAR. Our results show that under the same system resources (i.e., server and network bandwidth), CAR provides more uniform and acceptable service latency for all heterogeneous clients compared to previous works.  相似文献   

17.
We consider power and server allocation in a multibeam satellite downlink which transmits data to N different ground locations over N time-varying channels. Packets destined for each ground location are stored in separate queues, and the server rate for each queue i depends on the power p_i(t) allocated to that server and the channel state c_i(t), according to a concave rate-power curve μ_i(p_i, c_i). We establish the capacity region of all arrival rate vectors (λ_1, ..., λ_N) which admit a stabilizable system. We then develop a power-allocation policy which stabilizes the system whenever the rate vector lies within the capacity region. Such stability is guaranteed even if the channel model and the specific arrival rates are unknown. Furthermore, the algorithm is shown to be robust to arbitrary variations in the input rates, and a bound on average delay is established. As a special case, this analysis verifies stability and provides a performance bound for the choose-the-K-largest-connected-queues policy when channels can be in one of two states (ON or OFF) and K servers are allocated at every timestep (K < N).
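The special-case policy named at the end can be sketched directly (assuming unit-rate servers and an ON/OFF connectivity vector per timestep):

```python
def choose_k_largest_connected(queues, connected, K):
    """Choose-the-K-largest-connected-queues policy: among queues whose
    channel is ON and which are non-empty, serve the K with the largest
    backlogs. Returns the indices of the queues served this timestep."""
    eligible = [i for i in range(len(queues))
                if connected[i] and queues[i] > 0]
    eligible.sort(key=lambda i: queues[i], reverse=True)
    return eligible[:K]
```

Note that a long queue whose channel is OFF is skipped entirely; the policy only compares backlogs among currently connected queues, which is what the ON/OFF special case of the analysis covers.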

18.
We consider several distributed collaborative key agreement and authentication protocols for dynamic peer groups. There are several important characteristics which make this problem different from traditional secure group communication. They are: 1) distributed nature in which there is no centralized key server; 2) collaborative nature in which the group key is contributory (i.e., each group member will collaboratively contribute its part to the global group key); and 3) dynamic nature in which existing members may leave the group while new members may join. Instead of performing individual rekeying operations, i.e., recomputing the group key after every join or leave request, we discuss an interval-based approach of rekeying. We consider three interval-based distributed rekeying algorithms, or interval-based algorithms for short, for updating the group key: 1) the Rebuild algorithm; 2) the Batch algorithm; and 3) the Queue-batch algorithm. Performance of these three interval-based algorithms under different settings, such as different join and leave probabilities, is analyzed. We show that the interval-based algorithms significantly outperform the individual rekeying approach and that the Queue-batch algorithm performs the best among the three interval-based algorithms. More importantly, the Queue-batch algorithm can substantially reduce the computation and communication workload in a highly dynamic environment. We further enhance the interval-based algorithms in two aspects: authentication and implementation. Authentication focuses on the security improvement, while implementation realizes the interval-based algorithms in real network settings. Our work provides a fundamental understanding about establishing a group key via a distributed and collaborative approach for a dynamic peer group.
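The payoff of interval-based rekeying can be illustrated with a simple counting sketch (a hypothetical helper; the actual algorithms rebuild a logical key tree rather than merely count operations):

```python
def rekey_operations(events, interval):
    """Contrast individual rekeying with interval-based rekeying:
    individual rekeying runs one rekey per join/leave event, while the
    interval-based approach runs at most one rekey per interval that
    contains events. `events` is a list of event timestamps."""
    individual = len(events)
    batched = len({int(t // interval) for t in events})
    return individual, batched
```

The more join/leave events cluster within an interval, the larger the gap between the two counts, which is why the batch-style algorithms shine in highly dynamic groups.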

19.
A growing and important class of traffic in the Internet is so-called “streaming media,” in which a server transmits a packetized multimedia signal to a receiver that buffers the packets for playback. This playback buffer, if adequately sized, counteracts the adverse impact of delay jitter and reordering suffered by packets as they traverse the network, and if large enough also allows lost packets to be retransmitted before their playback deadline expires. We call this framework for retransmitting lost streaming-media packets “soft ARQ” since it represents a relaxed form of Automatic Repeat reQuest (ARQ). While state-of-the-art media servers employ such strategies, no work to date has proposed an optimal strategy for delay-constrained retransmissions of streaming media—specifically, one which determines what is the optimal packet to transmit at any given point in time. In this paper, we address this issue and present a framework for streaming media retransmission based on layered media representations, in which a signal is decomposed into a discrete number of layers and each successive layer provides enhanced quality. In our approach, the source chooses between transmitting (1) newer but critical coarse information (e.g., a first approximation of the media signal) and (2) older but less important refinement information (e.g., added details) using a decision process that minimizes the expected signal distortion at the receiver. To arrive at the proper mix of these two extreme strategies, we derive an optimal strategy for transmitting layered data over a binary erasure channel with instantaneous feedback. To provide a quantitative performance comparison of different transmission policies, we conduct a Markov-chain analysis, which shows that the best transmission policy is time-invariant and thus does not change as the frames' layers approach their expiration times.
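A greedy heuristic in the spirit of the sender's decision (illustrative only, not the paper's optimal policy): rank the pending packets by layer importance weighted by the chance of arriving before the playback deadline, and never send a packet that would expire in flight.

```python
def next_packet(pending, now, rtt, loss):
    """Pick the pending packet with the largest expected distortion
    reduction: its layer importance times the probability it survives
    the channel, or zero if it cannot beat its playback deadline.
    Each packet is a dict with 'deadline' and 'importance' keys
    (hypothetical structure). Returns None if nothing is worth sending."""
    def value(pkt):
        if now + rtt > pkt["deadline"]:
            return 0.0                   # would expire in flight
        return pkt["importance"] * (1.0 - loss)
    best = max(pending, key=value)
    return best if value(best) > 0 else None
```

This captures the coarse-vs-refinement tension: a new base-layer packet with high importance usually wins, but an old refinement whose deadline is still reachable can be preferred when nothing critical is pending.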

20.
Thanks to the availability of various mobile applications, many users have shifted from desktop environments, e.g., PCs and laptops, to mobile devices, e.g., smartphones and tablets. However, there are still some desktop applications without counterparts on mobile devices, such as some integrated development environments (e.g., Eclipse) and automatic industry control systems. In this paper, we propose Modeap, a platform-independent mobile cloud service that can push all desktop applications developed for various operating systems from cloud servers to mobile devices. Modeap follows a design principle of complete detachment and regeneration of the desktop user interface, i.e., the essential graphical primitives of the original desktop applications are intercepted and then translated into standard web-based graphical primitives such that interaction between users and remote cloud applications becomes possible via mobile web browsers. In this way, all desktop applications built upon the same set of graphical primitives can be used on mobile devices with great flexibility, without installing any new software. We have developed a proof-of-concept prototype that provides Windows applications from a cloud server to mobile web browsers. The results of extensive experiments show that the proposed framework can achieve our design goals with low latency and bandwidth consumption.
