Similar Articles
20 similar articles found (search time: 15 ms)
1.
There are many methods for maintaining consistency in a distributed computing environment. Ideally, schemes for maintaining consistency should take into account the lease duration of replicated data, the data access pattern, and system parameters. One method for supplying strong consistency in the Web environment is the lease: during a proxy's lease from a Web server, the server can notify the proxy of modifications by invalidation or update. In this paper, we analyze lease protocol performance while varying the update/invalidation scheme, the lease duration, and the read rate. Using these analyses, we can choose an adaptive lease duration and the proper protocol (invalidation or update notification of modifications) for each proxy in the Web environment. As the number of Web caching proxies grows exponentially, more efficient methods for maintaining consistency are needed. We therefore also present a three-tier hierarchy in which each group and node independently and adaptively chooses the proper lease duration and protocol for each proxy cache. This makes proxy caching adaptive to client access patterns and system parameters.
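The invalidation-versus-update choice above can be framed as a per-proxy cost comparison. The sketch below is a minimal illustrative cost model, not the paper's actual analysis: the rates, message costs, and the decision rule are all assumptions.

```python
# Hypothetical cost model for choosing invalidation vs. update per proxy.
# All parameters (rates, costs) are illustrative assumptions.

def choose_protocol(read_rate, write_rate, object_size, msg_cost=1.0):
    """Pick the cheaper notification scheme for one proxy's lease.

    Invalidation sends a small message per write; the next read then pays
    a full object fetch. Update pushes the whole object on every write.
    """
    if write_rate == 0:
        return "update"  # nothing is ever written, so either scheme is free
    # Expected cost per write under each scheme: an invalidation message,
    # plus a refetch only if a read arrives before the next write.
    invalidation_cost = msg_cost + min(read_rate / write_rate, 1.0) * object_size
    update_cost = object_size
    return "invalidation" if invalidation_cost < update_cost else "update"
```

A rarely-read, frequently-written object favors invalidation (the refetch almost never happens); a read-heavy object favors update (the refetch would happen anyway, so push eagerly).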

2.
Volume leases for consistency in large-scale systems
This article introduces volume leases as a mechanism for providing server-driven cache consistency for large-scale, geographically distributed networks. Volume leases retain the good performance, fault tolerance, and server scalability of the semantically weaker client-driven protocols now used on the Web. Volume leases are a variation of object leases, which were originally designed for distributed file systems. However, whereas traditional object leases amortize overheads over long lease periods, volume leases exploit spatial locality to amortize overheads across multiple objects in a volume. This approach allows systems to maintain good write performance even in the presence of failures. Using trace-driven simulation, we compare three volume lease algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault tolerance. For a trace-based workload of Web accesses, we find that volumes can reduce message traffic at servers by 40 percent compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified.

3.
Name resolution using the Domain Name System (DNS) is integral to today’s Internet. The resolution of a domain name is often dependent on namespace outside the control of the domain’s owner. In this article we review the DNS protocol and several DNS server implementations. Based on our examination, we propose a formal model for analyzing the name dependencies inherent in DNS. Using our name dependency model we derive metrics to quantify the extent to which domain names affect other domain names. It is found that under certain conditions, more than half of the queries for a domain name are influenced by namespaces not expressly configured by administrators. This result serves to quantify the degree of vulnerability of DNS due to dependencies that administrators are unaware of. When we apply metrics from our model to production DNS data, we show that the set of domains whose resolution affects a given domain name is much smaller than previously thought. However, behaviors such as using cached addresses for querying authoritative servers and chaining domain name aliases increase the number and diversity of influential domains, thereby making the DNS infrastructure more vulnerable.
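The core of the name-dependency idea can be sketched as a transitive closure: resolving a name depends on the zones hosting its authoritative servers, which in turn depend on their own servers' zones. This toy traversal is an assumption-laden stand-in for the article's formal model, and the NS data below is invented for illustration.

```python
# Toy transitive closure of DNS name dependencies. The ns_map data and
# the "parent zone of the NS hostname" rule are illustrative assumptions.
from collections import deque

def influential_domains(name, ns_map):
    """Return all domains whose namespace can influence resolving `name`.

    ns_map maps a domain to the hostnames of its authoritative servers;
    each server hostname drags its own parent zone in as a dependency.
    """
    seen, queue = set(), deque([name])
    while queue:
        d = queue.popleft()
        for server in ns_map.get(d, []):
            zone = server.split(".", 1)[1]  # parent zone of the NS hostname
            if zone not in seen:
                seen.add(zone)
                queue.append(zone)
    return seen

ns_map = {
    "example.com": ["ns1.dnsprovider.net"],
    "dnsprovider.net": ["a.gtld-servers.net"],
    "gtld-servers.net": ["a.gtld-servers.net"],
}
print(sorted(influential_domains("example.com", ns_map)))
```

Here a compromise of either dnsprovider.net or gtld-servers.net could influence resolution of example.com, even though neither appears in example.com's own configuration.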

4.
Adaptive leases: a strong consistency mechanism for the World Wide Web
We argue that the weak cache consistency mechanisms supported by existing Web proxy caches must be augmented by strong consistency mechanisms to support the growing diversity in application requirements. Existing strong consistency mechanisms are not appealing for Web environments due to their large state space or control message overhead. We focus on the lease approach, which balances these trade-offs, and present analytical models and policies for determining the optimal lease duration. We present extensions to the HTTP protocol to incorporate leases, then implement our techniques in the Squid proxy cache and the Apache Web server. Our experimental evaluation of the lease approach shows that: 1) our techniques impose modest overheads even for long leases (a lease duration of one hour requires state to be maintained for 1030 leases and imposes a per-object overhead of a control message every 33 minutes); 2) leases yield a 138-425 percent improvement over existing strong consistency mechanisms; and 3) the implementation overhead of leases is comparable to that of existing weak consistency mechanisms.
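The trade-off behind an optimal lease duration can be illustrated with a textbook cost model: longer leases mean more server state (growing roughly linearly in the duration t), while shorter leases mean more renewal traffic (growing roughly as 1/t). This is a generic sketch with assumed cost weights, not the paper's fitted analytical model.

```python
# Illustrative lease-duration trade-off: cost(t) = a*t + b/t, where a is
# the per-second cost of holding lease state and b the cost of a renewal
# message. The closed-form minimizer is the standard t* = sqrt(b/a).
# Both cost weights are assumptions for illustration.
import math

def optimal_lease(state_cost_per_sec, renewal_msg_cost):
    """Return the duration t minimizing state_cost_per_sec*t + renewal_msg_cost/t."""
    return math.sqrt(renewal_msg_cost / state_cost_per_sec)
```

For example, if holding state costs a = 1 unit/s and a renewal costs b = 4 units, the minimizing duration is sqrt(4/1) = 2 s; cheaper state or pricier renewals push the optimum toward longer leases.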

5.
High-performance Web sites rely on Web server "farms" (hundreds of computers serving the same content) for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing, with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and to the sequential consistency file system protocols that we propose to replace.

6.
The Domain Name System (DNS) is an essential part of the Internet infrastructure and provides fundamental services, such as translating host names into IP addresses for Internet communication. The DNS is vulnerable to a number of potential faults and attacks. In particular, false routing announcements can deny access to the DNS service or redirect DNS queries to a malicious impostor. Due to the hierarchical DNS design, a single fault or attack against the routes to any of the top-level DNS servers can disrupt Internet services to millions of users. We propose a path-filtering approach to protect the routes to the critical top-level DNS servers. Our approach exploits both the high degree of redundancy in top-level DNS servers and the observation that popular destinations, including top-level DNS servers, are well-connected via stable routes. Our path-filter restricts the potential top-level DNS server route changes to be within a set of established paths. Heuristics derived from routing operations are used to adjust the potential routes over time. We tested our path-filtering design against BGP routing logs and the results show that the design can effectively ensure correct routes to top-level DNS servers without impacting DNS service availability.
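The filtering rule above reduces to a set-membership check on AS paths. The sketch below shows only that core check under simplifying assumptions; the paper's heuristics for aging paths in and out of the established set over time are omitted, and the AS numbers are invented.

```python
# Minimal sketch of path filtering for routes to top-level DNS server
# prefixes: accept a route change only if its AS path was already seen
# during stable operation. The example paths are invented.

class DNSPathFilter:
    def __init__(self, established_paths):
        # AS paths (tuples of AS numbers) observed while routing was stable.
        self.established = set(map(tuple, established_paths))

    def accept(self, as_path):
        """Allow a route change only along an already-established path."""
        return tuple(as_path) in self.established

# Two stable paths to a hypothetical root-server prefix:
filt = DNSPathFilter([[7018, 2914, 26415], [3356, 26415]])
```

A hijack attempt announcing the prefix via an unseen path (e.g. an attacker's AS prepended) fails the membership test and is discarded, while legitimate failover between the established paths still succeeds.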

7.
王文通, 胡宁, 刘波, 刘欣, 李树栋. 《软件学报》 (Journal of Software), 2020, 31(7): 2205-2220
The DNS provides name resolution for Internet applications and is a critical piece of Internet infrastructure. Recent Internet security incidents show that the DNS faces severe security threats. Its security weaknesses fall into three categories: protocol design vulnerabilities, implementation vulnerabilities, and architectural vulnerabilities. Addressing these weaknesses, this article surveys the latest research on DNS protocol design, system implementation, detection and monitoring, and decentralization, and outlines promising directions for future research.

8.
To identify the most heavily queried domain names at a DNS server over a period of time, and given that many heavily loaded DNS servers do not enable query logging, which rules out log-based statistics, we propose an in-memory record-replacement counting algorithm that approximates the top-queried domain names over a time window. Experiments and practical deployment show that the algorithm performs well.
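The in-memory record-replacement counter described above resembles the classic Space-Saving heavy-hitters algorithm, sketched here as a stand-in; the paper's exact replacement rule may differ.

```python
# Space-Saving-style approximate top-K counter: a fixed number of records
# live in memory, and when a new domain arrives at a full table, the
# minimum-count record is replaced. Used here as an illustrative analogue
# of the paper's record-replacement algorithm, not its exact method.

class TopDomains:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # domain -> approximate query count

    def observe(self, domain):
        if domain in self.counts:
            self.counts[domain] += 1
        elif len(self.counts) < self.capacity:
            self.counts[domain] = 1
        else:
            # Evict the minimum-count record; the newcomer inherits that
            # count + 1, which bounds its overestimation by the evicted count.
            victim = min(self.counts, key=self.counts.get)
            self.counts[domain] = self.counts.pop(victim) + 1

    def top(self, k):
        return sorted(self.counts, key=self.counts.get, reverse=True)[:k]
```

Memory stays fixed at `capacity` records regardless of how many distinct domains are queried, which is exactly the property needed when query logging is disabled and the full stream cannot be stored.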

9.
Measuring Internet Access Behavior via the DNS Service
Using the national top-level domain name system resources managed by the China Internet Network Information Center (CNNIC), we conduct a macroscopic measurement and analysis of current DNS query traffic to the .CN top-level domain. We find that the overall query-frequency distribution of .CN domain names follows a Zipf-like distribution, while the query volume from recursive servers follows a stretched-exponential distribution; that is, DNS requests for .CN names are concentrated overall and exhibit temporal locality in their query patterns. These global characterizations of the national domain name service are important for understanding the overall operational status of China's national domain name system and for scientifically characterizing the macroscopic development of China's network.

10.
IPv6 is the network-layer protocol of the next-generation Internet, and its 128-bit addresses make dynamic DNS updates essential for simplifying communication. Building on a review of existing dynamic DNS solutions at home and abroad, this article proposes a more complete dynamic DNS design based on BIND 9 (Berkeley Internet Name Domain version 9) that is easier to implement and use, and discusses the key implementation techniques involved, such as automatic IPv6 address detection and the programming of the name-update module. It concludes with a summary of the system and directions for future work.

11.
12.
With the rapid development of Internet technology, more and more people use Internet services, the vast majority of which are Web-based and carry significant security risks. DNS (Domain Name System) traffic contains a large amount of information about Web applications, so monitoring DNS data makes it possible to efficiently discover, locate, and resolve related network security problems. This article uses DNS redirection to effectively intercept suspicious domain names.

13.
This paper studies key problems in implementing dynamic scaling for elastic distributed caches. For data rebalancing during scaling, it proposes a hotspot-sensitive data rebalancing algorithm (HSDRA) suited to heterogeneous environments. The algorithm jointly balances memory usage and network traffic, identifies hot partitions online, and prioritizes distributing them evenly across cache nodes. To preserve data consistency and continuous availability of the cache service during scaling, it further proposes a two-phase-request data access protocol and a controlled data migration algorithm. Experimental results show that the approach achieves dynamic scaling of the cache system while guaranteeing consistency and continuous availability, and that HSDRA yields shorter response times than a weighted static rebalancing algorithm that ignores the actual load of each partition.
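The hotspot-first placement idea can be sketched with a greedy heuristic: place the hottest partitions first, each onto the currently least-loaded node, so they end up spread across nodes. This is only an illustrative analogue under a single scalar load per partition; HSDRA itself weighs memory usage and network traffic jointly, and its actual algorithm may differ.

```python
# Greedy hotspot-first rebalancing sketch. The single "load" number per
# partition is an assumption; HSDRA balances memory and network traffic.

def rebalance(partition_load, nodes):
    """Map each partition to a node, hottest first onto the lightest node."""
    load = {n: 0.0 for n in nodes}
    placement = {}
    for part in sorted(partition_load, key=partition_load.get, reverse=True):
        target = min(load, key=load.get)  # currently least-loaded node
        placement[part] = target
        load[target] += partition_load[part]
    return placement
```

Sorting hottest-first guarantees that the two hottest partitions land on different nodes, which is the property the abstract highlights: hot partitions are spread out rather than colocated.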

14.
Measurement and Analysis of DNS Server Misconfigurations in China
Correctly configured domain name servers are the foundation of high-quality, robust network services, and extensive engineering practice shows that misconfiguration severely affects network applications. China's Internet is an important, rapidly developing part of the global Internet, but the quality of its administration varies widely. This article measures and analyzes the configuration status of China's domain name system using multiple experimental methods. The results show that more than 30% of zones under .CN have configuration problems, a major hidden danger to the efficient and secure operation of China's Internet that deserves serious attention.

15.
The DNS (domain name system), as critical network infrastructure, is a necessary step for any terminal to access the Internet. In recent years, a growing number of attacks have used the DNS to lure users to malicious servers, posing a serious threat to Internet security. Preventing and mitigating access to malicious domain names or IPs, such as phishing sites, spam, ransomware, and pornographic sites, is, for both network operators and regulators, …

16.
The domain name system is a critical piece of the Internet and supports most Internet applications. Because it's organized in a hierarchy, its correct operation depends on the availability of just a few servers at the hierarchy's upper levels. These backbone servers are vulnerable to routing attacks in which adversaries controlling part of the routing system try to hijack the server address space. Using routing attacks in this way, an adversary can compromise the Internet's availability and integrity at a global scale. In this article, the authors evaluate the relative resilience to routing attacks of two alternative anycast DNS implementations. The first operates at the network layer and the second at the application layer. The evaluation informs fundamental DNS design decisions and an important debate on the routing architecture of the Internet.

17.
Two main security threats exist for DNS in the context of query/response transactions. Attackers can spoof authoritative name servers responding to DNS queries and alter DNS responses in transit through man-in-the-middle attacks, and alter the DNS responses stored in caching name servers. The IETF has defined the digital signature-based DNSSEC for protecting DNS query/response transactions through a series of requests for comments.

18.
Iyer, Ravi. World Wide Web, 2004, 7(3): 259-280
As Internet usage continues to expand rapidly, careful attention needs to be paid to the design of Internet servers for achieving high performance and end-user satisfaction. Currently, the memory system continues to remain a significant performance bottleneck for Internet servers employing multi-GHz processors. In this paper, our aim is two-fold: (1) to characterize the cache/memory performance of web server workloads and (2) to propose and evaluate cache design alternatives for future web servers. We chose SPECweb99 as the representative web server workload and our entire characterization and evaluation methodology is based on our CASPER simulation framework. We begin by exploring the processor cache design space for single and dual-processor servers. Based on our observations, we then evaluate other cache hierarchy alternatives such as chipset caches, coherence filters and decompressed page stores. We show the sensitivity of these components to basic organization parameters such as cache size, line size and degree of associativity. We also present the performance implications of routing memory requests initiated by I/O devices through these caches. Based on detailed simulation data and its implications on system level performance, this paper shows that chipset caches have significant potential for improving future web server performance.

19.
The Internet today is a highly dynamic environment which frequently requires secure communication between peers that do not have a direct trust relationship. Current solutions for establishing trust often require static and application-specific Public Key Infrastructures (PKIs). This paper presents trusted directory services as a key infrastructural technology for setting up secure Internet connections, providing an alternative to application-specific PKIs. The directory securely binds public keys to peers through their names in a flexible way that matches the dynamic nature of the Internet. We elaborate on this concept by showing how the Domain Name System (DNS) and its security extensions (DNSSEC) can be leveraged for establishing secure Transport Layer Security (TLS) connections in a dynamic way. A simple enhancement of the TLS protocol, called Extended TLS (E-TLS), required for this purpose, is proposed. We describe our E-TLS implementation and we conclude with an evaluation of our results.

20.
Streaming-media cache proxy servers located between the Internet backbone and the same access network can cooperate with one another to improve cache hit rates and maintain load balance. This article proposes a tightly coupled multi-proxy cooperation mechanism with shared cache space, together with a cache replacement policy and a load-balancing algorithm for multi-proxy cooperation. NS-2 simulations show that the mechanism gives the system better overall performance.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号