Similar Documents
20 similar records found (search time: 62 ms)
1.
A Study of Network Emergence from a Systems Perspective (基于系统观的网络突现性研究)
As a complex adaptive system, the Internet exhibits many emergent properties. This paper focuses on two emergent phenomena in the Internet: the self-similarity of network traffic and the power-law distribution of its topology, and gives a preliminary discussion of these phenomena and the mechanisms by which they arise. It also briefly analyzes the self-organized criticality of network traffic, and argues that applying complexity-science theory to the study of network complexity can positively inform the research and development of next-generation network architectures.
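For readers unfamiliar with the two properties named above, they are conventionally written as follows; this is standard notation, not a formula taken from the paper itself.

```latex
% Scale-free (power-law) degree distribution of the AS-level topology:
% the probability that a node has degree k falls off as a power of k.
P(k) \propto k^{-\gamma}, \qquad \gamma \approx 2\text{--}3

% Self-similar traffic is usually summarized by a Hurst parameter H:
% the variance of the process aggregated over blocks of size m decays
% slowly, and 0.5 < H < 1 indicates long-range dependence.
\operatorname{Var}\bigl(X^{(m)}\bigr) \sim m^{2H-2}, \qquad 0.5 < H < 1
```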

2.
Crowe, J.Q. IT Professional, 1999, 1(3): 67-69
The Internet's underlying technology is causing a telecommunications revolution far more fundamental than previous shifts and with far greater consequences. In this fast-paced world, a bigger company is not necessarily better if it is founded on the economics of the past. The author considers how IP technology is shaking up the telecommunications market. Look for new opportunities and new providers for your voice and data channels.

3.
Unwanted traffic is a serious problem on today's network. For too long, the balance between the value of the transactions occurring on the network and the security of the infrastructure itself has tipped in the wrong direction. In this article, the authors identify the threats of most concern and describe some ways in which the broader Internet community is addressing them. The open, bottom-up processes of the IETF and the Internet community in general are making steady inroads in tackling the serious threat that unwanted traffic poses to the Internet's continued growth and enrichment.

4.
Internet traffic behavior has been resistant to modeling. The reasons derive from the Internet's evolution as a composition of independently developed and deployed protocols, technologies and core applications. Moreover, this evolution has experienced no equilibrium thus far. A common complaint about traffic measurement studies is that they do not sustain relevance in an environment where the traffic, technology and topology change faster than we can measure them. In addition, the proliferation of media and protocols makes the acquisition of traffic data almost prohibitively complicated and costly, and finally, the time required to analyze and validate data means that most research efforts are obsolete by the time the findings are published. Thus, far from having an analytic handle on the Internet, we lack the ability even to measure traffic at a granularity that would enable infrastructure-level research. As a result, while the core of the Internet continues its rapid evolution, measurement and modeling of it progress at a leisurely pace. However, both active and passive measurements of the Internet do occur, as do analyses of the routing and IP addressing system.

5.
The Internet is rife with paradox. For example, new optical switches capable of forwarding terabits of data (in photonic format) must work with a decades-old protocol suite first developed for software-controlled electronic packet switches. Another example is that while IP multicast offers by far the most efficient delivery vehicle for large-scale multiparty communications, few service providers deploy it, choosing instead to consume bandwidth and host resources with multiple point-to-point connections. One of the most interesting Internet-related paradoxes is the relationship between Internet service providers. While competing very publicly for customers using price, value-add services, and performance as leverage, they must privately cooperate among themselves to provide global connectivity. Indeed, without this cooperation each ISP network might devolve into its own separate world with few or none of the global Internet's benefits. Fortunately, this is not the case; in fact, the Internet is a network of networks: a mesh of separately controlled, interconnected networks that form one large, global entity.

6.
The net neutrality debate began a few years ago, prematurely, with overheated rhetoric about potential disasters for the Internet but little in the way of real threats requiring immediate government action. Beginning around May 2007, one of the largest ISPs in the US, Comcast, began a program of discriminatory blocking of certain Internet communications protocols. The blocking has focused on BitTorrent and Gnutella peer-to-peer protocols but also included, for a time, Lotus Notes enterprise collaboration software traffic. Comcast hasn't disputed the blocking, and a variety of independent, although perhaps not entirely unbiased, investigations have verified it. So, for the first time in the Internet's modern history, we have selective discrimination against a particular type of traffic that's widely used and presumptively legal.

7.
Bragg, A.W. IT Professional, 1999, 1(5): 37-44
ATM has offered QoS guarantees for nearly a decade, but the push is now for IP-based solutions. IP is ubiquitous in today's congested networks. Applications are more complex, users are more demanding, standards bodies are more receptive, and technology is more sophisticated. All this has focused attention on ways to add QoS to IP networks without exorbitant cost. But with progress has come some confusion. Telecommunications carriers and service providers are already enticing customers by offering two or three distinct classes of service over their IP networks. Vendors are beginning to ship QoS-capable hardware and software. ATM is firmly established in the Internet's core. A lot has happened in a short time, which means that users and providers must be aware of where things are going and what the various QoS technologies can actually do. Some QoS mechanisms deliver strict, absolute performance guarantees. Others merely offer assurances that one service class will take priority over another when resources are scarce.
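To make the distinction between absolute guarantees and relative class priority concrete, here is a minimal strict-priority scheduler in Python; the class names and packets are invented for illustration and do not come from the article.

```python
# Strict-priority queueing: the higher class is always served first when
# resources are scarce, which is the "relative" style of QoS described above.
import heapq, itertools

PRIORITY = {"voice": 0, "business": 1, "best-effort": 2}   # lower = served first
counter = itertools.count()                                # FIFO within a class

queue = []
def enqueue(pkt, cls):
    heapq.heappush(queue, (PRIORITY[cls], next(counter), pkt))

for pkt, cls in [("p1", "best-effort"), ("p2", "voice"),
                 ("p3", "business"), ("p4", "voice")]:
    enqueue(pkt, cls)

while queue:
    _, _, pkt = heapq.heappop(queue)
    print(pkt)        # p2, p4, p3, p1: voice drains before the lower classes
```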

8.
Buried amid industry hoopla about end-user gadgets and macroeconomic issues such as upcoming wireless spectrum auctions, an arcane but increasingly vexing niche of the Internet's architecture has become the subject of considerable debate. At stake is the "promised land" of carrier Ethernet traffic traversing the globe. In preparing for the next-generation network, however, carriers have had to combine their IP-based core resources with legacy synchronous optical network (SONET) and synchronous digital hierarchy (SDH) access technology in the transport layer. How the world's carriers will best combine and migrate those resources into a robust and warrantable packet-based network is the crux of the issue.

9.
梁海英, 李政, 高远. 计算机科学, 2006, 33(12): 37-42
In BGP/MPLS VPNs, traffic engineering implemented with MPLS is largely confined to a single administrative domain. As enterprises grow, however, VPNs span more and more administrative domains, and effective methods for managing inter-domain traffic are urgently needed. Building on BGP attributes, BGP policies, and AS relationships, the proposed approach controls an AS's outbound traffic by configuring LOCAL-PREF values in import policies, and controls its inbound traffic by ensuring that a customer AS does not transit traffic between its providers or between peers, by letting a customer AS announce routes to only some of its providers, or by artificially lengthening the AS-PATH. Simulations show that this method can effectively achieve inter-domain traffic engineering with BGP in BGP/MPLS VPNs.
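As a rough illustration of the two control knobs described above (not the authors' implementation), the sketch below shows how a BGP-style decision prefers the route with the highest LOCAL-PREF and, when LOCAL-PREF ties, the shortest AS-PATH, so that an artificially padded path attracts less inbound traffic. All AS numbers and prefixes are made up.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    as_path: list          # sequence of AS numbers the route has traversed
    local_pref: int = 100  # BGP default; higher is preferred

def best_route(candidates):
    """Pick the best route using the two attributes the abstract discusses:
    highest LOCAL-PREF first, then shortest AS-PATH (so a deliberately
    padded path loses the tie-break)."""
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

# Outbound control: an import policy raises LOCAL-PREF on the preferred exit.
via_provider_a = Route("10.0.0.0/8", [65001, 65010], local_pref=200)
via_provider_b = Route("10.0.0.0/8", [65002, 65010])              # default 100

# Inbound control: the customer pads its AS-PATH toward provider B, so
# remote ASes see a longer path via B and send traffic through A instead.
padded   = Route("192.0.2.0/24", [65020, 65020, 65020, 65010])
unpadded = Route("192.0.2.0/24", [65030, 65010])

print(best_route([via_provider_a, via_provider_b]).as_path)  # [65001, 65010]
print(best_route([padded, unpadded]).as_path)                # [65030, 65010]
```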

10.
Security experts generally acknowledge that the long-term solution to distributed denial of service attacks is to increase the security level of Internet computers. Attackers would then be unable to find zombie computers to control. Internet users would also have to set up globally coordinated filters to stop attacks early. However, the critical challenge in these solutions lies in identifying the incentives for the Internet's tens of millions of independent companies and individuals to cooperate on security and traffic control issues that do not appear to directly affect them. We give a brief introduction to: network weaknesses that DDoS attacks exploit; the technological futility of addressing the problem solely at the local level; potential global solutions; and why global solutions require an economic incentive framework.

11.
P2P streaming technology builds on peer-to-peer networking: put simply, users exchange data or services directly with one another rather than through relay devices. It changes the Internet's current "content at the center" model into a "content at the edge" model, handing control back to the users. In this architecture every node has equal status and combines the characteristics of a client and a server, acting simultaneously as a consumer and a provider of services. This article explains the principles and applications of P2P streaming technology, in the hope of giving readers a basic understanding of this widely deployed and rapidly evolving streaming technology.

12.
Traditional analyses of network behavior have overlooked the influence that the economic characteristics of autonomous systems exert on network structure. This paper analyzes the peering behavior of different types of autonomous systems from a cost-benefit perspective. Using sample data from PeeringDB, it first surveys the distribution of inter-domain traffic and derives the peering preferences of different types of autonomous systems. The results show a strong association between inter-domain traffic and peering policy for transit providers and access providers, whereas for content providers the association is weak and open peering dominates. Based on the roles that service providers play in carrying information flows, the paper analyzes the main reasons why autonomous systems adopt peering in an economic environment, and argues that the autonomous-system level of the Internet forms an ecosystem in which network structure, inter-domain traffic, and costs and benefits influence one another.

13.
Net neutrality - the issue of whether ISPs should be allowed to give (or, more likely, sell) higher-performance access to content or services from certain providers - has been a hot Internet public policy issue in the US. Some network owners (such as Bell South and SBC) indicate that they'd like to charge large content providers (Google, Yahoo, Microsoft, eBay, Amazon, and so on) extra in order to reach potential customers. Web users have a unique set of interests at stake in this debate, but they also have unexploited bargaining chips - roughly 12 billion, to be exact (or however many Web pages exist at the moment). The real threats to the Web's vitality and its hundreds of millions of users have been largely overlooked in this debate, however, as the legislative debate has overemphasized the interests of larger content providers and network operators. As the author illustrates, today's Web users need a neutral, nondiscriminatory Internet as an open platform to support the Web's operation. The good news is that as the Web moves from a read-only medium, in which most users are merely information consumers, to a read-write medium in which users post pictures, write public blog entries, and link to each other's profiles, active users might have an opportunity to preserve the Internet's open, nondiscriminatory (neutral) operation on which they depend.

14.
An Overlay Architecture for High-Quality VoIP Streams
The cost savings and novel features associated with voice over IP (VoIP) are driving its adoption by service providers. Unfortunately, the Internet's best effort service model provides no quality of service guarantees. Because low latency and jitter are the key requirements for supporting high-quality interactive conversations, VoIP applications use UDP to transfer data, thereby subjecting themselves to quality degradations caused by packet loss and network failures. In this paper, we describe an architecture to improve the performance of such VoIP applications. Two protocols are used for localized packet loss recovery and rapid rerouting in the event of network failures. The protocols are deployed on the nodes of an application-level overlay network and require no changes to the underlying infrastructure. Experimental results indicate that the architecture and protocols can be combined to yield voice quality on par with the public switched telephone network.
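A back-of-the-envelope comparison suggests why localized, per-hop recovery matters for interactive voice: retransmitting over a single overlay hop adds far less delay than retransmitting end to end. The per-hop delays below are illustrative numbers, not measurements from the paper.

```python
# Compare the extra delay added by end-to-end vs. localized (per overlay hop)
# loss recovery; the latencies are illustrative, not experimental data.
hop_delays_ms = [20, 30, 25, 15]          # one overlay path, per-hop one-way delay

end_to_end_rtt = 2 * sum(hop_delays_ms)   # retransmission over the whole path
worst_hop_rtt = 2 * max(hop_delays_ms)    # retransmission over the lossy hop only

print(f"end-to-end recovery adds ~{end_to_end_rtt} ms")   # ~180 ms
print(f"localized recovery adds ~{worst_hop_rtt} ms")     # ~60 ms
```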

15.
This paper studies a game-theoretic model of traffic flow assignment with multiple customer groups and the BPR delay function on a parallel channel network. We prove the existence of a unique Nash equilibrium in the game of m ≥ 2 traffic navigation providers and derive explicit expressions for equilibrium strategies. Finally, we show that the competition of navigation providers on the network increases the average travel time between origin and destination areas.
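The BPR delay function referred to in the abstract is usually written in the following standard form; the parameter values shown are the classical ones and may differ from those used in the paper.

```latex
% Travel time on a channel with flow x, free-flow time t_0 and capacity c;
% \alpha = 0.15 and \beta = 4 are the classical BPR parameter values.
t(x) = t_0 \left( 1 + \alpha \left( \frac{x}{c} \right)^{\beta} \right)
```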

16.

Due to the popularity of Dynamic Adaptive Streaming Over HTTP (DASH), broadband and Internet service providers' links transmit mainly multimedia content. As the most popular providers encrypt their video services, attempts to identify their traffic through Deep Packet Inspection (DPI) encounter difficulties. Therefore, encrypted DASH traffic requires new classification methods. In this work, we propose to identify DASH traffic by taking into account statistical dependencies among video flows. For this purpose, we employ cluster analysis, which can identify groups of traffic flows that show similarity using only application-level information. In our work, we applied three unsupervised clustering algorithms, namely MinMax K-Means, OPTICS and AutoClass, to classify video traces obtained from an emulated environment. The experimental results show that the employed algorithms are able to effectively distinguish video flows generated by different play-out strategies. The classification performance depends on the network conditions and the parameters of the learning process.
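A minimal sketch of the flow-clustering idea, using ordinary scikit-learn KMeans and OPTICS as stand-ins for the paper's MinMax K-Means and AutoClass; the per-flow features and synthetic data are assumptions made for illustration.

```python
# Group flows by simple per-flow statistics with unsupervised clustering.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, OPTICS

rng = np.random.default_rng(0)
# Synthetic per-flow features: [mean segment size (kB), mean inter-request gap (s)]
dash_flows = rng.normal(loc=[800, 4.0], scale=[80, 0.5], size=(50, 2))
other_flows = rng.normal(loc=[60, 0.2], scale=[15, 0.05], size=(50, 2))
X = StandardScaler().fit_transform(np.vstack([dash_flows, other_flows]))

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
optics_labels = OPTICS(min_samples=5).fit_predict(X)

print(np.bincount(kmeans_labels))        # roughly two 50-flow clusters
print(len(set(optics_labels) - {-1}))    # number of clusters found (noise = -1)
```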

17.
Peer-to-peer (P2P) based content distribution systems have emerged as the main form of content distribution on the Internet, which can greatly reduce the distribution cost of content providers and improve overall system scalability. However, the mismatch between the overlay and underlay networks causes large volumes of redundant traffic, which intensifies the tension between P2P content providers and ISPs. Therefore, how to efficiently use network resources to reduce the traffic burden on the ISPs is crucial for the sustainable development of P2P systems. This paper surveys the state-of-the-art P2P traffic optimization technologies from three perspectives: P2P cache, locality-awareness and data scheduling. Technological details, comparisons between these technologies and their applicability are presented, followed by a discussion of the issues that remain to be addressed and the direction of future content distribution research.
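To illustrate the locality-awareness idea surveyed here, the toy ranking function below prefers peers in the requester's own AS so that less P2P traffic crosses inter-domain links; the AS numbers and the ranking rule are illustrative assumptions, not a specific system's policy.

```python
# Locality-aware peer selection: rank candidate peers so that peers in the
# requester's own ISP/AS come first, keeping traffic off inter-domain links.
def rank_peers(my_asn, candidates, want=3):
    """candidates: list of (peer_id, asn) tuples."""
    local_first = sorted(candidates, key=lambda p: 0 if p[1] == my_asn else 1)
    return local_first[:want]

peers = [("p1", 64512), ("p2", 64513), ("p3", 64512), ("p4", 64514), ("p5", 64512)]
print(rank_peers(64512, peers))   # [('p1', 64512), ('p3', 64512), ('p5', 64512)]
```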

18.
The Internet economics track will address how economic and policy issues relate to the emergence of the Internet as critical infrastructure. The authors provide a historical overview of internetworking, identifying key transitions that have contributed to the Internet's development and penetration. Its core architecture wasn't designed to serve as critical communications infrastructure for society; rather, the infrastructure developed far beyond the expectations of the original funding agencies, architects, developers, and early users. The incongruence between the Internet's underlying architecture and society's current use and expectations of it means we can no longer study Internet technology in isolation from the political and economic context in which it is deployed.

19.
Over the past several years, traditional carriers and Internet service providers (ISPs) have invested billions of dollars deploying high-speed, high-capacity IP networks. This expansion is intended to lay the foundation for a network that could accommodate exponential traffic growth and deliver new revenue-generating services. Traffic from advanced services incorporating elements such as on-demand video, packet voice, wireless communications, and peer-to-peer networking is expected to consume whatever capacity providers can offer while leading to increased opportunities for revenue growth. The advanced services traffic has yet to materialize. An unintentional consequence of this buildout, however, is that ISP networks possess a glut of capacity. At the same time, ISPs are under great pressure to reduce operational and infrastructure costs while attempting to make money and attract customers with new services. One way to achieve both goals is to carry all traffic over a single IP or multiprotocol label-switching (MPLS) network.

20.
By studying the topology of the Internet at the autonomous-system level, and taking into account factors related to the Internet's small-world character, including preferential attachment, random or preferential removal of nodes and links, nonlinear preferential attachment between nodes, external links brought in by new nodes, and internal links added within the existing network, this paper proposes an Internet topology model based on the small-world phenomenon (ITMSW). Comparison of simulation results with statistical data shows that the ITMSW model captures many properties of the Internet at the autonomous-system level, including its scale-free and small-world characteristics.
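One ingredient of this class of topology models is degree-based preferential attachment; the bare-bones growth step below is a generic sketch of that mechanism, not the ITMSW model itself.

```python
# Grow a graph one node at a time, attaching each new node to existing nodes
# with probability proportional to their current degree (preferential attachment).
import random

def grow(degrees, new_links=2):
    """Attach a new node to `new_links` distinct existing nodes, chosen with
    probability proportional to their current degree."""
    new_node = len(degrees)
    targets = set()
    while len(targets) < min(new_links, len(degrees)):
        targets.add(random.choices(range(len(degrees)), weights=degrees)[0])
    degrees.append(0)
    for t in targets:
        degrees[t] += 1
        degrees[new_node] += 1
    return degrees

random.seed(1)
deg = [1, 1]                  # seed: two nodes joined by one link
for _ in range(1000):
    deg = grow(deg)
print(max(deg), sorted(deg)[len(deg) // 2])   # a few large hubs vs. a small median degree
```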
