A total of 20 similar documents were found (search time: 15 ms)
1.
Communication demands have grown from separate data and voice to integrated multimedia, paving the way to converged fixed, mobile, and IP networks. Supporting multimedia is a challenging task for wireless ad hoc network designers: multimedia generates high-data-rate traffic with stringent Quality of Service (QoS) requirements, while wireless ad hoc networks are characterized by frequent topology changes, unreliable wireless channels, network congestion, and resource contention. Providing scalable QoS is therefore the most important challenge for multimedia delivery over ad hoc networks. We introduce a provisioning and routing architecture for ad hoc networks that scales well while provisioning QoS. The proposed architecture is analysed using a mix of HTTP, voice, and video-streaming applications over 54 Mbps 802.11g-based ad hoc networks, and is simulated and compared to well-known routing protocols using OPNET Modeler. The results show that our architecture scales well as network size increases and outperforms well-known routing protocols.
2.
Gold B.T. Kim J. Smolens J.C. Chung E.S. Liaskovitis V. Nurvitadhi E. Falsafi B. Hoe J.C. Nowatzyk A.G. 《Micro, IEEE》2005,25(6):51-59
Traditional techniques that mainframes use to increase reliability (special hardware or custom software) are incompatible with commodity server requirements. The Total Reliability Using Scalable Servers (TRUSS) architecture, developed at Carnegie Mellon, aims to bring reliability to commodity servers. TRUSS features a distributed shared-memory (DSM) multiprocessor that incorporates computation and memory-storage redundancy to detect and recover from any single point of transient or permanent failure. Because its underlying DSM architecture presents the familiar shared-memory programming model, TRUSS requires no changes to existing applications and only minor modifications to the operating system to support error recovery.
3.
Software Quality Journal - Monitoring is the key technology for knowing the status and the availability of the resources and services present in a cloud infrastructure. However, cloud...
4.
Al Davis 《LISP and Symbolic Computation》1992,5(1-2):7-47
The Mayfly is a scalable general-purpose parallel processing system being designed at HP Laboratories, in collaboration with colleagues at the University of Utah. The system is intended to efficiently support parallel variants of modern programming languages such as Lisp, Prolog, and Object Oriented Programming models. These languages impose a common requirement on the hardware platform to support dynamic system needs such as runtime type checking and dynamic storage management. The main programming language for the Mayfly is a concurrent dialect of Scheme. The system is based on a distributed-memory model, and communication between processing elements is supported by message passing. The initial prototype of Mayfly will consist of 19 identical processing elements interconnected in a hexagonal mesh structure. In order to achieve the goal of scalable performance, each processing element is a parallel processor as well, which permits the application code, runtime operating system, and communication to all run in parallel. A 7-processing-element subset of the prototype is presently operational. This paper describes the hardware architecture after a brief background synopsis of the software system structure.
5.
A new-generation architecture for IP routers called massive parallel forwarding and switching (MPFS) is proposed, which differs fundamentally from modern routers. The basic idea of MPFS is to map the complicated forwarding process onto a multilevel scalable switch fabric so as to implement packet forwarding in a pipelined and distributed way. This processing mechanism is named forwarding in switching (FIS). By interconnecting multi-stage, lower-speed components called forwarding and switching nodes (FSNs), MPFS achieves better scalability in forwarding and switching performance, much as MPP does for computation. We put emphasis on the IPv6 lookup problem in MPFS and propose a method for partitioning the IPv6 FIB and mapping it onto the switch fabric. Simulation and computation results suggest that MPFS routers can support line-speed forwarding with a million IPv6 prefixes at 40 Gbps. Finally, we propose an implementation of a 160 Tbps core router based on the MPFS architecture.
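The core mechanism here, a partitioned FIB mapped onto pipelined fabric stages, can be illustrated with a short sketch. Everything below (class names, the prefix-length strata, the refinement rule) is an assumption for illustration, not the paper's actual partitioning method: each FSN stage owns one band of prefix lengths, and a packet carries its best match so far across the stages, so a hit at a later stage is always longer than any earlier one.

```python
# Hypothetical sketch of forwarding in switching (FIS): the IPv6 FIB is
# partitioned by prefix length and each partition is assigned to one fabric
# stage, so longest-prefix matching proceeds as a pipeline across stages.

class FSNStage:
    """One fabric stage owning prefixes whose length falls in [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.table = {}  # (top bits of prefix, prefix length) -> next hop

    def add_route(self, prefix_bits, length, next_hop):
        assert self.lo <= length <= self.hi
        self.table[(prefix_bits, length)] = next_hop

    def refine(self, addr, best):
        # Longest length first; any hit here beats matches from earlier
        # stages, because this stage owns strictly longer prefixes.
        for length in range(self.hi, self.lo - 1, -1):
            hop = self.table.get((addr >> (128 - length), length))
            if hop is not None:
                return (length, hop)
        return best

def fis_lookup(stages, addr):
    best = (0, None)          # (match length, next hop) carried with the packet
    for stage in stages:      # in hardware, the stages work as a pipeline
        best = stage.refine(addr, best)
    return best[1]

# Example: a /32 route on the second stage wins over a /16 on the first.
stages = [FSNStage(1, 16), FSNStage(17, 32), FSNStage(33, 48)]
stages[0].add_route(0x2001, 16, "coarse")
stages[1].add_route(0x20010DB8, 32, "fine")
addr = 0x20010DB8 << 96     # an address under 2001:0db8::/32
print(fis_lookup(stages, addr))  # -> "fine"
```

Because each stage holds a disjoint, ascending band of prefix lengths, no stage ever needs to consult another stage's table, which is what allows the lookup to scale with fabric depth.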
6.
7.
8.
Kenji Yoshigoe 《Computer Communications》2009,32(4):740-749
A combined input and crosspoint queued (CICQ) switch is receiving significant attention as the next-generation high-speed packet switch because of its scalability; however, a multi-cabinet implementation of a CICQ switch unavoidably introduces a large round-trip time (RTT) latency between the line cards and the switch fabric, resulting in a large crosspoint (CP) buffer requirement. In this paper, virtual crosspoint queues (VCQs), which significantly reduce the CP buffer requirement of the CICQ switch, are investigated. The VCQ unit resides inside the switch fabric, is dynamically shared among virtual output queues (VOQs) from the same source port, and is operated at the line rate, making the implementation practical. A threshold-based exhaustive round-robin (T-ERR) arbitration is employed to reduce buffer hogging at the VCQs. The T-ERR at the VCQ and CP arbiters serves packets residing in a longer queue more frequently than packets residing in a shorter queue; consequently, the T-ERR drastically increases the throughput of the CICQ switch with small CP buffers. A multi-cabinet implementation of a CICQ switch does not support multicast traffic well, since the combination of small CP buffers in the switch fabric and a large RTT latency between the line cards and the switch fabric results in non-work-conservation of the intra-switch link; deploying a multicast FIFO buffer between the input buffer and the CP buffer shows promise. With its ability to achieve high throughput independent of RTT and switch port size, the integration of the VCQ architecture and the T-ERR scheduler into the CICQ switch is ideal for supporting ever-increasing Internet traffic that requires higher data rates, larger switch sizes, and efficient multicasting.
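A minimal sketch of how a threshold-based exhaustive round-robin arbiter of the kind described here might behave; the threshold value and the queue representation are illustrative assumptions, not the paper's design. The arbiter keeps serving ("exhausts") the current queue while it stays above the threshold, and otherwise advances round-robin, which is what makes longer queues receive service more often.

```python
from collections import deque

class TERRArbiter:
    """Threshold-based exhaustive round-robin over a set of queues (sketch)."""
    def __init__(self, num_queues, threshold=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.threshold = threshold
        self.current = 0  # round-robin pointer

    def enqueue(self, q, cell):
        self.queues[q].append(cell)

    def grant(self):
        """Pick the queue to serve this cell slot; return one cell or None."""
        # Exhaustive service: stay on the current queue while it is long.
        if len(self.queues[self.current]) > self.threshold:
            return self.queues[self.current].popleft()
        # Otherwise advance round-robin to the next non-empty queue
        # (the scan wraps all the way around, so the current queue is
        # still served when it is the only non-empty one).
        for step in range(1, len(self.queues) + 1):
            q = (self.current + step) % len(self.queues)
            if self.queues[q]:
                self.current = q
                return self.queues[q].popleft()
        return None  # all queues empty
```

The threshold bounds how long one queue can monopolize the arbiter, which is the "buffer hogging" control the abstract mentions.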
9.
Overlay networks create a structured virtual topology above the basic transport protocol level that facilitates deterministic search and guarantees convergence. Overlay networks are evolving into a critical component for self-organizing systems. Here we outline the differences between flooding-style and overlay networks, and offer specific examples of how researchers are applying the latter to problems requiring high-speed, self-organizing network topologies.
10.
Tian Min Wang Junjie Zhang Zanjun Du Wei Pan Jingshan Liu Tao 《The Journal of supercomputing》2022,78(9):11441-11463
The Journal of Supercomputing - Sparse LU factorization is essential for scientific and engineering simulations. In this work, we present swSuperLU, a highly scalable sparse direct solver on Sunway...
11.
Acacio M.E. Gonzalez J. Garcia J.M. Duato J. 《Parallel and Distributed Systems, IEEE Transactions on》2005,16(1):67-79
One important issue the designer of a scalable shared-memory multiprocessor must deal with is the amount of extra memory required to store the directory information. It is desirable that the directory memory overhead be kept as low as possible, and that it scales very slowly with the size of the machine. Unfortunately, current directory architectures provide scalability at the expense of performance. This work presents a scalable directory architecture that significantly reduces the size of the directory for large-scale configurations of a multiprocessor without degrading performance. First, we propose multilayer clustering as an effective approach to reduce the width of directory entries. Based on this concept, we derive three new compressed sharing codes, some of them with a space complexity of O(log₂(log₂(N))) for an N-node system. Then, we present a novel two-level directory architecture to eliminate the penalty caused by compressed directories in general. The proposed organization consists of a small full-map first-level directory (which provides precise information for the most recently referenced lines) and a compressed second-level directory (which provides in-excess information for all the lines). The proposals are evaluated based on extensive execution-driven simulations (using RSIM) of a 64-node cc-NUMA multiprocessor. Results demonstrate that a system with a two-level directory architecture achieves the same performance as a multiprocessor with a big and nonscalable full-map directory, with a very significant reduction of the memory overhead.
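The two-level organization can be sketched compactly. The coarse cluster-vector below stands in for the paper's compressed sharing codes, and the L1 eviction path is omitted; both are simplifying assumptions. The point being illustrated is that the first level answers precisely for recently referenced lines, while the second level answers with a superset ("in-excess" information) for everything else.

```python
# Sketch of a two-level coherence directory: precise full-map entries for a
# few hot lines, plus a compressed (over-approximate) code for every line.

N_NODES = 64
CLUSTER = 8  # nodes per cluster in the coarse second-level code

class TwoLevelDirectory:
    def __init__(self, l1_capacity=1024):
        self.l1 = {}              # line -> exact set of sharer node ids
        self.l1_capacity = l1_capacity
        self.l2 = {}              # line -> bitmask over node clusters

    def record_sharer(self, line, node):
        # A real design would also evict cold L1 entries, falling back on L2.
        if line in self.l1 or len(self.l1) < self.l1_capacity:
            self.l1.setdefault(line, set()).add(node)
        # The second level is always updated; it may only over-approximate.
        self.l2[line] = self.l2.get(line, 0) | (1 << (node // CLUSTER))

    def sharers(self, line):
        """Nodes that must receive an invalidation for this line."""
        if line in self.l1:                      # precise: no extra messages
            return sorted(self.l1[line])
        mask = self.l2.get(line, 0)              # in-excess: whole clusters
        return [n for n in range(N_NODES) if (mask >> (n // CLUSTER)) & 1]
```

An L2-only answer may invalidate a few nodes needlessly, but it is always safe (a superset of the true sharers), which is why the compressed level can be so small.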
12.
In this paper, an efficient spatially scalable video coding scheme with a two-layered architecture is proposed. In this architecture, the two spatial layers are referred to as a base layer and an enhancement layer. The base layer is coded to be compatible with the H.264 standard, and when coding the enhancement layer, a new inter-layer intra coding method (ILICM) is used to improve coding efficiency. ILICM uses a few specific pixels in the up-sampled, decoded base-layer block to predict the corresponding block in the enhancement layer when the original predictors are not available. In addition, to interpolate the base-layer data, a component-based up-sampling method (CUSM) is also introduced in this paper. Based on the human visual system, CUSM assigns a much simpler up-sampling filter to the chroma component, to which human eyes are less sensitive. The proposed ILICM and CUSM schemes are expected to increase the coding performance of the enhancement layer and to reduce the computational complexity of the decoder, respectively. Experimental results show that ILICM increases the PSNR of the luma component of encoded frames at no additional cost in coded bit-rate, while CUSM maintains coding performance under a theoretically significant reduction of computational complexity.
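The cost asymmetry that CUSM exploits can be illustrated with two stand-in filters; the paper's actual filter taps are not given here, so nearest-neighbour replication and a smoothed replication are assumptions chosen only to show the cheap-chroma/costly-luma split.

```python
import numpy as np

def upsample_chroma(plane):
    # Nearest neighbour: replicate each pixel, essentially free.
    return np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)

def upsample_luma(plane):
    # Costlier path: replicate, then smooth with a separable
    # [1/4, 1/2, 1/4] kernel, approximating bilinear interpolation.
    up = np.repeat(np.repeat(plane.astype(np.float32), 2, axis=0), 2, axis=1)
    k = np.array([0.25, 0.5, 0.25], dtype=np.float32)
    up = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, up)

# Toy planes: the luma path does two filtering passes the chroma path skips.
y = np.random.randint(0, 256, (36, 44))
u = np.random.randint(0, 256, (18, 22))  # chroma is subsampled to begin with
print(upsample_luma(y).shape, upsample_chroma(u).shape)  # (72, 88) (36, 44)
```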
13.
Peer-to-peer (P2P) streaming solutions are changing the way real-time multimedia transmission is distributed over the Internet. Advances in video coding, like Scalable Video Coding, also make high-quality, high-definition video distribution and consumption possible. The recent push toward using the standard HTTP protocol for streaming video on the World Wide Web is also bringing converged entertainment services closer to global deployment across TV, tablet, and mobile devices. The combination of scalable video, P2P networks, and Web technologies for transmitting high-quality live and time-shifted media content is a promising area of innovation and a very active research topic. This paper describes the architecture of a quality-centric P2P distribution network supporting adaptive and scalable streamable media that decouples the transport mechanisms from the media content type and structure (video, audio, timed data, timed text), irrespective of their encoding schemes. The set of P2P streaming protocols designed for the network enables streaming of live and on-demand media with very low signaling cost. Prototypes of the P2P network components were implemented and integrated into the streaming platform of the EU FP7 SARACEN Project. A suite of performance evaluation tests demonstrates that the solution keeps a fairly stable quality level with reduced amplitude and frequency of variations, raising the overall quality perceived by the end user.
14.
《Computer Networks》2007,51(9):2268-2279
The fast growth of Radio Frequency IDentification (RFID) implies a deployment challenge, namely how to keep this technology scalable without renouncing security and privacy features. This paper focuses on combining tag privacy and scalability using the hash locks mechanism. Our contribution is twofold: (i) a cell-based architecture is proposed in which the readers co-operate in order to conduct tag identification in a private and scalable way; (ii) a communication protocol for the proposed architecture is defined and assessed. The proposed architecture and protocol solve the scalability shortcomings of private RFID identification pointed out by Juels and Weis.
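A brief sketch of the randomized hash-lock identification this paper builds on, with the "cell" reduced to a plain list of candidate keys; the hash construction and message format are illustrative assumptions. The point is that restricting the brute-force search to the keys of tags currently in one reader's cell is what restores scalability.

```python
import hashlib, os

def H(*parts):
    # Illustrative stand-in for the scheme's hash function.
    return hashlib.sha256(b"|".join(parts)).digest()

def tag_response(key):
    """The tag never reveals its key: it answers (nonce, H(nonce || key))."""
    nonce = os.urandom(16)
    return nonce, H(nonce, key)

def identify(response, cell_keys):
    """Brute-force the hash lock, but only over keys of tags in this cell."""
    nonce, digest = response
    for key in cell_keys:      # O(cell size), not O(all tags in the system)
        if H(nonce, key) == digest:
            return key
    return None

cell_keys = [os.urandom(16) for _ in range(100)]  # tags known to be nearby
resp = tag_response(cell_keys[42])
assert identify(resp, cell_keys) == cell_keys[42]
```

Without the cell structure, the same search would have to scan every tag key the back end knows, which is exactly the scalability shortcoming attributed to Juels and Weis's analysis.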
15.
Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
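The power scaled coordinates (PSC) mentioned here can be sketched as a quadruple (x, y, z, s) denoting (x, y, z) * k^s; base 10 and the normalization and subtraction rules below are assumptions for illustration, not the paper's exact definitions.

```python
import math

K = 10.0  # assumed scaling base

def psc(x, y, z, s=0.0):
    """Normalize so the largest component's magnitude lies in [1, K)."""
    m = max(abs(x), abs(y), abs(z))
    if m == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    shift = math.floor(math.log(m, K))
    f = K ** (-shift)
    return (x * f, y * f, z * f, s + shift)

def psc_sub(a, b):
    """Difference b -> a (e.g., a camera-relative position), staying in PSC."""
    xa, ya, za, sa = a
    xb, yb, zb, sb = b
    s = max(sa, sb)                       # align both points to the larger scale
    fa, fb = K ** (sa - s), K ** (sb - s)
    return psc(xa * fa - xb * fb, ya * fa - yb * fb, za * fa - zb * fb, s)

earth_sun = psc(1.496e11, 0.0, 0.0)      # 1 AU in metres
nearby_star = psc(4.0e16, 3.0e16, 0.0)   # a few light years
print(psc_sub(nearby_star, earth_sun))   # still well-conditioned, scale ~16
```

Keeping the mantissa in a fixed range while the scale travels in the exponent field is what lets one pipeline span human scale to the observable Universe without precision collapse.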
16.
Javier Povedano-Molina Jose M. Lopez-Vega Juan M. Lopez-Soler Antonio Corradi Luca Foschini 《Future Generation Computer Systems》2013,29(8):2041-2056
One of the most important features in Cloud environments is to know the status and the availability of the physical resources and services present in the current infrastructure. Full knowledge and control of the current status of those resources enables Cloud administrators to design better Cloud provisioning strategies and to avoid SLA violations. However, it is not easy to manage such information in a reliable and scalable way, especially when we consider Cloud environments used and shared by several tenants and when we need to harmonize their different monitoring needs at different layers of the Cloud software stack. To cope with these issues, we propose the Distributed Architecture for Resource manaGement and mOnitoring in cloudS (DARGOS), a completely distributed and highly efficient Cloud monitoring architecture for disseminating resource monitoring information. DARGOS ensures accurate measurement of physical and virtual resources in the Cloud while keeping overhead low. In addition, DARGOS is flexible and adaptable, and allows new metrics to be defined and monitored easily. The proposed monitoring architecture and related tools have been integrated into a real Cloud deployment based on the OpenStack platform; they are openly available to the research community and include a Web-based customizable Cloud monitoring console. We report experimental results to assess our architecture and quantitatively compare it with a selection of other, similar Cloud monitoring systems, showing that DARGOS introduces a very limited and scalable monitoring overhead.
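As a rough illustration of the "new metrics defined and monitored easily" claim, the sketch below assumes a plug-in-style node agent; none of these names correspond to the actual DARGOS API, and the pub/sub fabric is reduced to a list of callbacks.

```python
import time, threading

class MetricAgent:
    """Hypothetical node-level agent: one sampling loop per defined metric."""
    def __init__(self):
        self.metrics = {}      # name -> (sample_fn, period in seconds)
        self.subscribers = []  # callables receiving (name, timestamp, value)

    def define_metric(self, name, sample_fn, period=5.0):
        self.metrics[name] = (sample_fn, period)

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _run_one(self, name, sample_fn, period):
        while True:
            value = sample_fn()
            for cb in self.subscribers:
                cb(name, time.time(), value)
            time.sleep(period)

    def start(self):
        for name, (fn, period) in self.metrics.items():
            threading.Thread(target=self._run_one, args=(name, fn, period),
                             daemon=True).start()

agent = MetricAgent()
# Linux-specific sample source, purely for the example.
agent.define_metric("load_avg",
                    lambda: open("/proc/loadavg").read().split()[0],
                    period=2.0)
agent.subscribe(lambda name, ts, v: print(f"{name}@{ts:.0f} = {v}"))
agent.start()
time.sleep(5)  # let a couple of samples flow before the demo exits
```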
17.
Gustavo Sanchez César Marcon Luciano Agostini 《Journal of Real-Time Image Processing》2017,13(1):71-83
This article presents a real-time scalable hardware architecture for the bipartition modes of the 3D high-efficiency video coding (3D-HEVC) standard, which include depth modeling modes 1 (DMM-1) and 4 (DMM-4). The DMM-1 algorithm was simplified by removing the refinement step. This simplification causes a small BD-rate increase (0.09 %) with the advantage of making better use of hardware resources, reducing the memory required for storing all DMM-1 wedgelet patterns by 30 %. The scalable architecture can be configured to support all the block sizes supported by 3D-HEVC and to reach different throughputs according to the application requirements, so the proposed solution can be used efficiently in several encoding scenarios and many different applications. Synthesis results for a test case show that the designed architecture is capable of processing HD 1080p video in real time; with other configurations, higher resolutions can also be processed.
18.
Now that system-on-chip technology is emerging, single-chip multi-processors are becoming feasible. A key problem of designing such systems is the complexity of their on-chip interconnects and memory architecture. It is furthermore unclear at what level software should be integrated. An example of a single-chip multi-processor for real-time (networked) embedded systems is the multi-microprocessor (MμP). Its architecture consists of a scalable number of identical master processors and a configurable set of shared co-processors. Additionally, an on-chip real-time operating system kernel is included to support transparent multi-tasking over the set of master processors. In this paper, we explore the main design issues of the architecture platform on which the MμP is based. In addition, synthesis results are presented for a lightweight configuration of this architecture platform.
19.
Ben Hamida Amal Koubaa Mohamed Nicolas Henri Amar Chokri Ben 《Multimedia Tools and Applications》2016,75(24):17187-17213
Multimedia Tools and Applications - Nowadays, colossal numbers of facilities all over the world are protected from various types of threats by video surveillance cameras. Video surveillance systems...
20.
Grid computing is a widely adopted paradigm for federating geographically distributed data centers. Due to their size and complexity, grid systems are often affected by failures that may hinder the correct and timely execution of jobs, causing a non-negligible waste of computing resources. Despite the relevance of the problem, state-of-the-art management solutions for grid systems usually neglect the identification and handling of failures at runtime. Among the primary goals to be considered, we claim the need for novel approaches capable of achieving scalable integration with efficient monitoring solutions and of fitting large, geographically distributed systems, where dynamic and configurable tradeoffs between overhead and targeted granularity are necessary. This paper proposes GAMESH, a Grid Architecture for scalable Monitoring and Enhanced dependable job ScHeduling. GAMESH is conceived as a completely distributed and highly efficient management infrastructure, concentrating on two crucial aspects of large-scale, multi-domain grid environments: (i) the scalable dissemination of monitoring data and (ii) the troubleshooting of job-execution failures. GAMESH has been implemented and tested in a real deployment encompassing geographically distributed data centers across Europe. Experimental results show that GAMESH (i) enables the collection of measurements of both computing resources and task-scheduling conditions at geographically sparse sites, while imposing a limited overhead on the entire infrastructure, and (ii) provides a failure-aware scheduler able to improve overall system performance, even in the presence of failures, by coordinating local job schedulers in multiple domains.
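The failure-aware scheduling result can be illustrated with an assumed dispatch rule that weighs monitored failure rates against load; the scoring formula below is invented for illustration and is not GAMESH's actual policy.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    queued_jobs: int
    cpus: int
    failures: int     # job failures seen by monitoring in the last window
    completions: int  # successful jobs in the same window

    def failure_rate(self):
        total = self.failures + self.completions
        return self.failures / total if total else 0.0

def pick_site(sites, retry_cost=3.0):
    """Rank sites by queue pressure, inflated by the expected cost of retries."""
    def expected_cost(s):
        wait = (s.queued_jobs + 1) / s.cpus
        return wait * (1.0 + retry_cost * s.failure_rate())
    return min(sites, key=expected_cost)

sites = [Site("it-bologna", 40, 64, failures=9, completions=41),
         Site("de-juelich", 60, 128, failures=1, completions=99)]
print(pick_site(sites).name)  # the lightly loaded, rarely failing site wins
```

Feeding such a rule with the disseminated monitoring data is the coupling between the paper's two contributions: the scheduler is only as failure-aware as the measurements it receives.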