Similar Articles (20 results)
1.
The second-generation terrestrial TV broadcasting standard from the digital video broadcasting (DVB) project, DVB-T2, has recently been standardized. In this article we perform a complexity analysis of our software-defined implementation of the modulator/demodulator parts of a DVB-T2 transmitter and receiver. First we describe the various stages of a DVB-T2 modulator and demodulator, as well as how they have been implemented in our system. We then analyze the computational complexity of each signal processing block in order to identify the blocks that are not feasible to run in real time on a general-purpose processor. Furthermore, we discuss implementing these computationally heavy blocks on other architectures, such as GPUs (graphics processing units) and FPGAs (field-programmable gate arrays), which would still allow them to be implemented in software and thus remain easily reconfigurable.
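As a rough illustration of the kind of per-block operation-count estimate such a complexity analysis involves, the sketch below counts complex multiplications for the (I)FFT stage alone; the FFT size and symbol rate are assumed values chosen for illustration, not figures from the article.

```python
import math

def fft_complex_mults(n):
    """Approximate complex multiplications for a radix-2 FFT of size n."""
    return (n / 2) * math.log2(n)

fft_size = 8192            # assumed DVB-T2 FFT mode (8K), for illustration only
symbols_per_second = 1000  # assumed OFDM symbol rate, for illustration only
mults_per_second = fft_complex_mults(fft_size) * symbols_per_second
print(f"~{mults_per_second / 1e6:.1f} M complex multiplications/s for the FFT stage alone")
```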

2.
Decentralized detection in a network of wireless sensor nodes involves the fusion of information about a phenomenon of interest (PoI) from geographically dispersed nodes. In this paper, we investigate the problem of binary decentralized detection in a dense and randomly deployed wireless sensor network (WSN) in which the communication channels between the nodes and the fusion center are bandwidth-constrained. We consider a scenario in which sensor observations, conditioned on the alternate hypothesis, are independent but not identically distributed across the sensor nodes. We compare two different fusion architectures, namely the parallel fusion architecture (PFA) and the cooperative fusion architecture (CFA), for such bandwidth-constrained WSNs, where each sensor node is restricted to sending 1 bit of information to the fusion center. For each architecture, we derive an expression for the probability of decision error at the fusion center. We propose a consensus flooding protocol for CFA and analyze its average energy consumption. We analyze the effects of PoI intensity, realistic link models, the consensus flooding protocol, and network connectivity on the system reliability and average energy consumption of both fusion architectures. We demonstrate that a trade-off exists among spatial diversity gain, average energy consumption, delivery ratio of the consensus flooding protocol, network connectivity, node density, and PoI intensity in CFA. We then provide insight into the design of cooperative WSNs.
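To make the idea of a fusion-center decision-error probability concrete, the sketch below evaluates a simple 1-bit majority-vote rule with identically distributed sensor decisions; this is a simplified baseline for illustration only (the paper's setting has non-identically distributed observations and compares PFA against CFA), and the per-sensor detection/false-alarm probabilities and prior are assumed values.

```python
from math import comb

def majority_fusion_error(n, pd, pf, p1):
    """P(error) for a fusion center that declares H1 iff a majority of n sensors report 1."""
    k_min = n // 2 + 1  # votes needed to declare H1
    # P(declare H1 | H0): at least k_min sensors raise a false alarm
    p_fa = sum(comb(n, k) * pf**k * (1 - pf)**(n - k) for k in range(k_min, n + 1))
    # P(declare H0 | H1): fewer than k_min sensors detect the PoI
    p_md = sum(comb(n, k) * pd**k * (1 - pd)**(n - k) for k in range(k_min))
    return (1 - p1) * p_fa + p1 * p_md

# assumed illustrative values: 15 sensors, P_D = 0.8, P_F = 0.1, equal priors
print(majority_fusion_error(n=15, pd=0.8, pf=0.1, p1=0.5))
```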

3.
At the data-link layer, ATM offers a number of features, such as high bandwidth and per-session quality-of-service (QoS) guarantees, that make it particularly attractive to multimedia applications. Unfortunately, many of these features are not visible to applications because of the inadequacies of existing higher-level protocol architectures. Although there is considerable effort underway to tune these protocols for ATM networks, we believe that a new ATM-specific protocol stack is essential to effectively exploit all the benefits of ATM. We describe the semantics of such a protocol stack and discuss its advantages over traditional protocol architectures from the perspective of multimedia applications. The performance impact of the new protocol architecture is experimentally demonstrated on a video conferencing testbed built around IBM RS/6000 workstations equipped with prototype hardware for video/audio processing and connected via ATM links.

4.
We consider a general multiple-antenna network with multiple sources, multiple destinations, and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem, taking into account the processing capability of the relays (half-duplex or full-duplex) and the network geometry (clustered or nonclustered). We first study the multiple-antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of the decode-and-forward (DF) and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link are increased, whereas CF continues to perform optimally. We also study the multiple-antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can be significantly different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the ideal assumption of full-duplex relays and a clustered network, this virtual multiple-input multiple-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations, the same limitation remains in effect.
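For reference, the benchmark against which such virtual MIMO systems are measured is the well-known point-to-point MIMO DMT; the block below restates that standard result (it is not derived in the abstract), with M transmit antennas, N receive antennas, and multiplexing gain r.

```latex
% Standard point-to-point MIMO diversity-multiplexing tradeoff, quoted only as
% the benchmark a virtual MIMO / relay system is compared against.
d^{*}(r) \;\text{is the piecewise-linear function connecting the points}\;
\bigl(k,\,(M-k)(N-k)\bigr), \qquad k = 0, 1, \dots, \min(M, N),
\quad\text{so in particular } d^{*}(0) = MN .
```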

5.
6.
Domain-specific coarse-grained reconfigurable architectures (CGRAs) hold great promise for energy-efficient, flexible designs for a suite of applications. Designing such a reconfigurable device for an application domain is very challenging because the needs of different applications must be carefully balanced to achieve the targeted design goals, and many potential architectural options must be evaluated to select an optimal solution. Exploring the design space manually would be very time consuming and may not even be feasible for very large designs. Even mapping one algorithm onto a customized architecture can take minutes to hours, and running a full power simulation on a complete suite of benchmarks for various architectural options requires several days, so finding the optimal point in a design space could take a very long time. We have designed a framework/tool that makes such design space exploration (DSE) feasible. The resulting framework allows a family of algorithms and architectural options to be tested in minutes rather than days and enables rapid selection of architectural choices. In this paper, we describe our DSE framework for domain-specific reconfigurable computing, in which the needs of the application domain drive the construction of the device architecture. The framework has been developed to automate design space case studies, allowing application developers to explore architectural tradeoffs efficiently and reach solutions quickly. We selected some of the core signal processing benchmarks from the MediaBench benchmark suite and some edge-detection benchmarks from the image processing domain for our case studies. We describe two search algorithms: a stepped search algorithm motivated by our manual design studies and a more traditional gradient-based optimization. Approximate energy models are developed in each case to guide the search toward a minimal-energy solution. We validate our search results by comparing the architectural solutions selected by our tool to an architecture optimized manually and by performing sensitivity tests to evaluate the ability of our algorithms to find good-quality minima in the design space. All selected fabric architectures were synthesized on a 130 nm cell-based ASIC fabrication process from IBM. These architectures consume almost the same amount of energy on average, but the gradient-based approach is more general and promises to extend well to new problem domains. We expect these or similar heuristics, and the overall design flow of the system, to be useful for a wide range of architectures, including mesh-based and other commonly used CGRA architectures.

7.
Internet of Things (IoT) devices facilitate intelligent service delivery in a broad range of settings, such as smart offices, homes, and cities. However, existing IoT access control solutions are mainly based on conventional identity management schemes and centralized architectures, which have known security and privacy limitations, such as a single point of failure or surveillance (e.g., device tracking). Hence, in this paper, we present an architecture for capability-based IoT access control that utilizes the blockchain and decentralized identifiers to manage identity and access control for IoT devices. We then propose a protocol that provides a systematic view of system interactions and improves security. We also implement a proof-of-concept prototype of the proposed approach and evaluate it using a real-world use case. Our evaluation results show that the proposed solution is feasible, secure, and scalable.

8.
Cheat-Proof Playout for Centralized and Peer-to-Peer Gaming
We explore the exploits that make cheating possible in real-time multiplayer games for both client-server and serverless architectures. We offer the first formalization of cheating in online games and propose an initial set of strong solutions. We propose a protocol that has provable anti-cheating guarantees and is provably safe and live, but suffers a performance penalty. We then develop an extended version of this protocol, called asynchronous synchronization, which avoids the penalty, is serverless, offers provable anti-cheating guarantees, is robust in the presence of packet loss, and provides significantly increased communication performance. This technique is applicable to common game features as well as to clustering and cell-based techniques for massively multiplayer games. Specifically, we provide a zero-knowledge proof protocol with which players can verify that they are within a specific range of each other while otherwise having no notion of their distance. Our performance claims are backed by analysis using a simulation based on real game traces.
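As background on how lockstep-style protocols keep a player from choosing a move after seeing an opponent's, the sketch below shows a generic commit-and-reveal step; it is a simplified illustration only, not the paper's asynchronous synchronization protocol or its zero-knowledge range proof.

```python
# Generic commit-and-reveal turn step (illustrative sketch): a player commits to
# a move with a salted hash, and reveals the move only after all commitments for
# the turn have been exchanged, so changing the move afterwards is detectable.
import hashlib
import os

def commit(move: bytes):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + move).hexdigest()
    return digest, salt  # send digest now; keep (move, salt) for the reveal phase

def verify(digest: str, move: bytes, salt: bytes) -> bool:
    return hashlib.sha256(salt + move).hexdigest() == digest

c, salt = commit(b"move:north")
assert verify(c, b"move:north", salt)      # honest reveal checks out
assert not verify(c, b"move:south", salt)  # a move changed after committing is rejected
```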

9.
We propose computationally highly efficient Neyman-Pearson (NP) tests for anomaly detection over birth-death type discrete-time Markov chains. Instead of relying on extensive Monte Carlo simulations (as the baseline NP test does), we directly approximate the log-likelihood density to match the desired false alarm rate, and thereby obtain our efficient implementations. The proposed algorithms are appropriate for processing large-scale data in online applications that require real-time control of the false alarm rate. Since we do not require parameter tuning, our algorithms are also adaptive to non-stationarity in the data source. In our experiments, the proposed tests demonstrate superior detection power compared to the baseline NP test while nearly achieving the desired false alarm rates with negligible computational resources.
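For context, the generic Neyman-Pearson likelihood-ratio test that such algorithms implement is restated below; the paper's contribution lies in setting the threshold without Monte Carlo simulation, which this standard formulation does not capture.

```latex
% Generic Neyman-Pearson likelihood-ratio test at false alarm rate alpha.
\Lambda(x) \;=\; \frac{p\bigl(x \mid \mathcal{H}_1\bigr)}{p\bigl(x \mid \mathcal{H}_0\bigr)}
\;\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1}\; \eta,
\qquad
\Pr\bigl(\Lambda(x) \ge \eta \mid \mathcal{H}_0\bigr) \;=\; \alpha .
```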

10.
Next-generation mobile wireless systems (4G) support a wide range of communication protocols and services, thus posing new design challenges. The desired flexibility requires effective utilization of system resources. In this article, we introduce the concept of system scenarios into wireless baseband engine signal processing optimization and digital front-end power optimization. The scenario methodology classifies system behavior from a cost perspective and provides the information needed for effective system tuning. We propose improvements to the clustering of system executions into scenarios and to the detection of scenarios at run time, achieving a better trade-off between cost estimation accuracy and detection overhead. The first case study of the paper, using the WLAN communication protocol, demonstrates accurate prediction of the execution time of each block of bits, which on average is 92% shorter than the worst case, allowing us to use the remaining time to optimize specifications such as power consumption. In the second case study, we concentrate on efficient signal power management during a WLAN transmission, reducing the total energy consumption by 50–94% depending on the throughput utilization.

11.
Active networking in environments built to support link rates of up to several gigabits per second poses many challenges. One such challenge is that the memory bandwidth and individual processing power of a router's microprocessors limit its total available processing power. In this article we identify and describe three components that together promise a high-performance active network solution, one that implements the key features typical of active networking, such as automatic protocol deployment and application-specific processing, and that is suitable for a gigabit environment. First, we describe the hardware of the active network node (ANN), a scalable high-performance platform based on off-the-shelf CPUs connected to a gigabit ATM switch backplane. Second, we introduce the ANN's modular, extensible, and highly efficient operating system (NodeOS). Third, we describe an execution environment running on top of the NodeOS that implements a novel large-scale active networking architecture called distributed code caching.

12.
In this article, we study control plane and management approaches for translucent WDM networks. A hybrid optical control plane (OCP) is proposed, which requires extensions to both the routing and signaling protocols, to combine the best features of routing-based information updating and signaling-based data collection and path evaluation. Simulations are conducted to compare the hybrid OCP with two existing control architectures: signaling-based OCP and routing-based OCP. Numerical results show that the hybrid OCP exhibits a lower blocking probability than the other approaches, and also mitigates the stability and scalability problems under various traffic conditions.

13.
In this paper, a techno-economic study of several (optical) packet-switching node architectures is presented. Different architectures proposed in the IST-WASPNET and IST-LASAGNE projects, as well as a standard optical circuit-switching approach, are considered, and their economic impact is estimated by means of cost comparisons between the different technologies. The switching architectures all use optical fiber as the transport medium, but each of them uses a different technology to perform switching. Their cost is evaluated as a function of the most characteristic parameters of each technology. In the all-optical approaches, the main cost is related to the fiber assembly, whereas for electronic processing, the dominant cost lies in the optical-electronic-optical (OEO) conversions. The results show that the integration of optical components is crucial to making all-optical packet-switching nodes feasible.

14.
An Algorithm-Hardware-System Approach to VLIW Multimedia Processors
Very Long Instruction Word (VLIW) processor architectures for multimedia applications are discussed from an algorithm, hardware, and system point of view. VLIW processors offer high flexibility and processing power, as well as good utilization of resources by compiler-generated code, but their exclusive exploitation of instruction-level parallelism (ILP) becomes less efficient as the degree of parallelism increases. This is mainly caused by characteristics of multimedia algorithms, increasing wiring delays, compiler restrictions, and a widening gap between on-chip processing speed and available bandwidth to external memory. As new multimedia applications and standards continue to evolve (e.g., MPEG-4), the demand for higher processing power will continue to grow. Therefore, parallel processing in all its available forms will have to be exploited to achieve significant performance improvements. We show that, due to the diminishing returns from a further increase in ILP, multimedia applications will benefit more from an additional exploitation of parallelism at the thread level. We examine how simultaneous multithreading (SMT), a novel architectural approach combining VLIW techniques with parallel processing of threads, can be used efficiently to further increase the performance of typical multimedia workloads.

15.
A reservation-based multicast protocol for WDM optical star networks
In this paper, we present a reservation-based medium access control (MAC) protocol with multicast support for wavelength-division multiplexing networks. Our system is based on the single-hop, passive optical star architecture. Of the available wavelengths (channels), one is designated as a control channel, and the remaining channels are used for data transmission. Each node is equipped with a pair of fixed transceivers to access the control channel, and a fixed transmitter and a tunable receiver to access the data channels. For easy implementation of the protocol in hardware and for precisely computing the protocol's processing overhead, we give a register-transfer model of the protocol. We simulate the protocol to study its throughput behavior and present an analytic model of it. To allow a node to send data packets in successive data slots with no time gap between them, even when the protocol's execution time is longer than the data transmission time, we propose using multiple MAC units at each node. The unicast throughput of our protocol reaches the theoretical maximum for MAC protocols with distributed control, and the multicast throughput is at least as good as, and often better than, that delivered by existing MAC protocols with distributed control.

16.
The next-generation optical transport network will evolve from point-to-point connectivity to mesh networking, which can provide fast and automatic provisioning with enhanced flexibility and survivability. Signaling is used to support connection setup, maintenance, and teardown in such a network. In this paper, we study the performance of two hop-by-hop signaling schemes and one parallel signaling scheme in wavelength-routed optical mesh networks. Based on the sequence between optical crossconnect (OXC) switching and signaling message processing, we classify hop-by-hop signaling into two types that comply with the requirements of GMPLS signaling protocols: forward before switching configuration (FBSC) and forward after switching configuration (FASC). We also propose a parallel signaling scheme that differs from the existing hop-by-hop GMPLS signaling protocols. Considering OXC architectures and traffic patterns, we compare the FBSC, FASC, and parallel signaling schemes through simulation experiments, in terms of network blocking probability and reservation time. The simulation data reveal that the performance of a signaling scheme depends on the nature of the signaling as well as on the network setting (e.g., the OXC architecture and traffic pattern). We analyze the reasons for this result and discuss the tradeoffs between these signaling schemes. This work offers some insight into designing an efficient signaling protocol for wavelength-routed optical mesh networks.

17.
In this paper, we present architectures and scheduling algorithms for encoders and decoders based on the two-dimensional discrete wavelet transform. We consider the design of encoders and decoders individually, as well as in an integrated encoder-decoder system. We propose architectures ranging from single-instruction multiple-data (SIMD) processor arrays to folded architectures that are suitable for single-chip implementations. The scheduling algorithms for the folded architectures range from those that try to minimize latency to those that try to minimize storage and keep the data flow regular. We include a comparison of the performance of these algorithms to aid the designer in choosing the one best suited for a specific application.
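The underlying computation that such encoder/decoder architectures map into hardware is the separable row-column 2-D DWT; the sketch below is a plain software reference using Haar filters (an assumed filter choice for illustration), not a model of the proposed architectures.

```python
# One level of a separable 2-D DWT (Haar filters), showing the row-column
# structure that SIMD or folded hardware datapaths implement.
import numpy as np

def haar_1d(x):
    """One level of the 1-D Haar transform along the last axis (even length assumed)."""
    even, odd = x[..., 0::2], x[..., 1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass (scaling) coefficients
    detail = (even - odd) / np.sqrt(2)   # high-pass (wavelet) coefficients
    return approx, detail

def dwt2_level(image):
    """One 2-D DWT level: filter rows, then columns, yielding LL, LH, HL, HH subbands."""
    lo_r, hi_r = haar_1d(image)                    # transform rows
    ll, lh = haar_1d(lo_r.swapaxes(-1, -2))        # transform columns of the row-lowpass
    hl, hh = haar_1d(hi_r.swapaxes(-1, -2))        # transform columns of the row-highpass
    return (ll.swapaxes(-1, -2), lh.swapaxes(-1, -2),
            hl.swapaxes(-1, -2), hh.swapaxes(-1, -2))

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = dwt2_level(img)
print(ll.shape, lh.shape, hl.shape, hh.shape)  # each subband is 4x4
```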

18.
Cellular telephony networks depend on an extensive wired network to provide access to the radio link. This wired network, called a radio access network (RAN), provides functions such as power control and, in CDMA networks, combination of soft handoff legs (also known as macrodiversity resolution) that require coordination between multiple radio base stations and multiple mobile terminals. Existing RAN architectures for cellular systems are based on a centralized radio network controller (RNC) connected by point-to-point links to the radio base transceiver stations. The existing architecture has a single point of failure if the RNC fails, and is difficult to expand because adding an RNC is expensive. Also, although a network operator may have multiple radio link protocols available, most RAN architectures treat each protocol separately and require a separate RAN control protocol for each. We describe a new architecture, the OpenRAN architecture, based on a distributed processing model with a routed IP network as the underlying transport fabric. OpenRAN was developed by the Mobile Wireless Internet Forum's IP in the RAN working group. The OpenRAN architecture applies to the radio access network principles that have been successful in reducing cost and increasing reliability in data communications networks. The result is an architecture that can serve as the basis for an integrated next-generation cellular radio access network.

19.
The use of parallelism to enhance protocol execution performance is examined; in particular, the use of a separate concurrent task for each protocol layer is discussed. Two layers of the OSI protocol stack were implemented and run on a multiprocessor, with from one to five processors at each end of the connection. Potentially concurrent entities included user tasks acting as data source and sink, the OSI session layer (kernel functional unit only), the OSI transport layer (classes 0 and 2), vestigial network tasks, and tasks to buffer data between layers. Three substantially different design architectures, with nine to twelve tasks at each end of the connection, were compared. The design differences centered on different ways to provide interlayer coordination and buffering; the protocol code was kept identical. The implementation used a real-time kernel that provides synchronous (request-reply) interprocess communication. The variation in throughput between designs covered a range of approximately two to one. The best design is found to be a symmetrical, decentralized two-way pipeline with courier communication.
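A minimal sketch of the layer-per-task structure being compared is shown below (a toy Python illustration, not the OSI session/transport implementation or real-time kernel used in the experiments): each layer runs as its own concurrent task and forwards data through a bounded queue, forming one leg of the pipeline.

```python
import queue
import threading

def layer(name, inbox, outbox):
    """One protocol layer as a concurrent task: read a PDU, 'process' it, pass it on."""
    while True:
        pdu = inbox.get()
        if pdu is None:                  # shutdown marker propagates down the pipeline
            if outbox is not None:
                outbox.put(None)
            break
        processed = f"{name}({pdu})"     # stand-in for real per-layer protocol processing
        if outbox is not None:
            outbox.put(processed)
        else:
            print("delivered:", processed)

q_session, q_transport, q_network = queue.Queue(4), queue.Queue(4), queue.Queue(4)
tasks = [threading.Thread(target=layer, args=args) for args in
         (("session", q_session, q_transport),
          ("transport", q_transport, q_network),
          ("network", q_network, None))]
for t in tasks:
    t.start()
for i in range(3):                       # user task acting as the data source
    q_session.put(f"sdu{i}")
q_session.put(None)
for t in tasks:
    t.join()
```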

20.
Fading relay channels: performance limits and space-time signal design
Cooperative diversity is a transmission technique in which multiple terminals pool their resources to form a virtual antenna array that realizes spatial diversity gain in a distributed fashion. In this paper, we examine the basic building block of cooperative diversity systems: a simple fading relay channel where the source, destination, and relay terminals are each equipped with single-antenna transceivers. We consider three different time-division multiple-access-based cooperative protocols that vary the degree of broadcasting and receive collision. The relay terminal operates in either the amplify-and-forward (AF) or decode-and-forward (DF) mode. For each protocol, we study the ergodic and outage capacity behavior (assuming Gaussian code books) under the AF and DF modes of relaying. We analyze the spatial diversity performance of the various protocols and find that full spatial diversity (second-order in this case) is achieved by certain protocols provided that appropriate power control is employed. Our analysis unifies previous results reported in the literature and establishes the superiority (from both a capacity and a diversity point of view) of a new protocol proposed in this paper. The second part of the paper is devoted to (distributed) space-time code design for fading relay channels operating in the AF mode. We show that the corresponding code design criteria consist of the traditional rank and determinant criteria for the case of colocated antennas, as well as appropriate power control rules. Consequently, space-time codes designed for colocated multiantenna channels can be used to realize cooperative diversity, provided that appropriate power control is employed.
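For orientation, a generic two-slot amplify-and-forward signal model of the kind such protocols build on is sketched below; the notation is illustrative rather than taken from the paper, and the relay's transmit-power constraint fixes the amplification factor.

```latex
% Generic two-slot amplify-and-forward relay model (illustrative notation):
% slot 1, the source broadcasts; slot 2, the relay scales and forwards.
\text{slot 1:}\quad y_{r} = h_{sr}\,x + n_{r}, \qquad y_{d,1} = h_{sd}\,x + n_{d,1}
\\
\text{slot 2:}\quad y_{d,2} = h_{rd}\,\beta\,y_{r} + n_{d,2},
\qquad
\beta = \sqrt{\frac{P_{r}}{|h_{sr}|^{2} P_{s} + N_{0}}}
```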
