Similar Documents
20 similar documents found (search time: 500 ms)
1.
Through several giant evolutionary steps, Ethernet has become an almost ubiquitous technology for communication. Being versatile enough to be employed in new and various fields of application, it is now making inroads in factories. However, automated systems differ from many other applications of Ethernet, first and foremost because they require the network technology to deliver real-time performance. In the present study, a number of critical aspects of Ethernet, usually referred to as Industrial Ethernet, are examined. More specifically, the focus is on the application-to-application delay and jitter characteristics of such networks when using Internet protocols such as UDP and TCP. It is demonstrated how important it is to take control of the latency in the station nodes, since the main communication delays occur inside the nodes, and different solutions are presented for controlling these delays. In particular, a priority-based protocol stack is assessed. The results show that real-time, Ethernet-based IP communication is now adequate even for demanding automated applications. In this paper, substation automation (power distribution) is used as an example of a demanding automation system.
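The node-internal prioritization this entry assesses can be illustrated with a toy transmit queue: real-time frames are dequeued before best-effort traffic, which bounds their in-node latency. The priority levels and frame labels below are illustrative assumptions, not details from the paper.

```python
import heapq

def drain_queue(frames):
    """frames: list of (priority, seq, label); a lower priority value is
    more urgent, seq breaks ties in arrival order.
    Returns the labels in transmission order."""
    heap = list(frames)
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, label = heapq.heappop(heap)
        order.append(label)
    return order

# Real-time frames (priority 0) overtake earlier-queued bulk frames (priority 1).
queue = [(1, 0, "bulk-0"), (0, 1, "rt-0"), (1, 2, "bulk-1"), (0, 3, "rt-1")]
print(drain_queue(queue))  # ['rt-0', 'rt-1', 'bulk-0', 'bulk-1']
```

With strict priority dequeueing, a real-time frame never waits behind more than the bulk frame already in transmission, which is the delay the paper argues must be controlled inside the station nodes.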

2.
Computation of Troposphere Slant Delays on a GPU
The computation of ray-traced troposphere delays, which can be utilized for space geodetic applications, is a time-consuming effort when a large number of rays has to be calculated. On the other hand, computation time can be reduced tremendously when algorithms are capable of supporting parallel processing architectures. Thus, by the use of an off-the-shelf graphics processing unit (GPU), it is demonstrated that troposphere slant delays can be computed very efficiently, without loss of accuracy. An adapted ray-tracing algorithm is presented, and results from GPU computations are compared with those obtained from calculations on a standard personal computer's CPU.
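Each slant ray is independent of the others, which is why the workload maps naturally onto GPU threads. As a minimal stand-in for the paper's full ray tracer, the sketch below maps a zenith delay to slant delays with the elementary 1/sin(elevation) mapping function; the zenith delay value is an assumption for illustration only.

```python
import math

def slant_delays(zenith_delay_m, elevations_deg):
    """Map one zenith delay (metres) to slant delays for a batch of
    elevation angles; each element is independent, hence parallelizable."""
    return [zenith_delay_m / math.sin(math.radians(e)) for e in elevations_deg]

# At 90 degrees the slant delay equals the zenith delay; lower elevations
# traverse a longer atmospheric path and accumulate more delay.
delays = slant_delays(2.3, [90.0, 30.0, 10.0])
```

On a GPU, the list comprehension above would become one thread per ray, which is the embarrassingly parallel structure the abstract exploits.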

3.
This article addresses the media-handling aspects of multimedia conferencing in multihop cellular networks (MCNs). Participation in multimedia conferences in MCNs may be confined to either the conventional cellular network or the MANET. Participation can also span both networks. However, neither the media-handling architectures currently in use in conventional cellular networks nor the ones in use in MANETs are appropriate. When they are used, either the participants in the conventional cellular network or the ones in the MANETs might experience long and potentially unacceptable end-to-end stream delays. Furthermore, these architectures are the antipodes of each other, and reconciling them is not an easy task. We evaluate the architectures currently in use in cellular networks and MANETs, show that none is suitable, and propose a new architecture that bridges the two worlds. The new architecture uses the MANET media-handling architectures as a starting point. It is based on media mediators. A media mediator is composed of two functional entities: the media gateway controller mediator and the media gateway mediator. A proof-of-concept prototype was implemented, and extensive simulations were conducted to evaluate performance.

4.
The performance of computation-intensive digital signal processing applications running on parallel systems is highly dependent on the communication delays imposed by the parallel architecture. In order to obtain a more compact task/processor assignment, a scheduling algorithm that considers the communication time between processors needs to be investigated. Such applications usually contain iterative or recursive segments that are modeled as communication-sensitive data flow graphs (CS-DFGs), where nodes represent computational tasks and edges represent dependencies between them. Based on the theorems derived, this paper presents a novel, efficient technique called cyclo-compaction scheduling, which is applied to a CS-DFG to obtain a better schedule. This new method takes into account the data transmission time, loop-carried dependencies, and the target architecture. It implicitly uses the retiming technique (loop pipelining) and a task remapping procedure to allocate processors and to iteratively improve the parallelism while handling the underlying communication and resource constraints. Experimental results on different architectures demonstrate that this algorithm yields significant improvement over existing methods. For some applications, the final schedule length is less than one half of its initial length.

5.
Timing analysis of network on chip architectures for MP-SoC platforms
Recently, the use of multiprocessor system-on-chip (MP-SoC) platforms has emerged as an important integrated circuit design trend for high-performance computing applications. As the number of reusable intellectual property (IP) blocks on such platforms continues to increase, many have argued that monolithic bus-based interconnect architectures will not be able to support the clock cycle requirements of these leading-edge SoCs. While hierarchical system integration using multiple smaller buses connected through repeaters or bridges offers possible solutions, such approaches tend to be ad hoc in nature and therefore lack generality and scalability. Instead, many different forms of network on chip (NoC) architectures have been proposed in the past few years to specifically address this problem. We believe that the NoC approach will ultimately be the preferred communication fabric for next-generation designs. To support this conjecture, this paper demonstrates, through detailed circuit design and timing analysis, that the different NoC architectures proposed to date are guaranteed to achieve the minimum possible clock cycle times in a given CMOS technology, usually specified in normalized units as 10-15 FO4 delays. This is contrasted with the bus-based approach, which may require several design iterations to deliver the same performance when the number of IP blocks connected to the bus exceeds certain limits.

6.
Advances in enabling technologies such as broadband wide area networks and the proliferation of the Internet have led to industry and home users looking beyond conventional communications media. Consequently, the telecommunications industry has been extending its application domain over recent years. Indeed, mediated communication has become a reasonably well established research area. Numerous modes of communication have been utilised successfully for various applications. From e-mail and text chat to videoconferencing systems, the use of mediated communication has become a part of daily life. Virtual environments (VEs) are highly configurable media, ranging from relatively basic to extremely elaborate architectures. At their most complex, they promise a seamless interface between real and synthesised environments. The potential for media-rich environments is seeded in the technology's capacity to faithfully represent the participants (both physically and dynamically) and for those participants to present and interact with (shared) data in an intuitive manner. It is the latter of these attributes which is highly relevant to communications and is discussed in this paper. The various configurations of virtual reality (VR) technology make matching the user to the technology an extremely complex task. Clearly, a basis for evaluating the effectiveness of these systems is required. Even though human factors (HF) evaluation and design techniques are well established in other human/computer interaction (HCI) fields, knowledge and understanding of virtual interfaces is limited. HF evaluation in VR is a complex subject and covers many aspects, such as basic human performance, cognition, and sensory capability. To address all these factors individually in an empirical fashion would demand a very long and expensive research programme. In addition, such studies may not predict the user's overall performance in a multi-modal VE.
This paper deals with an alternative approach to understanding the issues of human performance in virtual environments via a process of top-down systems engineering evaluation. It is designed to provide an introduction to the assessment of virtual environments and a reference for interface designers and researchers engaged in the investigation of mediated communication.

7.
Variable digital filters (VDFs) are useful for various signal processing and communication applications where the frequency characteristics, such as fractional delays and cutoff frequencies, can be varied online. In this paper, we investigate the design of VDFs with discrete coefficients as a means of achieving low complexity and efficient hardware implementation. The filter coefficients are expressed as the sum of signed power-of-two terms, with a restriction on the total number of power-of-two terms for the filter coefficients. An efficient design procedure is proposed that includes an improved method for handling the quantization of the VDF coefficients for both the min-max and the least-square criteria, leading to an optimum quantized solution. For the least-square criterion, a reduced search region around the optimum quantized solution is further constructed, and the branch and bound method, in conjunction with an efficient branch cutting scheme, is presented to search for an optimum solution in this reduced region.
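A signed power-of-two (SPT) coefficient replaces each multiplication with a few shifts and adds, which is the hardware saving this entry targets. The greedy residual method below is only an illustrative sketch of the representation itself; the paper finds the optimum quantized solution via branch and bound, which this does not reproduce.

```python
import math

def spt_quantize(x, num_terms, min_exp=-10):
    """Greedily approximate x as a sum of at most num_terms signed
    powers of two, with exponents no smaller than min_exp."""
    terms = []
    residual = x
    for _ in range(num_terms):
        if residual == 0:
            break
        # pick the power of two nearest the residual (in the log domain)
        exp = max(round(math.log2(abs(residual))), min_exp)
        term = math.copysign(2.0 ** exp, residual)
        terms.append(term)
        residual -= term
    return terms

# 0.40625 = 2^-1 - 2^-3 + 2^-5: three SPT terms represent it exactly.
print(spt_quantize(0.40625, 4))  # [0.5, -0.125, 0.03125]
```

In hardware, each term is a wire shift, so restricting the total number of SPT terms across the filter directly bounds the adder count.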

8.
This article evaluates the network and server infrastructure requirements to support the real-time flows associated with networked entertainment applications. These include the state information flow that updates the status of the virtual environment, and immersive communication flows such as voice, video, gesture, and haptics. The article demonstrates that scaling these applications to large geographical spreads of participants requires distribution of computation to meet the latency constraints of the applications. This latency-driven distribution of computation is essential even when there are no limitations on the availability of computational resources in one location. The article provides detailed results on distributed server architectures for two of these real-time flows: state information and immersive voice communication. It also identifies a generic set of requirements for the underlying network and server infrastructure to support these applications and proposes a new design, called switched overlay networks, for this purpose.

9.
The future of wires
Concern about the performance of wires in scaled technologies has led to research exploring other communication methods. This paper examines wire and gate delays as technologies migrate from 0.18-μm to 0.035-μm feature sizes to better understand the magnitude of the wiring problem. Wires that shorten in length as technologies scale have delays that either track gate delays or grow slowly relative to gate delays. This result is good news, since these “local” wires dominate chip wiring. Despite this scaling of local wire performance, computer-aided design (CAD) tools must still become more sophisticated in dealing with these wires. Under scaling, the total number of wires grows exponentially, so CAD tools will need to handle an ever-growing percentage of all the wires in order to keep designer workloads constant. Global wires present a more serious problem to designers. These are wires that do not scale in length, since they communicate signals across the chip. The delay of these wires will remain constant if repeaters are used, meaning that, relative to gate delays, their delays scale upwards. These increased delays for global communication will drive architectures toward modular designs with explicit global latency mechanisms.
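The scaling argument rests on a first-order RC model: an unrepeated wire's Elmore delay grows quadratically with length, while inserting repeaters at fixed intervals makes total delay linear in length. A minimal sketch, with purely illustrative per-unit resistance, capacitance, and repeater-delay values:

```python
def unrepeated_delay(r_per_mm, c_per_mm, length_mm):
    """Distributed-RC (Elmore) delay of a bare wire: 0.5 * r * c * L^2."""
    return 0.5 * r_per_mm * c_per_mm * length_mm ** 2

def repeated_delay(r_per_mm, c_per_mm, length_mm, seg_mm, t_repeater):
    """Delay with repeaters every seg_mm: n short segments plus n repeater
    delays, so total delay grows linearly with length."""
    n_segments = length_mm / seg_mm
    return n_segments * (unrepeated_delay(r_per_mm, c_per_mm, seg_mm) + t_repeater)

# Doubling the wire length quadruples the bare-wire delay
# but only doubles the repeated-wire delay.
print(unrepeated_delay(1.0, 1.0, 2.0), unrepeated_delay(1.0, 1.0, 4.0))
print(repeated_delay(1.0, 1.0, 2.0, 1.0, 0.1), repeated_delay(1.0, 1.0, 4.0, 1.0, 0.1))
```

This is why, as the abstract says, a repeated global wire has a delay that stays roughly constant in absolute terms while gate delays shrink, so its delay measured in gate delays scales upward.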

10.
The MIT Alewife Machine
A variety of models for parallel architectures, such as shared memory, message passing, and data flow, have converged in the recent past to a hybrid architecture form called distributed shared memory (DSM). Alewife, an early prototype of such DSM architectures, uses hybrid software and hardware mechanisms to support coherent shared memory, efficient user-level messaging, fine-grain synchronization, and latency tolerance. Alewife supports up to 512 processing nodes connected over a scalable and cost-effective mesh network at a constant cost per node. Four mechanisms combine to achieve Alewife's goals of scalability and programmability: software-extended coherent shared memory provides a global, linear address space; integrated message passing allows compiler and operating system designers to provide efficient communication and synchronization; support for fine-grain computation allows many processors to cooperate on small problem sizes; and latency tolerance mechanisms, including block multithreading and prefetching, mask unavoidable delays due to communication. Extensive results from microbenchmarks, together with over a dozen complete applications running on a 32-node prototype, demonstrate that integrating message passing with shared memory enables a cost-efficient solution to the cache coherence problem and provides a rich set of programming primitives. Our results further show that messaging and shared memory operations are both important, because each helps the programmer to achieve the best performance for various machine configurations.

11.
We evaluate the validity of the fundamental assumption behind application-specific programmable processors: that applications differ from each other in key, exploitable parameters, such as the available instruction-level parallelism (ILP), the demand on various hardware resources, and the desired mix of function units. Following the tradition of the CAD community, we develop an accurate chip area estimate and a set of aggressive hardware optimization algorithms. We follow the tradition of the architecture community by using comprehensive real-life benchmarks and production-quality tools. This combination enables us to build a unique framework for system-level synthesis and to gain valuable insights about the design and use of application-specific programmable processors for modern applications. We explore the application-specific programmable processor (ASSP) design space to understand the relationship between performance and area. The architecture model we used is the Hewlett Packard PA-RISC with single-level caches. The system, including all memory and bus latencies, is simulated, and no other specialized ALU or memory structures are used. The experimental results reveal a number of important characteristics of the ASSP design space. For example, we found that in most cases a single programmable architecture performs similarly to a set of architectures that are tuned to individual applications. A notable exception is highly cost-sensitive designs, which we observe need a small number of specialized architectures that require smaller areas. Also, it is clear that there is enough parallelism in typical media and communication applications to justify the use of a high number of function units. We found that the framework introduced in this paper can be very valuable in making early design decisions, such as the tradeoff between area and architectural configuration, the tradeoff between cache and issue width under an area constraint, and the choice of the number of branch units and issue width.

12.
Clock and data recovery (CDR) architectures used in high-speed wireline communication receivers are most often PLL- or DLL-based topologies. However, there are many other types of CDR architectures, such as phase-interpolator, oversampling, and injection-locked topologies. The best choice of CDR topology depends on the application and the specification requirements. This paper presents an overview and comparative study of the most commonly used CDR architectures. The analysis includes the circuit structures, design challenges, major performance limitations, and primary applications. Finally, the tradeoffs among the various CDR architectures are summarized.

13.
Multicast delivery has become more and more important in modern multimedia applications; VoD and videoconferencing are two examples. Multimedia integrates text, audio, video, and still images in a variety of applications. The data in these media can be time-critical in terms of maximum delay and delay jitter. In order to satisfy all these applications, the network needs an efficient multicasting mechanism that uses the true capability of ATM networks. In the naive solution, a separate connection can be set up from the source to each group node, also called full connectivity. Full connectivity needs O(N^2) connections, where N is the number of nodes in a group. Instead, we can have one tree spanning all the participants, and multicast using a single shared tree has become the trend. In this paper, we propose a bi-directional multipoint-to-multipoint multicast scheme for ATM networks, an SD channel-based Multicast with Round-robin Access (SDRAM), which uses a single tree for a multicast group consisting of multiple participants that are either senders, receivers, or a mix of both. We first discuss why the resequencer model is not suitable for multimedia traffic, then propose the SDRAM scheme to solve the problems, and finally compare our scheme with the resequencer model through simulation. Results show that the mean queuing delays and mean inter-PDU delays of our scheme are not sensitive to mean PDU size, while those of the resequencer scheme are very sensitive to mean PDU size.
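The O(N^2) claim can be made concrete by counting: full connectivity needs one point-to-point connection per ordered (sender, receiver) pair, while a single shared spanning tree over the same participants needs only N - 1 links.

```python
def full_mesh_connections(n):
    """Full connectivity: one unidirectional connection per ordered pair."""
    return n * (n - 1)

def shared_tree_links(n):
    """A single spanning tree over n participants has n - 1 links."""
    return n - 1

# 10 participants: 90 connections for full connectivity vs 9 tree links.
print(full_mesh_connections(10), shared_tree_links(10))
```

The gap widens quadratically with group size, which is the motivation for the shared-tree SDRAM scheme in this entry.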

14.
The main objective of this paper is to present the author's assessment of the impact that alternative network media and wiring strategies have on the life-cycle cost of a local area network (LAN). In terms of trends and issues related to media such as twisted pair, optical fiber, and coaxial cable, and to wiring strategies, the author sees the following: 1) a general feeling of confusion among many potential LAN customers; 2) among organizations which did choose a particular media and wiring strategy, such as broadband or baseband coaxial cable, an increased move toward multiple-media networks which can support various communication architectures; and 3) increasing management concern for LAN life-cycle costs. These subjects are addressed further in this paper.

15.
The use of network delay estimation for multimedia data retrieval
Multimedia data have specific temporal presentation requirements. For example, in video conferencing applications, the voice and images of the participants must be delivered and presented synchronously. These requirements can be achieved by scheduling or managing system resources. We present a technique called limited a priori (LAP) scheduling to manage the delivery channel from source to destination for digital multimedia data. By using delay estimation, a LAP scheduler can retrieve stored digital media spanning arbitrary networks with unspecified delays. The use of delay estimation also facilitates selective degradation of service in bandwidth- and buffer-limited situations. Such degradation enables the continuous real-time playout and synchronization of various media arriving from different sources. The performance of the LAP scheduler is described based on implementation and experimentation using Ethernet.
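A scheduler of this kind needs a running estimate of network delay. The exponentially weighted moving average below is the classic estimator for this (the same form used for TCP round-trip time smoothing); the smoothing factor and the sample values are assumptions for illustration, not taken from the paper.

```python
def update_estimate(estimate, sample, alpha=0.125):
    """Blend a new delay sample into the running estimate.
    Smaller alpha reacts more slowly but filters jitter better."""
    return (1 - alpha) * estimate + alpha * sample

estimate = 100.0  # initial delay estimate in milliseconds (assumed)
for sample in [110.0, 90.0, 120.0]:
    estimate = update_estimate(estimate, sample)
print(estimate)
```

The estimate moves only an eighth of the way toward each new sample, so one outlier measurement cannot destabilize the playout schedule derived from it.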

16.
The introduction of ATM is being propelled by the need for fast data communications in public and private networks. What is needed is a clear picture of the protocol architectures and traffic characteristics of the various applications to be supported both now and in the future. The authors discuss ATM network architectures with a special focus on the support of connectionless LAN interconnection and show which benefits can be obtained from the introduction of a connectionless server in an ATM network. They outline issues related to the use of existing protocols when ATM networks are introduced and show which protocol complexities have to be handled by gateways and servers, again using the example of connectionless LAN interconnection. Having presented network and protocol architectures, the issue of resource allocation in ATM data communication is treated.

17.
Recently, vehicular communication systems have attracted much attention, fueled largely by the growing interest in Intelligent Transportation Systems (ITS). These systems are aimed at addressing critical issues like passenger safety and traffic congestion by integrating information and communication technologies into transportation infrastructure and vehicles. They are built on top of self-organizing networks, known as Vehicular Ad hoc Networks (VANETs), composed of mobile vehicles connected by wireless links. While solutions based on traditional layered communication system architectures such as the OSI model are readily applicable, they often fail to address the fundamental problems in ad hoc networks, such as dynamic changes in the network topology. Furthermore, many ITS applications impose stringent QoS requirements which are not met by existing ad hoc networking solutions. The paradigm of cross-layer design has been introduced as an alternative to pure layered design for developing communication protocols. Cross-layer design allows information to be exchanged and shared across layer boundaries in order to enable efficient and robust protocols. Several research efforts have validated the importance of cross-layer design in vehicular networks. In this article, a survey of recent work on cross-layer communication solutions for VANETs is presented. Major approaches to cross-layer protocol design are introduced, followed by an overview of corresponding cross-layer protocols. Finally, open research problems in developing efficient cross-layer protocols for next-generation transportation systems are discussed.

18.
Decentralized processing algorithms are attractive alternatives to centralized algorithms for target tracking applications in smart sensor networks, since they provide the ability to scale, reduce vulnerability, reduce communication, and share processing responsibilities among individual nodes. Sharing the processing responsibilities allows parallel processing of raw data at the individual nodes. However, this introduces other difficulties in multi-modal smart sensor networks, such as non-observability of the targets' states at any individual node and various delays, such as varying processing delays, communication delays, and signal propagation delays for the different modalities. In this paper, we provide a novel algorithm to determine the initial probability distribution of multiple targets' states in a decentralized manner. The targets' state vectors consist of the targets' positions and velocities on the 2D plane. Our approach can determine the state vector distribution even if the individual sensors alone are not capable of observing it, and it can compensate for varying delays among the assorted modalities. The resulting distribution can be used to initialize various tracking algorithms. Our approach is based on Monte Carlo methods, where the state distributions are represented by a weighted set of discrete state realizations. A robust weighting strategy is formulated to account for missed detections, clutter, and estimation delays. To demonstrate the effectiveness of the algorithm, we simulate a network with direction-of-arrival nodes and range-Doppler nodes.
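The Monte Carlo representation this entry uses amounts to a weighted set of discrete state realizations (particles). The sketch below draws uniform-prior particles for one target's (x, y, vx, vy) state over an assumed surveillance area; the paper's robust weighting for missed detections and clutter is not reproduced here.

```python
import random

def make_particles(n, area=100.0, vmax=5.0, seed=0):
    """Return n (x, y, vx, vy) state realizations with uniform weights,
    i.e. an uninformative prior over position and velocity."""
    rng = random.Random(seed)
    particles = [(rng.uniform(0, area), rng.uniform(0, area),
                  rng.uniform(-vmax, vmax), rng.uniform(-vmax, vmax))
                 for _ in range(n)]
    weights = [1.0 / n] * n  # uniform weights before any measurements
    return particles, weights

particles, weights = make_particles(200)
```

Each node's measurements would then reweight (and eventually resample) this set; the normalized weights approximate the initial probability distribution handed to the tracker.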

19.
An optimization approach to the high-level synthesis of VLSI multichip architectures is presented in this paper. This research is important for industry, since it is well known that these early high-level decisions have the greatest impact on the final VLSI implementation. Optimal application-specific architectures are synthesized here to minimize latency given constraints on chip area, I/O pin count, and interchip communication delays. A mathematical integer programming (IP) model for simultaneously partitioning, scheduling, and allocating hardware (functional units, I/O pins, and interchip busses) is formulated. By exploiting the problem structure using polyhedral theory, the size of the search space is decreased, and a new variable selection strategy based on the branch and bound algorithm is introduced. Multichip optimal architectures for several examples are synthesized in practical CPU times. Execution times are comparable to previous heuristic approaches; however, there are significant improvements in the optimal schedules and allocations of multichips. This research breaks new ground by 1) simultaneously partitioning, scheduling, and allocating in practical CPU times, 2) guaranteeing globally optimal architectures for multichip systems for a specific objective function, and 3) supporting interchip communication delay, interchip bus allocation, and other complex interface constraints.

20.
Extensive research has been conducted on task scheduling and mapping on multi-processor systems on chip. The mapping strategy on a network on chip (NoC) has a huge effect on communication energy and performance. This paper proposes an efficient core mapping for NoC-based architectures, which focuses on energy-aware and reliability-aware mapping issues and allows new applications with insignificant inter-processor communication overhead to be added to the system. The methodology was assessed by applying it to various benchmark applications. Simulation results reveal that the proposed mapping algorithm greatly improves the reliability of the system and reduces the communication energy.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号