Similar Documents
 (20 results)
1.
This paper proposes a transitional actor model that bridges legacy code and decidable dataflow. By combining the Kahn process network (KPN) model with the decidable dataflow (DCDF) model, the proposed actor model supports dynamic behavior and top-down design together with static analyses such as deadlock detection and buffer memory size computation. In the proposed model, each port can have its own model of computation (either KPN or DCDF), which distinguishes it from existing actor-based models; hence it is called the port-based actor (PBA) model. A port group is introduced to specify KPN ports that are internally related. The PBA model generalizes constant-rate dataflow with intermediate ports (CRDF-IP), in which an actor can consume and produce samples an arbitrary number of times per execution through its intermediate ports. Decomposing a PBA graph into DCDF graphs allows the static analysis, scheduling, and code generation methods developed for the DCDF model to be applied. The paper presents formal definitions and static analysis for the PBA model, together with scheduling and efficient code generation methods. To validate the model, the PBA model has been implemented, and an H.263 video encoder algorithm has been specified and synthesized in it.

2.
This article deals with the ETSI computation model, the E-model, for evaluating speech communication quality in telephone networks. It discusses why such models are of interest for the transmission planning of modern networks, and the subjective tests and customer surveys that must form the basis for the computation algorithms. A survey is then given of the models described in the ITU's documents. The structure of the new E-model and the considerations taken in its development and verification are also discussed. Results from applying the E-model to typical connections agree well with results from other models and from published subjective tests. In particular, impairments caused by low-bit-rate codecs can be predicted quite well by the model, better than by the previously used methodology of "quantizing distortion units." The E-model has also been used in the process of updating some of the ITU-T G-series Recommendations on transmission planning.
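For orientation, the E-model aggregates impairments additively into a transmission rating factor R, which is then mapped to a mean opinion score (MOS). The sketch below follows the ITU-T G.107 structure with illustrative inputs; the Ie and Id values are assumptions, not the article's calibrated data.

```python
def e_model_r(ro=93.2, i_s=0.0, i_d=0.0, i_e=0.0, a=0.0):
    """Simplified E-model rating: R = Ro - Is - Id - Ie + A (ITU-T G.107 structure).
    Ro: basic signal-to-noise ratio; Is: simultaneous impairments; Id: delay
    impairments; Ie: equipment impairment (e.g. a low-bit-rate codec); A:
    advantage factor. Ro = 93.2 is the G.107 default with nominal parameters."""
    return ro - i_s - i_d - i_e + a

def r_to_mos(r):
    """Standard R-to-MOS conversion (G.107 Annex B)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Illustrative connection: an assumed codec impairment Ie = 11 and delay
# impairment Id = 8 (hypothetical values).
r = e_model_r(i_e=11, i_d=8)
print(f"R = {r:.1f}, MOS = {r_to_mos(r):.2f}")   # R = 74.2, MOS ≈ 3.79
```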

3.
The deployment of new network architectures, services, and protocols is often manual, ad hoc, and time-consuming. We introduce "spawning networks," a new class of programmable networks that automate the life cycle process for the creation, deployment, and management of network architectures. These networks are capable of spawning distinct "child" virtual networks with their own transport, control, and management systems; a child network operates on a subset of its "parent's" network resources and in isolation from other spawned networks. Spawned child networks represent programmable virtual networks and support controlled access for communities of users with specific connectivity, security, and quality of service requirements. In this article we present a framework for the realization of spawning networks based on the notion of the Genesis Kernel, a virtual network operating system capable of creating distinct virtual network architectures on the fly. We discuss the motivation and principles that underpin spawning networks and focus on the design of the transport, programming, and life cycle environments, which comprise the main architectural components of the Genesis Kernel.

4.
Parameterized dataflow modeling for DSP systems
Dataflow has proven to be an attractive computation model for programming digital signal processing (DSP) applications. A restricted version of dataflow, termed synchronous dataflow (SDF), that offers strong compile-time predictability properties but has limited expressive power, has been studied extensively in the DSP context. Many extensions to synchronous dataflow have been proposed to increase its expressivity while maintaining its compile-time predictability properties as much as possible. We propose a parameterized dataflow framework that can be applied as a meta-modeling technique to significantly improve the expressive power of any dataflow model that possesses a well-defined concept of a graph iteration. Indeed, the parameterized dataflow framework is compatible with many of the existing dataflow models for DSP, including SDF, cyclo-static dataflow, scalable synchronous dataflow, and Boolean dataflow. In this paper, we develop precise, formal semantics for parameterized synchronous dataflow (PSDF), the application of our parameterized modeling framework to SDF, which allows data-dependent, dynamic DSP systems to be modeled in a natural and intuitive fashion. Through our development of PSDF, we demonstrate that desirable properties of a DSP modeling environment, such as dynamic reconfigurability and design reuse, emerge as inherent characteristics of our parameterized framework. An example of a speech compression application is used to illustrate the efficacy of the PSDF approach and its amenability to efficient software synthesis techniques. In addition, we illustrate the generality of our parameterized framework by discussing its application to cyclo-static dataflow, a popular alternative to the SDF model.
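As background to SDF's compile-time predictability: a consistent SDF graph has a smallest integer repetitions vector satisfying the balance equations, which is the first thing a static scheduler computes. A minimal sketch on a hypothetical three-actor chain (the rates below are our own example, not from the paper):

```python
from fractions import Fraction
from math import gcd, lcm

# Edges of a toy SDF graph: (src, dst, tokens produced per src firing,
# tokens consumed per dst firing).
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
actors = ["A", "B", "C"]

# Balance equations: q[src] * produced == q[dst] * consumed on every edge.
# Fix q of the first actor to 1 and propagate rational ratios along edges
# (sufficient for a connected graph).
q = {actors[0]: Fraction(1)}
changed = True
while changed:
    changed = False
    for src, dst, p, c in edges:
        if src in q and dst not in q:
            q[dst], changed = q[src] * p / c, True
        elif dst in q and src not in q:
            q[src], changed = q[dst] * c / p, True

# Consistency check: an inconsistent graph has no bounded-memory schedule.
assert all(q[s] * p == q[d] * c for s, d, p, c in edges), "inconsistent SDF graph"

# Scale to the smallest positive integer repetitions vector.
scale = lcm(*(f.denominator for f in q.values()))
reps = {a: int(f * scale) for a, f in q.items()}
g = gcd(*reps.values())
reps = {a: r // g for a, r in reps.items()}
print(reps)   # {'A': 3, 'B': 2, 'C': 1}
```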

5.
In this paper, an internal design model called FunState (functions driven by state machines) is presented that enables the representation of different types of system components and scheduling mechanisms using a mixture of functional programming and state machines. It is shown how properties relevant for scheduling and verification of specification models such as Boolean dataflow, cyclo-static dataflow, synchronous dataflow, marked graphs, and communicating state machines, as well as Petri nets, can be represented in the FunState model of computation. Examples of methods suited to FunState, such as scheduling and verification, are described. They are based on representing the model's state transitions in the form of a periodic graph. The feasibility of the approach is shown with an asynchronous transfer mode (ATM) switch example.
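To make the "functions driven by state machines" idea concrete, here is a hedged miniature in Python (the class and names are ours, not the paper's): a component's state machine fires a transition only when its guard over queue fill levels holds, and the transition's action is a dataflow function.

```python
from collections import deque

class FunStateLikeComponent:
    """Toy FunState-style component: transitions are (src_state, guard, action,
    dst_state); a transition fires only if its guard over the queues holds."""
    def __init__(self, state, transitions):
        self.state = state
        self.transitions = transitions

    def step(self, queues):
        for src, guard, action, dst in self.transitions:
            if self.state == src and guard(queues):
                action(queues)
                self.state = dst
                return True
        return False   # no transition enabled

q_in, q_out = deque([1, 2, 3]), deque()
comp = FunStateLikeComponent("idle", [
    # Guard: at least one input token; action: a dataflow function.
    ("idle", lambda q: len(q["in"]) >= 1,
     lambda q: q["out"].append(10 * q["in"].popleft()), "idle"),
])
while comp.step({"in": q_in, "out": q_out}):
    pass
print(list(q_out))   # [10, 20, 30]
```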

6.
In the 1970s, Baskett, Chandy, Muntz and Palacios, Kelly, and others generalized the earlier results of Jackson (1957) and obtained explicit solutions for the steady-state distributions of some restricted queueing networks. These queueing networks are called "product-form networks," due to the structure of their explicit solutions. The class of such tractable networks is quite small, however. For example, if customers require different mean service times on different revisits to the same server, or if customers on a later visit are given higher priority, then very little is known about whether the network is even stable, or what form the steady-state distribution takes if it exists. Recently, new methods have been developed for establishing the stability of a system and for obtaining bounds on key performance measures such as mean delay, mean number in system, and mean throughput. Since they are based on the well-developed computational tool of linear programming, these methods can be widely employed in diverse applications in communication networks, computer systems, and manufacturing systems. We provide a tutorial exposition of some of these recent developments.
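For context, the classical product-form result the tutorial starts from: solve the traffic equations for the per-queue arrival rates, and when every utilization is below one, the open Jackson network's steady state factors into independent M/M/1 terms. A sketch with made-up rates:

```python
import numpy as np

# Open Jackson network (illustrative numbers): external arrival rates gamma,
# routing matrix P (P[i, j] = probability of going from queue i to queue j),
# and service rates mu.
gamma = np.array([1.0, 0.5, 0.0])
P = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])
mu = np.array([4.0, 3.0, 3.5])

# Traffic equations: lam = gamma + P^T lam  =>  (I - P^T) lam = gamma.
lam = np.linalg.solve(np.eye(len(gamma)) - P.T, gamma)
rho = lam / mu
assert (rho < 1).all(), "some queue is unstable"

# Product form: pi(n_1,...,n_K) = prod_i (1 - rho_i) * rho_i**n_i, so each
# queue behaves like an independent M/M/1; mean number in queue i:
L = rho / (1 - rho)
print(lam.round(3), rho.round(3), L.round(3))
```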

7.
Overview of the MPEG Reconfigurable Video Coding Framework
Video coding technology has evolved over the last 20 years, producing a variety of different and complex algorithms and coding standards. So far, such standards, and the algorithms that build them, have been specified case by case, providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to providing a specification formalism that explicitly exposes components common to different standards and the incremental modifications of such monolithic standards. The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard, currently in its final stage of standardization, that aims to provide video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard, or a completely new configuration that may better satisfy application-specific constraints, by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and tools for describing the bitstream syntaxes and parsers of such new codecs. The RVC framework is based on a new actor/dataflow-oriented language called CAL for the specification of the standard library and the instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies constituting the standard RVC framework, as well as the nonstandard tools supporting the RVC model, from instantiation and simulation of the CAL model to software and/or hardware code synthesis.

8.
Supporting packet-data QoS in next generation cellular networks
In the past few years, the Internet has grown beyond anyone's reasonable imagination into a universal communication platform. At the same time, cellular networks, with their ability to reach a person "anywhere, anytime," have grown impressively as well. The combination of mobile networks and the Internet into the so-called "mobile Internet" thus promises to be an important technology area. The indications are clear: cellular networks are rapidly adopting suitable network models for supporting packet data services. A key component of this packet data service model is quality of service (QoS), which is crucial for supporting the disparate services envisioned in future cellular networks. We describe the packet-data QoS architecture and the specific mechanisms being defined for multi-service QoS provisioning in the Universal Mobile Telecommunications System (UMTS).

9.
Typical design flows supporting software development for multiprocessor systems are based on a board support package and high-level programming interfaces. These software design flows fail to support critical design activities such as design space exploration and software synthesis. One can observe, however, that design flows based on a formal model of computation can overcome these limitations. In this article, we analyze the major challenges in multiprocessor software development and present a taxonomy of software design flows based on this analysis. We then focus on design flows based on the Kahn process network (KPN) model of computation and elaborate on the corresponding design automation techniques. We argue that the productivity of software developers and the quality of designs could be considerably increased by making use of these techniques.
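For readers new to the KPN model of computation the article builds on, the key property is determinism: processes communicate only through blocking reads on FIFO channels, so the streams produced are independent of thread scheduling. A tiny illustrative sketch (our own, not from the article):

```python
import queue
import threading

# Kahn-style processes: each communicates only via FIFO channels and blocks
# on reads, so the output stream is the same under every interleaving.
def producer(out):
    for i in range(5):
        out.put(i)

def doubler(inp, out):
    for _ in range(5):
        out.put(2 * inp.get())   # blocking read: waits for a token

a, b = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([b.get() for _ in range(5)])   # always [0, 2, 4, 6, 8]
```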

10.
Spectral compression is a necessary function in optical communication applications such as noise reduction and the use of bandwidth-limited devices. Ideal duobinary modulation, which allows the spectrum bandwidth to be reduced, requires a phase shift of π rad between the "-1" and "+1" logical levels. We show that with a reasonable finite extinction ratio of 10 dB, a phase shift as low as 0.18π can be used, with a resulting spectral compression ratio of nearly 2.

11.
Large-grain synchronous dataflow graphs, or multi-rate graphs, have the distinct feature that the nodes of the dataflow graph fire at different rates. Such multi-rate large-grain dataflow graphs have been widely regarded as a powerful programming model for DSP applications. In this paper we propose a method to minimize the buffer storage requirement in constructing rate-optimal compile-time (MBRO) schedules for multi-rate dataflow graphs. We demonstrate that the constraints to minimize buffer storage while executing at the optimal computation rate (i.e., the maximum possible computation rate without storage constraints) can be formulated as a unified linear programming problem in our framework. A novel feature of our method is that, in constructing the rate-optimal schedule, it directly minimizes the memory requirement by choosing the schedule times of nodes appropriately. Lastly, a new circular-arc interval-graph coloring algorithm is proposed to further reduce the memory requirement by allowing buffer sharing among the arcs of the multi-rate dataflow graph. We have constructed an experimental testbed which implements our MBRO scheduling algorithm as well as (i) the widely used periodic admissible parallel schedules (also known as block schedules) proposed by Lee and Messerschmitt (IEEE Transactions on Computers, vol. 36, no. 1, 1987, pp. 24–35), (ii) the optimal scheduling buffer allocation (OSBA) algorithm of Ning and Gao (Conference Record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, SC, Jan. 10–13, 1993, pp. 29–42), and (iii) the multi-rate software pipelining (MRSP) algorithm (Govindarajan and Gao, in Proceedings of the 1993 International Conference on Application Specific Array Processors, Venice, Italy, Oct. 25–27, 1993, pp. 77–88). Schedules generated for a number of random dataflow graphs and for a set of DSP application programs using the different scheduling methods are compared. The experimental results demonstrate a significant improvement (10–20%) in buffer requirements for the MBRO schedules compared to the schedules generated by the other three methods, without sacrificing the computation rate. The MBRO method also gives a 20% average improvement in computation rate compared to Lee's block scheduling method.
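To clarify what the buffer storage requirement of a given schedule is (a hedged sketch, not the paper's MBRO linear program): execute a firing sequence symbolically and record the peak token count on each edge; that peak is the buffer size the schedule needs there.

```python
def buffer_requirements(edges, schedule):
    """edges: (src, dst, produced, consumed); schedule: actor firing sequence.
    Returns the peak token count per edge, i.e. the buffer each edge needs."""
    tokens = {(s, d): 0 for s, d, _, _ in edges}
    peak = dict(tokens)
    for actor in schedule:
        for s, d, p, c in edges:          # consume inputs first
            if d == actor:
                assert tokens[(s, d)] >= c, "schedule not admissible"
                tokens[(s, d)] -= c
        for s, d, p, c in edges:          # then produce outputs
            if s == actor:
                tokens[(s, d)] += p
                peak[(s, d)] = max(peak[(s, d)], tokens[(s, d)])
    return peak

# Toy multi-rate graph and one valid period for repetitions {A: 3, B: 2, C: 1}.
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
print(buffer_requirements(edges, ["A", "A", "B", "A", "B", "C"]))
# {('A', 'B'): 4, ('B', 'C'): 2}
```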

12.
Modeling applications and architectures at various levels of abstraction is an increasingly accepted approach in embedded system design. Applications in the domain of video, audio, and graphics exhibit a high degree of task parallelism and operate on streams of data. Models that can be used to specify such stream-based applications at a high level of abstraction are dataflow models and process network models, each of which has its own merits. An alternative approach, therefore, is to introduce a model of computation that combines the semantics of both. In this article, we introduce such a model of computation, which we call the Stream-Based Functions (SBF) model of computation, and show an example. Furthermore, we discuss the composition and decomposition of SBF objects and place the SBF model of computation in the context of relevant dataflow models and process network models.

13.
Typical embedded hardware/software systems are implemented using a combination of C and an HDL such as Verilog. While each is well-behaved in isolation, combining the two gives a nondeterministic model of computation whose ultimate behavior must be validated through expensive (cycle-accurate) simulation. We propose an alternative for describing such systems. Our software/hardware integration medium (SHIM) model, effectively Kahn networks with rendezvous communication, provides deterministic concurrency. We present the Tiny-SHIM language for such systems and its semantics, demonstrate how to implement it in hardware and software, and discuss how it can be used to model a real-world system. By providing a powerful, deterministic formalism for expressing systems, SHIM makes designing systems and verifying their correctness easier.
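SHIM's channels are rendezvous: a sender blocks until the receiver has taken the value, which is stricter than plain Kahn FIFOs and keeps communication bounded. A minimal Python emulation, purely illustrative and under our own naming:

```python
import queue
import threading

class Rendezvous:
    """Unbuffered channel: send() returns only after receive() took the value."""
    def __init__(self):
        self._data = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, x):
        self._data.put(x)
        self._ack.get()          # block until the receiver has the item

    def receive(self):
        x = self._data.get()     # block until a sender offers an item
        self._ack.put(None)
        return x

ch = Rendezvous()
t = threading.Thread(target=lambda: [ch.send(i) for i in range(3)])
t.start()
out = [ch.receive() for _ in range(3)]
t.join()
print(out)   # always [0, 1, 2]
```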

14.
This paper addresses the problem of trading off between the minimization of program and data memory requirements of single-processor implementations of dataflow programs. Based on the formal model of synchronous dataflow (SDF) graphs, so-called single-appearance schedules are known to be program-memory optimal. Among these schedules, buffer-memory-efficient schedules are investigated and explored using a two-step approach: 1) an evolutionary algorithm (EA) is applied to efficiently explore the (in general exponential) search space of actor firing orders; 2) for each order, the buffer costs are evaluated by applying a dynamic programming post-optimization step (GDPPO). This iterative approach is compared to existing heuristics for buffer memory optimization.

15.
An interdisciplinary multilaboratory effort to develop an implantable neural prosthetic that can coexist and bidirectionally communicate with living brain tissue is described. Although the final achievement of such a goal is many years in the future, it is proposed that the path to an implantable prosthetic is now definable, allowing the problem to be solved in a rational, incremental manner. Outlined in this report is our collective progress in developing the underlying science and technology that will enable the functions of specific brain-damaged regions to be replaced by multichip modules consisting of novel hybrid analog/digital microchips. The component microchips are "neurocomputational," incorporating experimentally based mathematical models of the nonlinear dynamic and adaptive properties of biological neurons and neural networks. The hardware developed to date, although limited in capacity, can perform computations supporting cognitive functions such as pattern recognition, but more generally will support any brain function for which there is sufficient experimental information. To allow the "neurocomputational" multichip module to communicate with existing brain tissue, another novel microcircuitry element has been developed: silicon-based multielectrode arrays that are "neuromorphic," i.e., designed to conform to the region-specific cytoarchitecture of the brain. When the "neurocomputational" and "neuromorphic" components are fully integrated, our vision is that the resulting prosthetic, after intracranial implantation, will receive electrical impulses from targeted subregions of the brain, process the information using the hardware model of that brain region, and communicate back to the functioning brain. The proposed prosthetic microchips have also been designed with parameters that can be optimized after implantation, allowing each prosthetic to adapt to a particular user/patient.

16.
Through detailed investigation of the sophisticated functions of the mammalian cochlea, we aim to apply its resonator-array structure to efficient and smart sensors and actuators. We outline our study of an equivalent mechanical model of the cochlea, named the "fishbone structure," which can be fabricated from a thin Si plate alone. Special emphasis is placed on applications of the structure to both sensors (an "artificial cochlea microphone") and actuators (a "giant impulse generator"). The applicability of the "fishbone structure" to both sensors and actuators is an outcome of its passive and loss-free nature, which originates from the transmission-line model of the cochlea. We fabricated the structure using a Si micromachining process and examined the soundness of the approach, first by finite-element analysis and then by experiments using the fabricated devices.

17.
We investigate the notion of "congestion" in spread spectrum wireless networks, such as those employing direct-sequence code-division multiple access. We find "congestion" to be multidimensional in nature, but two features emerge: (1) when congestion occurs, transmit powers and cell-site interference levels increase, and (2) capacity constraints are approached. Among other measures, we focus on a particular measure, λ, which is of immediate interest since λ<1 is the condition for network feasibility. We relate λ both to the "power warfare" that arises as "capacity limits" are approached, and to the level of traffic in the network, where we consider traffic in regions ranging from local (single cells) to global (the whole network).
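One common way to formalize such a feasibility measure (a hedged sketch; not necessarily the paper's exact definition of λ): with target SIRs and link gains, feasible power control requires the spectral radius of the normalized interference matrix to be below one, so that radius serves as the congestion measure λ.

```python
import numpy as np

# Power-control feasibility sketch: with target SIR gamma_i and gains G[i, j],
# powers must satisfy p >= F p + u elementwise, where
#   F[i, j] = gamma_i * G[i, j] / G[i, i] for j != i, and F[i, i] = 0.
# A positive solution exists iff the spectral radius rho(F) < 1, so
# lam = rho(F) acts as the congestion measure: feasible iff lam < 1.
rng = np.random.default_rng(0)
n = 5
G = rng.uniform(0.01, 1.0, size=(n, n)) + np.eye(n)   # illustrative link gains
gamma = np.full(n, 0.15)                              # illustrative SIR targets

F = gamma[:, None] * G / np.diag(G)[:, None]
np.fill_diagonal(F, 0.0)
lam = max(abs(np.linalg.eigvals(F)))
print(f"lambda = {lam:.3f} -> {'feasible' if lam < 1 else 'infeasible'}")
```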

18.
Contractor selection criteria: a cost-benefit analysis
This paper describes an empirical study aimed at ranking prequalification criteria on the basis of perceived total cost-benefit to stakeholders. A postal questionnaire was distributed to 100 client and contractor organizations in Australia in 1997. Forty-eight responses were analyzed for scores on 38 categories of contractor information in terms of "value to client," "contractor costs," "client costs," and "value for money." The client and contractor responses for "value to client" and "client costs" of processing were found to be homogeneous; those for "contractor costs" and "value for money" differed significantly between the clients and the contractors. A simple linear regression analysis was used to model the responses, and an index of cost-benefit was derived for each of the categories. This model was found to be superior to all of the nonlinear alternatives examined, and to have greater intuitive value than the equivalent raw "value for money" responses.
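A hedged illustration of the method's shape only (the scores and the exact index definition below are hypothetical; the paper derives its index from the survey data): fit a simple linear regression of perceived value on processing cost across categories and rank categories by how far they sit above the fitted line.

```python
import numpy as np

# Hypothetical mean scores for five contractor-information categories.
value = np.array([4.2, 3.8, 4.6, 2.9, 3.5])   # perceived "value to client"
cost = np.array([2.1, 3.0, 2.4, 3.8, 2.7])    # perceived processing cost

slope, intercept = np.polyfit(cost, value, deg=1)   # simple linear regression
index = value - (slope * cost + intercept)          # value net of expected cost
ranking = np.argsort(index)[::-1]                   # best cost-benefit first
print(f"fit: value ~= {slope:.2f}*cost + {intercept:.2f}; ranking: {ranking}")
```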

19.
The recent MPEG Reconfigurable Media Coding (RMC) standard aims at defining media processing specifications (e.g., video codecs) in a form that abstracts from the implementation platform but is, at the same time, an appropriate starting point for implementation on specific targets. To this end, the RMC framework has standardized both an asynchronous dataflow model of computation and an associated specification language; together, these provide the formalism and the theoretical foundation for multimedia specifications. Even though these specifications are abstract and platform-independent, this approach of developing implementations from such initial specifications presents obvious advantages over approaches based on classical sequential specifications. The advantages are particularly appealing when targeting current and emerging homogeneous and heterogeneous multicore and manycore processing platforms. These highly parallel computing machines are gradually replacing single-core processors, particularly when the system design aims at reducing power dissipation or increasing throughput. However, a straightforward mapping of an abstract dataflow specification onto a concurrent and heterogeneous platform often does not produce an efficient result. Before an abstract specification can be translated into an efficient implementation in software and hardware, the dataflow networks need to be partitioned and then mapped to individual processing elements, and system performance requirements need to be accounted for in the design optimization process. This paper discusses the state of the art of the combinatorial problems that need to be faced at this design space exploration step. Some recent developments and experimental results for image and video coding applications are illustrated. Both well-known and novel heuristics for problems such as mapping, scheduling, and buffer minimization are investigated in the specific context of exploring the design space of dataflow program implementations.
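As a baseline for the mapping step discussed here, a greedy load-balancing heuristic (our own sketch, not one of the paper's evaluated heuristics): assign each actor, heaviest first, to the least loaded processing element.

```python
def greedy_map(actor_costs, n_pes):
    """Map actors to processing elements by greedy load balancing.
    actor_costs: {actor: estimated computation cost}; n_pes: number of PEs."""
    loads = [0.0] * n_pes
    mapping = {}
    for actor, cost in sorted(actor_costs.items(), key=lambda kv: -kv[1]):
        pe = min(range(n_pes), key=loads.__getitem__)   # least loaded PE
        mapping[actor] = pe
        loads[pe] += cost
    return mapping, loads

# Hypothetical decoder actors with illustrative cost estimates, on 2 PEs.
print(greedy_map({"parse": 3.0, "idct": 5.0, "mc": 4.0, "filter": 2.0}, 2))
# ({'idct': 0, 'mc': 1, 'parse': 1, 'filter': 0}, [7.0, 7.0])
```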

20.
We have studied the effect of the thickness of the multiplication region on the noise performance of avalanche photodiodes (APDs). Our simulation results are based on a full-band Monte Carlo model with anisotropic threshold energies for impact ionization. The results suggest that the well-known McIntyre expression for the excess noise factor is not directly applicable to devices with a very thin multiplication region. Since the number of ionization events is drastically reduced when the multiplication layer is very thin, the "ionization coefficient" is not a good physical parameter with which to characterize the process. Instead, the "effective quantum yield," a measure of the total electron-hole pair generation in the device, is a more appropriate parameter to consider. We also show that for the device structure considered here, modeling the excess noise factor with a "discrete Bernoulli trial" model, as opposed to the conventional "continuum theory," produces closer agreement with experimental measurements. Our results reinforce the understanding that impact ionization is a strong function of carrier energy, and that simplified field-dependent models fail to accurately characterize this high-energy process.
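For reference, McIntyre's continuum formula for the excess noise factor is F = kM + (1 - k)(2 - 1/M). The toy model below (our construction, not the paper's full-band Monte Carlo) compares it with a discrete Bernoulli-trial multiplication model, electrons only and identical stages, and shows the discrete model predicting lower noise at the same mean gain, in line with the abstract's argument.

```python
import numpy as np

def mcintyre_F(M, k):
    # McIntyre's continuum excess noise factor: F = k*M + (1 - k)*(2 - 1/M)
    return k * M + (1 - k) * (2.0 - 1.0 / M)

def bernoulli_gain(n_stages, p, trials, rng):
    # Discrete Bernoulli-trial multiplication: each carrier independently
    # creates one extra carrier with probability p at each of n_stages stages.
    n = np.ones(trials, dtype=np.int64)
    for _ in range(n_stages):
        n = n + rng.binomial(n, p)
    return n

rng = np.random.default_rng(1)
samples = bernoulli_gain(n_stages=10, p=0.2, trials=200_000, rng=rng)
M = samples.mean()
F = (samples.astype(float) ** 2).mean() / M**2   # F = <M^2> / <M>^2
print(f"mean gain {M:.2f}: Bernoulli F = {F:.3f}, McIntyre F = {mcintyre_F(M, 0):.3f}")
# Bernoulli F ≈ 1.56 < McIntyre F ≈ 1.84 at the same mean gain.
```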
