Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
The complexity of the circuit that can fit on an integrated circuit (IC) chip has reached the level of a million transistors with the advent of Very-Large-Scale Integration (VLSI). Several automatic synthesis systems have evolved that "aid" the human designer in managing this complexity. This paper surveys such efforts. Synthesis is viewed as the process of transforming a high-level design specification into a lower-level design specification that includes more structural detail, leading to the physical design of the IC. The characteristics of ten automatic synthesis systems are summarized.

3.
This paper presents a high-level leakage power analysis and reduction algorithm. The algorithm uses device-level models for leakage to precharacterize a given register-transfer level module library. This is used to estimate the power consumption of a circuit due to leakage. The algorithm can also identify and extract the frequently idle modules in the datapath, which may be targeted for low-leakage optimization. Leakage optimization is based on the use of dual threshold voltage (V_T) technology. The algorithm prioritizes modules, giving a high-level synthesis system an indication of where most gains for leakage reduction may be found. We tested our algorithm using a number of benchmarks from various sources. We ran a series of experiments by integrating our algorithm into a low-power high-level synthesis system. In addition to reducing the power consumption due to switching activity, our algorithm provides the high-level synthesis system with the ability to detect and reduce leakage power consumption, hence further reducing total power consumption. This is shown over a number of technology generations. The trend in these generations indicates that leakage becomes the dominant component of power at smaller feature sizes and lower supply voltages. Results show that using a dual-V_T library during high-level synthesis can reduce leakage power by an average of 58% for the different technology generations. Total power can be reduced by an average of 15.0%-45.0% for the 0.18-0.07 μm technologies, respectively. The contribution of leakage power to overall power consumption ranges from 22.6% to 56.2%. Our approach reduced these values to 11.7%-26.9%.
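To make the prioritization step concrete, the sketch below ranks modules of a hypothetical precharacterized library by the leakage savings expected from swapping in a high-V_T variant, weighted by how often each module is idle, and greedily selects modules while timing slack remains. The module names, leakage figures, and the savings model are illustrative assumptions, not the paper's actual library data or algorithm.

```python
# Hypothetical sketch of module prioritization for dual-V_T leakage reduction;
# module names, leakage numbers and the savings model are invented.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    leak_low_vt: float    # precharacterized leakage (uW) with low-V_T cells
    leak_high_vt: float   # precharacterized leakage (uW) with high-V_T cells
    idle_fraction: float  # fraction of cycles the module is idle (from RTL profiling)
    delay_penalty: float  # extra delay (ns) if the high-V_T variant is used

def prioritize(modules, slack_ns):
    """Rank modules by expected leakage savings and greedily pick those
    whose added delay still fits within the available timing slack."""
    ranked = sorted(
        modules,
        key=lambda m: (m.leak_low_vt - m.leak_high_vt) * m.idle_fraction,
        reverse=True,
    )
    chosen, remaining_slack = [], slack_ns
    for m in ranked:
        if m.delay_penalty <= remaining_slack:
            chosen.append(m.name)
            remaining_slack -= m.delay_penalty
    return chosen

library = [
    Module("mult32", 120.0, 35.0, 0.70, 0.4),
    Module("alu32",   60.0, 20.0, 0.30, 0.2),
    Module("shifter", 25.0, 10.0, 0.85, 0.1),
]
print(prioritize(library, slack_ns=0.5))  # ['mult32', 'shifter'] with 0.5 ns of slack
```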

4.
This paper provides an overview of a program synthesis system for a class of quantum chemistry computations. These computations are expressible as a set of tensor contractions and arise in electronic structure modeling. The input to the system is a high-level specification of the computation, from which the system can synthesize high-performance parallel code tailored to the characteristics of the target architecture. Several components of the synthesis system are described, focusing on the performance optimization issues that they address.
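As a rough illustration of the kind of computation being synthesized (not the system's actual input specification language), a single tensor contraction of the sort that arises in electronic structure modeling can be written with numpy.einsum; the synthesis system instead generates tuned, parallel loop code for such contractions. The tensor names and dimensions below are invented.

```python
# Illustrative only: a two-index tensor contraction of the kind that arises
# in electronic structure codes, written with numpy.einsum.
import numpy as np

O, V = 8, 16                      # occupied / virtual orbital counts (assumed)
T = np.random.rand(O, O, V, V)    # amplitude-like tensor t[i,j,a,b]
W = np.random.rand(V, V, V, V)    # integral-like tensor w[a,b,c,d]

# R[i,j,c,d] = sum over a,b of T[i,j,a,b] * W[a,b,c,d]
R = np.einsum("ijab,abcd->ijcd", T, W)
print(R.shape)  # (8, 8, 16, 16)
```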

5.
Multiprocessor systems-on-chip (MPSoC) pose a considerable validation challenge due to their size and complexity. We approach the problem of MPSoC validation through a tool that employs a reusable abstract executable specification written in C++. The tool effectively leverages a simulation-based, trace-driven mechanism. Traces are computed by simulating a system-level register-transfer level (RTL) implementation of an MPSoC. The tool then analyzes the traces for correctness by checking them against executions of the abstract executable specification. We have effectively used the tool on various live MPSoC design projects based on Power Architecture technology. We demonstrate the effectiveness of the technique through results from these projects, where we uncovered a number of design errors not found by any other technique.
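A minimal sketch of the trace-driven checking idea is shown below, assuming an invented trace format of (operation, address, data) events and a trivial flat-memory golden model; the real tool checks RTL simulation traces of full MPSoC designs against a C++ abstract executable specification.

```python
# Minimal sketch of trace-driven checking against an abstract executable
# specification; the trace format and the reference model here are invented.

class AbstractMemoryModel:
    """Abstract (golden) model: a simple flat memory."""
    def __init__(self):
        self.mem = {}

    def write(self, addr, data):
        self.mem[addr] = data

    def read(self, addr):
        return self.mem.get(addr, 0)

def check_trace(trace):
    """Replay an RTL-derived trace of (op, addr, data) events on the
    abstract model and flag any read value that disagrees with it."""
    golden = AbstractMemoryModel()
    errors = []
    for i, (op, addr, data) in enumerate(trace):
        if op == "W":
            golden.write(addr, data)
        elif op == "R":
            expected = golden.read(addr)
            if data != expected:
                errors.append((i, addr, data, expected))
    return errors

trace = [("W", 0x100, 7), ("R", 0x100, 7), ("R", 0x104, 3)]  # last read disagrees
print(check_trace(trace))   # [(2, 260, 3, 0)]
```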

6.
The paper presents a novel hierarchical approach to test pattern generation for sequential circuits based on an input model of mixed-level decision diagrams. A method that handles both the data and control parts of the design in a uniform manner is proposed. The method combines deterministic and simulation-based techniques. On the register-transfer level, deterministic path activation is combined with simulation-based techniques used for constraint solving. The gate-level local test patterns for components are randomly generated, driven by high-level constraints and partial path-activation solutions. Experiments show that high fault coverage for circuits with complex sequential structures can be achieved in a very short time using this approach.
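The sketch below illustrates constraint-driven random pattern generation for a single data-path component, assuming the high-level constraints simply force certain input bits to fixed values; in the actual method the constraints come from decision-diagram path activation at the register-transfer level.

```python
# Sketch of constraint-driven random pattern generation for one data-path
# component (illustrative; the constraint representation is an assumption).
import random

def gen_local_patterns(width, constraints, n_patterns, seed=0):
    """Generate random gate-level input vectors whose constrained bits are
    forced to the values dictated by the activated high-level path."""
    rng = random.Random(seed)
    patterns = []
    while len(patterns) < n_patterns:
        vec = [rng.randint(0, 1) for _ in range(width)]
        for bit, value in constraints.items():   # force constrained bits
            vec[bit] = value
        patterns.append(vec)
    return patterns

# e.g. an ALU operand whose two top bits are fixed by the activated RTL path
print(gen_local_patterns(width=8, constraints={6: 1, 7: 0}, n_patterns=3))
```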

7.
As new applications in embedded communications and control systems push the computational limits of digital signal processing (DSP) functions, there will be an increasing need for software applications to be migrated to hardware in the form of a hardware-software codesign system. In many cases, access to the high-level source code may not be available. It is thus desirable to have a technology to translate software binaries intended for processors into hardware implementations. This paper provides details on the retargetable FREEDOM compiler. The compiler automatically translates DSP software binaries to register-transfer level (RTL) VHDL and Verilog for implementation on field-programmable gate arrays (FPGAs) as standalone or system-on-chip implementations. We describe the underlying optimizations and some novel algorithms for alias analysis, data dependency analysis, memory optimizations, procedure call recovery, and back-end code scheduling. Experimental results on resource usage and performance are shown for several program binaries intended for the Texas Instruments C6211 DSP (VLIW) and the ARM922T reduced instruction set computer (RISC) processors. Implementation results for four kernels from the Simulink demo library and others from commonly used DSP applications, such as MPEG-4, Viterbi, and JPEG, are also discussed. The compiler-generated RTL code is mapped to Xilinx Virtex II and Altera Stratix FPGAs. We record overall performance gains of 1.5-26.9x for the hardware implementations of the kernels. Comparisons with the Power Aware Compiler Techniques (PACT) high-level synthesis compiler are used to show that software binaries can be used as intermediate representations from any high-level language and generate efficient hardware implementations.
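A toy version of the data-dependency analysis step is sketched below on an invented straight-line pseudo-assembly block; read-after-write edges identify which operations must stay ordered, while independent operations (such as the two loads) can be scheduled in parallel in the generated RTL. Instruction and register names are illustrative only.

```python
# Toy read-after-write (flow) dependence analysis over a straight-line
# pseudo-assembly block; instruction and register names are invented.

instrs = [
    ("ld",  "r1", ["mem0"]),        # 0: r1 <- mem0
    ("ld",  "r2", ["mem1"]),        # 1: r2 <- mem1
    ("mul", "r3", ["r1", "r2"]),    # 2: r3 <- r1 * r2
    ("add", "r4", ["r3", "r1"]),    # 3: r4 <- r3 + r1
    ("st",  "mem2", ["r4"]),        # 4: mem2 <- r4
]

def build_dependencies(instrs):
    """Return flow-dependence edges (producer_index, consumer_index)."""
    last_def = {}   # name -> index of the most recent instruction defining it
    edges = []
    for i, (_, dst, srcs) in enumerate(instrs):
        for src in srcs:
            if src in last_def:
                edges.append((last_def[src], i))
        last_def[dst] = i
    return edges

print(build_dependencies(instrs))
# [(0, 2), (1, 2), (2, 3), (0, 3), (3, 4)] -- instructions 0 and 1 are independent
```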

8.
The authors address the problem of evaluating area tradeoffs for VLSI layouts from high-level specifications (typically register-transfer level). An area prediction approach based on two models, analytical and constructive, is presented. A circuit design is partitioned recursively down to a level specified by the user, thus generating a slicing tree. An analytical model is then used to predict the shape function of each of the leaf subcircuits. By traversing the tree in post-order, the shape function of the entire layout design can be predicted constructively. This approach also permits the user to trade off the accuracy of the prediction against the runtime of the predictor. Such a scheme is quite useful for high-level design tasks. The authors show experimentally that the estimates obtained using this model are within 5% of the actual layout area for designs ranging from 125 to 12000 cells.
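The constructive model can be illustrated with the small sketch below, which combines the shape functions of leaf subcircuits bottom-up through a slicing tree; the leaf shape functions and cut directions are invented, whereas in the real predictor they come from the analytical model and the recursive partitioner.

```python
# Sketch of constructive area prediction over a slicing tree (illustrative).
# Each leaf has a "shape function": a set of feasible (width, height)
# alternatives; internal nodes combine their children's shape functions
# according to the cut direction.

def combine(shapes_a, shapes_b, cut):
    """Combine two shape functions for a vertical ('V') or horizontal ('H') cut."""
    out = []
    for (wa, ha) in shapes_a:
        for (wb, hb) in shapes_b:
            if cut == "V":            # children side by side
                out.append((wa + wb, max(ha, hb)))
            else:                     # children stacked
                out.append((max(wa, wb), ha + hb))
    return out

def predict(node):
    """Post-order traversal of a slicing tree: ('leaf', shapes) or (cut, left, right)."""
    if node[0] == "leaf":
        return node[1]
    cut, left, right = node
    return combine(predict(left), predict(right), cut)

tree = ("V",
        ("leaf", [(4, 6), (6, 4)]),
        ("H",
         ("leaf", [(3, 3)]),
         ("leaf", [(5, 2), (2, 5)])))

shapes = predict(tree)
best = min(shapes, key=lambda s: s[0] * s[1])   # smallest predicted area
print(best)   # (9, 6)
```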

9.
10.
11.
12.
13.
In this paper, we propose a robust register-transfer level (RTL) power modeling methodology for functional units. Our models are consistently accurate over a wide range of input statistics, they are automatically constructed, and they can provide pattern-by-pattern power estimates. An additional desirable feature of our modeling methodology is the capability of accounting for the impact of technology variations, library changes, and synthesis tools. Our methodology is based on the concept of node sampling, as opposed to more traditional approaches based on input sampling.
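One way to picture the node-sampling idea is the sketch below: toggle counts of a small sample of internal nodes are recorded per input pattern and a linear model is fitted against reference power numbers, after which pattern-by-pattern power can be estimated from the sampled activity alone. The data is synthetic and the least-squares fit is an assumption for illustration, not the paper's exact characterization flow.

```python
# Rough sketch of the node-sampling idea: fit per-node weights from sampled
# internal activity against reference power, then estimate per-pattern power.
import numpy as np

rng = np.random.default_rng(0)

n_patterns, n_sampled_nodes = 200, 10
toggles = rng.integers(0, 5, size=(n_patterns, n_sampled_nodes)).astype(float)

# "Reference" per-pattern power from a lower-level estimator (synthetic here)
true_weights = rng.uniform(0.5, 2.0, size=n_sampled_nodes)
ref_power = toggles @ true_weights + rng.normal(0, 0.1, size=n_patterns)

# Least-squares fit: per-node capacitive weights
weights, *_ = np.linalg.lstsq(toggles, ref_power, rcond=None)

# Pattern-by-pattern power estimate for new activity data
new_toggles = rng.integers(0, 5, size=(1, n_sampled_nodes)).astype(float)
print((new_toggles @ weights).item())
```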

14.
In this paper, we propose a controller resynthesis technique to enhance the testability of register-transfer level (RTL) controller/data path circuits. Our technique exploits the fact that the control signals in an RTL implementation are don't cares under certain states/conditions. We make effective use of the don't care information in the controller specification to improve the overall testability (better fault coverage and shorter test generation time). If the don't care information in the controller specification leaves little scope for respecification, we add control vectors to the controller to enhance the testability. Experimental results with example benchmarks show an average increase in testability of 9% with a 3–4-fold decrease in test generation time for the modified implementation. The area, delay, and power overheads incurred for testability are very low. The average area overhead is 0.4%, and the average power overhead is 4.6%. There was no delay overhead due to this technique in most of the cases.
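A toy illustration of exploiting controller don't cares is given below; the respecification objective used here (balancing 0s and 1s on each control signal so every signal is exercised both ways) is a deliberate simplification of the paper's testability-driven criterion, and the control table is invented.

```python
# Toy illustration of filling don't cares in a controller specification.
# The objective used here is a simplification, not the paper's criterion.

control_table = {          # state -> control vector, 'X' = don't care
    "S0": ["1", "X", "0"],
    "S1": ["X", "1", "X"],
    "S2": ["0", "X", "1"],
}

def respecify(table):
    """Assign each 'X' so that 0s and 1s stay balanced per control signal."""
    n = len(next(iter(table.values())))
    filled = {state: vec[:] for state, vec in table.items()}
    for bit in range(n):
        ones = sum(1 for vec in filled.values() if vec[bit] == "1")
        zeros = sum(1 for vec in filled.values() if vec[bit] == "0")
        for vec in filled.values():
            if vec[bit] == "X":
                vec[bit] = "0" if ones > zeros else "1"
                if vec[bit] == "1":
                    ones += 1
                else:
                    zeros += 1
    return filled

print(respecify(control_table))
```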

15.
16.
In this paper we present the Siemens high-level synthesis system CALLAS and describe its design methodology and synthesis strategy. It supports the synthesis of control-dominated applications and uses a VHDL subset for the algorithmic specification. Its main feature can be characterized as “What you simulate is what you synthesize.” This principle permits validation of the synthesis results by simulation or even formal verification. CALLAS has been successfully applied to real designs which were implemented in silicon. These examples demonstrate that CALLAS fulfils the constraints and objectives of a hardware designer. The circuits are comparable in quality to results achieved by synthesis starting at the register-transfer level.

17.
Due to continuous technology scaling, soft errors have become a major reliability issue at nanoscale technologies. Single or multiple event transients at low levels can result in multiple correlated bit flips at logic or higher abstraction levels. Addressing this correlation is essential for accurate low-level soft error rate estimation and, more importantly, for cross-level error abstraction, e.g., from bit errors at the logic level to word errors at the register-transfer level. This paper proposes a novel error estimation method that takes into consideration both signal and error correlations. It unifies the treatment of error-free signals and erroneous signals, so that the computation of error probabilities and correlations can be carried out using techniques for signal probability and correlation calculation. The proposed method not only reports accurate error probabilities when internal gates are impaired by soft errors, but also quantifies the error correlations in their propagation process. This feature makes our method a versatile technique for high-level error estimation. The experimental results validate the proposed technique, showing that, compared with Monte Carlo simulation, it is five orders of magnitude faster, while the average inaccuracy of error probability estimation is only 0.02.
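The flavor of correlation-aware propagation can be seen in the sketch below, which propagates signal probabilities through two-input gates using the common correlation-coefficient formulation C_ab = P(a=1, b=1) / (P(a=1) * P(b=1)); the paper's method applies the same machinery to error probabilities and error correlations. The numeric values are invented.

```python
# Simplified sketch of correlation-aware probability propagation through gates.
# C_ab > 1 means the inputs are positively correlated; C_ab = 1 means independent.

def and_gate(pa, pb, c_ab):
    """P(out = 1) for a 2-input AND with correlated inputs."""
    return pa * pb * c_ab

def or_gate(pa, pb, c_ab):
    """P(out = 1) for a 2-input OR: 1 - P(a=0, b=0)."""
    p_a1b1 = pa * pb * c_ab
    p_a0b0 = 1.0 - pa - pb + p_a1b1     # inclusion-exclusion
    return 1.0 - p_a0b0

# Independent inputs (C = 1.0) versus positively correlated inputs (C = 1.6)
print(and_gate(0.5, 0.5, 1.0), and_gate(0.5, 0.5, 1.6))   # 0.25 vs 0.40
print(or_gate(0.5, 0.5, 1.0),  or_gate(0.5, 0.5, 1.6))    # 0.75 vs 0.60
```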

18.
Adaptivity is a key feature of embedded systems that requires explicit support in Electronic System-Level (ESL) design methodologies. Similarly, a higher level of abstraction during system specification is crucial for enabling ESL design activities such as Design Space Exploration. This has motivated the development of methodologies, such as A-HetSC, for enabling abstract SystemC specification of the adaptive parts of a system. However, in order to enable practical ESL design flows, it is essential to provide suitable implementation paths from the abstract adaptive specification. Otherwise, the cost of manually refining the abstract adaptive specification, or simply the inability to systematically achieve an implementation, will compromise the advantages obtained by the abstract model. This paper presents a system-level implementation methodology for tackling the implementation of the adaptive parts of an abstract SystemC specification. These adaptive parts are described as abstract adaptive processes, the basic constructs for specifying adaptivity in the A-HetSC methodology. The methodology proposes two main implementation paths. One path automatically targets an embedded SW implementation, for an immediate, flexible, and cheap implementation of the adaptive processes. For the HW implementation path, a systematic SystemC-based refinement towards a refined model is proposed. Such a refined model is already supported by high-level synthesis tools, which automate the rest of the HW implementation path. It enables a cost-efficient implementation of adaptive functionality in contexts where a software implementation does not fulfil time-performance demands. The refinement of an adaptive inverse transform module, as part of an adaptive video decoder (AVD), is used as a demonstrative example.

19.
20.
Over the last few years, embedded software synthesis has drawn much attention. However, few works deal with software synthesis for hard real-time systems considering arbitrary inter-task precedence and exclusion relations. Code generation that meets all timing and resource constraints is not a trivial task, so this research area has several open issues, mainly related to the generation of predictable, guaranteed scheduled code. The method proposed in this paper starts from a high-level specification and automatically translates it into a time Petri net model; this model is used to find a feasible static schedule meeting all constraints. If one is found, the approach generates scheduled code based on that feasible schedule. Therefore, the user just enters the specification and receives the scheduled code as the result; all intermediate phases are hidden from the user.
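A very small sketch of the schedule-search idea follows, using an untimed Petri-net-like model in which a shared "cpu" place encodes an exclusion relation between two tasks; timing intervals and the firing semantics of time Petri nets are omitted, so this only shows depth-first exploration of the marking space until a goal marking is reached.

```python
# Minimal sketch of schedule search on a Petri-net-like model (untimed; the
# transitions, places and goal are invented for illustration).

# transitions: name -> (places consumed, places produced)
transitions = {
    "startA": ({"readyA", "cpu"}, {"runA"}),
    "endA":   ({"runA"}, {"doneA", "cpu"}),
    "startB": ({"readyB", "cpu"}, {"runB"}),   # shared 'cpu' place encodes exclusion
    "endB":   ({"runB"}, {"doneB", "cpu"}),
}
initial = frozenset({"readyA", "readyB", "cpu"})
goal = {"doneA", "doneB"}

def find_schedule(marking, schedule=(), seen=None):
    """DFS over reachable markings; return a feasible firing sequence or None."""
    seen = seen if seen is not None else set()
    if goal <= marking:
        return list(schedule)
    if marking in seen:
        return None
    seen.add(marking)
    for name, (pre, post) in transitions.items():
        if pre <= marking:                       # transition is enabled
            nxt = frozenset((marking - pre) | post)
            found = find_schedule(nxt, schedule + (name,), seen)
            if found is not None:
                return found
    return None

print(find_schedule(initial))   # ['startA', 'endA', 'startB', 'endB']
```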
