Similar Documents
20 similar documents found (search time: 31 ms)
1.
A new technique, node sampling, is proposed to speed up probability-based power estimation methods. This technique samples and processes only a small portion of the total nodes to estimate the power consumption of a circuit. It differs from previous speed-up techniques for probability-based methods, which reduce the processing time for each node, and also from the sampling techniques for simulation-based methods, which sample input vector sequences. The experimental results demonstrate the validity of the proposed method.
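The core idea can be sketched minimally as follows (names are hypothetical, and per-node power values are assumed to be computable on demand by a probability-based method): process only a random fraction of the circuit's nodes and scale the sample mean up to the full node count.

```python
import random

def estimate_power(node_powers, sample_frac=0.1, seed=0):
    """Estimate total circuit power from a random sample of nodes.

    node_powers: per-node power values (here assumed precomputed; in the
    technique described, each sampled node would be processed on demand,
    so only sample_frac of the per-node work is done).
    """
    rng = random.Random(seed)
    n = len(node_powers)
    k = max(1, int(n * sample_frac))
    sample = rng.sample(node_powers, k)
    # Scale the sample mean up to all n nodes.
    return n * sum(sample) / k
```

The estimator is unbiased; its variance shrinks as the sampled fraction grows, which is the usual accuracy/speed trade-off of such sampling schemes.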

2.
Monte Carlo (MC) analysis is often considered a golden reference for yield analysis because of its high accuracy. However, repeating the simulation hundreds of times is often too expensive for large circuit designs. The most widely used approach to reduce MC complexity is using efficient sampling methods to reduce the number of simulations. Aside from those sampling techniques, this paper proposes a novel approach to further improve MC simulation speed with almost the same accuracy. By using an improved delta circuit model, simulation speed can be improved automatically due to the dynamic step control in transient analysis. In order to further improve the efficiency while combining the delta circuit model and the sampling technique, a cluster-based delta-QMC technique is proposed in this paper to reduce the delta change in each sample. Experimental results indicate that the proposed approach can increase speed by two orders of magnitude with almost the same accuracy, which significantly improves the efficiency of yield analysis.
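For orientation, a plain MC yield estimator looks like the sketch below (all names are illustrative; `simulate` stands in for one circuit simulation at a sampled process corner). The paper's delta-circuit and cluster-based delta-QMC techniques accelerate each `simulate` call and the sampling; the yield estimate itself is just the pass fraction.

```python
import random

def mc_yield(simulate, sample_params, n_runs=1000,
             spec=lambda out: out <= 1.0, seed=1):
    """Plain Monte Carlo yield: the fraction of sampled process corners
    whose simulated output meets the spec."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_runs):
        p = sample_params(rng)   # draw one process corner
        if spec(simulate(p)):    # one (expensive) circuit simulation
            passed += 1
    return passed / n_runs
```

Because the cost is dominated by the `simulate` calls, both fewer samples (QMC) and cheaper per-sample simulation (the delta circuit model) compound multiplicatively.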

3.
With shrinking transistor sizes, the development of submicron technology, and the integration of ever more circuits on a chip, leakage power has become one of the main concerns of electronic circuit designers. In this article, we first review techniques presented in recent years to reduce leakage power, and then present a new technique based on gate-level body biasing and multi-threshold CMOS to minimize leakage power in digital circuits. Afterward, we develop another method that improves on the first to achieve higher efficiency, reducing leakage power and propagation delay simultaneously. In the proposed technique, two dynamic threshold MOSFET transistors are used to reduce leakage current, and a body biasing generator structure is applied to reduce propagation delay. The proposed technique has been successfully validated and verified by post-layout simulation in Cadence Virtuoso using a 32 nm process technology. We evaluate the efficiency of the proposed techniques by examining power, delay, area, and the power-delay product. Simulation results obtained with HSPICE, together with performance analysis across process corner variations at 32 nm, show that the proposed technique performs properly in the different technology corners and significantly reduces leakage power and propagation delay in CMOS logic circuits. Overall, the proposed technique compares very favorably with previous techniques.

4.
Active clamp technology is usually applied only in forward DC/DC power supply topologies. This paper presents the application and study of active clamp and synchronous rectification techniques in the flyback topology. By applying active-clamp and synchronous-rectification design principles to the flyback topology, together with thick-film and three-dimensional hybrid module assembly, high efficiency and high power density are achieved. A 5 V/20 W prototype supply was designed, assembled, and tested: it reaches a power density of 50.8 W/inch³, an efficiency of 89.2%, and 43 mV of ripple. The prototype demonstrates that flyback active clamping with synchronous rectification is among the most effective ways to raise the power density of low-to-medium-power isolated supplies.

5.
An important issue for end users and distributors of photovoltaic (PV) modules is the inspection of the power output specification of a shipment. The question is whether or not the modules satisfy the specifications given in the data sheet, namely the nominal power output under standard test conditions relative to the power output tolerance. Since collecting control measurements of all modules is usually unrealistic, decisions have to be based on random samples. In many cases, one has access to flash data tables of final output power measurements (flash data) from the producer. We propose to rely on the statistical acceptance sampling approach as an objective decision framework, which takes into account both the end user's and the producer's risk of a false decision. A practical solution to the problem, recently found by the authors, is discussed. The solution consists of estimates of the required optimal sample size and the associated critical value, where the estimation uses the information contained in the additional flash data. We propose and examine an improved solution which yields even more reliable estimated sampling plans, as substantiated by a Monte Carlo study. This is achieved by employing advanced statistical estimation techniques. Copyright © 2009 John Wiley & Sons, Ltd.
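To illustrate the acceptance-sampling framework in its simplest form (a single-sampling plan by attributes, not the flash-data-based variables plan the authors actually derive; names and the risk levels are illustrative), one can search for the smallest sample size n and acceptance number c that bound both the producer's and the consumer's risk:

```python
from math import comb

def accept_prob(n, c, p):
    # Probability of acceptance: at most c defective modules
    # in a random sample of n, at true defect rate p.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def sampling_plan(p1, p2, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest attribute single-sampling plan (n, c) with
    producer risk <= alpha at acceptable defect rate p1 and
    consumer risk <= beta at rejectable defect rate p2 (p1 < p2)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if (accept_prob(n, c, p1) >= 1 - alpha
                    and accept_prob(n, c, p2) <= beta):
                return n, c
    return None
```

Variables plans like the one in the paper exploit the measured power values (and here the extra flash data), which is why they need far smaller samples than this attribute scheme.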

6.
The quality control of photovoltaic modules in terms of the output power to satisfy the technical specification is of great importance for producers as well as consumers and also represents a major issue of certification procedures. Previous work focused on one-sided specification limits to reject underperforming samples (lots) of photovoltaic modules or solar cells. In the present paper, we generalize the classic acceptance sampling methodology and derive sampling plans on the basis of two-sided specification limits. Those sampling plans can be constructed for arbitrary output power distributions by making use of flash data tables. For the out-of-spec setting, the sampling plans are solutions of rather involved nonlinear equations. Explicit formulas, which resemble known sampling plans, can only be obtained under symmetry assumptions. Further, the solution depends on the ratio of overperforming modules to underperforming modules. We investigate in numerical studies to what extent the required sample size depends on that ratio and on the shape of the underlying output power distribution. The application to real examples indicates that, in practice, the new approach often results in substantially smaller control samples than classic approaches. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Hardware/software partitioning is a key issue in the design of embedded systems when performance constraints have to be met and chip area and/or power dissipation are critical. For that reason, diverse approaches to automatic hardware/software partitioning have been proposed since the early 1990s. In all approaches so far, the granularity during partitioning is fixed, i.e., either small system parts (e.g., base blocks) or large system parts (e.g., whole functions/processes) can be swapped at once during partitioning in order to find the best hardware/software tradeoff. Since deploying a fixed granularity is likely to result in suboptimal solutions, we present the first approach that features a flexible granularity during hardware/software partitioning. Our approach is comprehensive in that the estimation techniques that control partitioning (in particular our multigranularity performance estimation technique, described here in detail) are adapted to the flexible partitioning granularity. In addition, our multilevel objective function is described. It allows us to trade off various design constraints/goals (performance/hardware area) against each other. As a result, our approach is applicable to a wider range of applications than approaches with a fixed granularity. We also show that our approach is fast and that the obtained hardware/software partitions are much more efficient (in terms of hardware effort, for example) than in cases where a fixed granularity is deployed.

8.
The majority of currently available wireless devices' localization systems are based on received signal strength (RSS) measurements. The input to the localization technique is formed by the average of N instantaneous RSS samples; hence, when simulating such techniques, it is necessary to generate N RSS samples and calculate the corresponding average, which results in increased computational cost and run time. A new technique for reducing the computational cost and run time of localization-technique simulators is proposed, based on directly sampling a probability distribution function (PDF) corresponding to the average RSS of N samples. However, PDFs of the average RSS cannot be readily calculated, and often there is no analytical solution. A study based on goodness-of-fit tests and localization precision is presented herein in order to numerically evaluate the replacement of unknown average-RSS PDFs with empirically derived ones. Furthermore, an indoor propagation and localization techniques simulator has been developed employing the proposed technique. Numerical results demonstrate the applicability of the proposed approach in achieving fast simulation of average small-scale fading and its application to the simulation of RSS-based localization techniques.
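The speed-up is easiest to see in the special case of Gaussian shadowing in the dB domain, where the average of N samples is itself Gaussian with standard deviation reduced by a factor of sqrt(N), so one draw from the mean's PDF replaces N draws (names and parameters below are illustrative; for general fading the average's PDF has no closed form, which is where the paper's empirically fitted PDFs come in):

```python
import random
import statistics

def avg_rss_naive(rng, mu_dbm, sigma_db, n):
    # Average of n instantaneous RSS samples (Gaussian shadowing in dB):
    # n random draws per simulated measurement.
    return statistics.fmean(rng.gauss(mu_dbm, sigma_db) for _ in range(n))

def avg_rss_direct(rng, mu_dbm, sigma_db, n):
    # Draw directly from the PDF of the n-sample average:
    # one draw per simulated measurement, same distribution.
    return rng.gauss(mu_dbm, sigma_db / n ** 0.5)
```

The two functions produce statistically equivalent inputs for the localization simulator, but the direct version does 1/N of the sampling work.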

9.
The utility of a hierarchical approach to modeling and simulation in power electronics is illustrated using a high-power-factor (HPF) AC-DC switching power converter as an example. The combined use of several modeling techniques and tools is shown to provide comprehensive simulation coverage during analysis and design of the HPF converter. A summary of the various modeling levels, their application domains, and their CPU times is given in tabular form. The estimated times are obtained by linear scaling of the measured times for a particular level. The CPU times show the necessity of using a hierarchical approach when simulating extended intervals and/or large circuits.

10.
This paper addresses the design techniques of reconfigurable analog-to-digital converters for multi-standard wireless communication terminals. While most multi-standard converters reported so far follow an ad hoc design approach, which guarantees neither efficient silicon area occupation nor power efficiency in the different operation modes, the methodology presented here formulates a systematic design flow that ensures both factors are considered at all hierarchical levels. Expandable cascade modulators are considered as the starting point to further reconfigurability at the architectural level. From there, using a combination of accurate behavioral modeling, statistical optimization techniques, and device-level simulation, the proposed methodology handles the design complexity of a reconfigurable converter while ensuring adaptive power consumption and boosting hardware sharing. A case study is presented in which a reconfigurable modulator is designed to operate under three communication standards, GSM, Bluetooth, and UMTS, in a 130 nm CMOS technology.

11.
In this paper, an improved maximum power point tracking (MPPT) approach featuring low parameter dependency, a simple structure, and a limited search interval is presented for distributed MPPT photovoltaic (PV) systems. The approach is based on scanning the power-voltage (P-V) characteristic curve of PV modules over a limited duty-ratio interval, which makes the tracking operation simple, fast, and effective under both uniform irradiance and partial shading conditions (PSCs). By limiting the scanning interval between maximum and minimum duty-ratio values, derived from analyses of the P-V characteristic under PSCs, global MPPT (GMPPT) is achieved efficiently. To validate the performance of the proposed approach, a single-ended primary inductance converter has been used in both simulation and experimental studies. A PV simulator has been used as the PV source to obtain different module characteristics with different numbers of bypass diodes and PV power levels. Both simulation and experimental results confirm that the improved MPPT approach realises GMPPT effectively. Given its high performance, this approach can be an alternative technique in module-integrated converters, smart modules, and PV power optimisers, in which a single module is used.
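A minimal sketch of the limited-interval scanning idea follows (all names are hypothetical: `measure_power(d)` stands in for reading the PV power at duty ratio d, and `d_min`/`d_max` for the interval bounds derived from the P-V analysis). Because the P-V curve is multimodal under partial shading, the scan keeps the global maximum rather than a local one:

```python
def gmppt_scan(measure_power, d_min, d_max, step=0.01):
    """Scan the converter duty ratio over a limited interval and return
    the duty ratio with the highest measured PV power (the global MPP
    within [d_min, d_max])."""
    best_d, best_p = d_min, measure_power(d_min)
    d = d_min + step
    while d <= d_max + 1e-9:
        p = measure_power(d)
        if p > best_p:
            best_d, best_p = d, p
        d += step
    return best_d, best_p
```

Shrinking [d_min, d_max] is exactly what makes the scan fast: fewer duty-ratio points must be swept per tracking cycle.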

12.
Increasing network lifetime (NL) is an important requirement in wireless sensor networks (WSNs). One technique to extend NL is to use Data Aggregation Trees (DATs). DATs improve NL by combining the energy-efficiency benefits of both Data Aggregation (DA) and tree-based routing. While centralized and distributed strategies for DAT construction are widely used, we propose a combined approach to DAT construction to improve NL. The approach reduces the communication overhead and relaxes the requirement of complete network information at the sink. In the proposed work, this collaborative approach is termed the Extended Local View (ELV) approach. Two ELV-based DAT construction algorithms, termed ELV with Fixed sink (ELVF) and ELV with Random sink (ELVR), are proposed. Both ELVF and ELVR use the heuristics-based technique of Local Path Reestablishment (LPR) and the greedy technique of Extended Path Reestablishment (EPR). Using these techniques, a sequence of DATs is scheduled that collectively improves NL and also reduces the associated DAT reconstruction overhead. The performance of ELVF and ELVR is evaluated with rigorous experiments, and the simulation results show that the proposed algorithms improve NL and are scalable across different DA ratio values. DAT schedule analysis further demonstrates the reduced DAT reconstruction overhead of the proposed algorithms, illustrating their suitability for hostile and critical environments.

13.
All liquid heating systems, including solar thermal collectors and fossil-fueled heaters, are designed to convert low-temperature liquid to high-temperature liquid. In the presence of low- and high-temperature fluids, temperature differences can be created across thermoelectric devices to produce electricity so that the heat dissipated from the hot side of a thermoelectric device will be absorbed by the cold liquid and this preheated liquid enters the heating cycle and increases the efficiency of the heater. Consequently, because of the avoidance of waste heat on the thermoelectric hot side, the efficiency of heat-to-electricity conversion with this configuration is better than that of conventional thermoelectric power generation systems. This research aims to design and analyze a thermoelectric power generation system based on the concept described above and using a low-grade heat source. This system may be used to generate electricity either in direct conjunction with any renewable energy source which produces hot water (solar thermal collectors) or using waste hot water from industry. The concept of this system is designated “ELEGANT,” an acronym from “Efficient Liquid-based Electricity Generation Apparatus iNside Thermoelectrics.” The first design of ELEGANT comprised three rectangular aluminum channels, used to conduct warm and cold fluids over the surfaces of several commercially available thermoelectric generator (TEG) modules sandwiched between the channels. In this study, an ELEGANT with 24 TEG modules, referred to as ELEGANT-24, has been designed. Twenty-four modules was the best match to the specific geometry of the proposed ELEGANT. The thermoelectric modules in ELEGANT-24 were electrically connected in series, and the maximum output power was modeled. A numerical model has been developed, which provides steady-state forecasts of the electrical output of ELEGANT-24 for different inlet fluid temperatures.

14.
A stratified technique is proposed for testing multichip module systems. Stratification in multichip modules, arising from the different nature and procurement of the chips, is exploited to achieve a high quality level while saving a significant number of tests during assembly. Unlike conventional random testing, the proposed approach (referred to as lowest-yield-stratum-first testing) takes into account the uneven known-good yield. In this approach, the effect of the uneven known-good yield between strata is analyzed with respect to the variance of known-good yield and the sample size. Lowest-yield-stratum-first testing significantly outperforms conventional random testing and random stratified testing. By greedily testing first the chips in the stratum with the lowest known-good yield, the method is competitive even with conventional exhaustive testing, at a very small loss in quality level. A Markov-chain model is developed to analyze these testing approaches under the assumption of physically independent chip failures in multichip module systems.
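The greedy ordering at the heart of the approach can be sketched as follows (a minimal illustration with hypothetical names; the paper's Markov-chain analysis of quality level and test count is not reproduced here): sort strata by known-good yield ascending, so the chips most likely to be defective are tested first and bad modules are rejected after the fewest tests.

```python
def lyf_test_order(strata):
    """Lowest-yield-stratum-first test schedule.

    strata: dict mapping stratum name -> (known_good_yield, chip_ids).
    Returns (stratum, chip) pairs, lowest-yield stratum first.
    """
    order = sorted(strata.items(), key=lambda kv: kv[1][0])
    schedule = []
    for name, (kgd_yield, chips) in order:
        schedule.extend((name, c) for c in chips)
    return schedule
```

Testing stops for a module as soon as one chip fails, which is why front-loading the low-yield stratum saves tests on average.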

15.
Today, concerns about fossil-fuel energy and its environmental impact on power generation networks have pushed interest in renewable power resources higher than ever. Among renewables, solar and wind power generation are the leading contenders. Photovoltaic modules, however, have relatively low conversion efficiency, so maximum power point tracking (MPPT) techniques are used to extract the greatest achievable energy from a solar PV array module and thereby reduce overall system cost. Existing solar installations also suffer from reduced output when the array is not directly exposed to sunlight throughout the day. Using Internet of Things (IoT) techniques to monitor and control solar power generation can significantly enhance the performance and maintenance of a solar power plant. This work applies IoT techniques to increase the output of solar power generation at the system level: turning the photovoltaic system toward the position of maximum sunlight, extracting the maximum available power from the solar PV array, and managing battery health, using sophisticated distribution control (SDC) and independent component analysis (ICA) techniques. Simulations carried out in MATLAB with the proposed SDC and ICA logic demonstrate the efficiency of the method and its ability to track the maximum power of the PV panel, achieving over 97% efficiency.

16.
Various core-based power evaluation approaches for microprocessors, caches, memories and buses have been proposed in the past. We propose a new power evaluation technique that is targeted toward peripheral cores. Our approach is the first to combine, for peripherals, gate-level-obtained power data with a system-level simulation model written in an object-oriented language. Our approach decomposes peripheral functionality into so-called instructions. The approach can be applied with three increasingly fast methods: system simulation, trace simulation or trace analysis. We show that our models are sufficiently accurate to support power-related system-level design decisions, at a computation time that is orders of magnitude faster than gate-level simulation.

17.
Traffic sampling is viewed as a prominent strategy contributing to lightweight and scalable network measurements. Although multiple sampling techniques have been proposed and used to assist network engineering tasks, these techniques tend to address a single measurement purpose, without detailing the network overhead and computational costs involved. The lack of a modular approach when defining the components of traffic sampling techniques also makes their analysis difficult. Providing a modular view of sampling techniques and classifying their characteristics is, therefore, an important step to enlarge the sampling scope, improve the efficiency of measurement systems, and sustain forthcoming research in the area. Thus, this paper defines a taxonomy of traffic sampling techniques, resorting to a comprehensive analysis of the inner components of existing proposals. After identifying granularity, selection scheme, and selection trigger as the main components differentiating sampling proposals, the study goes deeper in characterizing these components, including insights into their computational weight. Following this taxonomy, a general-purpose architecture is established to sustain the development of flexible sampling-based measurement systems. Traveling inside packet sampling techniques, this paper contributes to a clearer positioning and comparison of existing proposals, providing a road map to assist further research and deployments in the area. Copyright © 2016 John Wiley & Sons, Ltd.

18.
In this paper, an efficient clock-delayed domino logic with a variable-strength voltage keeper is proposed. The variable strength of the keeper is achieved by applying two different body biases to the keeper. The circuits used to generate the body biases are called the capacitive body bias generator and the cross-coupled capacitive body bias generator. Compared to previous work, the body bias generator circuits presented in this paper are simpler and do not require a double or triple power supply, while consuming less area and power. To show the efficiency of the proposed technique, implementations of a carry generator circuit using the proposed techniques and the previous work are compared. Simulation results for standard CMOS technologies of 0.18 μm and 70 nm show considerable improvements in terms of power and power-delay product. In addition, the proposed technique shows much less temperature dependence than the previous work.

19.
This article presents a novel special-purpose data memory subsystem, called Xtream-Fit, suitable for embedded media processing, and demonstrates how it achieves high energy-delay efficiency across a wide range of media devices, including systems concurrently executing multiple applications under synchronization constraints. Experimental results show that Xtream-Fit delivers a substantial improvement in energy-delay product compared to general-purpose memory subsystems enhanced with state-of-the-art cache decay and SDRAM dynamic power mode control policies. Xtream-Fit's performance is predicated on a novel task-based execution model that enhances opportunities for efficient stream-granularity prefetching and aggressive software-based energy conservation techniques.

20.
This paper describes a new technique that can speed up the simulation of high-speed, wide-area packet networks by one to two orders of magnitude. Speedup is achieved by coarsening the representation of network traffic from packet-by-packet to train-by-train, where a train represents a cluster of closely spaced packets. Coarsening the timing granularity creates longer trains and makes the simulation proceed more quickly, since the cost of processing trains is independent of train size. Coarsening the timing granularity introduces, of course, a degree of approximation. This paper presents experiments that evaluate our coarse time-grain simulation technique for first-in/first-out (FIFO) switched, TCP/IP, and asynchronous transfer mode (ATM) networks carrying a mix of data and streaming traffic. We show that delay, throughput, and loss rate can frequently be estimated within a few percent via coarse time-grain simulation. This paper also describes how to apply coarse time-grain simulation to other switch disciplines. Finally, this paper introduces three more simulation techniques which together can double the performance of well-written packet simulators without sacrificing simulation accuracy. These techniques reduce the number of outstanding simulation events and reduce the cost of manipulating the event list.
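The packet-to-train coarsening step can be sketched as follows (a minimal illustration with hypothetical names; the real simulator forms trains online and also tracks sizes and flows): consecutive packets closer together than a gap threshold are merged into one train, and the simulator then processes each train as a single event.

```python
def make_trains(arrival_times, max_gap):
    """Coarsen a packet arrival sequence into trains.

    Consecutive packets spaced at most max_gap apart join the same
    train. Each train is represented as (start_time, packet_count),
    so processing cost depends on the number of trains, not packets.
    """
    trains = []
    prev = None
    for t in arrival_times:
        if prev is not None and t - prev <= max_gap:
            start, count = trains[-1]
            trains[-1] = (start, count + 1)  # extend the current train
        else:
            trains.append((t, 1))            # start a new train
        prev = t
    return trains
```

A larger `max_gap` means longer trains and fewer simulation events, at the cost of coarser timing, which is exactly the accuracy/speed trade-off the paper evaluates.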

