Full-text access type
Paid full text | 69 articles |
Free full text | 1 article |
Subject classification
Chemical Industry | 7 articles |
Machinery & Instrumentation | 2 articles |
Architecture Science | 1 article |
Energy & Power Engineering | 3 articles |
Light Industry | 5 articles |
Radio Electronics | 14 articles |
General Industrial Technology | 3 articles |
Metallurgical Industry | 5 articles |
Automation Technology | 30 articles |
Publication year
2024 | 1 article |
2022 | 1 article |
2021 | 3 articles |
2020 | 2 articles |
2019 | 2 articles |
2018 | 2 articles |
2017 | 1 article |
2016 | 3 articles |
2015 | 1 article |
2014 | 3 articles |
2013 | 2 articles |
2012 | 3 articles |
2011 | 4 articles |
2010 | 2 articles |
2009 | 2 articles |
2008 | 2 articles |
2007 | 5 articles |
2006 | 1 article |
2005 | 4 articles |
2004 | 6 articles |
2003 | 5 articles |
2002 | 4 articles |
2001 | 2 articles |
2000 | 2 articles |
1999 | 1 article |
1998 | 2 articles |
1997 | 2 articles |
1996 | 1 article |
1993 | 1 article |
Sort order: 70 results found; search took 15 ms
1.
Ramanujam, J.; Hong, Jinpyo; Kandemir, M.; Narayan, A.; Agarwal, A. 《IEEE Transactions on Signal Processing》 2006, 54(1): 286-294
Most embedded systems have a limited amount of memory. In contrast, the memory requirements of the digital signal processing (DSP) and video processing codes (in nested loops, in particular) running on embedded systems are significant. This paper addresses the problem of estimating and reducing the amount of memory needed for transfers of data in embedded systems. First, the problem of estimating the region associated with a statement, i.e., the set of elements referenced by a statement during the execution of nested loops, is analyzed. For a fixed execution ordering, a quantitative analysis of the number of elements referenced is presented; exact expressions for uniformly generated references and close upper and lower bounds for nonuniformly generated references are derived. Second, in addition to presenting an algorithm that computes the total memory required, this paper also discusses the effect of transformations (that change the execution ordering) on the lifetimes of array variables, i.e., the time between the first and last accesses to a given array location. The term maximum window size is introduced, and quantitative expressions are derived to compute it. A detailed analysis of the effect of unimodular transformations on data locality, including the calculation of the maximum window size, is presented.
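The quantities this abstract names can be made concrete with a small brute-force sketch: enumerate the access trace of a toy loop nest with two uniformly generated references, then compute the referenced region and the maximum window size (the longest first-to-last-access distance of any array location). The loop nest and both helper names are illustrative, not taken from the paper.

```python
def access_trace(n):
    """Enumerate array elements touched by a toy 2-deep loop nest that,
    at iteration (i, j), references a[i][j] and a[i+1][j] (uniformly
    generated: same access matrix, different constant offsets)."""
    trace = []
    for i in range(n):
        for j in range(n):
            trace.append((i, j))      # reference a[i][j]
            trace.append((i + 1, j))  # reference a[i+1][j]
    return trace

def region_size(trace):
    # Size of the region: number of distinct elements referenced.
    return len(set(trace))

def max_window_size(trace):
    # Lifetime of a location = distance between its first and last access;
    # the maximum window size is the largest such lifetime in the trace.
    first, last = {}, {}
    for t, elem in enumerate(trace):
        first.setdefault(elem, t)
        last[elem] = t
    return max(last[e] - first[e] for e in first)
```

For this nest the brute force reproduces the closed forms one would derive analytically: the region holds `(n+1)*n` elements, and the maximum window size is `2n-1` accesses.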
2.
3.
Ding, Yang; Kandemir, Mahmut; Raghavan, Padma; Irwin, Mary Jane 《Journal of Parallel and Distributed Computing》 2009
In parallel with the changes in both the architecture domain (the move toward chip multiprocessors (CMPs)) and the application domain (the move toward increasingly data-intensive workloads), issues such as performance, energy efficiency, and CPU availability are becoming increasingly critical. CPU availability can change dynamically for several reasons, such as thermal overload, an increase in transient errors, or operating system scheduling. An important question in this context is how to adapt, in a CMP, the execution of a given application to CPU availability changes at runtime. This paper studies this problem, targeting the energy-delay product (EDP) as the main metric to optimize. We first observe that, in adapting the application execution to varying CPU availability, one needs to consider the number of CPUs to use, the number of application threads to accommodate, and the voltage/frequency levels to employ (if the CMP has this capability). We then propose using helper threads to adapt the application execution to CPU availability changes, with the goal of minimizing the EDP. The helper thread runs in parallel with the application execution threads and tries to determine the ideal number of CPUs, threads, and voltage/frequency levels to employ at any given point in the execution. We illustrate this idea using four applications (Fast Fourier Transform, MultiGrid, LU decomposition, and Conjugate Gradient) under different execution scenarios. The results collected through our experiments are very promising and indicate that significant EDP reductions are possible using helper threads. For example, we achieved up to 66.3%, 83.3%, 91.2%, and 94.2% savings in EDP when adjusting all the parameters properly in FFT, MG, LU, and CG, respectively. We also discuss how our approach can be extended to address multi-programmed workloads.
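The helper thread's decision step can be sketched as an exhaustive search over (CPU count, thread count, frequency) triples that respects the current CPU availability and minimizes a modeled EDP. The analytic cost model below (perfect scaling, dynamic power proportional to f³ per CPU) is an illustrative assumption, not the paper's model.

```python
from itertools import product

def pick_config(available_cpus, freq_levels, work=1e9):
    """Toy sketch of the helper thread's decision: enumerate
    (cpus, threads, frequency) combinations allowed by the current CPU
    availability and return the one with the lowest energy-delay product
    under a simple analytic cost model (illustrative, not the paper's)."""
    best = None
    for cpus, freq in product(range(1, available_cpus + 1), freq_levels):
        threads = cpus                        # one thread per CPU in this sketch
        delay = work / (cpus * freq)          # perfect-scaling runtime model
        power = cpus * (freq ** 3) * 1e-27    # dynamic power ~ f^3 per CPU
        edp = power * delay * delay           # energy * delay = P * d^2
        cand = (edp, cpus, threads, freq)
        if best is None or cand < best:
            best = cand
    return {"cpus": best[1], "threads": best[2], "freq": best[3], "edp": best[0]}
```

Under this particular model EDP grows with frequency and shrinks with CPU count, so the search picks all available CPUs at the lowest frequency; a real helper thread would instead feed in measured delay and power at runtime.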
4.
Chen, Guilin; Kandemir, M. 《IEEE Transactions on Parallel and Distributed Systems》 2008, 19(9): 1201-1214
One of the critical goals in code optimization for multiprocessor-system-on-a-chip (MPSoC) architectures is to minimize the number of off-chip memory accesses, since such accesses can be extremely costly from both the performance and power angles. While conventional data locality optimization techniques can be used to improve the data access pattern of each processor independently, such techniques usually do not consider locality for shared data. This paper proposes a strategy that reduces the number of off-chip references due to shared data. It achieves this goal by restructuring a parallelized application code in such a fashion that a given data block is accessed by the parallel processors within the same time frame, so that its reuse is maximized while it is in the on-chip memory space. This tends to minimize the number of off-chip references, since the accesses to a given data block are clustered within a short period of execution time. Our approach employs a polyhedral tool that helps us isolate the computations that manipulate a given data block. To test the effectiveness of our approach, we implemented it using a publicly available compiler infrastructure and conducted experiments with twelve data-intensive embedded applications. Our results show that optimizing data locality for shared data elements is very useful in practice.
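The clustering idea can be illustrated with a small scheduling sketch: partition each processor's iterations by the data block they touch, then emit a common block-by-block schedule so all processors work on the same block within the same time frame. The function and parameter names are illustrative; the paper's actual restructuring is done with a polyhedral tool at the loop-nest level.

```python
from collections import defaultdict

def cluster_by_block(iters_per_proc, block_of):
    """Sketch of block-synchronized scheduling: reorder each processor's
    iterations so every processor touches a given data block in the same
    time frame, maximizing the block's reuse while it is on-chip.
    `block_of(it)` maps an iteration to the block it accesses."""
    per_proc_groups = []
    all_blocks = set()
    for iters in iters_per_proc:
        groups = defaultdict(list)
        for it in iters:
            groups[block_of(it)].append(it)
        per_proc_groups.append(groups)
        all_blocks.update(groups)
    # Common block order: each time frame holds every processor's share
    # of one block, so its off-chip fetch is amortized across processors.
    return [[g.get(b, []) for g in per_proc_groups] for b in sorted(all_blocks)]
```

For example, two processors whose iterations alternate between two blocks get a schedule with one time frame per block, each frame listing both processors' iterations on that block.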
5.
Therapeutic effects of silymarin and naringin on methotrexate-induced nephrotoxicity in rats: Biochemical evaluation of anti-inflammatory, antiapoptotic, and antiautophagic properties (free PDF full text)
6.
Hard-sphere molecular dynamics simulations of lid-driven microcavity gas flow are conducted with various subsonic lid speeds and lid temperatures. Simulations with faster and colder lids show streamlines of stronger primary vortices. Variations of the mass and energy centers with respect to lid speed and temperature are examined; the center of energy is less sensitive to the imposed lid conditions than the center of gravity. Although the moving lid imparts energy into the fluid, the average energy within the cavity is quite insensitive to the subsonic lid speed, owing to the change in impingement rates on the fixed-temperature walls. Compressibility effects are observed at both top corners even at low Mach numbers widely considered to lie within the incompressible flow regime. While a high Knudsen number causes considerable property slip near the lid, two-dimensional pressure, density, and temperature plots of excellent quality are generated. The results are promising for the use of molecular dynamics simulations in compressible vortex flow analyses, while providing insight into microfluidics and nanofluidics in the context of molecular mass, momentum, and heat transfer in microscale and nanoscale systems.
7.
Kandaswamy, M.A.; Kandemir, M.; Choudhary, A.; Bernholdt, D. 《IEEE Transactions on Parallel and Distributed Systems》 2002, 13(12): 1303-1319
Many large-scale applications have significant I/O requirements as well as computational and memory requirements. Unfortunately, the limited number of I/O nodes provided in a typical configuration of modern message-passing distributed-memory architectures, such as the Intel Paragon and IBM SP-2, severely limits the I/O performance of these applications. We examine several software optimization techniques and evaluate their effects in five different I/O-intensive codes from both small and large application domains. Our goals in this study are twofold. First, we want to understand the behavior of large-scale data-intensive applications and the impact of I/O subsystems on their performance and vice versa. Second, and more importantly, we strive to determine solutions for improving the applications' performance through a mix of software techniques. Our results reveal that different applications benefit from different optimizations. For example, we found that some applications benefit from file layout optimizations whereas others take advantage of collective I/O. A combination of architectural and software solutions is normally needed to obtain good I/O performance. For example, we show that with a limited number of I/O resources, it is possible to obtain good performance by using appropriate software optimizations. We also show that beyond a certain level, imbalance in the architecture results in performance degradation even with optimized software, thereby indicating the necessity of an increase in I/O resources.
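One of the software techniques the abstract mentions, collective I/O, rests on coalescing: the small, scattered byte ranges requested by many processes are merged into a few large contiguous file requests before hitting the I/O nodes. A minimal sketch of that merge step, with illustrative names (this is the general two-phase-I/O idea, not the paper's implementation):

```python
def coalesce_requests(requests):
    """Merge per-process (offset, length) byte ranges, gathered from all
    processes in a collective I/O call, into the fewest contiguous file
    ranges, so a few large requests replace many small ones."""
    merged = []
    for off, length in sorted(requests):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Overlaps or abuts the previous range: extend it.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            merged.append((off, length))
    return merged
```

Three interleaved 4-byte requests at offsets 0, 4, and 8 (say, one per process) collapse into a single 12-byte request, which is exactly why collective I/O helps when per-process accesses are small and strided.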
8.
Kim, Soontae; Vijaykrishnan, Narayanan; Kandemir, Mahmut; Irwin, Mary Jane 《Design Automation for Embedded Systems》 2004, 9(1): 5-18
As technology scales down into the deep-submicron regime, leakage energy is becoming a dominant source of energy consumption. Leakage energy is generally proportional to the area of a circuit, and caches constitute a large portion of the die area; therefore, there has been much effort to reduce leakage energy in caches. Most techniques have targeted cell leakage energy, but bitline leakage energy is critical as well. To this end, we propose a predictive precharging scheme to reduce bitline leakage energy consumption. Results show that the energy savings are significant, with little performance degradation, and that predictive precharging becomes more beneficial in more aggressively scaled technologies.
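The trade-off behind any predictive precharging scheme can be modeled at a very coarse level: keep only the predicted subbank's bitlines precharged (saving leakage in the others), and pay an on-demand precharge delay on a misprediction. The simulator and the "same bank as last access" predictor below are illustrative assumptions, not the paper's circuit or predictor.

```python
def count_mispredictions(accesses, predict):
    """Coarse model of predictive precharging: before each access, the
    predictor names one subbank to keep precharged; any access to a
    different subbank counts as a misprediction (on-demand precharge
    delay). Returns the misprediction count for the access sequence."""
    mispredictions = 0
    last_bank = accesses[0] if accesses else 0
    for bank in accesses:
        if predict(last_bank) != bank:
            mispredictions += 1
        last_bank = bank
    return mispredictions

# Simplest predictor: assume the next access hits the most recently
# accessed subbank (exploits spatial locality of cache references).
same_bank_predictor = lambda last_bank: last_bank
```

On a sequence with good spatial locality this predictor mispredicts only at subbank transitions, which is the regime where such a scheme can trade a few extra precharge delays for large bitline leakage savings.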
9.
Kandemir, M.; Vijaykrishnan, N.; Irwin, M.J.; Ye, Wu 《IEEE Transactions on Very Large Scale Integration (VLSI) Systems》 2001, 9(6): 801-804
Optimizing for energy constraints is of critical importance due to the proliferation of battery-operated embedded devices; thus, it is important to explore both hardware and software solutions for optimizing energy. The focus of high-level compiler optimizations has traditionally been on improving performance. In this paper, we present an experimental evaluation of the effect of several state-of-the-art high-level compiler optimizations on energy consumption, considering both the processor core (datapath) and the memory system, in contrast to many previous works that have considered them in isolation.
10.
Soft error issues in low-power caches (total citations: 1; self-citations: 0; citations by others: 1)
Degalahal, V.; Li, Lin; Narayanan, V.; Kandemir, M.; Irwin, M.J. 《IEEE Transactions on Very Large Scale Integration (VLSI) Systems》 2005, 13(10): 1157-1166
As technology scales, reducing leakage power and improving the reliability of data stored in memory cells are both important and challenging. While lower threshold voltages increase leakage, lower supply voltages and smaller nodal capacitances reduce energy consumption but increase soft error rates. In this work, we present a comprehensive study of the impact of soft errors on low-power cache design. First, we study the effect on soft error rates of circuit-level techniques used to reduce leakage energy consumption. Our results using custom designs show that many of these approaches may increase soft error rates compared to a standard 6T SRAM. We also validate the effects of voltage scaling on the soft error rate by performing accelerated tests on off-the-shelf SRAM-based chips using a neutron beam source. Next, we study the impact of cache decay and drowsy caches, two commonly used architectural-level leakage reduction approaches, on cache reliability. Our results indicate that these leakage optimization techniques change the reliability of the cache memory; more importantly, we demonstrate that there is a tradeoff between optimizing for leakage power and improving immunity to soft errors. We also study the impact of error-correcting codes on soft error rates and, based on this study, propose an adaptive error-correcting scheme that reduces leakage energy consumption and improves reliability.