Full-text access type
Paid full text | 67 |
Free | 0 |
Subject category
Chemical industry | 4 |
Machinery and instruments | 2 |
Building science | 1 |
Energy and power engineering | 3 |
Light industry | 5 |
Radio electronics | 14 |
General industrial technology | 3 |
Metallurgical industry | 5 |
Automation technology | 30 |
Publication year
2024 | 3 |
2022 | 1 |
2021 | 2 |
2020 | 2 |
2019 | 2 |
2018 | 2 |
2017 | 1 |
2016 | 3 |
2015 | 1 |
2014 | 3 |
2013 | 1 |
2012 | 3 |
2011 | 4 |
2010 | 2 |
2009 | 1 |
2008 | 2 |
2007 | 5 |
2006 | 1 |
2005 | 3 |
2004 | 6 |
2003 | 5 |
2002 | 4 |
2001 | 2 |
2000 | 2 |
1999 | 1 |
1998 | 1 |
1997 | 2 |
1996 | 1 |
1993 | 1 |
67 results in total.
41.
De La Luz V., Kadayif I., Kandemir M., Sezer U. 《IEEE Transactions on Parallel and Distributed Systems》2004,15(4):289-303
Improving the memory energy consumption of programs that manipulate arrays is an important problem, as these codes spend large amounts of energy accessing off-chip memory. We propose a data-driven strategy to optimize the memory energy consumption of a banked memory system. Our compiler-based strategy modifies the original execution order of loop iterations in array-dominated applications to increase the length of the time periods in which memory banks are idle (i.e., not accessed by any loop iteration). To achieve this, it first classifies loop iterations according to their bank access patterns and then, with the help of a polyhedral tool, tries to bring iterations with similar bank access patterns close together. Increasing the idle periods of memory banks brings two major benefits: first, it allows us to place more memory banks into low-power operating modes and, second, it enables us to use a more aggressive (i.e., more energy-saving) operating mode for a given bank. The proposed strategy can reduce memory energy consumption in both sequential and parallel applications. It has been implemented in an experimental compiler using a polyhedral tool and evaluated on nine array-dominated applications, on both a cacheless system and a system with cache memory. Our experimental results indicate that the proposed strategy is very successful in reducing memory system energy, improving it by as much as 36.8 percent over a strategy that uses low-power modes without optimizing the data access pattern. Our results also show that optimizations targeting off-chip memory energy can produce very different results from those that target only cache locality.
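The core of the iteration-reordering idea can be illustrated with a small sketch. The bank geometry, access function, and one-level grouping below are invented for illustration; the paper's compiler pass works on polyhedral iteration spaces, not explicit index lists.

```python
# Hedged sketch: group loop iterations by the set of memory banks they touch,
# so iterations with identical bank-access patterns run back-to-back and the
# remaining banks see longer idle periods. Bank size and the access pattern
# are illustrative assumptions, not the paper's actual model.

BANK_SIZE = 4  # array elements per memory bank (assumed, tiny for illustration)

def bank_of(index):
    """Map a flat array index to its memory bank."""
    return index // BANK_SIZE

def group_iterations(iterations, accessed_indices):
    """Cluster iterations whose accesses hit the same set of banks."""
    groups = {}
    for it in iterations:
        pattern = frozenset(bank_of(i) for i in accessed_indices(it))
        groups.setdefault(pattern, []).append(it)
    # New execution order: all iterations of one bank pattern, then the next,
    # so banks outside the current pattern stay idle for longer stretches.
    return [it for pattern in sorted(groups, key=sorted) for it in groups[pattern]]

# Hypothetical access pattern: iteration i reads a[i] and a[(i*3) % 16].
order = group_iterations(range(8), lambda i: [i, (i * 3) % 16])
# Iterations 2 and 6 (same bank set {0,1}) are now adjacent in `order`.
```

Note how iterations that were interleaved in the original order but share a bank-access pattern end up adjacent, which is what lengthens the idle windows that low-power bank modes can exploit.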
42.
Compiler-directed locality optimization techniques are effective in reducing the number of cycles spent in off-chip memory accesses. Recently, methods have been developed that transform the memory layouts of data structures at compile time to improve the spatial locality of nested loops beyond current control-centric (loop-nest-based) optimizations. Most of these data-centric transformations use a single static (program-wide) memory layout for each array. A disadvantage of these static layout-based locality enhancement strategies is that they might fail to optimize codes that manipulate arrays demanding different layouts in different parts of the code. We introduce a new approach that extends current static layout optimization techniques by associating different memory layouts with the same array in different parts of the code. We call this strategy "quasidynamic layout optimization." In this strategy, the compiler determines memory layouts (for different parts of the code) at compile time, but layout conversions occur at runtime. We show that the possibility of dynamically changing memory layouts during execution adds a new dimension to the data locality optimization problem. Our strategy employs a static layout optimizer module as a building block and, by repeatedly invoking it for different parts of the code, checks whether runtime layout modifications bring additional benefits beyond static optimization. Our experiments indicate significant improvements in execution time over static layout-based locality-enhancing techniques.
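A minimal sketch of the quasidynamic decision can be written down directly: assign each region its preferred layout at "compile time," but insert a runtime conversion (a transpose, in the two-dimensional case) only when the estimated locality benefit exceeds the conversion cost. The cost model and region encoding below are made-up stand-ins for the paper's static layout optimizer.

```python
# Hedged sketch of quasidynamic layout optimization: layouts are chosen per
# region at compile time; conversions happen at runtime only when they pay off.
# Benefits and the conversion cost are invented illustrative numbers.

def plan_layouts(regions, conversion_cost):
    """regions: list of (preferred_layout, benefit_of_preferred_over_other).
    Returns the layout actually used in each region."""
    plan = [regions[0][0]]  # the first region gets its preferred layout for free
    for layout, benefit in regions[1:]:
        if layout != plan[-1] and benefit > conversion_cost:
            plan.append(layout)       # convert: the benefit pays for the transpose
        else:
            plan.append(plan[-1])     # keep the current layout, skip the conversion
    return plan

def transpose(matrix):
    """The runtime layout conversion: row-major <-> column-major."""
    return [list(row) for row in zip(*matrix)]

# Region 2 strongly prefers column-major (benefit 10 > cost 3); region 3's
# mild preference for row-major (2 < 3) does not justify converting back.
plan = plan_layouts([("row", 0), ("col", 10), ("row", 2)], conversion_cost=3)
```

The interesting case is the third region: a purely static scheme would have to pick one program-wide layout, while the quasidynamic plan converts once and then deliberately declines a second, unprofitable conversion.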
43.
Kadayif I., Kandemir M., Chen G., Ozturk O., Karakoy M., Sezer U. 《IEEE Transactions on Parallel and Distributed Systems》2005,16(5):396-411
With energy consumption becoming one of the first-class optimization parameters in computer system design, compilation techniques that consider performance and energy simultaneously are expected to play a central role. In particular, compiling a given application code under performance and energy constraints is becoming an important problem. In this paper, we focus on an on-chip multiprocessor architecture and present a set of code optimization strategies. We first evaluate an adaptive loop parallelization strategy (i.e., a strategy that allows each loop nest to execute using a different number of processors if doing so is beneficial) and measure the potential energy savings when unused processors during execution of a nested loop are shut down (i.e., placed into a power-down or sleep state). Our results show that shutting down unused processors can lead to as much as 67 percent energy savings at the expense of up to 17 percent performance loss in a set of array-intensive applications. To eliminate this performance penalty, we also discuss and evaluate a processor preactivation strategy based on compile-time analysis of nested loops. Based on our experiments, we conclude that an adaptive loop parallelization strategy combined with idle processor shut down and preactivation can be very effective in reducing energy consumption without increasing execution time. We then generalize our strategy and present an application parallelization strategy based on integer linear programming (ILP). Given an array-intensive application, our optimization strategy determines the number of processors to be used in executing each loop nest based on the objective function and additional compilation constraints provided by the user/programmer. Our initial experience with this constraint-based optimization strategy shows that it is very successful in optimizing array-intensive applications on on-chip multiprocessors under multiple energy and performance constraints.
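The per-nest processor-count choice can be sketched as a small constrained search. An exhaustive enumeration stands in for the paper's ILP formulation here, and both the time model (ideal speedup) and the energy model (parallelization overhead plus cheap idle power for shut-down processors) are invented for illustration.

```python
# Hedged sketch of adaptive loop parallelization: per loop nest, pick a
# processor count that minimizes total energy subject to a total-time budget.
# Brute force replaces the paper's ILP; the cost models are assumptions.
from itertools import product

def time_of(work, p):
    return work / p  # ideal speedup on p processors (assumed)

def energy_of(work, p, total_procs, idle_power=0.1):
    active = work * (1 + 0.2 * (p - 1))   # parallelization overhead (assumed)
    idle = (total_procs - p) * time_of(work, p) * idle_power  # shut-down cost
    return active + idle

def choose_counts(nest_work, total_procs, time_budget):
    """Pick a processor count for each loop nest, minimizing energy
    while keeping the summed execution time within the budget."""
    best = None
    for counts in product(range(1, total_procs + 1), repeat=len(nest_work)):
        t = sum(time_of(w, p) for w, p in zip(nest_work, counts))
        if t > time_budget:
            continue  # violates the performance constraint
        e = sum(energy_of(w, p, total_procs) for w, p in zip(nest_work, counts))
        if best is None or e < best[0]:
            best = (e, counts)
    return best[1] if best else None

# Two hypothetical nests on a 4-processor on-chip multiprocessor.
counts = choose_counts(nest_work=[8.0, 2.0], total_procs=4, time_budget=5.0)
```

Under these toy numbers, neither nest uses all four processors: once the time budget is met, adding processors only buys overhead, which is exactly the situation where shutting down the unused ones saves energy.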
44.
Kandemir M., Choudhary A., Ramanujam J., Kandaswamy M.A. 《IEEE Transactions on Parallel and Distributed Systems》2000,11(7):648-668
This paper presents a unified framework that optimizes out-of-core programs by exploiting locality and parallelism and reducing communication overhead. For out-of-core problems, where the data set sizes far exceed the size of the available in-core memory, it is particularly important to exploit the memory hierarchy by optimizing the I/O accesses. We present algorithms that consider both iteration space (loop) and data space (file layout) transformations in a unified framework. We show that the performance of an out-of-core loop nest containing references to out-of-core arrays can be improved by using a suitable combination of file layout choices and loop restructuring transformations. Our approach considers array references one by one and attempts to optimize each reference for parallelism and locality. When there are references for which parallelism optimizations do not work, communication is vectorized so that data transfer can be performed before the innermost loop. Results from hand-compiled codes on the IBM SP-2 and Intel Paragon distributed-memory message-passing architectures show that this approach reduces execution times and improves overall speedups. In addition, we extend the base algorithm to work with file layout constraints and show how it is useful for optimizing programs that consist of multiple loop nests.
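Why the file-layout choice matters for out-of-core codes can be shown with a tiny I/O cost sketch: count the disk-block fetches a traversal incurs under each candidate layout and keep the cheaper one. The block size, single-block cache, and traversal below are illustrative assumptions, not the paper's cost model.

```python
# Hedged sketch of the layout-choice idea for out-of-core arrays: a loop that
# walks columns is cheap if the file is stored column-major and expensive if
# it is stored row-major, because each element lands in a different block.

BLOCK = 4  # array elements per disk block (assumed)

def block_fetches(traversal, linearize):
    """Count disk-block fetches for a traversal under a given file layout,
    assuming only the most recently read block stays in memory."""
    fetches, cached = 0, None
    for (i, j) in traversal:
        b = linearize(i, j) // BLOCK
        if b != cached:
            fetches, cached = fetches + 1, b
    return fetches

N = 4
column_walk = [(i, j) for j in range(N) for i in range(N)]  # column-by-column loop
row_major = lambda i, j: i * N + j   # candidate file layout 1
col_major = lambda i, j: j * N + i   # candidate file layout 2

layouts = {"row_major": row_major, "col_major": col_major}
best = min(layouts, key=lambda name: block_fetches(column_walk, layouts[name]))
```

For this 4x4 array the mismatched layout fetches a block per element (16 fetches) while the matched one fetches each block once (4 fetches), which is the kind of I/O gap the unified loop-and-layout framework is designed to close.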
45.
Memory system optimization of embedded software (cited 1 time: 0 self-citations, 1 by others)
Wolf W., Kandemir M. 《Proceedings of the IEEE》2003,91(1):165-182
The memory system often determines a great deal about the behavior of an embedded system: performance, power, and manufacturing cost. A great many software techniques have been developed over the past decade to optimize software to improve these characteristics. Embedded software design and compilation can take advantage of two important facts: the hardware target is known; and we can spend more time and computational effort to optimize the software. This paper surveys techniques for optimizing memory behavior of embedded software and points to some future trends in the field.
46.
47.
48.
In vitro study of encrustation is an important part of the assessment of materials as potential alloplasts or devices in the urinary tract. This modified semi-automated technique comprises a circular reaction chamber with an encrustation mixture, the level of which is controlled by a float switch operating the exit peristaltic pump. The composition of the reactants simulates infected urine with alkaline pH. Results of a preliminary study of the deposits by scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) microanalysis are consistent with struvite and hydroxyapatite, similar to the main minerals deposited on urinary catheters. It is a relatively simple, effective and inexpensive set-up for the study of encrustation on materials.
49.
50.
Isil Oz, Haluk Rahmi Topcuoglu, Mahmut Kandemir, Oguz Tosun 《Journal of Systems Architecture》2012,58(3-4):160-176
Executing multiple applications concurrently is an important way of utilizing the computational power provided by emerging chip multiprocessor (CMP) architectures. However, this multiprogramming brings a resource management and partitioning problem, for which one can find numerous examples in the literature. Most of the resource partitioning schemes proposed to date focus on performance- or energy-centric strategies. In contrast, this paper explores reliability-aware core partitioning strategies targeting CMPs. One of our schemes considers both performance and reliability objectives through a novel combined metric called the vulnerability-delay product (VDP). The vulnerability component of this metric is represented by the Thread Vulnerability Factor (TVF), a recently proposed metric for quantifying thread vulnerability on multicores; the execution time of the given application represents the delay component. As part of our experimental analysis, the proposed core partitioning schemes are compared with respect to normalized weighted speedup, normalized weighted reliability loss, and normalized weighted vulnerability-delay-product gain for various workloads of benchmark applications.
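A VDP-style partitioning choice can be sketched as a search over core splits. The scaling and vulnerability models below are invented stand-ins (the paper uses TVF measurements and optimizes a normalized VDP gain rather than the raw product).

```python
# Hedged sketch of reliability-aware core partitioning: for each candidate
# split of cores between two co-running applications, estimate each one's
# vulnerability (a TVF-like factor) and execution time, then pick the split
# with the lowest summed vulnerability-delay product (VDP).

def exec_time(work, cores):
    return work / cores  # ideal scaling (assumption)

def tvf(base_tvf, cores):
    # Assumed model: more threads expose more vulnerable architectural state.
    return base_tvf * (1 + 0.1 * (cores - 1))

def best_partition(apps, total_cores):
    """apps: list of (work, base_tvf) pairs for two co-scheduled applications.
    Returns the (cores_app0, cores_app1) split minimizing summed VDP."""
    best = None
    for c0 in range(1, total_cores):
        split = (c0, total_cores - c0)
        vdp = sum(tvf(b, c) * exec_time(w, c)
                  for (w, b), c in zip(apps, split))
        if best is None or vdp < best[0]:
            best = (vdp, split)
    return best[1]

# App 0 is compute-heavy, app 1 is vulnerability-heavy (illustrative numbers).
split = best_partition([(8.0, 1.0), (4.0, 2.0)], total_cores=4)
```

The point of the combined metric shows up in the search: a performance-only scheme would hand the compute-heavy application as many cores as possible, whereas the VDP objective balances the delay reduction against the vulnerability growth that extra threads bring.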