Full text (subscription): 8 articles
Full text (free): 0 articles
By subject: Radio & electronics (1); Metallurgical industry (3); Automation technology (4)
By year: 2015 (1); 2005 (1); 2004 (3); 1998 (1); 1997 (1); 1995 (1)
Sorted results: 8 matches found (search time: 15 ms)
1.
2.
This paper presents a compiler strategy to optimize data accesses in regular array-intensive applications running on embedded multiprocessor environments. Specifically, we propose an optimization algorithm that aims at reducing the extra off-chip memory accesses caused by interprocessor communication. This is achieved by increasing the application-wide reuse of data that resides in the scratch-pad memories of the processors. Our results, obtained using four array-intensive image processing applications, indicate that exploiting interprocessor data sharing can significantly reduce the energy-delay product on a four-processor embedded system.
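For illustration only, the fragment below is a hypothetical sketch of how scratch-pad data sharing can look at the code level; it is not the authors' algorithm, and the primitives spm_read_local, spm_read_remote, and offchip_read are assumed. The idea shown is that a processor needing the boundary row of its neighbour's tile reads the copy already resident in the neighbour's scratch-pad memory instead of issuing another off-chip access.

```c
/* Hypothetical sketch: reuse a neighbour's scratch-pad (SPM) copy of a
 * boundary row instead of re-reading it from off-chip memory.          */
#define TILE_ROWS 64

extern float spm_read_local(int row, int col);             /* this core's SPM   */
extern float spm_read_remote(int peer, int row, int col);  /* a peer core's SPM */
extern float offchip_read(long global_row, int col);       /* off-chip memory   */

/* Read the row just above this processor's tile of a 2D array. */
float halo_above(int my_id, long tile_base_row, int col)
{
    if (my_id > 0)
        /* Reuse the copy the previous processor already fetched into its SPM. */
        return spm_read_remote(my_id - 1, TILE_ROWS - 1, col);
    /* First processor: no on-chip copy exists, fall back to off-chip memory. */
    return offchip_read(tile_base_row - 1, col);
}
```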
3.
To determine the effect of smoking on Helicobacter pylori eradication and ulcer healing, we investigated 232 patients with H. pylori-positive duodenal ulcer. Patients were given one of seven different treatment protocols and divided into three groups according to smoking habits: group 1 (n = 128) consisted of non-smokers, group 2 (n = 65) of mild smokers (5-20 cigarettes/day), and group 3 (n = 39) of heavy smokers (> 20/day). H. pylori eradication and ulcer healing were assessed eight weeks after completion of therapy. The overall eradication rate was 66%, and 68%, 66%, and 59% in the three groups, respectively; these rates did not differ significantly between groups. Complete ulcer healing was achieved in 84% of all patients, and healing rates likewise showed no significant difference between groups (85%, 83%, and 82%, respectively). These results suggest that smoking status does not influence H. pylori eradication or duodenal ulcer healing rates at eight weeks in patients on different treatment schedules.
4.
5.
With energy consumption becoming one of the first-class optimization parameters in computer system design, compilation techniques that consider performance and energy simultaneously are expected to play a central role. In particular, compiling a given application code under performance and energy constraints is becoming an important problem. In this paper, we focus on an on-chip multiprocessor architecture and present a set of code optimization strategies. We first evaluate an adaptive loop parallelization strategy (i.e., a strategy that allows each loop nest to execute using a different number of processors if doing so is beneficial) and measure the potential energy savings when processors left unused during execution of a nested loop are shut down (i.e., placed into a power-down or sleep state). Our results show that shutting down unused processors can lead to as much as 67 percent energy savings at the expense of up to 17 percent performance loss in a set of array-intensive applications. To eliminate this performance penalty, we also discuss and evaluate a processor preactivation strategy based on compile-time analysis of nested loops. Based on our experiments, we conclude that adaptive loop parallelization combined with idle-processor shutdown and preactivation can be very effective in reducing energy consumption without increasing execution time. We then generalize our strategy and present an application parallelization strategy based on integer linear programming (ILP). Given an array-intensive application, our optimization strategy determines the number of processors to be used in executing each loop nest, based on the objective function and additional compilation constraints provided by the user/programmer. Our initial experience with this constraint-based optimization strategy shows that it is very successful in optimizing array-intensive applications on on-chip multiprocessors under multiple energy and performance constraints.
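The fragment below is a hypothetical sketch of what the runtime side of adaptive parallelization with processor shutdown and preactivation could look like; the primitives proc_sleep, proc_wake, and run_nest_on are assumed, and the per-nest processor counts are illustrative rather than taken from the paper. The paper's actual mechanism is compiler-directed and determines more precisely when wake-ups should be issued.

```c
/* Hypothetical runtime skeleton: each loop nest runs on a compiler-chosen
 * number of processors; unused cores sleep, and cores needed by the next
 * nest are woken ahead of time ("preactivation").                        */
#define NUM_PROCS 4

extern void proc_sleep(int id);
extern void proc_wake(int id);                     /* wake-up takes some cycles  */
extern void run_nest_on(int nest_id, int nprocs);  /* run nest on procs 0..n-1   */

/* Compiler-selected processor counts, one per loop nest (illustrative). */
static const int procs_for_nest[] = { 4, 2, 1, 4 };
#define NUM_NESTS (int)(sizeof procs_for_nest / sizeof procs_for_nest[0])

void run_application(void)
{
    for (int n = 0; n < NUM_NESTS; n++) {
        int need = procs_for_nest[n];

        /* Shut down processors this nest does not use. */
        for (int p = need; p < NUM_PROCS; p++)
            proc_sleep(p);

        run_nest_on(n, need);

        /* Preactivation: wake the extra processors the next nest needs.
         * (In the paper's strategy the wake-up is issued early enough,
         * based on compile-time analysis, to hide its latency.)         */
        if (n + 1 < NUM_NESTS)
            for (int p = need; p < procs_for_nest[n + 1]; p++)
                proc_wake(p);
    }
}
```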
6.
Improving the memory energy consumption of programs that manipulate arrays is an important problem, as these codes spend large amounts of energy accessing off-chip memory. We propose a data-driven strategy to optimize memory energy consumption in a banked memory system. Our compiler-based strategy modifies the original execution order of loop iterations in array-dominated applications to increase the length of the time periods in which memory banks are idle (i.e., not accessed by any loop iteration). To achieve this, it first classifies loop iterations according to their bank access patterns and then, with the help of a polyhedral tool, tries to bring iterations with similar bank access patterns close together. Increasing the idle periods of memory banks brings two major benefits: first, it allows us to place more memory banks into low-power operating modes and, second, it enables us to use a more aggressive (i.e., more energy-saving) operating mode for a given bank instead of a less aggressive one. The proposed strategy can reduce memory energy consumption in both sequential and parallel applications. It has been implemented in an experimental compiler using a polyhedral tool and evaluated using nine array-dominated applications on both a cacheless system and a system with cache memory. Our experimental results indicate that the proposed strategy is very successful in reducing memory system energy and improves memory energy by as much as 36.8 percent over a strategy that uses low-power modes without optimizing the data access pattern. Our results also show that optimizations targeting off-chip memory energy can produce very different results from those targeting only cache locality.
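The following is a simplified, hypothetical illustration of the reordering idea. In the paper the transformation is derived at compile time with a polyhedral tool; this sketch simply hard-codes a bank-by-bank traversal, assumes a block distribution of the array across banks, and uses an assumed bank_set_mode primitive for the low-power modes.

```c
/* Hypothetical sketch: group iterations by the bank they touch so each
 * bank has one long active period followed by a long idle period that
 * can be spent in an aggressive low-power mode.                        */
#define N          1024
#define NUM_BANKS  4
#define BANK_SIZE  (N / NUM_BANKS)

enum bank_mode { ACTIVE, NAP, POWER_DOWN };
extern void  bank_set_mode(int bank, enum bank_mode m);  /* assumed primitive */
extern float data[N];                                    /* block-distributed  */
extern void  process(float *x);

void sweep_bank_by_bank(void)
{
    for (int b = 0; b < NUM_BANKS; b++) {
        bank_set_mode(b, ACTIVE);
        /* Run together all iterations whose accesses fall into bank b. */
        for (int i = b * BANK_SIZE; i < (b + 1) * BANK_SIZE; i++)
            process(&data[i]);
        /* Bank b will not be touched again, so its long idle period
         * allows the most aggressive low-power mode.                  */
        bank_set_mode(b, POWER_DOWN);
    }
}
```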
7.
Compiler-directed locality optimization techniques are effective in reducing the number of cycles spent in off-chip memory accesses. Recently, methods have been developed that transform memory layouts of data structures at compile time to improve the spatial locality of nested loops beyond current control-centric (loop nest-based) optimizations. Most of these data-centric transformations use a single static (program-wide) memory layout for each array. A disadvantage of these static layout-based locality enhancement strategies is that they might fail to optimize codes that manipulate arrays that demand different layouts in different parts of the code. We introduce a new approach that extends current static layout optimization techniques by associating different memory layouts with the same array in different parts of the code. We call this strategy "quasidynamic layout optimization." In this strategy, the compiler determines memory layouts (for different parts of the code) at compile time, but layout conversions occur at runtime. We show that the possibility of dynamically changing memory layouts during the course of execution adds a new dimension to the data locality optimization problem. Our strategy employs a static layout optimizer module as a building block and, by repeatedly invoking it for different parts of the code, checks whether runtime layout modifications bring additional benefits beyond static optimization. Our experiments indicate significant improvements in execution time over static layout-based locality enhancing techniques.
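As a minimal sketch of the quasidynamic idea (the layouts, array size, and phases are assumed for illustration, not taken from the paper), the fragment below gives the same logical array a row-major layout in one phase and a column-major layout in the next, with the conversion performed at run time between the two phases; the conversion pays off only when the second phase is long enough to amortize the transpose cost.

```c
/* Hypothetical sketch of quasidynamic layout optimization: explicit
 * runtime layout conversion between two differently-traversed phases. */
#include <string.h>

#define N 512

static float A[N * N];    /* current storage for the logical array A */
static float tmp[N * N];

/* Phase 1: row-wise traversal, spatially local while A is row-major. */
static void phase1_rowwise(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i * N + j] += 1.0f;
}

/* Runtime layout conversion: row-major -> column-major. */
static void convert_to_column_major(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            tmp[j * N + i] = A[i * N + j];
    memcpy(A, tmp, sizeof A);
}

/* Phase 2: column-wise traversal, spatially local once A is column-major,
 * so element (i, j) is now addressed as A[j * N + i].                     */
static void phase2_columnwise(void)
{
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            A[j * N + i] *= 0.5f;
}

void run(void)
{
    phase1_rowwise();
    convert_to_column_major();
    phase2_columnwise();
}
```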
8.