Full text (paid): 19 articles; free: 0.
By subject: Chemical Industry (1), Mechanical Engineering & Instrumentation (2), Radio Electronics (2), General Industrial Technology (3), Automation Technology (11).
By year: 2016 (2), 2015 (1), 2014 (3), 2012 (2), 2011 (1), 2009 (2), 2004 (3), 1996 (2), 1995 (1), 1993 (1), 1991 (1).
19 results found (search time: 31 ms).
4.
The GCD and Banerjee tests are the standard data dependence tests used to determine whether a loop may be parallelized/vectorized. In an earlier work (1991), the authors presented a new data dependence test, the I test, which improves on the accuracy of the GCD and Banerjee tests. The original presentation considered only the case of general dependence, i.e., dependence with a direction vector of the form (*,*,...,*). In the present work, the authors generalize the I test to check for data dependence subject to an arbitrary direction vector.
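The GCD test mentioned above reduces to a divisibility check: a linear dependence equation a1*x1 + ... + an*xn = c has an integer solution only if gcd(a1, ..., an) divides c. A minimal sketch (the function name and example subscripts are illustrative, not from the paper):

```python
from math import gcd
from functools import reduce

def gcd_test(coeffs, c):
    """GCD data dependence test for a1*x1 + ... + an*xn = c.
    Returns False when no integer solution exists (independence proven);
    True means a dependence *may* exist (the test is conservative)."""
    g = reduce(gcd, (abs(a) for a in coeffs))
    if g == 0:               # all coefficients zero: solvable only if c == 0
        return c == 0
    return c % g == 0

# A(2*i) written, A(2*i + 1) read  ->  2*i1 - 2*i2 = 1.
# gcd(2, 2) = 2 does not divide 1, so the accesses never overlap.
print(gcd_test([2, -2], 1))   # False: independence proven
print(gcd_test([1, -2], 3))   # True: dependence possible
```

Note that a True answer is only "maybe": the GCD test ignores loop bounds entirely, which is one reason tests such as the I test were developed.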
5.
An experimental evaluation of data dependence analysis techniques (times cited: 1; self-citations: 0; citations by others: 1)
Optimizing compilers rely upon program analysis techniques to detect data dependences between program statements. Data dependence information captures the essential ordering constraints of the statements in a program that must be preserved in order to produce valid optimized and parallel code. Data dependence testing is very important for automatic parallelization, vectorization, and any other code transformation. In this paper, we examine the impact of data dependence analysis in practice. A number of data dependence tests have been proposed in the literature. In each test, there are different trade-offs between accuracy and efficiency. We present an experimental evaluation of several data dependence tests, including the Banerjee test, the I-Test, and the Omega test. We compare these tests in terms of data dependence accuracy, compilation efficiency, effectiveness in parallelization, and program execution performance. We analyze the reasons why a data dependence test can be inexact, and we explain how the examined tests handle such cases. We run various experiments using the Perfect Club Benchmarks and the scientific library Lapack. We present the measured accuracy of each test and the reasons for any approximation. We compare these tests in terms of efficiency and analyze the trade-offs between accuracy and efficiency. We also determine the impact of each data dependence test on the total compilation time. Finally, we measure the number of loops parallelized by each test and compare the execution performance of each benchmark on a multiprocessor. Our results indicate that the Omega test is more accurate, but it is also very inefficient precisely in the cases where the other two tests are inexact. In general, the cost of the Omega test is high and accounts for a significant percentage of the total compilation time. Furthermore, the greater accuracy of the Omega test over the Banerjee test and the I-Test does not improve parallelization or program execution performance.
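The Banerjee test evaluated above can be sketched as an interval check: treating the loop indices as real-valued within their bounds, the dependence equation is solvable only if the constant term lies between the extreme values the left-hand side can take. A minimal sketch for the (*,...,*) direction vector, with illustrative names:

```python
def banerjee_test(coeffs, bounds, c):
    """Banerjee bounds test for a1*x1 + ... + an*xn = c, where each xi
    ranges over real values in bounds[i] = (Li, Ui).
    Returns True if a dependence *may* exist; False proves independence."""
    lo = hi = 0
    for a, (L, U) in zip(coeffs, bounds):
        pos, neg = max(a, 0), max(-a, 0)   # split a into positive/negative parts
        lo += pos * L - neg * U            # smallest value a*xi can contribute
        hi += pos * U - neg * L            # largest value a*xi can contribute
    return lo <= c <= hi

# A(i) written, A(i + 200) read with 1 <= i <= 100:
# i1 - i2 = 200, but i1 - i2 only ranges over [-99, 99].
print(banerjee_test([1, -1], [(1, 100), (1, 100)], 200))  # False
print(banerjee_test([1, -1], [(1, 100), (1, 100)], 50))   # True
```

Because the test relaxes the indices to real values, it can report a dependence that no integer solution realizes; this is one source of the inexactness the paper analyzes.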
6.
Particulate matter (PM) pollution is responsible for hundreds of thousands of deaths worldwide, the majority due to cardiovascular disease (CVD). While many potential pathophysiological mechanisms have been proposed, there is not yet a consensus as to which are most important in causing pollution-related morbidity/mortality. Nor is there consensus regarding which specific types of PM are most likely to affect public health in this regard. One toxicological mechanism linking exposure to airborne PM with CVD outcomes is oxidative stress, a contributor to the development of CVD risk factors including atherosclerosis. Recent work suggests that accelerated shortening of telomeres, and thus early senescence of cells, may be an important pathway by which oxidative stress accelerates biological aging and the resultant development of age-related morbidity. This pathway may explain a significant proportion of PM-related adverse health outcomes, since shortened telomeres accelerate the progression of many diseases. There is limited but consistent evidence that vehicular emissions produce oxidative stress in humans. Given that oxidative stress is associated with accelerated erosion of telomeres, and that shortened telomeres are linked with acceleration of biological aging and greater incidence of various age-related pathology, including CVD, it is hypothesized that the associations noted between certain pollution types/sources and oxidative stress may reflect a mechanism by which these pollutants cause CVD-related morbidity and mortality, namely accelerated aging via enhanced erosion of telomeres. This paper reviews the literature linking oxidative stress, accelerated erosion of telomeres, CVD, and specific sources and types of air pollutants. If certain PM species/sources prove responsible for adverse health outcomes via the proposed mechanism, the pathway to reducing PM-related mortality/morbidity would become clearer. Not only would pollution-reduction imperatives be more focused, but interventions that reduce oxidative stress would become all the more important.
7.
In an earlier work, a Threshold Scheduling Algorithm was proposed to schedule the functional (DAG) parallelism in a program on distributed memory systems. In this work, we address the issue of adapting the schedule for a set of distributed memory architectures with the same computation costs but higher communication costs. We introduce the new concept of dominant edges of a schedule to denote those edges which dictate the schedule time of their destination nodes when their communication costs change. Using this concept, we derive the conditions under which the schedule for the whole graph, or at least part of it, can be reused for a different architecture, keeping the cost of program repartitioning and rescheduling to a minimum. We demonstrate the practical significance of the method by incorporating it in the compiler backend targeting Sisal (Streams and Iterations in a Single Assignment Language) on a family of Intel i860 architectures, Gamma, Delta, and Paragon, which vary in their communication costs. It is shown that almost 30 to 65% of the schedule can be reused unchanged, thereby largely avoiding program repartitioning. The remainder of the schedule can be regenerated through a linear algorithm at run time.
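The dominant-edge idea can be illustrated with a toy schedule evaluator: for each node, the incoming edge whose data-arrival time dictates the node's start time is its dominant edge; only changes to that edge's communication cost can change the node's schedule time. The sketch below illustrates the concept only, not the paper's Threshold Scheduling Algorithm, and all names are assumptions:

```python
from collections import defaultdict

def schedule_times(nodes, edges, comp, proc, comm):
    """Given nodes in topological order, DAG edges (u, v), per-node
    computation costs, a processor assignment, and per-edge communication
    costs, compute each node's start time and its dominant incoming edge:
    the edge whose data arrival determines the node's start time."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    start, finish, dominant = {}, {}, {}
    for v in nodes:
        t, dom = 0, None
        for u in preds[v]:
            # data from u is free on the same processor, costly across
            arrival = finish[u] + (comm[(u, v)] if proc[u] != proc[v] else 0)
            if arrival > t:
                t, dom = arrival, (u, v)
        start[v], finish[v], dominant[v] = t, t + comp[v], dom
    return start, dominant

nodes = ["a", "b", "c"]
edges = [("a", "c"), ("b", "c")]
comp = {"a": 2, "b": 3, "c": 1}
proc = {"a": 0, "b": 1, "c": 0}          # b runs on a different processor
comm = {("a", "c"): 0, ("b", "c"): 4}
start, dominant = schedule_times(nodes, edges, comp, proc, comm)
print(start["c"], dominant["c"])          # 7 ('b', 'c')
```

In this toy example, raising the communication cost of the non-dominant edge ('a', 'c') by a small amount leaves c's start time unchanged, which is the intuition behind reusing a schedule on an architecture with different communication costs.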
8.
9.
The function block (FB) concept has been adopted by recent International Electrotechnical Commission (IEC) standards to define a methodology for the development of modular, re-usable, open, and vendor-independent distributed control applications. Control engineers are already familiar with the FB construct, and field devices and fieldbuses are expected to be compliant with this approach in the near future. New generation FB-oriented CASE tools are required to support the whole development process. This paper presents an approach to the design and development of an IEC-compliant CASE tool (ICT). The proposed approach is based on a four-layer architecture that successfully unifies the FB concept with the Unified Modelling Language. During the development of our prototype ICT, this architecture proved to be very significant for the identification of the key abstractions that the ICT must provide as building blocks of its various diagrams used during the modelling process of control systems.
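As a rough illustration of the kind of key abstraction such a CASE tool must model, a function block's external interface can be captured as a small data structure: named event inputs/outputs plus typed data inputs/outputs. This is a hypothetical sketch; the field and instance names are not taken from the IEC standards or the paper's ICT:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionBlock:
    """Minimal model of an FB interface as a CASE tool might hold it."""
    name: str
    event_inputs: list = field(default_factory=list)
    event_outputs: list = field(default_factory=list)
    data_inputs: dict = field(default_factory=dict)    # port name -> type
    data_outputs: dict = field(default_factory=dict)   # port name -> type

# A hypothetical controller block with the conventional INIT/REQ events.
fb = FunctionBlock(
    name="TempController",
    event_inputs=["INIT", "REQ"],
    event_outputs=["INITO", "CNF"],
    data_inputs={"SETPOINT": "REAL", "MEASURED": "REAL"},
    data_outputs={"OUTPUT": "REAL"},
)
print(fb.name, fb.event_inputs)
```

A tool built on such a model can generate both its interface diagrams and UML class views from the same underlying objects, which is the kind of unification the four-layer architecture aims at.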
10.
Advances in information technology and knowledge management change the way that e-negotiations, which constitute an important aspect of worldwide e-trading, can be structured and represented. In this paper, a novel approach that focuses on knowledge modeling, formalization, representation, and management in the domain of e-negotiation is described. The proposed approach exploits ontologies, service-oriented architectures, Semantic Web services, software agent platforms, and knowledge bases to construct a framework that supports dynamically adapted negotiation protocols, negotiation process visualization and management, modeling and preference elicitation of the negotiated object, and automatic deployment of negotiation interfaces. The negotiation process, protocol, and strategy are examined, and a hybrid approach that integrates rules and workflow diagrams to describe and represent them is introduced.
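A rule-described negotiation protocol of the kind discussed can be sketched, in miniature, as a rule table driving a state machine: the rules define which messages are legal in each protocol state, and a workflow is any legal message sequence. The states, messages, and function names below are illustrative assumptions, not the paper's framework:

```python
# Rules map (state, message) -> next state.
PROTOCOL_RULES = {
    ("open", "offer"): "offered",
    ("offered", "counteroffer"): "offered",
    ("offered", "accept"): "agreed",
    ("offered", "reject"): "failed",
}

def run_negotiation(messages, state="open"):
    """Drive the protocol state machine; reject illegal moves."""
    for msg in messages:
        key = (state, msg)
        if key not in PROTOCOL_RULES:
            raise ValueError(f"message '{msg}' not allowed in state '{state}'")
        state = PROTOCOL_RULES[key]
    return state

print(run_negotiation(["offer", "counteroffer", "accept"]))  # agreed
```

Because the protocol lives in a data table rather than in code, a framework can swap in a different rule set at run time, which is one way "dynamically adapted" protocols can be realized.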
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)