By access:
  Subscription full text   60
  Free full text   0
By subject:
  Chemical industry   1
  Radio/electronics   6
  General industrial technology   1
  Metallurgical industry   19
  Automation technology   33
By year:
  2014   1
  2013   1
  2011   2
  2010   1
  2009   4
  2008   5
  2007   4
  2006   1
  2004   1
  2003   1
  2001   2
  2000   1
  1998   4
  1997   4
  1996   1
  1995   4
  1994   1
  1993   2
  1991   3
  1990   2
  1989   2
  1988   1
  1983   1
  1976   3
  1975   1
  1974   1
  1972   1
  1965   1
  1963   1
  1961   1
  1957   2
60 results found (search time: 46 ms)
1.
Many researchers approach the problem of programming distributed memory machines by assuming a global shared name space. Thus the user views the distributed memory of the machine as though it were shared. A major issue that arises at this point is how to manage the memory. When a processor accesses data stored in another processor's memory, the data must be moved between the two processors. Once these data are retrieved from another processor's memory, several interesting issues are raised. Where should these data be stored locally? What transformations must be performed on the code to guarantee that nonlocal accesses reference the correct memory location? What optimizations can be performed to reduce the time spent accessing nonlocal data? In this paper we examine various data migration mechanisms that allow an explicit and controlled mapping of data to memory. We describe, experimentally evaluate, and model a set of schemes for storing and retrieving off-processor array elements. The schemes are all based on using hash tables for efficient access to nonlocal data. The three techniques evaluated are the basic hashed cache, partial enumeration, and full enumeration, the details of which are described in the paper. In all three schemes, nonlocal data are stored in hash tables; the difference lies in the amount of memory used by the schemes and in their retrieval mechanisms for nonlocal data.
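The basic hashed-cache idea can be pictured with a short sketch: nonlocal array elements fetched from other processors are kept in a local hash table keyed by their global index, so repeated accesses hit the local copy. This is a minimal illustration, not the paper's implementation; the `fetch_remote` callback is a placeholder for whatever communication layer is actually used.

    # Minimal sketch of a hashed cache for off-processor array elements.
    class HashedCache:
        def __init__(self, local_lo, local_hi, local_data, fetch_remote):
            self.local_lo = local_lo          # first global index owned locally
            self.local_hi = local_hi          # one past the last locally owned index
            self.local_data = local_data      # locally owned array section
            self.fetch_remote = fetch_remote  # callback: global index -> value
            self.cache = {}                   # hash table for nonlocal elements

        def read(self, gidx):
            """Return the element with global index gidx, caching nonlocal ones."""
            if self.local_lo <= gidx < self.local_hi:
                return self.local_data[gidx - self.local_lo]
            if gidx not in self.cache:                      # first nonlocal access:
                self.cache[gidx] = self.fetch_remote(gidx)  # retrieve and store it
            return self.cache[gidx]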
2.
Similarity search in high-dimensional spaces is a pivotal operation for several database applications, including online content-based multimedia services. With the increasing popularity of multimedia applications, these services are facing new challenges regarding (1) the very large and growing volumes of data to be indexed/searched and (2) the necessity of reducing the response times observed by end users. In addition, the nature of the interactions between users and online services creates fluctuating query request rates throughout execution, which requires a similarity search engine to adapt in order to make better use of the computation platform and minimize response times. In this work, we address these challenges with Hypercurves, a flexible framework for answering approximate k-nearest neighbor (kNN) queries for very large multimedia databases. Hypercurves executes in hybrid CPU–GPU environments and is able to attain massive query-processing rates through the cooperative use of these devices. Hypercurves also changes its CPU–GPU task partitioning dynamically according to the observed load, aiming for optimal response times. In our empirical evaluation, dynamic task partitioning reduced query response times by approximately 50% compared to the best static task partition. Due to a probabilistic proof of equivalence to the sequential kNN algorithm, the CPU–GPU execution of Hypercurves in distributed (multi-node) environments can be aggressively optimized, attaining superlinear scalability while still guaranteeing, with high probability, results at least as good as those of the sequential algorithm.
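As a rough illustration of load-dependent CPU–GPU task partitioning, the sketch below routes a fraction of incoming queries to the GPU based on measured arrival and service rates. This is a hedged heuristic sketch only, not the Hypercurves scheduler; the rate measurements are assumed to come from online monitoring.

    # Heuristic: decide what fraction of incoming queries to route to the GPU,
    # given measured arrival and service rates in queries/second.
    def choose_gpu_fraction(arrival_rate, cpu_rate, gpu_rate):
        capacity = cpu_rate + gpu_rate
        if arrival_rate >= capacity:
            # Saturated: split work in proportion to measured throughput so
            # both devices stay fully utilized.
            return gpu_rate / capacity
        # Under light load the GPU's batching overhead dominates response time,
        # so shift work toward the CPU as overall utilization drops.
        utilization = arrival_rate / capacity
        return utilization * (gpu_rate / capacity)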
3.
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task and data parallelism, and combining these two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task and data parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the intertask data communication volumes. A locality-conscious scheduling strategy is used to improve intertask data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications and synthetic graphs shows that our algorithm consistently generates schedules with a lower makespan as compared to Critical Path Reduction (CPR) and Critical Path and Allocation (CPA), two previously proposed scheduling algorithms. Our algorithm also produces schedules that have a lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
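To make the processor-allocation idea concrete, here is a toy greedy sketch (not the paper's algorithm): each concurrently runnable task starts with one processor, and additional processors go to whichever task currently dominates the estimated finish time, provided widening that task actually helps. `runtime(task, p)` is an assumed per-task cost model built from runtime estimates and scalability characteristics.

    # Toy allocation of P processors among independent, moldable tasks so that
    # the slowest task finishes as early as possible (assumes P >= len(tasks)).
    def allocate(tasks, P, runtime):
        alloc = {t: 1 for t in tasks}                 # every task gets one processor
        for _ in range(P - len(tasks)):
            worst = max(tasks, key=lambda t: runtime(t, alloc[t]))
            # Widen the dominating task only if an extra processor speeds it up.
            if runtime(worst, alloc[worst] + 1) < runtime(worst, alloc[worst]):
                alloc[worst] += 1
        return alloc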
4.
Scheduling, in many application domains, involves optimization of multiple performance metrics. For example, application workflows with real-time constraints have strict throughput requirements and also desire a low latency or response time. In this paper, we present a novel algorithm for the scheduling of workflows that act on a stream of input data. Our algorithm focuses on two performance metrics, latency and throughput, and minimizes the latency of workflows while satisfying strict throughput requirements. We also describe steps to use the above approach to solve the problem of meeting latency requirements while maximizing throughput. We leverage pipelined, task, and data parallelism in a coordinated manner to meet these objectives and investigate the benefit of task duplication in alleviating communication overheads in the pipelined schedule for different workflow characteristics. The proposed algorithm is designed for a realistic bounded multi-port communication model, where each processor can simultaneously communicate with at most k distinct processors. Experimental evaluation using synthetic benchmarks as well as those derived from real applications shows that our algorithm consistently produces lower latency schedules that meet throughput requirements, even when previously proposed schemes fail.
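The throughput constraint can be pictured for the simple case of a linear chain of tasks: the required throughput fixes a period, consecutive tasks are packed into pipeline stages whose total work fits within that period, and latency is then roughly the number of stages times the period. The sketch below covers only this simplified case under assumed per-task costs; the algorithm in the paper additionally handles general workflows, task duplication, and the bounded multi-port communication model.

    # Pack a linear chain of tasks into pipeline stages under a throughput bound.
    def chain_to_stages(task_times, required_throughput):
        period = 1.0 / required_throughput        # maximum work allowed per stage
        stages, current, load = [], [], 0.0
        for task, cost in task_times:             # task_times: list of (name, cost)
            if cost > period:
                raise ValueError(f"task {task} alone exceeds the period")
            if load + cost > period:              # current stage is full: close it
                stages.append(current)
                current, load = [], 0.0
            current.append(task)
            load += cost
        if current:
            stages.append(current)
        return stages, len(stages) * period       # stages and a rough latency estimate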
5.
Developments in optical microscopy imaging have generated large high-resolution data sets that have spurred medical researchers to conduct investigations into mechanisms of disease, including cancer, at cellular and subcellular levels. The work reported here demonstrates that a suitable methodology can be conceived that isolates modality-dependent effects from the larger segmentation task, and that 3D reconstructions can be made cognizant of the shapes evident in the available 2D planar images. In the current realization, a method based on active geodesic contours is first deployed to counter the ambiguity that exists in separating overlapping cells on the image plane. Later, another segmentation effort based on a variant of Voronoi tessellations improves the delineation of the cell boundaries using a Bayesian formulation. In the next stage, the cells are interpolated across the third dimension, thereby mitigating the poor structural correlation that exists in that dimension. We deploy our methods on three separate data sets obtained from light, confocal, and phase-contrast microscopy and validate the results appropriately.
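For the interpolation step across the third dimension, one standard technique (shown here only as a plausible sketch, not necessarily the authors' exact method) is shape-based interpolation: each 2D mask is converted to a signed distance map, the maps are blended, and the result is re-thresholded.

    # Shape-based interpolation between two aligned binary masks of the same cell.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        """Positive inside the object, negative outside."""
        mask = np.asarray(mask, dtype=bool)
        return distance_transform_edt(mask) - distance_transform_edt(~mask)

    def interpolate_masks(mask_a, mask_b, alpha=0.5):
        """Interpolated mask at fraction alpha (0 = mask_a, 1 = mask_b)."""
        blended = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
        return blended > 0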
6.
7.
Efficient storage and retrieval of multi-attribute data sets has become one of the essential requirements for many data-intensive applications. The Cartesian product file has been known as an effective multi-attribute file structure for partial-match and best-match queries. Several heuristic methods have been developed to decluster Cartesian product files across multiple disks to obtain high performance for disk accesses. Although the scalability of the declustering methods becomes increasingly important for systems equipped with a large number of disks, no analytic studies have been done so far. The authors derive formulas describing the scalability of two popular declustering methods, Disk Modulo and Fieldwise XOR, for range queries, which are the most common type of query. These formulas disclose the limited scalability of the declustering methods, and this is corroborated by extensive simulation experiments. From the practical point of view, the formulas given in the paper provide a simple measure that can be used to predict the response time of a given range query and to guide the selection of a declustering method under various conditions.
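For reference, the two declustering functions themselves are simple; for a bucket with coordinates (c1, ..., ck) in a Cartesian product file spread over m disks, their standard definitions assign disks as sketched below.

    def disk_modulo(coords, m):
        """Disk Modulo (DM): sum of the bucket coordinates, modulo the disk count."""
        return sum(coords) % m

    def fieldwise_xor(coords, m):
        """Fieldwise XOR (FX): bitwise XOR of the bucket coordinates, modulo m."""
        x = 0
        for c in coords:
            x ^= c
        return x % m

    # Example: bucket (3, 5) of a two-attribute file on 4 disks falls on
    # disk 0 under DM (3 + 5 = 8, 8 mod 4 = 0) and on disk 2 under FX
    # (3 xor 5 = 6, 6 mod 4 = 2).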
8.
We are developing a computer-aided prognosis system for neuroblastoma (NB), a cancer of the nervous system and one of the most malignant tumors affecting children. Histopathological examination is an important stage for further treatment planning in the routine clinical diagnosis of NB. According to the International Neuroblastoma Pathology Classification (the Shimada system), NB patients are classified into favorable and unfavorable histology based on tissue morphology. In this study, we propose an image analysis system that operates on digitized H&E-stained whole-slide NB tissue samples and classifies each slide as either stroma-rich or stroma-poor based on the degree of Schwannian stromal development. Our statistical framework performs the classification based on texture features extracted using co-occurrence statistics and local binary patterns. Due to the high resolution of digitized whole-slide images, we propose a multi-resolution approach that mimics the evaluation of a pathologist: the image analysis starts at the lowest resolution and switches to higher resolutions only when necessary. We employ an offline feature selection step, which determines the most discriminative features at each resolution level during training. A modified k-nearest neighbor classifier is used to determine the confidence level of the classification and thus whether the decision can be made at a particular resolution level. The proposed approach was independently tested on 43 whole-slide samples and provided an overall classification accuracy of 88.4%.
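The multi-resolution decision strategy can be outlined as follows. This is an illustrative sketch only; `classify_with_confidence` stands in for the modified k-nearest neighbor classifier and its confidence estimate described above, and the threshold is an assumed parameter.

    # Classify at the coarsest resolution first; move to a finer level only
    # when the classifier's confidence falls below the threshold.
    def classify_slide(image_pyramid, classify_with_confidence, threshold=0.8):
        """image_pyramid is ordered from lowest to highest resolution;
        classify_with_confidence(image) returns (label, confidence in [0, 1])."""
        label, level = None, 0
        for level, image in enumerate(image_pyramid):
            label, confidence = classify_with_confidence(image)
            if confidence >= threshold:          # confident enough: stop early
                break
        return label, level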
9.
10.
We examine the effectiveness of optimizations aimed at allowing distributed memory machines to efficiently compute inner loops over globally defined data structures. Our optimizations are specifically targeted toward loops in which some array references are made through a level of indirection. Unstructured mesh codes and sparse matrix solvers are examples of programs with kernels of this sort. Experimental data that quantify the performance obtainable using the methods discussed here are included.
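The loops in question look like the sketch below: array references go through an indirection array, so the set of nonlocal elements is known only at run time. A typical inspector/executor arrangement (shown here as an assumed illustration, with the communication step left abstract) first inspects the indirection array to find the off-processor indices, gathers those values into a local buffer, and then runs the computational loop using only local data.

    # Inspector: collect the distinct global indices referenced through `col`
    # that are not owned by this processor.
    def inspector(col, local_lo, local_hi):
        return sorted({j for j in col if not (local_lo <= j < local_hi)})

    # (Communication step, omitted: gather the inspected indices from their
    # owners into a dictionary `nonlocal_x` mapping global index -> value.)

    # Executor: y[i] = data[i] * x[col[i]], reading nonlocal x entries from
    # the gathered buffer instead of remote memory.
    def executor(data, col, x_local, nonlocal_x, local_lo, local_hi):
        y = [0.0] * len(col)
        for i, j in enumerate(col):
            xj = x_local[j - local_lo] if local_lo <= j < local_hi else nonlocal_x[j]
            y[i] = data[i] * xj
        return y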