51.
52.
Cross-Approximate Entropy (Cross-ApEn) is a useful measure for quantifying the statistical dissimilarity of two time series. Despite its advantage over its one-dimensional counterpart (Approximate Entropy), only a few studies have applied it to biomedical signals, mainly because of its high computational cost. In this paper, we propose a fast GPU-based implementation of Cross-ApEn that makes its use over large amounts of multidimensional data feasible. The scheme is fully scalable and therefore maximizes GPU utilization regardless of the number of neural signals being processed. The approach consists of processing many trials or epochs simultaneously, independently of their origin. In the case of MEG data, these trials can come from different input channels or subjects. The proposed implementation achieves an average speedup greater than 250× over a parallel CPU version running on a six-core processor. A dataset of 30 subjects with 148 MEG channels each (49 epochs of 1024 samples per channel) can be analyzed with our implementation in about 30 min. The same processing takes 5 days on six cores and 15 days on a single core. The speedup is much larger when compared with a basic sequential Matlab® implementation, which would need 58 days per subject. To our knowledge, this is the first reported GPU implementation of the Cross-ApEn measure. This study demonstrates that this hardware is, to date, the best option for processing biomedical signals with Cross-ApEn.
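For reference, the conventional definition of Cross-ApEn (the standard formulation following Pincus, with the usual symbols rather than anything taken from the paper above) is: given two standardized series u and v of length N, an embedding dimension m, and a tolerance r,

```latex
C_i^m(r)(v \Vert u) = \frac{\#\{\, j \le N-m+1 \,:\, \max_{0 \le k < m} |u_{i+k} - v_{j+k}| \le r \,\}}{N-m+1}, \\[4pt]
\Phi^m(r)(v \Vert u) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln C_i^m(r)(v \Vert u),
\qquad
\text{Cross-ApEn}(m, r, N) = \Phi^m(r)(v \Vert u) - \Phi^{m+1}(r)(v \Vert u).
```

The cost is dominated by the pairwise template comparisons behind each $C_i^m(r)$, which grow quadratically with N for every epoch; this embarrassingly parallel comparison step is the part that maps naturally onto a GPU.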
53.
This paper presents an effective scheme for clustering a huge data set on a PC cluster system in which each PC is equipped with a commodity programmable graphics processing unit (GPU). The proposed scheme achieves three-level hierarchical parallel processing of massive data clustering. A divide-and-conquer approach to parallel data clustering performs the coarse-grain parallel processing across multiple PCs with a message-passing mechanism. Moreover, by taking advantage of the GPU's parallel processing capability, the proposed scheme exploits two types of fine-grain data parallelism at different levels of the nearest-neighbor search, which is the most computationally intensive part of the data-clustering process. The performance of our scheme is discussed in comparison with an implementation running entirely on the CPU. Experimental results clearly show that the proposed hierarchical parallel processing remarkably accelerates the data-clustering task. In particular, GPU co-processing is quite effective at improving the computational efficiency of parallel data clustering on a PC cluster. Although data transfer between GPU and CPU is generally costly, GPU co-processing still significantly reduces the total execution time of data clustering.
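To illustrate the kind of fine-grain parallelism described above (a minimal sketch with our own kernel and variable names, not the authors' code), the nearest-neighbor step of a k-means-style clustering pass can be written as one CUDA thread per data point:

```cuda
// Minimal sketch: one thread per data point; each thread scans all centroids
// and records the index of the nearest one (squared Euclidean distance).
// points: n x dim, row-major;  centroids: k x dim, row-major.
__global__ void assign_nearest(const float* points, const float* centroids,
                               int* labels, int n, int k, int dim)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float best_dist = 3.4e38f;   // effectively +infinity for float
    int   best_idx  = 0;
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        for (int d = 0; d < dim; ++d) {
            float diff = points[i * dim + d] - centroids[c * dim + d];
            dist += diff * diff;
        }
        if (dist < best_dist) { best_dist = dist; best_idx = c; }
    }
    labels[i] = best_idx;
}

// Host-side launch, assuming points_d, centroids_d, labels_d are on the device:
//   int threads = 256;
//   assign_nearest<<<(n + threads - 1) / threads, threads>>>(
//       points_d, centroids_d, labels_d, n, k, dim);
```

In the scheme above, this per-point search would run on each PC's GPU over its local partition of the data, while the divide-and-conquer layer distributes partitions across PCs via message passing.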
54.
55.
Holographic Optical Tweezers (HOT) are a versatile way of manipulating microscopic particles in 3D. However, their ease of use has been hampered by the computational load of calculating the holograms, resulting in an unresponsive system. We present a program for generating these holograms on a consumer Graphics Processing Unit (GPU), coupled to an easy-to-use interface in LabVIEW (National Instruments). This enables a HOT system to be set up without writing any additional code, as well as providing a platform for the fast generation of other holograms. The GPU engine calculates holograms over 300 times faster than the same algorithm running on a quad-core CPU. The hologram algorithm can be altered on the fly without recompiling the program, allowing it to be used to control Spatial Light Modulators in any situation where the hologram can be calculated in a single pass. The interface has also been rewritten to take advantage of new features in LabVIEW 2010. It is designed to be easily modified and extended to integrate with hardware other than our own.
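For orientation, a minimal CUDA sketch of the widely used "gratings and lenses" approach to HOT hologram generation is shown below: one thread per SLM pixel sums the complex contributions of all traps and takes the argument of the sum. This is a generic illustration with assumed variable names and scaling constants, not the authors' exact kernel.

```cuda
#include <math.h>

// Sketch of a "gratings and lenses" hologram kernel. Each thread computes
// the phase of one SLM pixel. trap_x/trap_y/trap_z hold trap positions;
// kx, ky, kz are precomputed scaling constants that absorb wavelength,
// focal length, and pixel pitch.
__global__ void gratings_and_lenses(float* phase, int width, int height,
                                    const float* trap_x, const float* trap_y,
                                    const float* trap_z, int n_traps,
                                    float kx, float ky, float kz)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Pixel coordinates centred on the SLM
    float x = px - 0.5f * width;
    float y = py - 0.5f * height;

    float re = 0.0f, im = 0.0f;
    for (int t = 0; t < n_traps; ++t) {
        // Linear ramp (grating) steers the trap in x/y; the quadratic
        // term (lens) shifts it axially.
        float phi = kx * x * trap_x[t] + ky * y * trap_y[t]
                  + kz * trap_z[t] * (x * x + y * y);
        re += cosf(phi);
        im += sinf(phi);
    }
    phase[py * width + px] = atan2f(im, re);   // phase to display on the SLM
}
```

Because every pixel is independent, a single pass of this kind is exactly the situation the abstract describes as suitable for on-the-fly algorithm changes.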
56.
With the continuing development of space remote sensing and Earth observation technologies, optical, thermal infrared, microwave, and other techniques can acquire multiple kinds of remote sensing imagery of the same area (multi-temporal, multi-spectral, multi-sensor, multi-platform, multi-resolution, etc.), and the volume of remote sensing data acquired each day keeps growing. At the same time, many remote sensing applications require this data to be processed and analyzed rapidly to provide decision-support information. If the data cannot be processed in time, it loses its timeliness and even its value. High-performance computing and parallel processing technologies, such as large-scale multiprocessor systems, grid and cloud computing, and general-purpose graphics processing units (GPGPU), have accelerated remote sensing image processing and information extraction. This paper reviews the latest progress in applying high-performance computing, parallel processing, and cloud computing to the remote sensing field, presents several research and application examples, and discusses the current challenges facing high-performance remote sensing image processing.
57.
Heterogeneous performance prediction models are valuable tools for accurately predicting application runtime, allowing efficient design space exploration and application mapping. Existing performance models require intricate system architecture knowledge, making the modeling task difficult. In this research, we propose a regression-based performance prediction framework for general-purpose graphics processing unit (GPGPU) clusters that statistically abstracts the system architecture characteristics, enabling performance prediction without detailed system architecture knowledge. The regression-based framework targets deterministic synchronous iterative algorithms using our synchronous iterative GPGPU execution model and is broken into two components: a computation component that models the GPGPU device and host computations, and a communication component that models the network-level communications. The computation component regression models use algorithm characteristics such as the number of floating-point operations and total bytes as predictor variables and are trained using several small, instrumented executions of synchronous iterative algorithms that span a range of floating-point-operations-to-byte requirements. The regression models for network-level communications are developed using micro-benchmarks and employ data transfer size and processor count as predictor variables. Our performance prediction framework achieves prediction accuracy over 90% compared with the actual implementations for several tested GPGPU cluster configurations. The end goal of this research is to offer the scientific computing community an accurate and easy-to-use performance prediction framework that empowers users to optimally utilize heterogeneous resources. Copyright © 2013 John Wiley & Sons, Ltd.
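In schematic form (the linear structure and coefficient symbols below are illustrative assumptions, not taken from the paper), the two regression components described above amount to fitted models of roughly the form

```latex
T_{\text{comp}} \approx \beta_0 + \beta_1\,\mathrm{FLOPs} + \beta_2\,\mathrm{Bytes},
\qquad
T_{\text{comm}} \approx \alpha_0 + \alpha_1\,S_{\text{msg}} + \alpha_2\,P,
```

where $S_{\text{msg}}$ is the data transfer size, $P$ the processor count, and the coefficients are estimated from the small instrumented training runs and micro-benchmarks; a per-iteration runtime prediction then combines the computation and communication components.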
58.
黄玉龙  邹循进  刘奎  苏本跃 《计算机应用》2014,34(11):3112-3116
Existing Top-k query optimization algorithms cannot fully exploit the massive parallel throughput of graphics processing units (GPUs) to obtain query results promptly. To address this, a large-scale segmented query algorithm based on the Compute Unified Device Architecture (CUDA) model was proposed. By partitioning the query process and adopting a segmented parallel processing strategy, the algorithm maximizes the efficiency of computation and comparison during the query. Experimental results show that, compared with a 4-thread multi-core optimized algorithm, the proposed algorithm has a clear performance advantage; with 6 sorted lists and a traversal step of 120, it reaches its best performance, running 40 times faster than the multi-core algorithm.
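As a point of reference only (a generic GPU top-k sketch using Thrust, not the segmented algorithm of the paper), one straightforward way to obtain the top-k aggregated scores on a GPU is to score all objects in parallel and then sort by score:

```cuda
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/sequence.h>
#include <thrust/functional.h>

// Generic sketch: aggregate each object's score over m attribute lists,
// then sort object ids by score (descending) and keep the first k.
// scores_flat is an n_objects x m row-major matrix of per-attribute scores.
__global__ void aggregate_scores(const float* scores_flat, float* total,
                                 int n_objects, int m)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_objects) return;
    float s = 0.0f;
    for (int j = 0; j < m; ++j)      // monotone aggregation: plain sum
        s += scores_flat[i * m + j];
    total[i] = s;
}

void topk(const thrust::device_vector<float>& scores_flat,
          int n_objects, int m, int k,
          thrust::device_vector<int>& topk_ids)
{
    thrust::device_vector<float> total(n_objects);
    thrust::device_vector<int>   ids(n_objects);
    thrust::sequence(ids.begin(), ids.end());          // 0, 1, ..., n-1

    int threads = 256;
    int blocks  = (n_objects + threads - 1) / threads;
    aggregate_scores<<<blocks, threads>>>(
        thrust::raw_pointer_cast(scores_flat.data()),
        thrust::raw_pointer_cast(total.data()), n_objects, m);

    // Sort ids by score, largest first, and keep the first k.
    thrust::sort_by_key(total.begin(), total.end(), ids.begin(),
                        thrust::greater<float>());
    topk_ids.assign(ids.begin(), ids.begin() + k);
}
```

A full sort does more work than necessary for small k; the segmented strategy in the paper above is precisely aimed at avoiding that kind of waste while keeping the GPU saturated.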
59.
Super-resolution algorithms based on sparse representation reconstruct images with good quality, but they are computationally complex, and existing serial CPU implementations cannot meet the needs of real-time video processing. A GPU-accelerated real-time video super-resolution algorithm based on sparse representation was therefore proposed. The algorithm focuses on optimizing the data-parallel processing pipeline and raising GPU resource utilization: by introducing a video-frame queue, increasing the concurrency of GPU memory accesses, applying Principal Component Analysis (PCA) for dimensionality reduction, and optimizing the dictionary lookup, it runs two orders of magnitude faster than the existing serial CPU algorithm, reaching 33 frames per second in a video playback test at a display resolution of 669×546.
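To give a flavour of the dictionary-lookup step mentioned above (an illustrative sketch with an assumed data layout, not the paper's implementation), matching each PCA-projected patch feature against a low-resolution dictionary can be expressed as one thread per patch scanning all atoms:

```cuda
// Illustrative sketch: each thread handles one patch feature vector
// (already projected to `dim` PCA components) and finds the dictionary
// atom with the highest inner product (features and atoms assumed
// L2-normalized, so this is equivalent to minimum Euclidean distance).
__global__ void match_dictionary(const float* features,   // n_patches x dim
                                 const float* dictionary, // n_atoms  x dim
                                 int* best_atom, int n_patches,
                                 int n_atoms, int dim)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n_patches) return;

    float best = -3.4e38f;
    int   arg  = 0;
    for (int a = 0; a < n_atoms; ++a) {
        float dot = 0.0f;
        for (int d = 0; d < dim; ++d)
            dot += features[p * dim + d] * dictionary[a * dim + d];
        if (dot > best) { best = dot; arg = a; }
    }
    best_atom[p] = arg;   // index of the corresponding high-resolution atom
}
```

PCA reduces `dim`, which directly shrinks the inner loop; queuing frames and batching patches keeps enough threads in flight to hide memory latency, in line with the optimizations the abstract lists.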
60.
The exponentially growing number of devices on the Internet places an ever-increasing load on network routers executing network protocols. Parallel processing has recently become an unavoidable means of scaling up router performance. The research effort elaborated in this paper focuses on exploiting modern general-purpose GPU computing to speed up the execution of network protocols. An additional benefit is off-loading the CPU, which can then be fully dedicated to packet processing and forwarding. To this end, the Shortest Path First algorithm of the Open Shortest Path First protocol and the selection of best routes in the Border Gateway Protocol are parallelized for efficient execution on the Compute Unified Device Architecture (CUDA) platform. An evaluation study was conducted on three different graphics processing units with representative network workloads for varying numbers of routes and devices. The obtained speedup results confirm the viability and cost-effectiveness of such an approach. Copyright © 2014 John Wiley & Sons, Ltd.
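A common way to map shortest-path computation onto CUDA is edge-parallel Bellman-Ford relaxation, sketched below purely as a generic illustration of how an SPF-style workload fits the GPU, not as the authors' kernel (names and layout are assumptions):

```cuda
#include <limits.h>

// Edge i goes from src[i] to dst[i] with weight w[i]; dist[] holds the
// current tentative distances from the source (INT_MAX = unreachable).
__global__ void relax_edges(const int* src, const int* dst, const int* w,
                            int* dist, int n_edges, int* changed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_edges) return;

    int du = dist[src[i]];
    if (du == INT_MAX) return;                 // source side not reached yet
    int cand = du + w[i];
    if (cand < dist[dst[i]]) {
        atomicMin(&dist[dst[i]], cand);        // race-safe relaxation
        *changed = 1;                          // request another iteration
    }
}

// The host launches relax_edges repeatedly until no distance changes
// (at most |V|-1 rounds), then derives next hops from the final distances,
// leaving the CPU free for packet forwarding in the meantime.
```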