  Paid full text: 25 | Free full text: 12 | Subject — Automation technology: 37
  By year: 2018 (3) · 2017 (3) · 2016 (2) · 2015 (2) · 2014 (6) · 2013 (11) · 2012 (3) · 2011 (1) · 2010 (3) · 2009 (1) · 2008 (2)
Sorted results: 37 records found, search time 31 ms
1.
Fast image dehazing via accurate atmospheric-scattering-map computation   Cited: 5 (self-citations: 1, other citations: 4)
A fast single-image dehazing algorithm based on an accurate atmospheric scattering map is proposed. First, exploiting the properties of atmospheric scattered light and the edge-preserving smoothing of the bilateral filter, the method estimates the atmospheric veil and local image contrast; by comparing pixel values against the mean gray level, it derives a more accurate atmospheric scattering map, and then restores the hazy image according to the atmospheric scattering model. Tone adjustment and local denoising are applied to the restored image to obtain a visually realistic, clear, haze-free result. Comparison with several representative dehazing algorithms shows that the proposed method performs better on distant scenery and at abrupt depth discontinuities. Its time complexity is linear in image size, and because the algorithm is parallelizable it can be further accelerated on the GPU, making it suitable for real-time applications.
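The restoration step above follows the standard atmospheric scattering model I = J·t + A·(1 − t). A minimal sketch of that pipeline is shown below; note that the Gaussian-of-minimum veil estimate and the brightest-min-channel choice of A are simplifications standing in for the paper's bilateral-filter and contrast-based estimates, not the authors' exact method.

```python
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def dehaze(img, omega=0.95, t0=0.1):
    """Restore a hazy RGB image (floats in [0, 1]) via the scattering model
    I = J*t + A*(1 - t)  =>  J = (I - A) / max(t, t0) + A."""
    # Atmospheric light A: color at the brightest min-channel pixel
    # (a common simplification; the paper refines this estimate).
    min_ch = img.min(axis=2)
    A = img.reshape(-1, 3)[min_ch.argmax()]
    # Atmospheric veil: smoothed local minimum. A Gaussian stands in for
    # the paper's edge-preserving bilateral filter.
    veil = gaussian_filter(minimum_filter(min_ch, size=15), sigma=3)
    t = 1.0 - omega * veil / max(A.max(), 1e-6)
    t = np.clip(t, t0, 1.0)[..., None]     # floor t to avoid noise blow-up
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

Every pixel is processed independently apart from the local filters, which is what makes the GPU parallelization mentioned in the abstract straightforward.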
2.
Fast global-illumination rendering with reflection, refraction, and caustics   Cited: 1 (self-citations: 0, other citations: 1)
Based on a divide-and-conquer strategy, an approximate global-illumination method is proposed that interactively renders direct lighting, indirect lighting, reflection, refraction, and caustics. The method uses a coarse-grained volume structure to approximate low-frequency indirect illumination and fine-grained image-space sampling of the scene to compute reflection, refraction, and caustics. Combining the coarse volume sampling with the fine image-based sampling, it introduces a deferred gather-buffer construction that supports multiple recursive reflections and refractions, a voxel-based bidirectional light-gathering method, and a multi-resolution adaptive light-gathering method. Compared with photon mapping, the method is faster, rendering fully dynamic scenes at 10 to 30 frames per second; compared with methods that accelerate only a single effect, it computes indirect illumination quickly and accurately while also covering multiple specular effects, yielding convincing results with significantly enhanced realism.
3.
Research and implementation of a cone-beam CT inspection-imaging simulation system   Cited: 1 (self-citations: 0, other citations: 1)
A cone-beam CT inspection-imaging simulation system for CT teaching and research is designed and implemented. The system simulates control of the CT equipment, generates forward-projection data, reconstructs the scanned object, and displays the reconstruction in 3D. Key techniques include: modeling the CT equipment at true scale to visualize the cone-beam CT workflow; organizing the CT scene with a scene graph, which simplifies scene management; and accelerating forward projection, image reconstruction, and volume rendering on the GPU, which largely addresses the fast processing of the large data volumes involved in cone-beam CT. Experimental results show that the system runs fast enough to support interactive operation.
4.
In the numerical analysis of strongly correlated quantum lattice models, one of the leading algorithms developed to balance the size of the effective Hilbert space against the accuracy of the simulation is the density matrix renormalization group (DMRG), whose run time is dominated by the iterative diagonalization of the Hamiltonian. As the most time-consuming step of this diagonalization can be expressed as a sequence of dense matrix operations, DMRG is an appealing candidate for fully utilizing the computing power of novel kilo-processor architectures.
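The iterative diagonalization the abstract refers to is typically a Krylov method such as Lanczos. A toy dense-matrix sketch of Lanczos for the lowest eigenvalue is below; in actual DMRG the Hamiltonian is never stored as a matrix but applied as a chain of dense tensor contractions, which is exactly the part mapped to GPUs.

```python
import numpy as np

def lanczos_ground_state(H, k=30, seed=0):
    """Approximate the lowest eigenvalue of a symmetric matrix H with at
    most k Lanczos steps (no reorthogonalization; a pedagogical sketch)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    v_prev = np.zeros(n)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    alphas, betas, beta = [], [], 0.0
    for _ in range(min(k, n)):
        w = H @ v - beta * v_prev          # three-term recurrence
        alpha = v @ w
        alphas.append(alpha)
        w = w - alpha * v
        beta = np.linalg.norm(w)
        if beta < 1e-12:                   # Krylov space exhausted
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    m = len(alphas)                        # diagonalize the small tridiagonal T
    T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
    return np.linalg.eigvalsh(T)[0]
```

Each step costs one matrix-vector product plus O(n) vector work, so the dense products dominate, matching the abstract's observation.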
5.
Existing face imaging systems are not suitable to meet the face representation and recognition demands of emerging applications in areas such as interactive gaming, enhanced learning environments, and directed advertising. This is mainly due to poor capture and characterisation of facial data, which compromises spatial and temporal precision. Emerging applications require not only highly precise representations of facial data, but also characterisation of dynamic faces as naturally as possible and in a timely manner. This study proposes a new framework for capturing and recovering dynamic facial information in real time with significantly higher spatial and temporal accuracy, capturing and modelling subtle facial changes for enhanced realism in 3D face visualisation and higher precision in face recognition applications. We also present a novel, fast, and robust correspondence-mapping approach for 3D registration of moving faces.
6.
The algorithmic and implementation principles of gainfully exploiting GPU accelerators in conjunction with multicore processors on high-end systems with large numbers of compute nodes are explored and evaluated in an implementation of a scalable block tridiagonal solver. The accelerator of each compute node is exploited in combination with that node's multicore processors to perform block-level linear algebra operations within the overall distributed solver algorithm. Optimizations incorporated include: (1) an efficient memory mapping and synchronization interface to minimize data movement, (2) multi-process sharing of the accelerator within a node to balance load with the multicore processors, and (3) an automatic memory management system to efficiently utilize accelerator memory when sub-matrices spill over the limits of device memory. Results are reported from our novel implementation that uses the MAGMA and CUBLAS accelerator software systems simultaneously with ACML (2013) [2] for multithreaded execution on processors. Overall, using 940 NVIDIA Tesla X2090 accelerators and 15,040 cores, the best heterogeneous execution delivers a 10.9-fold reduction in run time relative to an already efficient parallel multicore-only baseline that is highly optimized with intra-node and inter-node concurrency and computation–communication overlap. Detailed quantitative results are presented to explain all critical runtime components contributing to hybrid performance.
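The sequential kernel such a solver distributes is block Gaussian elimination on the tridiagonal block structure (a block Thomas algorithm). A small NumPy sketch is given below to show the dense block operations that the paper off-loads to MAGMA/CUBLAS; the distributed partitioning and overlap logic of the paper is not reproduced here.

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tridiagonal system by block forward elimination and
    back substitution. A[i], B[i], C[i] are the sub-, main-, and
    super-diagonal blocks of block row i (A[0] and C[-1] are unused);
    d[i] is the right-hand-side block."""
    n = len(B)
    Bp, dp = [B[0]], [d[0]]
    for i in range(1, n):
        m = A[i] @ np.linalg.inv(Bp[i - 1])   # elimination multiplier
        Bp.append(B[i] - m @ C[i - 1])        # updated pivot block
        dp.append(d[i] - m @ dp[i - 1])
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.array(x)
```

Each step is a handful of dense matrix products, inversions, and solves on block-sized operands, which is why block size largely determines how well the accelerator is utilized.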
7.
In this paper, we revisit the design and implementation of Branch-and-Bound (B&B) algorithms for solving large combinatorial optimization problems on GPU-enhanced multi-core machines. B&B is a tree-based optimization method that uses four operators (selection, branching, bounding, and pruning) to build and explore a highly irregular tree representing the solution space. In our previous work, we proposed a GPU-accelerated approach in which only a single CPU core is used and only the bounding operator runs on the GPU device. Here, we extend that approach (LL-GB&B) to minimize CPU–GPU communication latency and thread divergence, achieved through a GPU-based fine-grained parallelization of the branching and pruning operators in addition to bounding. The second contribution investigates combining the GPU with multi-core processing. Two scenarios are explored, leading to two approaches: a concurrent one (RLL-GB&B) and a cooperative one (PLL-GB&B). In the first, the exploration process is performed concurrently by the GPU and the CPU cores. In the cooperative approach, the CPU cores prepare and off-load pools of tree nodes to the GPU using data streaming while the GPU performs the exploration. The approaches have been extensively evaluated on the flowshop scheduling problem. Compared to a single-CPU execution, LL-GB&B achieves speedups of up to ×160 for large problem instances. Moreover, when combining multi-core and GPU, we find that RLL-GB&B is not beneficial, while PLL-GB&B improves on LL-GB&B by up to 36%.
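The four operators the abstract names can be made concrete on a small CPU-only example. The sketch below applies them to 0/1 knapsack rather than flowshop (whose bound is considerably more involved), purely to illustrate the operator structure that the paper parallelizes on the GPU.

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Best-first B&B for 0/1 knapsack, labelling the four operators:
    selection, branching, bounding, pruning. Illustrative only."""
    # Sort by value density so the fractional (LP) bound is easy to compute.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    n = len(items)

    def bound(i, val, room):               # bounding: fractional relaxation
        for v, w in items[i:]:
            if w <= room:
                val, room = val + v, room - w
            else:
                return val + v * room / w
        return val

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        nb, i, val, room = heapq.heappop(heap)   # selection: best bound first
        if -nb <= best:                          # pruning: bound can't beat best
            continue
        if i == n:                               # leaf: complete solution
            best = max(best, val)
            continue
        v, w = items[i]
        for take in (True, False):               # branching: take / skip item i
            if take and w > room:
                continue
            nval = val + (v if take else 0)
            nroom = room - (w if take else 0)
            ub = bound(i + 1, nval, nroom)
            if ub > best:
                heapq.heappush(heap, (-ub, i + 1, nval, nroom))
    return best
```

In the paper's GPU variants, many such nodes are bounded, branched, and pruned in parallel instead of one at a time off a heap.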
8.
To track the position and pose of the human hand and all of its joints asymmetrically and without markers, a hybrid hardware–software tracking framework is proposed that combines an electromagnetic tracker with a markerless hand-joint pose-analysis algorithm, together with a CUDA-based asynchronous parallel particle swarm optimization (PSO) acceleration method. First, the tracker measures the position and pose of the wrist; using Kinect data as input, the hand region is segmented in three spaces (two color spaces and the depth space). PSO then casts the tracking of the hand's 23 joint degrees of freedom as an optimization problem, using an asymmetric strategy to improve tracking of certain fingers, and searches the given parameter space for the hand-model parameters that minimize the discrepancy between observations and estimates. The method requires no markers and tracks hand-joint poses continuously. Experimental results show it runs at 12 frames per second on the test hardware platform, with average error stable within 10 mm.
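The optimization core described above is a standard PSO loop. A minimal sketch follows; in the paper the objective would score a 23-DOF hand-model hypothesis against the Kinect observation, with particle evaluations running asynchronously on the GPU via CUDA, whereas here a generic objective and the usual inertia/attraction coefficients are assumed.

```python
import numpy as np

def pso(f, dim, n_particles=40, iters=100, bounds=(-5.0, 5.0), seed=0,
        w=0.72, c1=1.49, c2=1.49):
    """Minimize f over a box: each particle tracks its personal best and
    is pulled toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val            # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()   # update global best
    return g, pbest_val.min()
```

Because each particle's objective evaluation is independent, the inner `f(p)` loop maps directly onto one GPU thread block per particle, which is the source of the paper's speedup.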
9.
For Bayesian networks with complex inputs, exact inference is time-consuming. Targeting the clique-tree propagation algorithm for exact Bayesian-network inference, a parallelization method on a CPU–GPU heterogeneous computing platform is proposed. First, the belief-potential update between clique nodes is analyzed, and a node-level parallelization method is proposed to accelerate the update step. Second, a priority-queue method based on computational complexity is proposed to accelerate the global inference process through topology-level parallelization. Finally, the speedup is validated on different clique-tree structures: linear chains, two-branch binary trees, and complete binary trees. Experimental results show that node-level parallelization clearly accelerates linear structures, while topology-level parallelization clearly accelerates two-branch and full binary tree structures.
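The topology-level idea can be sketched as priority-queue scheduling over the clique tree: a node becomes ready once all its children's messages are available, and among ready nodes the costliest is dispatched first so heavy potential updates start early. The cost estimate and data layout below are assumptions for illustration, not the paper's formulas.

```python
import heapq

def schedule_by_complexity(children, cost):
    """Return an upward (leaves-to-root) processing order for a tree,
    picking among ready nodes the one with the largest estimated cost.
    children[u]: list of u's children; cost[u]: assumed complexity."""
    pending = {u: len(ch) for u, ch in children.items()}
    parent = {c: u for u, ch in children.items() for c in ch}
    ready = [(-cost[u], u) for u, ch in children.items() if not ch]  # leaves
    heapq.heapify(ready)
    order = []
    while ready:
        _, u = heapq.heappop(ready)        # dispatch costliest ready node
        order.append(u)
        p = parent.get(u)
        if p is not None:
            pending[p] -= 1
            if pending[p] == 0:            # all child messages received
                heapq.heappush(ready, (-cost[p], p))
    return order
```

On a linear chain every node has at most one ready successor, so only node-level parallelism helps there, consistent with the abstract's findings.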
10.
Furniture layout, an important part of virtual scene design, is used in virtual reality, 3D games, and interior design. Existing automatic layout methods suffer from constraint conflicts that easily lead to local optima, and their global optimization methods converge too slowly for real-time use. To address this, a hierarchical-optimization idea is proposed to resolve constraint conflicts, with particle swarm optimization (PSO) used to solve the layout optimization problem. First, a hierarchy tree structures the constraints among furniture items, avoiding constraint conflicts; then PSO is applied to solve the resulting optimization problem. Because PSO has a naturally parallel structure, it lends itself to GPU acceleration, improving efficiency. The method's effectiveness is validated on diverse examples, with a detailed analysis of running efficiency; the results show that the method improves both the quality and the efficiency of furniture layout.