20 similar documents found (search time: 0 ms)
3.
Resource Monitoring in Grid Environments    Total citations: 2 (self: 0, others: 2)
To build and execute grid applications, one must know the types and availability of grid resources. This paper proposes an open, general-purpose resource monitoring architecture designed specifically for grids, and describes the implementation of the resource monitoring system within that architecture.
4.
A Grid Resource Management System Based on Resource Routers    Total citations: 1 (self: 0, others: 1)
This paper discusses the issues involved in implementing a grid resource management system and presents a design based on resource routers. It describes the system's components and their roles, along with three key techniques in grid resource management: management of the resource management information base, management of the resource description information base, and the design of resource service interfaces.
5.
A GPU Implementation of a Binocular Stereo Vision Algorithm    Total citations: 1 (self: 0, others: 1)
A binocular stereo vision algorithm based on the non-parametric local transform is implemented on programmable graphics hardware (GPU). The algorithm uses local non-parametric statistics rather than raw pixel intensities as the matching cost; compared with other area-based stereo matching algorithms, it handles object boundary regions more stably and is better suited to hardware implementation. Using recent GPU features, the entire computation is executed on the GPU, and thanks to the GPU's parallel pipeline the algorithm runs considerably faster than on a CPU.
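The non-parametric local transform used as a matching cost in work like this is commonly realized as the census transform. The following is a minimal CPU sketch in NumPy, standing in for the paper's GPU implementation; the function names, window size, and disparity convention are assumptions, not the paper's code:

```python
import numpy as np

def census_transform(img, w=3):
    """Census transform: encode each pixel as a bit string comparing it
    with its neighbors in a w x w window (1 where neighbor < center)."""
    r = w // 2
    h, wd = img.shape
    codes = np.zeros((h, wd), dtype=np.uint32)
    padded = np.pad(img, r, mode='edge')
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = padded[r + dy:r + dy + h, r + dx:r + dx + wd]
            codes = (codes << np.uint32(1)) | (neigh < img).astype(np.uint32)
    return codes

def hamming_cost(left_codes, right_codes, disparity):
    """Matching cost at one disparity: Hamming distance between census
    codes of corresponding pixels (right image shifted by `disparity`)."""
    shifted = np.roll(right_codes, disparity, axis=1)
    x = left_codes ^ shifted
    cost = np.zeros_like(x)
    while x.any():            # popcount, one bit plane at a time
        cost += x & np.uint32(1)
        x >>= np.uint32(1)
    return cost
```

Because the cost depends only on intensity orderings, not absolute values, it is robust near depth discontinuities, which is the boundary-stability property the abstract mentions.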
6.
Linear least squares problems (LLSPs) routinely arise in many scientific and engineering problems. One of the fastest ways to solve LLSPs involves performing calculations in parallel on graphics processing units (GPUs). However, GPU algorithms are typically designed for one GPU architecture and may be suboptimal or unusable on another GPU. To design optimal algorithms for any GPU with little need for modifying code, tuneable parameters can simplify the transition of GPU algorithms to different GPU architectures. In this paper, we investigate the benefits of using derivative-free optimization (DFO) and simulation optimization (SO) to systematically optimize tuneable parameters for GPU or hybrid CPU/GPU LLSP solvers. Computational experiments show that both DFO and SO can be effective tools for determining optimal tuning parameters that can speed up the popular LLSP solver MAGMA by about 1.8x, compared to MAGMA's default parameters, for large tall-and-skinny matrices. Using DFO solvers, we were able to identify optimal parameters after enumerating an order of magnitude fewer parameter combinations than with direct enumeration. Additionally, the proposed approach is faster than a state-of-the-art autotuner and provides better tuning parameters.
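The kind of derivative-free search the abstract describes can be sketched as a simple coordinate search over discrete tunables: sweep one parameter at a time against a measured runtime, keep the best value, and repeat until nothing improves. The parameter names (`nb`, `nt`) and the toy objective below are illustrative stand-ins, not MAGMA's actual tunables:

```python
def coordinate_search(runtime, start, grids):
    """Derivative-free coordinate search over discrete parameters.
    `runtime(params)` is the expensive objective (e.g. measured solver
    time); `grids` maps each parameter name to its candidate values.
    Returns the best parameters found and the number of evaluations."""
    params = dict(start)
    evals = 0
    improved = True
    while improved:
        improved = False
        for name, values in grids.items():
            best_v, best_t = params[name], runtime(params)
            evals += 1
            for v in values:
                if v == params[name]:
                    continue
                t = runtime(dict(params, **{name: v}))
                evals += 1
                if t < best_t:
                    best_v, best_t = v, t
            if best_v != params[name]:
                params[name] = best_v
                improved = True
    return params, evals
```

On separable objectives this visits far fewer points than full enumeration as the number of parameters grows, which is the order-of-magnitude saving the abstract reports for DFO.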
8.
This paper presents a trust brokering system that operates in a peer-to-peer manner. The network of trust brokers operates by providing peer reviews in the form of recommendations regarding potential resource targets. One of the distinguishing features of our work is that it models the accuracy and honesty concepts separately. By modeling these concepts separately, our model significantly improves performance. We apply the trust brokering system to a resource manager to illustrate its utility in a public-resource Grid environment. Simulations performed to evaluate the trust-aware resource matchmaking strategies indicate that high levels of robustness can be attained by considering trust while matchmaking and allocating resources. Dr Azzedin contributed towards the work in this paper while he was at University of Manitoba, Winnipeg, Canada.
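One minimal way to model accuracy and honesty separately is to let honesty gate whether a broker's recommendation is used at all, while accuracy weights how much it counts. This is an illustrative sketch of that separation, not the paper's actual formulation; the threshold and weighting scheme are assumptions:

```python
def aggregate_trust(recommendations, accuracy, honesty, honesty_min=0.5):
    """Combine peer recommendations about one resource target.
    recommendations: broker -> recommended trust value in [0, 1]
    accuracy:        broker -> how accurate its past reviews were
    honesty:         broker -> how honest it is believed to be
    Brokers below the honesty threshold are excluded entirely; the
    rest are averaged, weighted by their accuracy."""
    used = [(rec, accuracy[b]) for b, rec in recommendations.items()
            if honesty.get(b, 0.0) >= honesty_min]
    if not used:
        return None  # no credible recommendations available
    total_w = sum(w for _, w in used)
    return sum(rec * w for rec, w in used) / total_w
```

Keeping the two concepts separate means an honest-but-inaccurate broker is merely down-weighted, whereas a dishonest one is discarded, which is a different failure mode.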
9.
Starting from the UNIRES digital resource repository system that the author helped build, this paper analyzes the layered architecture of digital resource repositories, with particular emphasis on the logical organization of the storage layer and the overall system architecture. Finally, it proposes optimization strategies for massive digital resource storage from both the time and space perspectives.
13.
In orthogonal frequency-division multiplexing (OFDM) systems, resource scheduling is a key determinant of overall network performance, and existing static and dynamic scheduling algorithms fail to balance system throughput against user fairness. Building on an analysis of existing algorithms and on the proportional fairness criterion, this paper proposes a dynamic scheduling algorithm for the downlink of multi-user OFDM systems and evaluates it by numerical simulation. The results show that, under constraints on base-station transmit power and fair allocation of user throughput, the algorithm achieves good system throughput, improves fairness among users, and has low complexity.
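The proportional-fair rule underlying such schedulers can be sketched per slot as follows. This is the textbook PF allocation (each subchannel goes to the user maximizing instantaneous rate divided by average throughput), not necessarily the paper's exact algorithm; the averaging window `tc` is an assumption:

```python
def pf_schedule(rates, avg, tc=100.0):
    """One slot of proportional-fair subchannel allocation.
    rates[u][k]: achievable rate of user u on subchannel k this slot
    avg[u]:      exponentially averaged throughput of user u
    Returns (alloc, new_avg): alloc[k] is the user given subchannel k,
    and new_avg is the updated moving-average throughput."""
    n_users, n_sub = len(rates), len(rates[0])
    alloc = [0] * n_sub
    gained = [0.0] * n_users
    for k in range(n_sub):
        # PF metric: instantaneous rate relative to past throughput
        u = max(range(n_users), key=lambda u: rates[u][k] / avg[u])
        alloc[k] = u
        gained[u] += rates[u][k]
    new_avg = [(1 - 1 / tc) * avg[u] + (1 / tc) * gained[u]
               for u in range(n_users)]
    return alloc, new_avg
```

Dividing by `avg[u]` is what trades throughput against fairness: a user starved in past slots sees its metric grow until it wins subchannels, even on a weaker channel.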
15.
Sayed Chhattan Shah, Qurat-Ul-Ain Nizamani, Sajjad Hussain Chauhdary, Myong-Soon Park. Journal of Parallel and Distributed Computing, 2012
This paper addresses the problem of resource allocation to interdependent tasks in mobile ad hoc computational Grids. Dependencies between tasks imply that there can be heavy communication induced by data transfers between tasks executed on separate nodes. Communication in mobile ad hoc Grids is always expensive and unreliable, and therefore plays a critical role in application performance. Several factors contribute to communication cost: unreliable, short-term connectivity can increase it through frequent failure and reactivation of links, and ineffective resource allocation can increase it through multi-hop communication between dependent tasks. To reduce communication cost, an effective and robust resource allocation scheme is required. However, the design of such a scheme for mobile ad hoc computational Grids exhibits numerous difficulties due to the constrained communication environment, node mobility, and lack of pre-existing network infrastructure.
17.
A Pencil Filter Generation Algorithm and Its GPU Implementation    Total citations: 2 (self: 0, others: 2)
This paper proposes a pencil filter generation algorithm based on convolution operators. By analyzing the structural characteristics of real pencil textures, a simple mathematical model of pencil strokes is abstracted. From this model, the corresponding brush template (convolution operator) can be readily determined, yielding the pencil texture the user requires. Combined with the graphics processing unit (GPU), the algorithm exploits hardware acceleration to render video images in a pencil-drawing style in real time.
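The brush-template idea can be illustrated by convolving noise with a directional line kernel, so that random grain is smeared into parallel strokes. This is a minimal CPU sketch; the kernel shape, stroke angle, and parameters are assumptions, and a plain loop stands in for the paper's GPU path:

```python
import numpy as np

def pencil_texture(noise, angle_deg=45.0, length=9):
    """Smear a noise image along one direction by convolving it with a
    normalized line kernel (the 'brush template'), producing a
    pencil-stroke-like texture at the given angle."""
    k = np.zeros((length, length))
    c = length // 2
    a = np.deg2rad(angle_deg)
    # rasterize a line segment through the kernel center
    for t in np.linspace(-c, c, 4 * length):
        y = int(round(c + t * np.sin(a)))
        x = int(round(c + t * np.cos(a)))
        if 0 <= y < length and 0 <= x < length:
            k[y, x] = 1.0
    k /= k.sum()  # normalize so overall brightness is preserved
    # 'same'-size 2-D convolution via reflect padding
    h, w = noise.shape
    pad = np.pad(noise, c, mode='reflect')
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            out[y, x] = (pad[y:y + length, x:x + length] * k).sum()
    return out
```

Because every output pixel is independent, the double loop maps directly onto a per-pixel GPU kernel, which is what makes the real-time video rendering in the abstract feasible.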
18.
N. Ferrando, M.A. Gosálvez, J. Cerdá, R. Gadea, K. Sato. Computer Physics Communications, 2011, 182(3): 628-640
Presently, dynamic surface-based models are required to contain increasingly larger numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as Cellular Automata (CA). This method, however, has an intrinsically parallel updating nature, and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based CA simulations of complex, evolving surfaces onto massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of the parallel architectures. For the actual simulations, we consider surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with widespread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods benefit significantly from the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
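The intrinsically parallel updating the abstract refers to means every cell reads only the previous state, so all cells can update at once. A dense-grid sketch of one synchronous continuous-CA step (omitting the octree; the neighborhood-dependent rate below is an illustrative stand-in for the paper's orientation-dependent etch rates):

```python
import numpy as np

def ca_step(state, rate):
    """One synchronous continuous-CA update on a 2-D grid.
    state[i, j] in [0, 1] is a cell's removal progress; a cell with
    state >= 1 counts as removed. Each not-yet-removed cell advances
    at a rate proportional to its number of removed 4-neighbors.
    Every cell reads only the *previous* state, so the whole update
    is embarrassingly parallel -- the property that maps onto GPUs."""
    removed = (state >= 1.0).astype(float)
    exposed = (np.roll(removed, 1, 0) + np.roll(removed, -1, 0) +
               np.roll(removed, 1, 1) + np.roll(removed, -1, 1))
    progress = state + rate * exposed * (state < 1.0)
    return np.minimum(progress, 1.0)
```

Starting from a single removed seed cell, the etch front expands outward step by step; on sparse fronts like this, an octree lets the real simulation store and visit only cells near the surface instead of the full grid.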
20.
A Robot Visual Navigation Method Based on GOR and GPU Algorithms    Total citations: 1 (self: 0, others: 1)
A generic object recognition (GOR) method is proposed. Drawing on the bag-of-words (BOW) statistical model, feature vectors are described using the SIFT (scale-invariant feature transform) detector. To increase information redundancy, spatial statistics of object parts are used to describe the spatial relations (relative distances and angles) among all feature points in an image, augmenting the feature vectors of the original BOW model. A discriminative support vector machine (SVM) classifier performs the recognition. Meanwhile, GPU acceleration is applied to SIFT feature extraction and description to ensure real-time performance. The method is then applied to an indoor mobile robot, building on hand-drawn-map-assisted navigation. Experimental results show that robot navigation based on this method is robust and effective.
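The augmented feature described above can be sketched as a visual-word histogram concatenated with pairwise-angle statistics. This is illustrative only: the paper also uses relative distances, real SIFT descriptors are 128-dimensional, and the codebook, bin count, and normalization are assumptions:

```python
import numpy as np

def bow_with_spatial(descriptors, positions, codebook, n_bins=8):
    """BOW histogram augmented with spatial-relation statistics.
    descriptors: (n, d) local feature vectors (e.g. SIFT)
    positions:   (n, 2) keypoint image coordinates
    codebook:    (k, d) visual-word cluster centers
    Returns an L1-normalized feature: k word counts followed by an
    n_bins histogram of pairwise keypoint angles."""
    # assign each descriptor to its nearest visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    # angle of the segment joining every keypoint pair (upper triangle)
    dx = positions[:, None, 0] - positions[None, :, 0]
    dy = positions[:, None, 1] - positions[None, :, 1]
    ang = np.arctan2(dy, dx)[np.triu_indices(len(positions), 1)]
    ang_hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    feat = np.concatenate([hist, ang_hist.astype(float)])
    return feat / max(feat.sum(), 1.0)
```

The resulting fixed-length vector can be fed directly to an SVM; the appended spatial bins are what let the classifier distinguish images whose plain word histograms are identical but whose part layouts differ.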