Similar Documents
19 similar documents found (search time: 78 ms)
1.
To address the low accuracy of traditional traffic offloading methods, a low-voltage power communication data offloading method based on edge computing is proposed and applied to analyze local services, public-network services, and terminals/networks, providing support for mitigating network latency. The low-voltage power communication data offloading mechanism is built on an edge-computing-based offloading platform, from which a data offloading unit is constructed; local offloading, control-plane data offloading, uplink user-plane data processing, and downlink user-plane data processing are studied in detail. Quality of service is introduced...

2.
This paper surveys research on in-memory data management techniques for big data. It traces the evolution and changing landscape of data management technology in the big data environment; analyzes the opportunities and research challenges facing in-memory data management in this new setting; introduces related frontier research, including distributed programming models, hybrid storage architectures, and in-memory data management; and offers an outlook on both technical and managerial developments.

3.
A Brief Discussion of Traffic Control and Data Offloading Techniques   Total citations: 1 (self-citations: 0, citations by others: 1)
This article controls bandwidth through the monitoring, analysis, and optimization of traffic using ACL rate limiting, switch port rate limiting, and dedicated flow-control software. The three methods are analyzed and compared, covering the background for adopting each method, its implementation difficulty, implementation approach, advantages and disadvantages, and problems encountered in practice. Based on this comparison, the better-performing traffic control method is adopted to realize traffic analysis, monitoring, and control, and the results after traffic control are presented. Some aspects of egress traffic offloading are also introduced.
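As an illustration of the rate-limiting idea compared above, here is a minimal token-bucket sketch in Python. The token bucket is a generic mechanism; the article's ACL and port-based limiters are device features, and the class, parameters, and virtual clock here are assumptions of this sketch only.

```python
class TokenBucket:
    """Illustrative token-bucket rate limiter: tokens accumulate at
    `rate` per second up to `capacity`; a packet costing `size` tokens
    is admitted only if enough tokens are currently available."""

    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate, tokens per second
        self.capacity = capacity  # burst size
        self.tokens = capacity    # start full
        self.last = 0.0           # virtual clock, seconds

    def allow(self, size, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=100.0, capacity=200.0)  # 100 tokens/s, burst of 200
print(bucket.allow(150, now=0.0))  # True  (burst absorbs it)
print(bucket.allow(150, now=0.0))  # False (only 50 tokens left)
print(bucket.allow(150, now=2.0))  # True  (bucket refilled to capacity)
```

Dedicated flow-control software typically combines such per-flow buckets with classification rules, which is where the monitoring and analysis described above come in.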

4.
Data is an important driver of the development of astronomy. Distributed storage and high-performance computing (HPC) help cope with the complexity and the irregular storage and computation of massive astronomical data. The fusion of multiple kinds of information and multiple disciplines has become inevitable in astronomical research, and astronomical big data has entered the era of large-scale computing. High-performance computing provides new means for processing and analyzing astronomical big data, offering new solutions to problems that traditional approaches cannot solve. Based on the classification and characteristics of astronomical data, and supported by high-performance computing, this paper studies the fusion, efficient access, analysis and follow-up processing, and visualization of astronomical big data; summarizes the technical characteristics of the current stage; proposes research strategies and technical methods for processing astronomical big data; and discusses the open problems and development trends of astronomical big data processing.

5.
Big Data Stream Computing: Key Technologies and System Examples   Total citations: 5 (self-citations: 0, citations by others: 5)
Big data computation takes two main forms: batch computing and stream computing. Research and discussion on big data batch computing systems are relatively mature, whereas how to build big data stream computing systems with low latency, high throughput, and continuously reliable operation is an urgent open problem, with comparatively few research results and little practical experience. This paper summarizes the characteristics of streaming big data in typical application domains, including timeliness, volatility, burstiness, disorder, and unboundedness; presents the key technical features an ideal big data stream computing system should have in system architecture, data transmission, application interfaces, and high-availability techniques; surveys and compares typical examples of existing big data stream computing systems; and finally discusses the technical challenges such systems face in scalability, fault tolerance, state consistency, load balancing, and data throughput.

6.
For mixed workloads of compute-intensive and data-intensive jobs in a dynamic environment where jobs have deadlines, traditional grid job scheduling methods are extended and three heuristic grid scheduling algorithms are proposed: Emin-min, Ebest, and Esufferage. The three algorithms are validated on a grid model composed of multiple clusters connected by a high-speed network. Compared with the Min-min algorithm, all three algorithms perform better. Compared with the ASJS algorithm, Emin-min reduces waiting time and job makespan; Esufferage reduces job waiting time and makespan at the cost of completing fewer jobs; Ebest completes roughly the same number of jobs as ASJS but increases waiting time and makespan. Overall, Emin-min has a clear advantage.
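For orientation, here is a minimal sketch of the classic Min-min baseline that the three proposed algorithms are compared against. The Emin-min/Ebest/Esufferage extensions themselves are not reproduced; the `etc` table (expected time to compute) and the example values are assumptions of this sketch.

```python
def min_min(etc):
    """Classic Min-min scheduling. etc[j][m] is the expected time to
    compute job j on machine m. Repeatedly pick the (job, machine)
    pair with the globally earliest completion time.
    Returns (assignment, makespan)."""
    jobs = set(range(len(etc)))
    ready = [0.0] * len(etc[0])  # per-machine ready times
    assignment = {}
    while jobs:
        # Earliest completion time over all unscheduled jobs and machines.
        j, m, ect = min(
            ((j, m, ready[m] + etc[j][m])
             for j in jobs for m in range(len(ready))),
            key=lambda t: t[2],
        )
        assignment[j] = m
        ready[m] = ect
        jobs.remove(j)
    return assignment, max(ready)

# Three jobs, two machines (illustrative values).
etc = [[3.0, 5.0],
       [2.0, 4.0],
       [6.0, 1.0]]
assignment, makespan = min_min(etc)
print(assignment, makespan)  # {2: 1, 1: 0, 0: 0} 5.0
```

The paper's extensions add deadline awareness on top of this greedy loop; the loop above only shows the structure being extended.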

7.
Research on Scalable Architectures for Computation and Data Storage in Network Computing   Total citations: 1 (self-citations: 0, citations by others: 1)
Based on an analysis of the new characteristics of data storage in network computing and the shortcomings of traditional storage approaches, a new scalable data storage architecture that separates information storage from data computation is proposed. The feasibility of this separation is discussed, covering the differences between storage and computation and the possibility and necessity of separating them; an implementation model for the separated architecture is presented; and the direction of development for data storage in network computing environments is indicated.

8.
In recent years, with ever-growing data volumes, data-intensive computing tasks have become increasingly heavy. How to perform computation on large-scale data sets quickly and efficiently has become a major research direction in data-intensive computing. Researchers have recently used new hardware processors to accelerate data-intensive computing, designing different forms of acceleration algorithms according to the characteristics of each processor. This paper surveys research on data-intensive computing on new hardware processors. It first outlines the characteristics of these processors; then analyzes the performance of hardware such as FPGAs and GPUs and the effect of each processor type on data-intensive computing; and finally proposes directions for further research.

9.
Research Progress on Programming Models for Data-Intensive Computing   Total citations: 12 (self-citations: 0, citations by others: 12)
As an emerging computing paradigm, cloud computing has received wide attention from both academia and industry. Cloud computing centers on Internet services and applications, and service providers need to store and analyze massive data. To process Web-scale data at low cost and high efficiency, the major Internet companies have developed distributed programming systems on large-scale clusters built from commodity servers. A programming model can lower the difficulty of programming on large clusters and let programs make full use of cluster resources, but designing such a model poses great challenges. This paper first describes the characteristics of data-intensive computing and identifies the basic problems a programming model must solve; then introduces representative programming models from around the world and compares and analyzes their features; and finally summarizes current problems and future development trends.
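The map/shuffle/reduce contract at the heart of such programming models can be sketched in a few lines of single-process Python. This is an illustrative toy of the model's division of labor, not any specific system's API; the word-count functions are the standard textbook example.

```python
from collections import defaultdict

def map_fn(doc):
    """User-supplied map: emit (key, value) pairs from one input record."""
    for word in doc.split():
        yield word, 1

def reduce_fn(word, counts):
    """User-supplied reduce: combine all values grouped under one key."""
    return word, sum(counts)

def map_reduce(docs, map_fn, reduce_fn):
    """The 'framework' part: run maps, shuffle (group by key), run reduces."""
    groups = defaultdict(list)
    for doc in docs:                        # map phase
        for key, value in map_fn(doc):
            groups[key].append(value)       # shuffle: group values by key
    return dict(reduce_fn(k, vs) for k, vs in groups.items())  # reduce phase

result = map_reduce(["big data stream", "big data batch"], map_fn, reduce_fn)
print(result)  # {'big': 2, 'data': 2, 'stream': 1, 'batch': 1}
```

Real systems distribute the three phases across a cluster and add fault tolerance and data locality, which is exactly where the design challenges discussed above arise.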

10.
《软件》(Software), 2019(11): 19-23
Efficient storage, read/write access, processing, and analysis of massive spatio-temporal data is a research hotspot in geographic information science. This paper selects and integrates mainstream big data products and investigates spatio-temporal big data storage, processing, and analysis based on HDFS and Spark. Taking the Smart Wuxi spatio-temporal information cloud platform as the application target, a cluster platform for spatio-temporal big data storage and processing was built. Through concrete application experiments, the response times for spatio-temporal data storage, processing, and mining, together with visualization results, were obtained, confirming the effectiveness of the HDFS+Spark cluster computing platform for storing, processing, and mining spatio-temporal big data.

11.
Cloud computing provides the capability to connect resource-constrained clients with a centralized and shared pool of resources, such as computational power and storage on demand. Large matrix determinant computation is almost ubiquitous in computer science and requires large-scale data computation. Currently, techniques for securely outsourcing matrix determinant computations to untrusted servers are of utmost importance, and they have practical value as well as theoretical significance for the scientific community. In this study, we propose a secure outsourcing method for large matrix determinant computation. We apply transformations to the original matrix for privacy protection, including permutation and mix-row/mix-column operations, before sending the target matrix to the cloud. The results returned from the cloud are then decrypted and verified to obtain the correct determinant. In comparison with previously proposed algorithms, our new algorithm achieves a higher security level with greater cloud efficiency. The experimental results demonstrate the efficiency and effectiveness of our algorithm.
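A minimal sketch of the permutation step of such disguising transforms: a secret row permutation changes the determinant by at most a sign, so the client can undo it in O(n) time after the cloud does the expensive work. The mix-row/mix-column operations and the verification step of the authors' full scheme are omitted; matrix values and the permutation are illustrative.

```python
def det(a):
    """Exact determinant by cofactor expansion (fine for tiny matrices;
    the whole point of outsourcing is that the cloud handles large ones)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c] *
               det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(n))

def perm_sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

# Client side: disguise A with a secret row permutation before outsourcing.
A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
p = [2, 0, 1]                    # secret permutation
B = [A[i] for i in p]            # disguised matrix sent to the "cloud"

d_cloud = det(B)                 # cloud computes det of the disguised matrix
d_recovered = perm_sign(p) * d_cloud   # client undoes the permutation's sign
print(d_recovered == det(A))  # True
```

Row/column mixing adds further disguise on top of this, since a permutation alone leaks the multiset of matrix entries.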

12.
With the goal of maximizing the channel capacity of a cooperative relay system, this paper studies the influence of relay node position on channel capacity in a three-node wireless relay channel. By constructing specific functions, capacity expressions are given for different cases, and the optimal relay position that maximizes capacity is derived. Simulation results show that the proposed method accurately and effectively analyzes channel capacity and the optimal relay position under different channel fading parameters, reaches the same conclusions as directly solving the capacity formula, and greatly reduces computational complexity. In addition, the method provides guidance for cooperative partner selection under other criteria.
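A toy numerical illustration of the capacity-versus-relay-position trade-off. It assumes a simple two-hop decode-and-forward model with power-law path loss; that model, the reference SNR, and the path-loss exponent are assumptions of this sketch, not necessarily the paper's exact channel formulation.

```python
import math

def hop_capacity(distance, snr_ref=1000.0, alpha=3.0):
    """Shannon capacity of one hop: received SNR decays as
    distance**(-alpha); snr_ref is the SNR at unit distance."""
    return math.log2(1.0 + snr_ref * distance ** (-alpha))

def relay_capacity(d, total=1.0):
    """Two-hop decode-and-forward rate with the relay at distance d from
    the source on the source-destination line: the weaker hop limits
    the end-to-end rate."""
    return min(hop_capacity(d), hop_capacity(total - d))

# Grid search over relay positions.
positions = [i / 100 for i in range(1, 100)]
best = max(positions, key=relay_capacity)
print(best)  # 0.5: with symmetric path loss the optimum is the midpoint
```

Under asymmetric fading parameters the two hops would use different `snr_ref`/`alpha` values and the optimum shifts away from the midpoint, which is the effect the paper analyzes.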

13.
Skyline Computation over Distributed Data Streams   Total citations: 1 (self-citations: 0, citations by others: 1)
To reduce the communication overhead of continuous skyline computation over distributed data streams, a remote-filtering approach is proposed and its theoretical foundations are proved. The system architecture is described, and two filter models, v_Max and Distance, are proposed. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed method in reducing communication overhead under certain data distributions.
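The dominance test underlying any skyline computation, and the sense in which a remote site can filter locally, can be sketched as follows (minimization in every dimension is assumed; the v_Max and Distance filter models themselves are not reproduced here):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return (all(a <= b for a, b in zip(p, q)) and
            any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Points not dominated by any other point. A remote site can apply
    this locally and forward only its local skyline: tuples dominated
    locally can never enter the global skyline, so they need not be sent."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (3, 3), (2, 4), (4, 4), (5, 1)]
print(skyline(pts))  # [(1, 5), (3, 3), (2, 4), (5, 1)] -- (4, 4) is dominated
```

The paper's filter models go further by pushing summaries of the coordinator's state back to the sites, so that even some local-skyline points can be suppressed.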

14.
Optical flow computation has been extensively used for motion estimation of objects in image sequences. The results obtained by most optical flow techniques are computationally intensive due to the large amount of data involved. A new change-based data flow pipelined architecture has been developed implementing the Horn and Schunck smoothness constraint; pixels of the image sequence that significantly change fire the execution of the operations related to the image processing algorithm. This strategy reduces the data and, combined with the custom hardware implemented, achieves a significant optical flow computation speed-up with no loss of accuracy. This paper presents the bases of the change-driven data flow image processing strategy, as well as the implementation of custom hardware developed using an Altera Stratix PCI development board.

Julio C. Sosa   received the degree in electronic engineering in 1997 from the Instituto Tecnológico de Lázaro Cárdenas, México, and the M.Sc. degree in electrical engineering in 2000 from the Centro de Investigación y de Estudios Avanzados del I.P.N., México, and is a Ph.D. candidate at the University of Valencia, Spain. He is currently an associate professor in the Postgraduate Department of the Escuela Superior de Cómputo—I.P.N., México. His research interests include hardware architectures, artificial intelligence and microelectronics. Jose A. Boluda   was born in Xàtiva (Spain) in 1969. He graduated in physics (1992) and received his Ph.D. (2000) in physics, both at the University of Valencia. From 1993, he was with the electronics and computer science department of the University of Valencia, Spain, where he collaborated in several projects related to ASIC design and image processing. He has been a visiting researcher with the Department of Electrical Engineering at the University of Virginia, USA and the Department of Applied Informatics at the University of Macedonia, Greece. He is currently Titular Professor in the Department of Informatics at the University of Valencia. His research interests include reconfigurable systems, VHDL hardware design, programmable logic synthesis and sensor design. Fernando Pardo   received the M.S. degree in physics from the University of Valencia, Valencia, Spain in 1991, and the Ph.D. in computer engineering from the University of Valencia, Valencia, Spain in 1997. From 1991 to 1993, he was with the Electronics and Computer Science department of the University of Valencia, Spain, where he collaborated in several research projects. In 1994 he was with the Integrated Laboratory for Advanced Robotics at the University of Genoa, Italy, where he worked on space-variant image processing. In 1994 he joined IMEC (Interuniversitary Micro-Electronics Centre), Belgium, where he worked on projects related to CMOS space-variant image sensors.
In 1995 he joined the University of Valencia, Spain, where he is currently Associate Professor and the Head of the Computer Engineering Department. He is currently leading several projects regarding architectures for high-speed image processing and bio-inspired image sensors. Rocío Gómez-Fabela   was born in México City in 1979. She received the Computer Engineering degree in 2001 from the Escuela Superior de Cómputo, México. She is currently studying towards the Ph.D. in the Department of Informatics, University of Valencia, Spain. Her current research interests are soft computing, reconfigurable systems and VHDL hardware design.
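The change-driven firing rule described in the abstract can be sketched in software (a toy model only; the paper's contribution is a pipelined hardware architecture, and the frames and threshold below are illustrative):

```python
def changed_pixels(prev, curr, threshold=10):
    """Change-driven strategy in miniature: only pixels whose intensity
    changed by more than `threshold` between frames fire further
    processing; static regions are skipped entirely."""
    return [
        (r, c)
        for r, row in enumerate(curr)
        for c, v in enumerate(row)
        if abs(v - prev[r][c]) > threshold
    ]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 60, 10],
        [10, 10, 25]]
print(changed_pixels(prev, curr))  # [(0, 1), (1, 2)]
```

In typical sequences most pixels are static between frames, which is why gating the Horn-Schunck update operations on this test reduces the data volume so substantially.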

15.
To address the large scale and computational complexity of big data, an improved parallel K-means clustering method based on the MapReduce framework is proposed, using a two-stage progressive clustering strategy. In the first stage, the algorithm initializes the cluster centers with the Canopy algorithm, quickly obtaining coarse-precision centers. In the second stage, a parallel computation scheme based on the MapReduce framework refines the clustering by having each data point clustered or merged around its nearby Canopy centers, achieving fast and accurate cluster analysis of big data. The algorithm was validated on a MapReduce parallel framework; experimental results show that it effectively improves parallel computing efficiency, reduces computation time, and improves the clustering accuracy on big data.
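A minimal sketch of the first-stage Canopy initialization (the thresholds `t1 > t2`, the 1-D points, and the distance function are illustrative choices of this sketch, not the paper's parameters):

```python
def canopy(points, t1, t2, dist):
    """Canopy pre-clustering: pick an unprocessed point as a canopy
    center, put every point within the loose threshold t1 into its
    canopy, and permanently remove points within the tight threshold
    t2 (t2 < t1) from further consideration. The centers then seed
    the second-stage K-means."""
    remaining = list(points)
    canopies = []
    while remaining:
        center = remaining[0]
        members = [p for p in points if dist(p, center) < t1]
        canopies.append((center, members))
        remaining = [p for p in remaining if dist(p, center) >= t2]
    return canopies

def d1(p, q):
    return abs(p - q)

pts = [1.0, 1.2, 1.1, 8.0, 8.3, 15.0]
result = canopy(pts, t1=4.0, t2=2.0, dist=d1)
print([c for c, _ in result])  # [1.0, 8.0, 15.0]
```

Because the cheap distance `dist` only needs to be approximate, Canopy gives good coarse centers at low cost; the expensive refinement is then confined to points within overlapping canopies, which is what makes the MapReduce second stage efficient.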

16.
This paper presents two main contributions. The first is a compact representation of huge sets of functional data or trajectories of continuous-time stochastic processes, which allows keeping the data compressed even during processing in main memory. It is oriented to facilitating efficient computation of the sample autocovariance function without prior decompression of the data set, using only partial local decoding. The second contribution is a new memory-efficient algorithm to compute the sample autocovariance function. In our experiments, the combination of the compact representation and the new memory-efficient algorithm yielded the following benefits: the compressed data occupy 75% of the disk space needed by the original data, and the computation of the autocovariance function used up to 13 times less main memory and ran 65% faster than the classical method implemented, for example, in the R package.
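For reference, the plain (uncompressed) definition of the sample autocovariance that the paper's algorithm computes more memory-efficiently can be sketched as follows; the data values are illustrative, and the compressed-domain algorithm itself is not reproduced here.

```python
def sample_autocov(x, max_lag):
    """Sample autocovariance at lags 0..max_lag:
    gamma(h) = (1/n) * sum_t (x_t - mean) * (x_{t+h} - mean).
    Straightforward reference implementation; it materializes the whole
    centered series, which is exactly the memory cost the paper avoids."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]  # centered series
    return [
        sum(d[t] * d[t + h] for t in range(n - h)) / n
        for h in range(max_lag + 1)
    ]

x = [2.0, 4.0, 6.0, 4.0]
print(sample_autocov(x, 2))  # [2.0, 0.0, -1.0]
```

Lag 0 is the sample variance (times (n-1)/n relative to the unbiased estimator); the paper's contribution is computing the same quantities directly over the compressed representation with only partial local decoding.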

17.
Existing Global Data Computation (GDC) protocols for asynchronous systems are round-based algorithms designed for fully connected networks. In this paper, we discuss GDC in asynchronous chordal rings, a non-fully connected network. The virtual links approach to solve the consensus problem may be applied to GDC for non-fully connected networks, but it incurs high message overhead. To reduce the overhead, we propose a new non-round-based GDC protocol for asynchronous chordal rings with perfect failure detectors. The main advantage of the protocol is that there is no notion of rounds. Every process creates two messages initially, with one message traversing in a clockwise direction and visiting each and every process in the chordal ring. The second message traverses in a counterclockwise direction. When there is direct connection between two processes, a message is sent directly. Otherwise, the message is sent via virtual links. When the two messages return, the process decides according to the information maintained by the two messages. The perfect failure detector of a process need only detect the crash of neighboring processes, and the crash information is disseminated to all other processes. Analysis and comparison with two virtual links approaches show that our protocol reduces message complexity significantly.

18.
With the popularity of social networks, the demand for real-time processing of graph data is increasing. However, most existing graph systems adopt a batch processing mode, so the overhead of maintaining and processing a dynamic graph is significantly high. In this paper, we design iGraph, an incremental graph processing system for dynamic graphs under continuous updates. The contributions of iGraph include: 1) a hash-based graph partition strategy to enable fine-grained graph updates; 2) a vertex-based graph computing model to support incremental data processing; and 3) hotspot detection and rebalancing methods to address the workload imbalance problem during incremental processing. Through its general-purpose API, iGraph can be used to implement various graph processing algorithms such as PageRank. We have implemented iGraph on Apache Spark, and experimental results show that on real-life datasets, iGraph outperforms the original GraphX in graph update and graph computation.

19.
Objective: Multiscale methods solve the problem that the traditional HS (Horn-Schunck) algorithm cannot compute large-displacement optical flow, but they also increase the number of iteration steps. To speed up convergence, fast algorithms for large-displacement variational optical flow computation are studied and their performance is analyzed. Method: The Split Bregman method, the dual method, and the alternating direction method of multipliers (ADMM), all used to accelerate iterative variational image processing, are applied to large-displacement optical flow computation. Results: Comparative experiments on accuracy, iteration count, and running time were conducted. All three fast methods compute the optical flow field of an image sequence in less time while maintaining accuracy, requiring only 11%-42% of the time of the traditional method. Conclusion: Applying the three fast methods to large-displacement variational optical flow computation substantially improves computational efficiency across different image sequences.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号