Similar Documents
19 similar documents found (search time: 515 ms)
1.
To counter the blindness of the flooding search and random-walk mechanisms in unstructured P2P networks, a new routing algorithm is proposed that first clusters files using a hash function and M-tree techniques and then stores index pointers in a fully distributed way via routing tables. Each node's routing table mainly records pointers to high-capability nodes holding each class of resource, and the table entries are continuously updated using probabilistic statistics. On receiving a search request, a node consults its routing table and reaches, in a single hop, the node most likely to respond, hitting multiple high-quality resource replicas at low network latency and thus enabling high-speed parallel downloading. Simulation experiments and mathematical analysis show that the algorithm effectively reduces the network traffic caused by blind search, improves the search success rate, and has the property that the scarcer a resource is, the easier it is to find.
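A minimal Python sketch of the routing-table idea, assuming hypothetical per-class entries whose response probabilities are estimated from observed query outcomes; the hashing helper stands in for the paper's hash + M-tree clustering, and none of the names come from the source.

```python
from collections import defaultdict

class RoutingTable:
    """Per-node table: for each resource class, candidate nodes with an
    empirically estimated probability of answering a query."""

    def __init__(self):
        self.entries = defaultdict(dict)   # class_id -> {node_id: (hits, queries)}

    def record(self, class_id, node_id, responded):
        hits, queries = self.entries[class_id].get(node_id, (0, 0))
        self.entries[class_id][node_id] = (hits + int(responded), queries + 1)

    def best_node(self, class_id):
        """One-hop forwarding target: the node with the highest estimated
        response probability for this resource class."""
        cand = self.entries.get(class_id)
        if not cand:
            return None
        return max(cand, key=lambda n: cand[n][0] / cand[n][1])

def resource_class(file_name, num_classes=64):
    # Stand-in for the hash + M-tree clustering of files into classes.
    return hash(file_name) % num_classes
```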

2.
A parallel narrow-sense genetic algorithm based on shared memory is discussed. By combining the narrow-sense genetic algorithm with region search, the algorithm achieves data-level parallel operation with a high degree of parallelism; it requires only modest communication overhead and therefore runs very efficiently.
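A toy sketch of the data-level parallelism described here: fitness evaluation, the data-parallel step, is farmed out to worker processes, while selection and a small local "region search" mutation stay serial. The objective function, operators, and parameters are placeholder assumptions, not the paper's algorithm.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def fitness(x):
    return -(x - 3.14) ** 2          # placeholder objective (assumption)

def parallel_ga(pop_size=64, generations=50, workers=4):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))    # data-parallel evaluation
            ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: pop_size // 2]
            # local "region search" stand-in: small mutations around parents
            pop = parents + [p + random.gauss(0, 0.1) for p in parents]
    return max(pop, key=fitness)
```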

3.
The research goals and characteristics of the photonic grid, a new kind of network that emerged after grids built on the traditional Internet, are analyzed, and its data-transfer mechanisms are studied. On this basis, a multi-channel parallel transfer mechanism for large files in photonic grids, based on the Grid File Transfer Protocol (GridFTP), is presented. Theoretical analysis and data-transfer experiments show that the mechanism effectively reduces the call blocking rate and latency and performs better in grid applications that must move large files, further improving the photonic grid's overall transfer performance.
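A self-contained sketch of the multi-channel idea, splitting one large file into byte ranges fetched over parallel channels and reassembled; fetch_range simulates a GridFTP data channel with a bytes slice, and all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_range(remote, start, end):
    # Stand-in for one GridFTP data channel fetching bytes [start, end).
    return remote[start:end]

def parallel_download(remote, channels=4):
    """Split one large file across parallel channels and reassemble.
    `remote` is a bytes object standing in for the remote file."""
    size = len(remote)
    chunk = size // channels
    ranges = [(i * chunk, size if i == channels - 1 else (i + 1) * chunk)
              for i in range(channels)]
    with ThreadPoolExecutor(max_workers=channels) as pool:
        parts = list(pool.map(lambda r: fetch_range(remote, *r), ranges))
    return b"".join(parts)

assert parallel_download(b"x" * 1000) == b"x" * 1000
```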

4.
To improve the efficiency of the dual-grid corrected wavelet clustering algorithm, a hash-function-based version is proposed. The algorithm uses a hash table to eliminate empty cells in the quantized data space, reducing the algorithm's complexity over the data space. The feature space is first quantized; a hash function is then constructed to build a hash table, and the quantized feature values are stored in it. Wavelet transforms are applied in parallel on the hash table to both the original grid and the corrected grid, and connected cells are sought at different levels of the feature space. Finally, the clustering result produced by the corrected grid is used to correct that of the original grid, yielding the final clusters. The method is applied to rotor fault diagnosis of aero-engines; experiments show that it improves efficiency while preserving diagnostic accuracy.
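A small sketch of the hash-table step, assuming points in a continuous feature space: only non-empty quantized cells are stored, so later passes (the wavelet transforms over both grids) never touch empty cells. Names are illustrative.

```python
from collections import defaultdict

def quantize_to_hash(points, cell_size):
    """Quantize the feature space and keep only non-empty cells in a dict
    (hash table); empty cells are simply absent, cutting the work of the
    later per-cell passes."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)
    return cells

cells = quantize_to_hash([(0.1, 0.2), (0.15, 0.22), (5.0, 7.5)], cell_size=1.0)
# -> {(0, 0): [two points], (5, 7): [one point]}; all other cells absent
```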

5.
To improve the execution efficiency of convolution algorithms on vector digital signal processors (DSPs), an efficient parallel convolution algorithm, radix-2 parallel short convolution (PSC R2), is proposed. The algorithm adopts a radix-2 short-convolution structure, departing from the direct structure of conventional parallelized convolution and thereby effectively reducing the loop count. Based on this structure, dedicated vector-DSP instructions are also proposed to match the computation pattern of the convolution and guarantee execution efficiency. Practical evaluation shows that the algorithm's time complexity is only 43% of that of the conventional vectorized-inner-loop (VIL) algorithm and 55% of that of the vectorized-outer-loop (VOL) algorithm, while its memory overhead is roughly on par with the conventional algorithms. The algorithm can substantially reduce the time complexity of convolution, correlation, and filtering operations in mobile communications and digital signal processing.
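The paper's PSC R2 kernel and its vector-DSP instructions are not reproduced here; the sketch below shows only the underlying radix-2 short-convolution split, a Karatsuba-style even/odd decimation that replaces four half-length convolutions with three, which is what reduces the loop count. It assumes equal-length inputs whose length is a power of two.

```python
import numpy as np

def psc_r2(x, h):
    """Radix-2 short-convolution split (illustrative, not the exact PSC R2
    vector kernel): with x = Xe(z^2) + z*Xo(z^2) and h likewise, the product
    needs only three half-length convolutions instead of four."""
    n = len(x)
    if n <= 2:
        return np.convolve(x, h)
    xe, xo = x[0::2], x[1::2]
    he, ho = h[0::2], h[1::2]
    a = psc_r2(xe, he)                 # Xe*He
    b = psc_r2(xo, ho)                 # Xo*Ho
    c = psc_r2(xe + xo, he + ho)       # (Xe+Xo)*(He+Ho)
    out = np.zeros(2 * n - 1)
    out[0:2 * len(a):2] += a           # even-power terms
    out[2:2 + 2 * len(b):2] += b       # even-power terms shifted by z^2
    out[1:2 * len(c):2] += c - a - b   # odd-power (cross) terms
    return out

x, h = np.array([1.0, 2, 3, 4]), np.array([1.0, 1, 1, 1])
assert np.allclose(psc_r2(x, h), np.convolve(x, h))   # [1 3 6 10 9 7 4]
```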

6.
A parallel simplification algorithm for large-scale terrain LOD models   (Cited 1 time: 0 self-citations, 1 by others)
Fast rendering of large-scale terrain generally uses a level-of-detail (LOD) model, which requires a mesh simplification algorithm to simplify the model in a preprocessing stage. The tension between simplification quality and simplification efficiency is a problem every simplification algorithm must face. Using the general-purpose parallel programming environment MPI, a parallel simplification algorithm based on quadtree mesh partitioning is proposed, improving efficiency through parallelization; model stitching and load balancing are also discussed. Finally, the algorithm's effectiveness is verified with concrete examples in a cluster environment, achieving good parallel results.
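A minimal sketch of the partition-then-parallelize structure, using Python's multiprocessing as a stand-in for MPI: the terrain bounding box is split by a quadtree and tiles are simplified independently. Per-tile simplification and the stitching step are placeholders, not the paper's method.

```python
from concurrent.futures import ProcessPoolExecutor

def quadtree_tiles(xmin, ymin, xmax, ymax, depth):
    """Recursively split the terrain bounding box into 4**depth tiles."""
    if depth == 0:
        return [(xmin, ymin, xmax, ymax)]
    mx, my = (xmin + xmax) / 2, (ymin + ymax) / 2
    tiles = []
    for box in [(xmin, ymin, mx, my), (mx, ymin, xmax, my),
                (xmin, my, mx, ymax), (mx, my, xmax, ymax)]:
        tiles += quadtree_tiles(*box, depth - 1)
    return tiles

def simplify_tile(tile):
    # Placeholder for per-tile mesh simplification (e.g. edge collapse).
    return tile

def parallel_simplify(bbox, depth=3, workers=4):
    tiles = quadtree_tiles(*bbox, depth)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simplify_tile, tiles))  # tiles run in parallel
```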

7.
To raise the success rate of grid jobs, methods for improving the reliability of job scheduling are studied. Existing fault-tolerant grid scheduling algorithms mostly use job replication to reduce the probability of job failure caused by node hardware or software faults; they consider neither the simultaneous failure of a job's replicas due to faults in their shared network environment, nor their simultaneous failure because the hosting nodes all lack the same resource. To address this, the concept of node similarity and a method for computing it are proposed and applied to fault-tolerant grid scheduling. The proposed fault-tolerant scheduling algorithm assigns a job's replicas to nodes with different similarity, fully exploiting the distribution and heterogeneity of the grid environment to further reduce the probability of job failure.
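One plausible reading of node similarity, sketched as a Jaccard measure over node attribute sets (subnet, OS, available resources) with a greedy placement that keeps replicas mutually dissimilar; the attribute layout and the measure are assumptions, not the paper's definition.

```python
def similarity(a, b):
    """Jaccard similarity over node attribute sets (illustrative)."""
    sa, sb = set(a["attrs"]), set(b["attrs"])
    return len(sa & sb) / len(sa | sb)

def place_replicas(nodes, k=3):
    """Greedy placement: each new replica goes to the node least similar to
    any node already chosen, so replicas rarely share a failure cause."""
    chosen = [nodes[0]]
    while len(chosen) < k:
        rest = [n for n in nodes if n not in chosen]
        chosen.append(min(rest, key=lambda n: max(similarity(n, c) for c in chosen)))
    return chosen

nodes = [{"id": 1, "attrs": {"subnetA", "linux", "gpu"}},
         {"id": 2, "attrs": {"subnetA", "linux"}},
         {"id": 3, "attrs": {"subnetB", "windows"}},
         {"id": 4, "attrs": {"subnetC", "linux", "gpu"}}]
print([n["id"] for n in place_replicas(nodes)])   # [1, 3, 4]
```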

8.
To address the difficulty of implementing sparse storage for the stiffness matrix in the element-free Galerkin method and the low efficiency of global searches for nodes and integration points, this paper proposes an element-free Galerkin method that performs integration on a triangular mesh, based on the idea of cross node pairs and assembling the global stiffness matrix by looping over them; the stiffness matrix is stored in CSR format, and nodes and integration points are found with a local search method. Numerical examples compare the stiffness-matrix storage cost at different node counts and the search efficiency for nodes and integration points. The results show that, while maintaining computational accuracy, the proposed algorithm effectively saves storage, improves the node and integration-point search efficiency, and adapts well to geometrically complex models.
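A compact sketch of assembling a stiffness matrix directly into sparse storage, using SciPy's COO-to-CSR conversion (duplicate entries are summed on conversion); the element iteration and local matrices are placeholders, and the paper's cross-node-pair loop and local search are not reproduced.

```python
from scipy.sparse import coo_matrix

def assemble_csr(n_nodes, elements):
    """`elements` yields (node_ids, ke) pairs, ke being the local stiffness
    block for those nodes. Accumulate COO triplets, then convert: duplicate
    (row, col) entries are summed, giving the assembled CSR matrix."""
    rows, cols, vals = [], [], []
    for node_ids, ke in elements:
        for i, gi in enumerate(node_ids):
            for j, gj in enumerate(node_ids):
                rows.append(gi); cols.append(gj); vals.append(ke[i][j])
    return coo_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes)).tocsr()

K = assemble_csr(3, [([0, 1], [[2, -1], [-1, 2]]),
                     ([1, 2], [[2, -1], [-1, 2]])])
print(K.toarray())   # overlapping contributions at node 1 summed to 4
```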

9.
In low Earth orbit (LEO) satellite constellation networks built from small satellites, on-board computing power and storage resources are limited, and while traditional constellation routing algorithms adapt well to network dynamics, they demand considerable on-board computation and storage. Based on a thorough analysis of real LEO constellation networks, a simple and efficient routing algorithm based on offline computation is proposed. While guaranteeing routing validity, the algorithm provides a traffic-adaptive mechanism through backup paths. Complexity analysis and simulation results show that the algorithm needs only small on-board storage and processing overhead and achieves good end-to-end delay performance. Its simplicity and efficiency make it practical as a routing protocol for real LEO constellation networks.
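A sketch of what "offline computation plus backup paths" can look like on board, assuming a hypothetical per-time-slot next-hop table: the satellite stores only a primary and a backup next hop per destination and switches to the backup when the primary link is congested. The table layout and names are assumptions.

```python
# Offline-computed routing: ground computes, per time slot, a compact
# next-hop table so each satellite keeps minimal state on board.
routing_tables = {
    # time_slot -> {(src, dst): (primary_next_hop, backup_next_hop)}
    0: {("SAT-1", "SAT-9"): ("SAT-4", "SAT-2")},
}

def next_hop(slot, src, dst, congested_links):
    primary, backup = routing_tables[slot][(src, dst)]
    # Traffic adaptation: fall back to the backup path when the primary
    # outgoing link is congested.
    return backup if primary in congested_links else primary

print(next_hop(0, "SAT-1", "SAT-9", congested_links={"SAT-4"}))   # SAT-2
```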

10.
Parallel computation and load balancing for multi-block structured-grid CFD   (Cited 2 times: 0 self-citations, 2 by others)
Load balancing in parallel computation is studied by solving the Reynolds-averaged Navier-Stokes equations on contiguously abutting multi-block structured grids. A load-balancing algorithm is designed using scheduling theory from combinatorial optimization, realizing automatic partitioning of grid data and automatic assignment of computational tasks to processors. In an MPI parallel environment on a workstation cluster, the load-balancing algorithm and parallel performance are examined through examples: on 16 processors, the load standard deviation and relative standard deviation are 0.0084 and 0.1347%, respectively; the parallel results agree well with experimental data, and parallel efficiency is high. The algorithm scales well and is applicable to load balancing in multi-block structured-grid parallel computation on MIMD machines.
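A sketch of the scheduling-theory flavour of this load balancing, using the classic longest-processing-time-first heuristic: blocks are sorted by cell count and each goes to the currently least-loaded processor. This is a standard method in the paper's spirit, not its exact algorithm.

```python
import heapq

def lpt_assign(block_sizes, n_procs):
    """Longest-processing-time-first list scheduling: visit grid blocks in
    decreasing cell count; always give the next block to the least-loaded
    processor (kept at the top of a min-heap)."""
    heap = [(0, p, []) for p in range(n_procs)]       # (load, proc_id, blocks)
    heapq.heapify(heap)
    for block, size in sorted(enumerate(block_sizes), key=lambda b: -b[1]):
        load, p, blocks = heapq.heappop(heap)
        heapq.heappush(heap, (load + size, p, blocks + [block]))
    return sorted(heap, key=lambda t: t[1])           # per-processor result

print(lpt_assign([90, 70, 50, 40, 30, 20], n_procs=2))
# [(150, 0, [0, 3, 5]), (150, 1, [1, 2, 4])] -- perfectly balanced here
```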

11.
Communication is essential for providing intelligent services in connected vehicles, which must exchange information with many different endpoints while driving. To operate their services, connected vehicles frequently download large amounts of data, which they can request from a road side unit (RSU), part of the infrastructure serving them. Because data traffic concentrates at the RSU, it becomes a bottleneck in the transportation system, so a connected vehicle should not always attempt a high-speed download from it. When the mobile network between a vehicle and an RSU has poor connection quality, download efficiency and speed drop, degrading the user experience. It is therefore important for a connected vehicle to account for network conditions when connecting to an RSU in order to maximize download speed. The proposed method maximizes download speed from an RSU using a machine learning algorithm; fog computing is used to collect and learn from network data, with a fog server integrated into the RSU to perform the computation. If the algorithm recognizes that conditions are unsuitable for mass data download, no high-speed download is attempted, which improves the overall efficiency of high-speed downloads. This conclusion was validated using extensive computer simulations.
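A minimal sketch of the fog-side decision, assuming a generic scikit-learn classifier and a made-up session-feature layout; the paper's actual model and features are not specified in the abstract.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature layout per past session:
# [signal_strength_dbm, vehicle_speed_kmh, rsu_load, latency_ms]
model = RandomForestClassifier(n_estimators=100)

def train(X, y):
    # y[i] = 1 if the past download met the target rate, else 0.
    model.fit(X, y)

def should_mass_download(features):
    """Gate: only attempt a high-speed mass download from the RSU when the
    learned model predicts that conditions are good."""
    return bool(model.predict([features])[0])
```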

12.
Member-system DEM (discrete element method) is an effective method for solving strongly nonlinear structural problems, but as the scale of structural computation grows, the computation time it requires expands sharply. To improve the efficiency of member-system DEM, this study proposes element-level and node-level parallel computation, builds a parallel computing framework for member-system DEM on a heterogeneous CPU-GPU platform, and implements the corresponding ...
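A toy 1-D illustration of the two parallelism levels named here, written as vectorized array ops: per-bar force evaluation is independent across elements, and the explicit update is independent across nodes, which is what makes one-thread-per-element/node GPU kernels possible. The mechanics are a placeholder, not the paper's formulation.

```python
import numpy as np

def bar_forces(pos, k=1.0, rest=1.0):
    """Element level: each bar's axial force depends only on its own two end
    nodes, so bars can be processed by independent (GPU) threads."""
    d = np.diff(pos)             # bar elongations along a 1-D chain of bars
    f_bar = k * (d - rest)       # per-element force
    f = np.zeros_like(pos)
    f[:-1] += f_bar              # scatter element forces to the two end nodes
    f[1:] -= f_bar
    return f

def step(pos, vel, dt=1e-3, mass=1.0):
    """Node level: every node's explicit update is independent of the
    others, mapping directly onto one-thread-per-node GPU kernels."""
    vel = vel + dt * bar_forces(pos) / mass
    return pos + dt * vel, vel
```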

13.
The theory of network reliability has been applied to many complicated network structures, such as computer and communication networks, piping systems, electricity networks, and traffic networks. It is used to evaluate the operational performance of networks that can be modeled by probabilistic graphs. Although evaluating network reliability is an NP-hard problem, numerous solutions have been proposed; however, most of them are based on sequential computing, which under-utilizes the benefits of multi-core processor architectures. This paper addresses that limitation by proposing an efficient strategy for calculating the two-terminal (terminal-pair) reliability of a binary-state network using parallel computing. Existing methods are analyzed; an efficient method for calculating terminal-pair reliability based on logical-probabilistic calculus is then proposed, and a parallel version of the proposed algorithm is developed. This is the first study to implement an algorithm for estimating terminal-pair reliability in parallel on multi-core processor architectures. The experimental results show that the proposed algorithm and its parallel version outperform an existing sequential algorithm in terms of execution time.
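Not the paper's logical-probabilistic algorithm, but a sketch of why terminal-pair reliability parallelizes well: an embarrassingly parallel Monte Carlo estimate in which independent trial batches sample edge states and test s-t connectivity.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def connected(n_nodes, up_edges, s, t):
    """Depth-first search over surviving edges: is t reachable from s?"""
    adj = {i: [] for i in range(n_nodes)}
    for u, v in up_edges:
        adj[u].append(v); adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v); stack.append(v)
    return False

def mc_batch(args):
    n_nodes, edges, s, t, trials, seed = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [(u, v) for u, v, p in edges if rng.random() < p]
        hits += connected(n_nodes, up, s, t)
    return hits

def terminal_pair_reliability(n_nodes, edges, s, t, trials=100_000, workers=4):
    """Split trials into independent batches, one per worker process."""
    per = trials // workers
    batches = [(n_nodes, edges, s, t, per, i) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(mc_batch, batches)) / (per * workers)
```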

14.
A new algorithm is proposed to approximate the terminal-pair network reliability based on minimal cut theory. Unlike many existing models that decompose the network into a series–parallel or parallel–series structure based on minimal cuts or minimal paths, the new model estimates the reliability by summing the linear and quadratic unreliability of each minimal cut set. Given component test data, the new model provides tight moment bounds for the network reliability estimate. Those moment bounds can be used to quantify the network estimation uncertainty propagating from component-level estimates. Simulations and numerical examples show that the new model generally outperforms the Esary-Proschan and Edge-Packing bounds, especially for high-reliability systems.
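A small sketch of a second-order (linear plus quadratic) cut-based approximation, assuming independent components: sum each cut's failure probability, then subtract pairwise joint-failure terms computed over the union of the two cuts. This follows the general inclusion-exclusion idea named in the abstract, not the paper's exact estimator or its moment bounds.

```python
from itertools import combinations
from math import prod

def cut_unreliability(cut, q):
    # A minimal cut causes failure only if every component in it fails.
    return prod(q[c] for c in cut)

def approx_network_unreliability(min_cuts, q):
    """Linear term: sum of single-cut failure probabilities.
    Quadratic term: pairwise joint failures over the union of two cuts."""
    linear = sum(cut_unreliability(c, q) for c in min_cuts)
    quadratic = sum(cut_unreliability(set(a) | set(b), q)
                    for a, b in combinations(min_cuts, 2))
    return linear - quadratic

q = {"e1": 0.1, "e2": 0.1, "e3": 0.2}
min_cuts = [{"e1", "e2"}, {"e2", "e3"}]
print(approx_network_unreliability(min_cuts, q))   # 0.03 - 0.002 = 0.028
```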

15.
Service reliability and performance in grid system with star topology   (Cited 2 times: 0 self-citations, 2 by others)
The paper considers grid computing systems in which the resource management system (RMS) can divide service tasks into subtasks and send the subtasks to different resources for parallel execution. To provide a desired level of service reliability, the RMS can assign the same subtasks to several independent resources for parallel execution. Service reliability and performance indices are introduced, and a fast numerical algorithm for evaluating them under an arbitrary subtask distribution in a grid with star architecture is presented. The algorithm is based on the universal generating function technique. Illustrative examples are presented.
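A minimal sketch of the universal generating function technique itself, not the paper's full algorithm: a UGF is a map from performance values to probabilities, and independent elements are combined by applying a composition operator to performances while multiplying probabilities. Here two redundant resources running the same subtask are combined with min (first completion wins, infinity marks failure); the numbers are made up.

```python
from collections import defaultdict

def ugf_combine(u1, u2, op):
    """Combine two independent UGFs: apply `op` to the performance values
    and multiply the corresponding probabilities."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

r1 = {10.0: 0.9, float("inf"): 0.1}   # finishes in 10 w.p. 0.9, fails w.p. 0.1
r2 = {14.0: 0.8, float("inf"): 0.2}
print(ugf_combine(r1, r2, min))        # {10.0: 0.9, 14.0: 0.08, inf: 0.02}
```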

16.
Given that the functions of the protocol layers in an ad hoc network are all interrelated, a cross-layer adaptive traffic allocation algorithm (CLATA) that minimizes the average network delay is proposed for ad hoc networks. The algorithm passes the network layer's adaptive traffic-allocation information down to the medium access control (MAC) layer to improve the MAC layer's collision backoff algorithm, minimizing the average network delay and improving network utilization. Simulation results show that the algorithm dynamically adjusts traffic between links, adapts quickly, and optimizes the use of network resources.
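CLATA itself is not specified in the abstract, so the sketch below shows only a generic delay-aware splitting rule of the same flavour: traffic is allocated across links in proportion to the inverse of their measured delays. Purely illustrative.

```python
def allocate_traffic(link_delays, total_flow):
    """Send more traffic over links with lower measured delay, in proportion
    to 1/delay (a toy stand-in for adaptive traffic allocation)."""
    weights = [1.0 / d for d in link_delays]
    s = sum(weights)
    return [total_flow * w / s for w in weights]

print(allocate_traffic([5.0, 10.0, 20.0], total_flow=70))   # [40.0, 20.0, 10.0]
```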

17.
IEEE 802.15.4 is the prevailing standard for low-rate wireless personal area networks; it specifies the physical layer and the medium access control sub-layer. Emerging standards such as ZigBee define the network layer on top of these lower levels to support routing and multi-hop communication. Tree routing is a favourable basis for ZigBee routing because of its simplicity and limited use of resources. However, in data collection systems based on spanning trees rooted at a sink node, non-optimal route selection, congestion, and uneven traffic distribution in tree routing can adversely affect network performance and lifetime. The imbalanced workload can cause hotspot problems and early energy depletion of specific nodes that are normally the crucial routers of the network. The authors propose a novel light-weight routing protocol, the energy aware multi-tree routing (EAMTR) protocol, to balance the data-gathering workload and alleviate the hotspot and single-point-of-failure problems in high-density sink-type networks. In this scheme, multiple trees are formed in the initialisation phase and, according to network traffic, each node selects the least congested route to the root node. The results of simulation and performance evaluation of EAMTR show significant improvement in network lifetime and traffic distribution.
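A tiny sketch of the selection rule in the last sentences: a node that joined several trees during initialisation forwards via the least congested candidate parent. Representing congestion as the parent's queue length is an assumption, as are all names.

```python
def choose_parent(candidates):
    """EAMTR-flavoured selection: among this node's candidate parents (one
    per tree built at initialisation), pick the least congested route."""
    return min(candidates, key=lambda c: c["queue_len"])

# candidate parents of one node, one per tree
candidates = [{"tree": 0, "parent": "N7", "queue_len": 12},
              {"tree": 1, "parent": "N3", "queue_len": 4}]
assert choose_parent(candidates)["parent"] == "N3"
```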

18.
Recently, the Erebus attack has proved to be a security threat to the blockchain network layer, and existing research faces challenges in detecting it. Cloud-based active defense and one-sided detection strategies hinder the detection of Erebus attacks. This study designs a detection approach for the blockchain network layer by establishing a ReliefF_WMRmR-based two-stage feature selection algorithm and a deep-learning-based multimodal classification model for Erebus attacks. The goal is to improve the performance of Erebus attack detection by combining traffic behavior with routing status through multimodal deep feature learning. Traffic behavior and routing status are first defined and used to describe the attack characteristics at the stages of leak monitoring, hidden traffic overlay, and transaction identity forgery, clarifying how an Erebus attack affects routing transfer and traffic state on the blockchain network layer and making the detection targets more relevant and sensitive. A two-stage feature selection algorithm based on ReliefF and weighted maximum relevance minimum redundancy (ReliefF_WMRmR) is designed to alleviate the overfitting caused by redundant information and noise in the multi-source features of routing status and traffic behavior: the ReliefF algorithm selects strongly correlated, highly informative features of the labeled data, and a WMRmR-based feature selection framework then eliminates weakly correlated features and redundant information, reducing the model's detection overhead. A multimodal deep learning model based on the multilayer perceptron (MLP) is constructed to address the high false-alarm rates caused by multi-source data: the selected routing-status and traffic-behavior inputs are learned in isolation, redundant inter-modal information is removed thanks to the complementarity of the multimodal network, and feature fusion with output feature representation boosts classification detection precision. The experimental results demonstrate that the proposed method can detect features, such as traffic data at key link nodes and routing messages, in a real blockchain network environment, and that the model detects Erebus attacks effectively, improving detection accuracy by 1.05%, recall by 2.01%, and F1-score by 2.43% over existing detection methods.
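A simplified sketch of a two-stage selection of the kind described: a basic ReliefF-style relevance score followed by a greedy relevance-minus-redundancy pruning pass. Both stages are stand-ins; the paper's ReliefF_WMRmR weighting is more elaborate.

```python
import numpy as np

def relieff_scores(X, y, n_probes=100, seed=0):
    """Stage 1, simplified ReliefF (k=1): reward features that separate a
    probe from its nearest miss and agree with its nearest hit."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for i in rng.choice(len(X), size=min(n_probes, len(X)), replace=False):
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w

def prune_redundant(X, scores, k=10):
    """Stage 2, greedy relevance minus redundancy: keep high-scoring
    features weakly correlated with those already selected."""
    s = (scores - scores.min()) / (np.ptp(scores) + 1e-12)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    chosen = [int(np.argmax(s))]
    while len(chosen) < k:
        gain = s - corr[:, chosen].mean(axis=1)
        gain[chosen] = -np.inf
        chosen.append(int(np.argmax(gain)))
    return chosen
```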

19.
The Internet of Things (IoT) defines a network of devices connected to the internet and sharing massive amounts of data with each other and with a central location. Because these IoT devices are networked, they are prone to attacks. Various management tasks and network operations, such as security, intrusion detection, Quality-of-Service provisioning, performance monitoring, resource provisioning, and traffic engineering, require traffic classification. Because traditional classification schemes, such as port-based and payload-based methods, are ineffective, researchers have proposed machine-learning-based traffic classification systems built on shallow neural networks; such models, however, tend to misclassify internet traffic when features are improperly selected. This research presents an efficient multilayer deep-learning-based classification system that overcomes these challenges to classify internet traffic. To examine the performance of the proposed technique, the Moore dataset is used to train the classifier. The proposed scheme takes the pre-processed data and extracts flow features using a deep neural network (DNN); a maximum entropy classifier is then used to classify the internet traffic. The experimental results show that the proposed hybrid deep learning algorithm is effective, achieving high accuracy for internet traffic classification, i.e., 99.23%, the highest compared with the support vector machine (SVM) based and k-nearest neighbours (KNN) based classification techniques.
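A compact sketch of the DNN-plus-maximum-entropy pipeline, with scikit-learn standing in for the paper's implementation: a small MLP learns flow features, its last hidden activation is extracted manually, and a multinomial logistic regression (a maximum entropy classifier) does the final classification. Layer sizes and solver settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def dnn_features(mlp, X):
    """Forward X through the trained MLP's hidden layers and return the
    last hidden activation as the learned flow-feature representation."""
    a = X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0)        # ReLU hidden layers
    return a

def fit(X, y):
    """Train the feature extractor, then a maximum entropy (multinomial
    logistic regression) classifier on the extracted features."""
    mlp = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                        max_iter=300).fit(X, y)
    maxent = LogisticRegression(max_iter=1000).fit(dnn_features(mlp, X), y)
    return mlp, maxent
```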
