Similar Documents
19 similar documents found (search time: 109 ms)
1.
The Wall Mesh   Total citations: 4 (self-citations: 1, by others: 3)
A new mesh, the wall mesh, is proposed. The wall mesh is a degree-3 mesh obtained by deleting one edge at every node of the ordinary degree-4 mesh; this saves 25% of the node degree, yet its diameter is almost the same as that of the ordinary degree-4 mesh. It is equivalent to the ordinary degree-4 mesh in computational power, because the degree-4 mesh can be embedded efficiently into the wall mesh with dilation 3 and congestion 4, so all existing parallel algorithms based on the degree-4 mesh can be ported effortlessly to the degree-3 mesh with only a constant slowdown. Connecting the boundary nodes of the wall mesh with wraparound edges produces the wall torus, which is vertex-…
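To make the construction concrete, here is a minimal sketch that derives a degree-3 wall mesh from the ordinary degree-4 mesh by deleting one incident edge per node. The brick-wall deletion pattern below is an assumption for illustration and may differ from the paper's exact rule.

```python
def wall_mesh(rows, cols):
    """Build a degree-3 'wall mesh' from an ordinary degree-4 mesh.

    All horizontal edges are kept; vertical edges are kept only in a
    brick-wall pattern, so every interior node loses exactly one of
    its four mesh edges (a 25% saving in node degree)."""
    edges = set()
    for i in range(rows):
        for j in range(cols - 1):          # horizontal edges: all kept
            edges.add(((i, j), (i, j + 1)))
    for i in range(rows - 1):
        for j in range(cols):              # vertical edges: alternating
            if (i + j) % 2 == 0:
                edges.add(((i, j), (i + 1, j)))
    return edges

# Every interior node of the wall mesh has degree 3:
from collections import Counter
deg = Counter()
for u, v in wall_mesh(6, 6):
    deg[u] += 1
    deg[v] += 1
assert all(deg[(i, j)] == 3 for i in range(1, 5) for j in range(1, 5))
```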

2.
This paper proposes a new interconnection network model, which can be viewed as obtained by deleting one edge at every node of the ordinary mesh. The topological properties of this new mesh are discussed, and an efficient broadcast algorithm is given.

3.
This paper discusses algorithms for programming linear circuit analysis with C as the development language, explains the principle of mesh analysis and the solution via the homogeneity theorem, and gives the program flow, explanatory notes, and several examples.
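The paper's C implementation is not reproduced here; as a hedged illustration of the mesh-analysis step, the sketch below solves the mesh-current equations R·i = v for a hypothetical two-mesh resistive circuit (component values are made up).

```python
import numpy as np

# Hypothetical two-mesh circuit: V1 drives mesh 1; R2 is shared.
R1, R2, R3 = 100.0, 220.0, 330.0   # ohms (illustrative values)
V1 = 12.0                          # volts

# Mesh-current equations: diagonal = total resistance around the mesh,
# off-diagonal = minus the resistance shared between the two meshes.
R = np.array([[R1 + R2, -R2],
              [-R2, R2 + R3]])
v = np.array([V1, 0.0])

i = np.linalg.solve(R, v)          # mesh currents i1, i2
print(f"i1 = {i[0]*1000:.2f} mA, i2 = {i[1]*1000:.2f} mA")
```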

4.
Parallel k-Selection on 2D Meshes with Wormhole Routing   Total citations: 2 (self-citations: 1, by others: 1)
许胤龙  王洵  万颖瑜  陈国良 《计算机学报》1999,22(12):1309-1313
Because of its simple, regular structure and ease of VLSI implementation, the two-dimensional mesh is not only a basic model for much theoretical research but also the interconnection structure adopted by many parallel machines. The adoption of wormhole routing has improved the communication capability of two-dimensional mesh machines. On an n×n two-dimensional mesh with wormhole routing, this paper presents a parallel k-selection algorithm with time complexity O(log²n·loglogn), improving on the O(n) lower bound on this problem's time complexity under store-and-forward routing. To the best of the authors' knowledge, this is the first polylogarithmic-time k-selection algorithm on a two-dimensional mesh without bus connections.
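For context, the primitive being parallelized is k-selection (finding the k-th smallest of a set); a minimal sequential quickselect sketch is below. The O(log²n·loglogn) parallel algorithm itself relies on wormhole-routing details not reproduced here.

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element of a (k is 1-based)."""
    a = list(a)
    while True:
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if k <= len(lo):
            a = lo                          # answer lies below the pivot
        elif k <= len(lo) + len(eq):
            return pivot                    # pivot is the k-th smallest
        else:
            k -= len(lo) + len(eq)          # answer lies above the pivot
            a = [x for x in a if x > pivot]

assert quickselect([5, 1, 4, 2, 3], 2) == 2
```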

5.
Degree-six networks are a class of planar network structures that partition the plane into equilateral triangles; they include the degree-six mesh network and the degree-six wraparound (torus) network. The degree-six mesh is not a regular network: its border nodes and interior nodes have different degrees. Adding wraparound edges at the border nodes of a degree-six mesh yields the regular degree-six wraparound network, in which every node has degree 6. However, the wraparound edges make communication algorithms on the degree-six wraparound network complicated to implement, and its diameter is very hard to compute. The degree-six wraparound network has been shown to be a Cayley graph model with good symmetry, but an optimal routing algorithm and a broadcast algorithm for the Cayley-graph-based degree-six wraparound network had not yet been obtained, and the exact diameter of the model was also an open problem. For the Cayley-graph-based degree-six wraparound network model, this paper gives a simple optimal routing algorithm, a broadcast algorithm based on coset graph theory, and the exact value of the network diameter.
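For the plain (non-wraparound) degree-six mesh, shortest-path distance has a simple closed form in axial coordinates, sketched below. The paper's contributions, routing and the exact diameter on the wraparound Cayley-graph network, are not captured by this snippet.

```python
def hex_distance(a, b):
    """Shortest-path distance between two nodes of an infinite
    degree-6 (triangular-lattice) mesh, in axial coordinates (q, r).

    The six neighbours of (q, r) are (q±1, r), (q, r±1),
    (q+1, r-1), and (q-1, r+1)."""
    dq = a[0] - b[0]
    dr = a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

assert hex_distance((0, 0), (2, -1)) == 2   # (0,0)->(1,-1)->(2,-1)
```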

6.
A Minimum Spanning Tree Algorithm on Mesh Processor Arrays   Total citations: 1 (self-citations: 1, by others: 0)
Given a weighted undirected graph G(V,E) with |V| = n, this paper presents a new minimum spanning tree algorithm on a mesh processor array, using a divide-and-conquer strategy and data-reduction techniques. The algorithm runs in O(n²/p) time using O(p) processors (1 ≤ p ≤ n). When p = n, it needs only O(n) time and O(n) processors, whereas the best existing algorithm for this problem on the same computational model requires O(n) time and O(n²) processors; the algorithm given here therefore improves on the number of processors used by a factor of O(n).
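For context, the classical sequential baseline such mesh algorithms parallelize is the O(n²) adjacency-matrix MST computation; a sketch of Prim's algorithm in that form (not the paper's parallel algorithm) follows.

```python
import math

def prim_mst(w):
    """O(n^2) Prim's algorithm; w is an n x n symmetric weight matrix
    with math.inf for missing edges. Returns the MST edge list."""
    n = len(w)
    in_tree = [False] * n
    best = [math.inf] * n          # cheapest edge weight into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):         # relax edges out of u
            if not in_tree[v] and w[u][v] < best[v]:
                best[v] = w[u][v]
                parent[v] = u
    return edges
```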

7.
The book-embedding problem for graphs is abstracted from many application areas, such as multilayer printed circuit board design and fault-tolerant multiprocessor array design. This paper improves the book-embedding results for planar grids, arbitrary trees, and X-trees proposed in [1], and also presents better book-embedding methods for shuffle-exchange networks, tree networks, meshes of trees, and cube-connected cycles.

8.
Constant-Time Optimal XOR Algorithms on Reconfigurable Meshes and Their Applications   Total citations: 2 (self-citations: 0, by others: 2)
Logical XOR and prefix XOR are basic logical operations that appear in many algorithms. On a reconfigurable mesh of size N×N, this paper presents parallel algorithms for the XOR and prefix XOR of N logical bits, both running in constant time. Based on the parallel XOR algorithm, the paper also presents constant-time parallel algorithms on the same model for deciding whether a given point lies inside a planar polygon with N edges, and whether a given point lies inside a polyhedron bounded by N planes. In essence, these are the first parallel algorithms for the corresponding problems that use a linear number of processors and run in constant time.
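The link between XOR and the geometric tests is the even-odd rule: a point lies inside a polygon iff a ray from it crosses the boundary an odd number of times, i.e. the XOR of N per-edge crossing bits is 1. Below is a sequential sketch of the test that the paper evaluates in constant time on the reconfigurable mesh.

```python
def point_in_polygon(p, poly):
    """Even-odd rule: XOR together one crossing bit per edge."""
    x, y = p
    inside = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Does the horizontal ray from p cross this edge?
        crosses = (y1 > y) != (y2 > y) and \
                  x < x1 + (y - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses
    return bool(inside)

assert point_in_polygon((0.5, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)])
```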

9.
The h-Dimensional Wall Torus   Total citations: 1 (self-citations: 0, by others: 1)
The h-dimensional wall torus hDWT is defined: given even numbers ni ≥ 4, i = 1, 2, …, h, the h-dimensional wall torus hDWT(n1, n2, …, nh) is an (h+1)-regular bipartite graph, obtained from the h-dimensional ordinary torus hDOT(n1, n2, …, nh) by deleting h−1 of the edges incident to each node. It is vertex-symmetric and has optimal fault tolerance h+1; if n1 = max ni, its diameter is the same as that of hDOT. hDOT can be embedded into hDWT with dilation 3 and congestion min{h, 4}, so all existing algorithms on hDOT can be ported to hDWT with only a constant slowdown. A simple self-routing algorithm is proposed that realizes a shortest path between any two nodes of hDWT. By adding the deleted edges back to every node, an enhanced hDWT is obtained, which has a much smaller diameter than hDOT and contains several useful interconnection topologies. The h-dimensional wall mesh can be obtained from hDWT by deleting all of its wraparound edges, and many of its properties are similar to those of hDWT.

10.
郁松年 《计算机学报》1994,17(6):469-472
Based on a three-dimensional mesh processor array, this paper presents a new, efficient minimum spanning tree algorithm on weighted undirected graphs, using a divide-and-conquer strategy and data-reduction techniques.

11.
With the development of 3D model technology, copyright protection of 3D models is receiving increasing attention. This paper proposes a digital watermarking algorithm for 3D mesh models based on the V-system over triangular domains. The algorithm first applies a V-transform over the triangular domain to the distances from the model's vertices to its center, and then quantizes the transformed coefficients to embed the watermark. Since this distance is a global geometric feature, and the algorithm distributes a sufficient number of watermark bits over the whole model, the watermarking scheme has a degree of robustness. Experimental results show that the algorithm can resist common attacks such as translation and rotation, as well as random-noise attacks.
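The coefficient-quantization step can be sketched as parity-based quantization index modulation (QIM), which embeds one bit per coefficient. The step size delta and this particular quantizer are assumptions for illustration, not the paper's exact scheme.

```python
def embed_bit(c, bit, delta):
    """Quantize coefficient c to a multiple of delta whose parity
    encodes the watermark bit (0 or 1)."""
    q = round(c / delta)
    if q % 2 != bit:
        q += 1 if c >= q * delta else -1   # nearest level of right parity
    return q * delta

def extract_bit(c, delta):
    return round(c / delta) % 2

w = embed_bit(3.7, 1, 0.5)
assert extract_bit(w, 0.5) == 1 and abs(w - 3.7) <= 0.5
```

Robustness to mild noise follows from the quantizer: any perturbation smaller than half a quantization step leaves the extracted parity, and hence the bit, unchanged.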

12.
We develop algorithms for mapping n-dimensional meshes onto a star graph of degree n with expansion 1 and dilation 3. We show that an n-degree star graph can efficiently simulate an n-dimensional mesh.

13.
14.
Chemical graph theory is a branch of mathematics which combines graph theory and chemistry. Chemical reaction network theory is an area of applied mathematics that attempts to model the behaviour of real chemical systems. It has attracted the research community since 1960 due to its applications in theoretical and organic chemistry, and it has also drawn the interest of mathematicians because of the interesting mathematical structures and problems involved. The structure of an interconnection network can be represented by a graph: vertices represent the processor nodes and edges represent the links between the processor nodes. Graph invariants play a vital role in graph theory and distinguish the structural properties of graphs and networks. In this paper, we determine newly introduced topological indices, namely the first ev-degree Zagreb index, first ve-degree Zagreb index, second ve-degree Zagreb index, ve-degree Randic index, ve-degree atom-bond connectivity index, ve-degree geometric-arithmetic index, ve-degree harmonic index, and ve-degree sum-connectivity index, for the honeycomb derived network. In the analysis of quantitative structure-property relationships (QSPRs) and quantitative structure-activity relationships (QSARs), graph invariants are important tools to approximate and predict the properties of biological and chemical compounds. We also give numerical and graphical representations of our results.
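As a hedged illustration of one such invariant: taking the ve-degree of a vertex v to be the number of edges incident to v's closed neighbourhood (following Chellali et al.), a Zagreb-type index can be computed as below; the exact definitions used in the paper may differ.

```python
def ve_degree(adj, v):
    """ve-degree of v: number of edges with at least one endpoint
    in the closed neighbourhood N[v]."""
    closed = set(adj[v]) | {v}
    return sum(1 for u in adj for w in adj[u]
               if u < w and (u in closed or w in closed))

def first_ve_zagreb(adj):
    # Assumed form: sum of squared ve-degrees, by analogy with the
    # classical first Zagreb index (sum of squared degrees).
    return sum(ve_degree(adj, v) ** 2 for v in adj)

# 4-cycle: every vertex has ve-degree 4 (all four edges touch N[v]).
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert first_ve_zagreb(c4) == 4 * 16
```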

15.
A Kernelization Algorithm for the Maximum Leaf Spanning Tree Problem   Total citations: 1 (self-citations: 0, by others: 1)
This paper studies the maximum leaf spanning tree problem in the field of algorithms, proposes simplification rules for degree-2 nodes of simple connected graphs, proves that a graph without degree-2 nodes has a spanning tree with at least (N+6)/4 leaf nodes, and gives a constructive method for building such a spanning tree. Based on these simplification rules and the proven bound, a kernelization algorithm for the maximum leaf spanning tree problem is given; it obtains a linear kernel of size 4k−6 in O(n²) time. Such a small kernel greatly improves the performance of the related parameterized and approximation algorithms.
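The degree-2 simplification can be sketched as suppressing each degree-2 vertex by joining its two neighbours. This is a simplified version that skips steps which would create parallel edges; the paper's rules may be more refined.

```python
def suppress_degree_two(adj):
    """Repeatedly suppress a degree-2 vertex by joining its neighbours.
    adj: dict mapping vertex -> set of neighbours (simple graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj or len(adj[v]) != 2:
                continue
            u, w = adj[v]
            if w in adj[u]:           # joining would duplicate an edge
                continue
            adj[u].discard(v); adj[w].discard(v)
            adj[u].add(w); adj[w].add(u)
            del adj[v]
            changed = True
    return adj

# A path 0-1-2-3 collapses to the single edge 0-3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert suppress_degree_two(path) == {0: {3}, 3: {0}}
```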

16.
We devise a hybrid low-order method for Bingham pipe flows, where the velocity is discretized by means of one unknown per mesh face and one unknown per mesh cell which can be eliminated locally by static condensation. The main advantages are local conservativity and the possibility to use polygonal/polyhedral meshes. We exploit this feature in the context of adaptive mesh refinement to capture the yield surface by means of local mesh refinement and possible coarsening. We consider the augmented Lagrangian method to solve iteratively the variational inequalities resulting from the discrete Bingham problem, using piecewise constant fields for the auxiliary variable and the associated Lagrange multiplier. Numerical results are presented in pipes with circular and eccentric annulus cross-section for different Bingham numbers.

17.
This paper presents a novel algorithm for hierarchical random accessible mesh decompression. Our approach progressively decompresses the requested parts of a mesh without decoding less interesting parts. Previous approaches divided a mesh into independently compressed charts and a base coarse mesh. We propose a novel hierarchical representation of the mesh. We build this representation by using a boundary-based approach to recursively split the mesh in two parts, under the constraint that either of the two resulting submeshes should be reconstructible independently.
In addition to this decomposition technique, we introduce the concepts of opposite vertex and context-dependent numbering. This enables us to achieve better compression ratios than previous work on quad and higher-degree polygonal meshes. Our coder uses about 3 bits per polygon for connectivity and 14 bits per vertex for geometry with 12-bit quantization.
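As a hedged illustration of the 12-bit quantization mentioned above, the generic scheme below maps float coordinates onto a 4096-level grid over their range; it is not the paper's actual coder.

```python
def quantize(coords, bits=12):
    """Map float coordinates to integers in [0, 2**bits - 1]."""
    lo, hi = min(coords), max(coords)
    steps = (1 << bits) - 1
    scale = steps / (hi - lo) if hi > lo else 0.0
    return [round((c - lo) * scale) for c in coords], (lo, hi)

def dequantize(q, lo_hi, bits=12):
    lo, hi = lo_hi
    steps = (1 << bits) - 1
    return [lo + i * (hi - lo) / steps for i in q]

q, box = quantize([0.0, 0.25, 1.0])
assert max(abs(a - b) for a, b in
           zip(dequantize(q, box), [0.0, 0.25, 1.0])) < 1.0 / 4095
```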

18.
In this paper, we present a scalable three-dimensional hybrid parallel Delaunay image-to-mesh conversion algorithm (PDR.PODM) for distributed shared memory architectures. PDR.PODM is able to exploit parallelism early in the mesh generation process thanks to the aggressive speculative approach employed by the Parallel Optimistic Delaunay Mesh generation algorithm (PODM). In addition, it decreases the communication overhead and improves data locality by making use of the data partitioning scheme offered by the Parallel Delaunay Refinement algorithm (PDR). PDR.PODM supports fully functional volume grading by creating elements of varying size: small elements are created near the boundary or inside critical regions in order to capture fine features, while big elements are created in the rest of the mesh. We tested PDR.PODM on Blacklight, a distributed shared memory (DSM) machine at the Pittsburgh Supercomputing Center. For uniform mesh generation, we observed a weak scaling speedup of 163.8 and above for up to 256 cores, as opposed to PODM, whose weak scaling speedup is only 44.7 on 256 cores. PDR.PODM scales well on uniform refinement cases running on DSM supercomputers. The end result is that PDR.PODM can generate 18 million elements per second, as opposed to 14 million per second in our earlier work. The varying-size version sharply reduces the number of elements compared to the uniform version, and thus reduces the time to generate the mesh while keeping the same fidelity.
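Volume grading of the kind described, fine elements near boundaries or critical regions and coarse elements elsewhere, is typically driven by a sizing function; the sketch below is a generic illustration, not PDR.PODM's implementation.

```python
def target_size(dist_to_feature, h_min=0.1, h_max=2.0, grading=0.5):
    """Target element size grows linearly with distance from the
    nearest boundary/critical feature, clamped to [h_min, h_max]."""
    return min(h_max, h_min + grading * dist_to_feature)

assert target_size(0.0) == 0.1       # fine elements on the feature
assert target_size(100.0) == 2.0     # coarse elements far away
```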

19.
Tools to support mesh adaptation on massively parallel computers   Total citations: 1 (self-citations: 0, by others: 1)
The scalable execution of parallel adaptive analyses requires the application of dynamic load balancing to repartition the mesh into a set of parts with balanced work load and minimal communication. As the adaptive meshes being generated reach billions of elements and the analyses are performed on massively parallel computers with 100,000s of computing cores, a number of complexities arise that need to be addressed. This paper presents procedures developed to deal with two of them. The first is a procedure to support multiple parts per processor which is used as the mesh increases in size and it is desirable to partition the mesh to a larger number of computing cores than are currently being used. The second is a predictive load balancing method that sets entity weights before dynamic load balancing steps so that the mesh is well balanced after the mesh adaptation step thus avoiding excessive memory spikes that would otherwise occur during mesh adaptation.
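Predictive load balancing can be illustrated by weighting each element with its expected post-adaptation size and balancing those weights before adapting. The greedy sketch below is a generic illustration of the idea; the paper's procedure feeds such weights into dynamic load-balancing tools rather than using this simple heuristic.

```python
import heapq

def predictive_partition(element_weights, nparts):
    """Greedy (longest-processing-time) assignment of predicted
    post-refinement weights to parts, so the mesh is balanced
    *after* adaptation rather than before it."""
    heap = [(0.0, p) for p in range(nparts)]   # (load, part id)
    heapq.heapify(heap)
    assignment = {}
    for e, w in sorted(element_weights.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)          # least-loaded part
        assignment[e] = p
        heapq.heappush(heap, (load + w, p))
    return assignment

# Elements expected to split into 8 children get weight 8, etc.
parts = predictive_partition({'e1': 8, 'e2': 1, 'e3': 4, 'e4': 4}, 2)
assert parts['e1'] != parts['e3']    # heavy elements spread across parts
```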
