Similar Documents
20 similar documents were found.
1.
In this paper, we present DiSR, a distributed approach to topology discovery and defect mapping in a self-assembled nano network-on-chip. The main aim is to achieve the already-proven properties of segment-based deadlock freedom while requiring neither a topology graph as input nor a centralized algorithm to configure network paths. After introducing the conceptual elements and the execution model of DiSR, we show how the open-source Nanoxim platform has been used to evaluate the proposed approach in the process of discovering an irregular network topology while establishing network segments. Comparison against a tree-based approach shows how DiSR preserves important properties (coverage, defect tolerance, scalability) while avoiding resource-hungry solutions such as virtual channels and hardware redundancy. Finally, we propose a gate-level hardware implementation of the control logic and storage required by DiSR, demonstrating an acceptable overhead ranging from about 10% to 20% of the transistor budget available for each node.

2.
To address the poor scalability and robustness caused by centralized management in large distributed systems, an improved structured peer-to-peer network is used to organize distributed computing resources, yielding SRDM (scalable resource discovery model). SRDM divides the nodes of the logical space into host nodes and resource nodes. Host nodes correspond to compute nodes in the distributed environment and store peer association information; they are mapped onto the logical space by consistent hashing. Resource nodes correspond to resource attribute information in the distributed environment; their mapping onto the logical space is obtained by segmented hashing followed by merging. By applying a locality-preserving hash to attribute values, the improved DHT algorithm supports efficient range queries over resource nodes as well as multi-attribute range queries. Experiments show that the resource discovery method based on the improved DHT algorithm scales better than a centralized approach and is better suited to resource discovery in large-scale distributed systems.
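
The abstract does not spell out the hash construction, so the following Python sketch is only an illustration of the general idea: host nodes are placed on the DHT ring with consistent hashing, while attribute values use an order-preserving mapping so that a value range corresponds to a contiguous ring segment. The function names, ring size and attribute bounds are assumptions made for the example.

```python
import hashlib

RING_BITS = 32
RING_SIZE = 1 << RING_BITS

def node_id(host_name: str) -> int:
    """Consistent hashing for host nodes: roughly uniform placement on the ring."""
    digest = hashlib.sha1(host_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % RING_SIZE

def attribute_key(value: float, lo: float, hi: float) -> int:
    """Order-preserving hash for a numeric attribute value: values are mapped
    linearly onto the ring, so a value range maps to a contiguous identifier segment."""
    frac = (value - lo) / (hi - lo)
    return int(frac * (RING_SIZE - 1))

def range_query_keys(low: float, high: float, lo: float, hi: float):
    """A range query [low, high] becomes an identifier interval that can be
    resolved by walking successors on the DHT ring."""
    return attribute_key(low, lo, hi), attribute_key(high, lo, hi)

# Example: a CPU-frequency attribute assumed to lie in [0.5, 5.0] GHz
print(node_id("host-17"))
print(range_query_keys(2.0, 3.0, lo=0.5, hi=5.0))
```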

3.
4.
Li Yuan, Wang Guoren, Zhao Yuhai, Zhu Feida, Wu Yubao. World Wide Web, 2020, 23(2): 799-830
World Wide Web - In many real-life network-based applications, such as social relation analysis, Web analysis, collaborative networks, road networks and bioinformatics, the discovery of components...

5.
With the increased acceptance of electronic health records, there is increasing interest in applying data mining approaches in this field. This study introduces a novel approach for exploring and comparing temporal trends within different in-patient subgroups, based on association rule mining using the Apriori algorithm and linear model-based recursive partitioning. The Nationwide Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), Agency for Healthcare Research and Quality, was used to evaluate the proposed approach. This study presents a novel approach in which visual analytics on big data is used for trend discovery in the form of a regression tree with scatter plots in the leaves of the tree. The trend lines are used for directly comparing linear trends within a specified time frame. Our results demonstrate the existence of opposite trends in age- and sex-based subgroups that would be impossible to discover using traditional trend-tracking techniques. Such an approach can be employed in decision-support applications for policy makers when organizing campaigns, or by hospital management for observing trends that cannot be directly discovered using traditional analytical techniques.
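
The abstract names association rule mining with the Apriori algorithm as one building block. As a rough illustration of that step only (the recursive-partitioning and visual-analytics parts are not shown), here is a minimal, self-contained Apriori sketch in Python; the toy transactions and the minimum-support value are made up for the example.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}      # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # Count support of the candidate k-itemsets
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent

# Toy in-patient style "transactions" (diagnosis codes), purely illustrative
data = [{"dx1", "dx2"}, {"dx1", "dx3"}, {"dx1", "dx2", "dx3"}, {"dx2"}]
print(apriori(data, min_support=0.5))
```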

6.
Distributed shared memory for roaming large volumes
We present a cluster-based volume rendering system for roaming very large volumes. This system makes it possible to move a gigabyte-sized probe inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
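
The exact cache protocol is not given in the abstract, but the lookup order it describes (local cache, then the other cluster nodes, then local disk) can be sketched as follows; `fetch_from_peer` and `read_brick_from_disk` are hypothetical callbacks standing in for the network and I/O layers.

```python
class BrickCache:
    """Hierarchical cache for volume bricks: local memory first, then the
    other cluster nodes, and only then local disk (the lookup order
    described in the abstract)."""

    def __init__(self, peers, read_brick_from_disk, fetch_from_peer):
        self.local = {}                                   # brick_id -> brick data
        self.peers = peers                                # peer node handles
        self.read_brick_from_disk = read_brick_from_disk  # hypothetical I/O callback
        self.fetch_from_peer = fetch_from_peer            # hypothetical network callback

    def get(self, brick_id):
        if brick_id in self.local:            # cache hit
            return self.local[brick_id]
        for peer in self.peers:               # cache miss: ask the other nodes first
            data = self.fetch_from_peer(peer, brick_id)
            if data is not None:
                self.local[brick_id] = data
                return data
        data = self.read_brick_from_disk(brick_id)   # last resort: local disk
        self.local[brick_id] = data
        return data
```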

7.
Tensegrities consist of disjoint struts connected by tensile strings, which maintain their shape due to pre-stress stability. Because of their rigidity, foldability and deployability, tensegrities are becoming increasingly popular in engineering. Unfortunately, few effective analytical methods for discovering tensegrity geometries exist. We introduce an evolutionary algorithm which produces large tensegrity structures, and demonstrate its efficacy and scalability relative to previous methods. A generative representation allows the discovery of underlying structural patterns. These techniques have produced the largest and most complex irregular tensegrities known in the field, paving the way toward novel solutions ranging from space antennas to soft robotics.

8.
The efficient mining of large, commercially credible, databases requires a solution to at least two problems: (a) better integration between existing Knowledge Discovery algorithms and popular DBMS; (b) ability to exploit opportunities for computational speedup such as data parallelism. Both problems need to be addressed in a generic manner, since the stated requirements of end-users cover a range of data mining paradigms, DBMS, and (parallel) platforms. In this paper we present a family of generic, set-based, primitive operations for Knowledge Discovery in Databases (KDD). We show how a number of well-known KDD classification metrics, drawn from paradigms such as Bayesian classifiers, Rule-Induction/Decision Tree algorithms, Instance-Based Learning methods, and Genetic Programming, can all be computed via our generic primitives. We then show how these primitives may be mapped into SQL and, where appropriate, optimised for good performance in respect of practical factors such as client–server communication overheads. We demonstrate how our primitives can support C4.5, a widely-used rule induction system. Performance evaluation figures are presented for commercially available parallel platforms, such as the IBM SP/2.
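
As a rough illustration of mapping a generic primitive into SQL (an assumption, not the paper's actual primitive set), the sketch below computes a per-(attribute value, class) contingency count, the kind of quantity needed by naive Bayes and decision-tree split selection, using Python's standard sqlite3 module on a toy table.

```python
import sqlite3

# In-memory toy table standing in for a mining view of the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (outlook TEXT, play TEXT)")
conn.executemany("INSERT INTO cases VALUES (?, ?)", [
    ("sunny", "no"), ("sunny", "yes"), ("overcast", "yes"), ("rain", "yes"),
])

def contingency_counts(attribute, target, table="cases"):
    """Generic set-based primitive: counts per (attribute value, class),
    expressed as a single SQL GROUP BY query."""
    sql = f"SELECT {attribute}, {target}, COUNT(*) FROM {table} GROUP BY {attribute}, {target}"
    return conn.execute(sql).fetchall()

print(contingency_counts("outlook", "play"))
```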

9.
We study efficient discovery of proximity word-association patterns, defined by a sequence of strings and a proximity gap, from a collection of texts with positive and negative labels. We present an algorithm that finds all d-string k-proximity word-association patterns that maximize the number of texts whose matching agrees with their labels. It runs in expected time O(k^(d-1) n log^d n) and space O(k^(d-1) n), where n is the total length of the texts, if the texts are uniformly random strings. We also show that the problem of finding one of the best word-association patterns with arbitrarily many strings is MAX SNP-hard. Shinichi Shimozono, Ph.D.: He is an Associate Professor of the Department of Artificial Intelligence at Kyushu Institute of Technology, Iizuka, Japan. He obtained the B.S. degree in Physics from Kyushu University, was awarded the M.S. degree from the Graduate School of Information Science at Kyushu University, and received his Dr. Sci. degree in 1996 from Kyushu University. His research interests are primarily in the design and analysis of algorithms for intractable problems. Hiroki Arimura, Ph.D.: He is an Associate Professor of the Department of Informatics at Kyushu University, Fukuoka, Japan. He has also been a researcher with Precursory Research for Embryonic Science and Technology, Japan Science and Technology Corporation (JST) since 1999. He received the B.S. degree in 1988 in Physics, the M.S. degree in 1979 and the Dr. Sci. degree in 1994 in Information Systems from Kyushu University. His research interests include data mining, computational learning theory, and inductive logic programming. Setsuo Arikawa, Ph.D.: He is a Professor of the Department of Informatics and the Director of the University Library at Kyushu University, Fukuoka, Japan. He received the B.S. degree in 1964, the M.S. degree in 1966 and the Dr. Sci. degree in 1969, all in Mathematics, from Kyushu University. His research interests include Discovery Science, Algorithmic Learning Theory, Logic and Inference/Reasoning in AI, Pattern Matching Algorithms and Library Science. He is the principal investigator of the Discovery Science Project sponsored by the Grant-in-Aid for Scientific Research on Priority Area from the Ministry of ESSC, Japan.
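
For illustration of the pattern class only (not the paper's algorithm, which attains the expected-time bound above), the following Python sketch tests whether a d-string pattern with proximity gap k occurs in a text; the exact gap convention used here is an assumption.

```python
def matches(pattern, text, k, start=0, bound=None):
    """Return True if the strings in `pattern` occur in `text` in order,
    each starting no later than k characters after the end of the previous
    match (the gap convention is an assumption made for illustration)."""
    if not pattern:
        return True
    head, rest = pattern[0], pattern[1:]
    i = text.find(head, start)
    while i != -1 and (bound is None or i <= bound):
        end = i + len(head)
        if matches(rest, text, k, start=end, bound=end + k):
            return True
        i = text.find(head, i + 1)      # backtrack: try the next occurrence
    return False

texts = [("the quick brown fox jumps", True), ("a quick brown dog", False)]
pattern, k = ["quick", "fox"], 7
# The paper's objective: number of texts whose match result agrees with its label
print(sum(matches(pattern, t, k) == label for t, label in texts))
```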

10.
Sequential pattern mining algorithms can often produce more accurate results if they work with specific constraints in addition to the support threshold. Many systems implement time-independent constraints by selecting qualified patterns. This selection cannot implement time-dependent constraints, because the support computation process must validate the time attributes of every data sequence during mining. Therefore, we propose a memory time-indexing approach, called METISP, to discover sequential patterns with time constraints, including minimum-gap, maximum-gap, exact-gap, sliding-window, and duration constraints. METISP scans the database into memory and constructs time-index sets for effective processing. METISP uses the index sets and a pattern-growth strategy to mine patterns without generating any candidates or sub-databases. The index sets narrow the search space down to the sets of designated in-memory data sequences and speed up the counting of potential items within the indicated ranges. Our comprehensive experiments show that METISP is more efficient, even with low support thresholds and large databases, than the well-known GSP and DELISP algorithms, and that it scales up linearly with respect to database size.
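
The time constraints listed in the abstract are easy to state on a single timestamped occurrence of a pattern. The sketch below is only such a plain check, not METISP's time-index-based counting; the parameter names are illustrative, and the sliding-window constraint (which groups nearby elements into one) is omitted.

```python
def satisfies_time_constraints(times, min_gap=0, max_gap=None, max_duration=None):
    """Check minimum-gap, maximum-gap and duration constraints for one
    occurrence of a sequential pattern, given the timestamps of its
    matched elements (a plain check, not METISP's index-based counting)."""
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap < min_gap:                            # minimum-gap constraint
            return False
        if max_gap is not None and gap > max_gap:    # maximum-gap constraint
            return False
    if max_duration is not None and times[-1] - times[0] > max_duration:
        return False                                 # duration constraint
    return True

print(satisfies_time_constraints([1, 3, 7], min_gap=1, max_gap=5, max_duration=10))
```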

11.
The Nielsen Opportunity Explorer™ product can be used by sales and trade marketing personnel within consumer packaged goods manufacturers to understand how their products are performing in the marketplace and to find opportunities to sell more product, more profitably, to the retailers. Opportunity Explorer uses data collected at point-of-sale terminals and by auditors of A. C. Nielsen. Opportunity Explorer uses a knowledge base of market research expertise to analyze large databases and generate interactive reports using knowledge discovery templates, converting a large space of data into concise, inter-linked information frames. Each information frame addresses specific business issues and leads the user to seek related information by means of dynamically created hyperlinks.

12.
Solving large-scale scientific problems is a challenging and broad area of numerical optimisation, and dedicated techniques can improve the results achieved on these problems. We aimed to design a specific optimisation technique for this class of problems: a new swarm-based algorithm inspired by the foraging behaviour of bees. Such a system must rely on large computing infrastructures with specific characteristics, so we designed the algorithm to be executed on the grid. The resulting algorithm improves on the results obtained by other algorithms for the large-scale problem described in the paper, and it also delivers optimal usage of the computational resources. This work is one of the few demonstrations of solving real large-scale scientific problems with a dedicated algorithm on large and complex computing infrastructures. We show the capabilities of this approach when solving these problems.
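
The abstract does not describe the algorithm itself, so the sketch below shows only a generic bee-foraging minimisation loop (bees perturb food sources toward random partners, stagnant sources are abandoned by scouts); it is an assumption-level illustration, not the paper's grid-enabled method, and all parameter values are arbitrary.

```python
import random

def bee_foraging(objective, dim, n_sources=10, limit=20, iters=200, bounds=(-5.0, 5.0)):
    """Generic bee-foraging minimisation loop (illustrative only)."""
    lo, hi = bounds
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=objective)
    for _ in range(iters):
        for i, src in enumerate(sources):
            # Foraging phase: perturb one dimension toward a random partner source
            j = random.randrange(dim)
            partner = random.choice(sources)
            cand = src[:]
            cand[j] += random.uniform(-1, 1) * (src[j] - partner[j])
            cand[j] = min(hi, max(lo, cand[j]))
            if objective(cand) < objective(src):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            # Scout phase: abandon a source that stopped improving
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(sources + [best], key=objective)
    return best

sphere = lambda x: sum(v * v for v in x)
print(sphere(bee_foraging(sphere, dim=5)))
```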

13.
14.
Recursive Distributed Rendezvous (ReDiR) is a service discovery mechanism for Distributed Hash Table (DHT) based Peer-to-Peer (P2P) overlay networks. One of the major P2P systems that has adopted ReDiR is Peer-to-Peer Session Initiation Protocol (P2PSIP), which is a distributed communication system being standardized in the P2PSIP working group of the Internet Engineering Task Force (IETF). In a P2PSIP overlay, ReDiR can be used for instance to discover Traversal Using Relays around NAT (TURN) relay servers needed by P2PSIP nodes located behind a Network Address Translator (NAT). In this paper, we study the performance of ReDiR in a P2PSIP overlay network. We focus on metrics such as service lookup and registration delays, failure rate, traffic load, and ReDiR’s ability to balance load between service providers and between nodes storing information about service providers.

15.
In this paper we present data structures and distributed algorithms for CSL model checking-based performance and dependability evaluation. We show that all the necessary computations are composed of series or sums of matrix-vector products. We discuss sparse storage structures for the required matrices and present efficient sequential and distributed disk-based algorithms for performing these matrix-vector products. We illustrate the effectiveness of our approach in a number of case studies in which continuous-time Markov chains (generated in a distributed way from stochastic Petri net specifications) with several hundred million states are solved on a workstation cluster with 26 dual-processor nodes. We give details about the memory consumption, the solution times, and the speedup. The distributed message-passing algorithms have been implemented in a tool called PARSECS, which also takes care of the distributed Markov chain generation and can also be used for distributed CTL model checking of Petri nets.
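
Since all computations reduce to matrix-vector products over sparse matrices, a compressed-sparse-row (CSR) product is the sequential kernel that such a solver distributes. The sketch below assumes CSR storage, which is a common choice rather than necessarily the layout used in the paper.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A * x for a sparse matrix A in CSR form: `values` and `col_idx`
    hold the non-zeros row by row, and row_ptr[i]:row_ptr[i+1] delimits row i."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y[i] = s
    return y

# Toy 3x3 rate-matrix-like example with four non-zeros
values  = [-0.5, 0.5, -2.0, 2.0]
col_idx = [0, 1, 1, 2]
row_ptr = [0, 2, 4, 4]          # row 2 is empty (e.g. an absorbing state)
print(csr_matvec(values, col_idx, row_ptr, [1.0, 2.0, 3.0]))   # -> [0.5, 2.0, 0.0]
```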

16.
Collaborative filtering (CF) is an effective technique for addressing the information overload problem, where each user is associated with a set of rating scores on a set of items. For a chosen target user, conventional CF algorithms measure similarity between this user and other users by utilizing pairs of rating scores on commonly rated items, discarding scores rated by only one of them. We call these comparative scores dual ratings and the non-comparative scores singular ratings. Our experiments show that only about 10% of ratings are dual ones that can be used for similarity evaluation, while the other 90% are singular ones. In this paper, we propose the SingCF approach, which attempts to incorporate multiple singular ratings, in addition to dual ratings, into collaborative filtering, aiming to improve recommendation accuracy. We first estimate the unrated scores for singular ratings and transform them into dual ones. Then we perform a CF process to discover neighborhood users and make predictions for each target user. Furthermore, we provide a MapReduce-based distributed framework on Hadoop for significant improvements in efficiency. Experiments in comparison with state-of-the-art methods demonstrate the performance gains of our approaches.
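
To make the dual/singular distinction concrete, the sketch below computes a conventional Pearson similarity that uses only co-rated (dual) items, which is exactly the limitation SingCF addresses by first estimating scores for singular ratings; the toy rating dictionaries are invented for the example.

```python
from math import sqrt

def pearson_on_dual(ratings_u, ratings_v):
    """Conventional CF similarity: only items rated by BOTH users
    (dual ratings) contribute; singular ratings are simply discarded."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu = sum(ratings_u[i] for i in common) / len(common)
    mv = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu) * (ratings_v[i] - mv) for i in common)
    den = sqrt(sum((ratings_u[i] - mu) ** 2 for i in common)) * \
          sqrt(sum((ratings_v[i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

u = {"item1": 4, "item2": 5, "item3": 2}   # item3 rated only by u (singular)
v = {"item1": 5, "item2": 4, "item4": 1}   # item4 rated only by v (singular)
print(pearson_on_dual(u, v))               # similarity from the dual ratings only
```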

17.
We describe an architecture for distributed collaborative visualization that integrates video conferencing, distributed data management and grid technologies as well as tangible interaction devices for visualization. High-speed, low-latency optical networks support high-quality collaborative interaction and remote visualization of large data.

18.
Deductive databases can deduce new facts from a set of existing facts by using a set of rules. They are also useful in the integration of artificial intelligence and databases. However, when recursive rules are involved, the number of deduced facts can become too large to be practically stored, viewed or analyzed, which seriously limits the usefulness of deductive databases. To overcome this problem, we propose four methods to discover characteristic rules from a large number of deduction results without actually having to store all of them. This paper presents a first step in the application of knowledge discovery techniques to deductive databases with large numbers of deduction results.
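
To illustrate how a single recursive rule inflates the set of deduced facts, here is a small Python sketch of semi-naive evaluation of a transitive-closure rule; the ancestor/parent relations are a standard textbook example, not taken from the paper.

```python
def transitive_closure(edges):
    """Semi-naive evaluation of the recursive rule
        ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    starting from ancestor(X, Y) :- parent(X, Y)."""
    derived = set(edges)          # all facts deduced so far
    delta = set(edges)            # facts new in the last round
    while delta:
        new = {(x, z) for (x, y) in edges for (y2, z) in delta if y == y2}
        delta = new - derived
        derived |= delta
    return derived

parent = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(parent)))
# A chain of n parent facts already yields n*(n+1)/2 ancestor facts,
# which is why the paper summarizes deduction results instead of storing them all.
```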

19.
To address the low efficiency and the lack of guaranteed coverage that are common in topology discovery of large-scale IPv6 backbone networks, this paper proposes a topology discovery scheme based on ICMPv6 (Internet Control Message Protocol version 6) that selects probe targets through self-learning, and uses the IPv6 Source Routing mechanism to solve the cross-link problem and the problem of routers with multiple addresses in network topology discovery. Topology discovery experiments on the global IPv6 backbone and the domestic CERNET2 backbone show that the scheme greatly improves discovery efficiency while guaranteeing high coverage.

20.
Distributed big-data control is easily affected by the number of channels, which causes desynchronization and poor channel-control performance, so a multi-channel parallel control system for distributed big data in a cloud computing environment is designed. System hardware: the node processing module consists of an FPGA chip and an anti-interference unit; the wireless communication module consists mainly of an RF chip and a wireless transceiver; the USB module consists of an interface chip, registers, memory chips and peripheral circuits. System software: the multi-channel data storage and processing module for distributed big data consists of a synchronous data storage unit and a multi-channel real-time data processing unit; the multi-channel parallel control module consists mainly of a multi-channel parallel management unit, a multi-channel status scanning unit and a data-stream generation unit. Multi-channel parallel control of distributed big data is achieved by combining the hardware and software. Experimental results show that the average transmission rates of the channels are distributed and maintained relatively evenly, achieving the intended performance improvement.
