Similar Documents
20 similar documents found (search time: 21 ms).
1.
While the World Wide Web (WWW) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed HTTP server. Given the exponential growth of the Internet in general, and the WWW in particular, it is increasingly difficult for persons and organizations to properly anticipate their future HTTP server needs, in both human resources and hardware requirements. The purpose of this paper is to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of HTTP servers that are alternately mapped to the hostname alias of the WWW server. The key components of this configuration include: (1) a cluster of identically configured HTTP servers; (2) Round-Robin DNS for distributing HTTP requests across the cluster; (3) a distributed file system for maintaining a synchronized set of documents across the cluster; and (4) a method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood of continued and sustained availability. While the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.
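As a rough illustration of the rotation described above (not NCSA's actual tooling), the sketch below simulates how a Round-Robin DNS answer cycles one hostname alias through a pool of HTTP server addresses; the alias and addresses are hypothetical.

```python
from itertools import cycle

# Hypothetical pool of identically configured HTTP servers behind one alias.
POOL = ["141.142.3.10", "141.142.3.11", "141.142.3.12"]

class RoundRobinDNS:
    """Toy resolver: each lookup of the alias returns the next address in the pool."""
    def __init__(self, alias, addresses):
        self.alias = alias
        self._next = cycle(addresses)

    def resolve(self, hostname):
        if hostname != self.alias:
            raise KeyError(hostname)
        return next(self._next)

dns = RoundRobinDNS("www.example.edu", POOL)
for _ in range(6):
    print(dns.resolve("www.example.edu"))  # cycles .10, .11, .12, .10, ...
```

Adding a server to the pool immediately raises the aggregate capacity of the virtual server, which is the dynamic scalability the abstract describes.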

2.
To build a highly scalable cloud GIS service platform that provides services on demand, this paper studies the characteristics and requirements of such a platform. Guided by these characteristics, a cloud GIS service platform offering OGC-standard services is designed and implemented using configurable server-cluster and distributed-caching techniques. Experiments verify that the platform is highly scalable and achieves good speedup on highly concurrent, compute-intensive tasks, and its usability is confirmed in a real GIS application.
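The abstract gives no implementation details; as one plausible sketch of the distributed-caching idea for OGC map services, tiles could be keyed by layer and tile coordinates and sharded across cache nodes by hash (node names and the render callback are assumptions).

```python
import hashlib

CACHE_NODES = ["cache-0", "cache-1", "cache-2"]  # hypothetical cache cluster
_stores = {n: {} for n in CACHE_NODES}           # stand-in for real cache daemons

def node_for(key):
    """Shard a cache key across nodes by stable hash."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return CACHE_NODES[h % len(CACHE_NODES)]

def get_tile(layer, z, x, y, render):
    key = f"{layer}/{z}/{x}/{y}"
    store = _stores[node_for(key)]
    if key not in store:            # cache miss: render once, then serve from cache
        store[key] = render(layer, z, x, y)
    return store[key]

png = get_tile("roads", 12, 683, 1544, lambda *a: b"...tile bytes...")
```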

3.
A Cluster-Based Architecture for a Streaming-Media Caching Proxy Server (cited 1 time: 0 self-citations, 1 by others)
This paper proposes a cluster-based architecture for a streaming-media caching proxy server and introduces a content-based load-balancing scheduling policy at the front-end dispatcher into the architecture design. The design principles, the structure of each component module, and the message-passing mechanisms between modules are analyzed in detail. A prototype system was implemented mainly in C and C++ on Linux; testing of the prototype shows that the overall design is sound, performs well, and offers good stability and scalability.
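A minimal sketch of the content-based front-end scheduling idea (not the paper's C/C++ code): the dispatcher hashes the requested media URL so that all requests for the same stream reach the same cache node, keeping each media object cached on exactly one machine. Backend names are illustrative.

```python
import hashlib

BACKENDS = ["cache-a:8080", "cache-b:8080", "cache-c:8080"]  # hypothetical nodes

def dispatch(url):
    """Content-based dispatch: the same media URL always maps to the same backend."""
    digest = hashlib.sha1(url.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(dispatch("rtsp://media.example.com/news/intro.rm"))
```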

4.
Traditional techniques that mainframes use to increase reliability (special hardware or custom software) are incompatible with commodity server requirements. The Total Reliability Using Scalable Servers (TRUSS) architecture, developed at Carnegie Mellon, aims to bring reliability to commodity servers. TRUSS features a distributed shared-memory (DSM) multiprocessor that incorporates computation and memory-storage redundancy to detect and recover from any single point of transient or permanent failure. Because its underlying DSM architecture presents the familiar shared-memory programming model, TRUSS requires no changes to existing applications and only minor modifications to the operating system to support error recovery.

5.
Developing a distributed scalable Java component server (cited 2 times: 0 self-citations, 2 by others)
We present approaches for a distributed scalable Java component server. The first uses a resource broker model, whereby the system is composed of one or several entry-point servers, a resource broker, and a set of participating servers. The resource broker gives the system its dynamic scalability and load-balancing capability by notifying participants and providing information to the entry-point servers. An experimental version of the server has been developed. Two other approaches, based on Jini and JavaSpace, are proposed. An experimental version of the latter is also compared with the resource broker model.
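The broker's internals are not given in this abstract; the sketch below shows one way a broker might track participating servers' load reports and hand an entry-point server the least-loaded participant (class and method names are assumptions).

```python
class ResourceBroker:
    """Tracks participating servers and picks the least-loaded for the next request."""
    def __init__(self):
        self.loads = {}                 # server id -> latest load report

    def register(self, server_id):
        self.loads[server_id] = 0.0     # a new participant starts unloaded

    def report_load(self, server_id, load):
        self.loads[server_id] = load    # participants notify the broker periodically

    def pick_server(self):
        if not self.loads:
            raise RuntimeError("no participating servers")
        return min(self.loads, key=self.loads.get)

broker = ResourceBroker()
for sid in ("node1", "node2", "node3"):
    broker.register(sid)
broker.report_load("node1", 0.9)
broker.report_load("node2", 0.2)
print(broker.pick_server())  # -> node2
```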

6.
Media distribution over the Internet is a fact. Finding a way to protect content streaming is a hot topic. This paper presents both a new and efficient key multicasting protocol based on the use of orthogonal systems in vector spaces, and its scalable implementation by means of a platform-independent server. The protocol stands out by exchanging a small number of tiny messages, which helps the server scale when facing a huge number of legitimate users.
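The paper's exact construction is not reproduced here; the sketch below only illustrates the general idea of keying on an orthogonal system: each user holds one vector of an orthonormal set, the server broadcasts a single combination of the authorized users' vectors scaled by the session key, and each authorized user recovers the key by projection. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Orthogonal system: QR factorization yields orthonormal columns, one per user.
n_users, dim = 4, 8
Q, _ = np.linalg.qr(rng.standard_normal((dim, n_users)))
user_vectors = [Q[:, i] for i in range(n_users)]

def broadcast(session_key, authorized):
    """One tiny message: the key times the sum of authorized users' vectors."""
    return session_key * sum(user_vectors[i] for i in authorized)

def recover(message, my_vector):
    """Projecting onto an orthonormal vector isolates this user's share."""
    return float(message @ my_vector)

msg = broadcast(42.0, authorized=[0, 1, 3])     # user 2 is revoked
print(round(recover(msg, user_vectors[0])))     # 42
print(round(recover(msg, user_vectors[2])))     # 0 -> revoked user gets nothing
```

Orthogonality is what keeps the broadcast small: one message serves every authorized user, and revocation is just omitting a vector from the sum.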

7.
Cloud computing data centers keep growing in scale, with ever larger hardware deployments and vast pools of compute and storage resources in the cloud, giving rise to data centers with hundreds of thousands, millions, or even tens of millions of servers that can be incrementally expanded and deployed. High energy consumption has become an increasingly prominent problem that severely constrains the sustainable development of cloud data centers. This paper proposes a new energy-saving optimization strategy for scalable servers in cloud computing data centers, called the effectiveness-optimization strategy. It reduces energy consumption from a global perspective, optimizes the server-selection process, and enables load balancing across different servers. Simulation results show that, in terms of energy consumption, the proposed strategy saves 15.23% and 24.33% relative to the DVFS strategy and the no-migration strategy, respectively; in terms of migrations, it performs 2,425 fewer migrations than the DVFS strategy. Overall, the proposed effectiveness-optimization strategy is clearly superior to both the DVFS and no-migration strategies.
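The paper's effectiveness-optimization strategy is not detailed in this abstract; as a generic illustration of energy-aware server selection, the sketch below uses a best-fit consolidation heuristic: pack each task onto the busiest server that can still hold it, so lightly loaded servers can be drained and powered down. Thresholds and utilization figures are assumptions.

```python
def place(task_util, servers):
    """Best-fit consolidation: pick the busiest server that still fits the task,
    so lightly loaded servers can be drained and powered down."""
    candidates = [s for s, u in servers.items() if u + task_util <= 1.0]
    if not candidates:
        raise RuntimeError("no server can host the task")
    best = max(candidates, key=servers.get)
    servers[best] += task_util
    return best

servers = {"s1": 0.6, "s2": 0.3, "s3": 0.9}   # current CPU utilization per server
print(place(0.05, servers))  # -> s3 (busiest that fits); s2 stays a drain candidate
```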

8.
This paper presents CONQUIRO, a cluster-based information retrieval engine. The main task of CONQUIRO is to organize documents into groups/clusters relevant to the request or query. The main purpose of CONQUIRO is to help manage information in an efficient manner. CONQUIRO uses machine learning algorithms (clustering methods) as its underlying technology. It has been equipped with hierarchical and non-hierarchical clustering algorithms, both using Euclidean and cosine similarity as distance measures. The authors believe that CONQUIRO represents a solution to the problem of information management, since CONQUIRO goes beyond just a Google-like ranked list of documents.
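As a rough sketch of the underlying technique (not CONQUIRO itself), the snippet below clusters retrieved documents with k-means over TF-IDF vectors; L2-normalizing the vectors first makes Euclidean k-means behave like cosine-based clustering. The documents are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = [
    "http server cluster load balancing",
    "round robin dns for web servers",
    "gaussian mixture models for anomaly detection",
    "clustering methods for outlier detection",
]
# Unit-length TF-IDF vectors: Euclidean distance then tracks cosine similarity.
X = normalize(TfidfVectorizer().fit_transform(docs))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, lab in zip(docs, labels):
    print(lab, doc)   # documents grouped by topic rather than rank
```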

9.
Big data is an emerging term in the storage industry, referring to data analytics on big storage, i.e., Cloud-scale storage. In Cloud-scale (or EB-scale) file systems, balancing the request workload across a metadata server cluster is critical for avoiding performance bottlenecks and improving quality of service. Many good approaches have been proposed for load balancing in distributed file systems. Some of them pay attention to global namespace balancing, making the metadata distribution across metadata servers as uniform as possible. However, they do not work well under skewed request distributions, which impair load balancing but simultaneously increase the effectiveness of caching and replication. In this paper, we propose Cloud Cache (C2), an adaptive and scalable load-balancing scheme for metadata server clusters in EB-scale file systems. It combines an adaptive cache diffusion scheme and an adaptive replication scheme to cope with the request load-balancing problem, and it can be integrated into existing distributed metadata management approaches to efficiently improve their load-balancing performance. C2 runs as follows: 1) adaptive cache diffusion runs first: if a node is overloaded, load-shedding is used; otherwise, load-stealing is used; 2) the adaptive replication scheme runs second: if a very popular metadata item (or at least two such items) causes a node to be overloaded, the adaptive replication scheme is used, in which the very popular item is not split across several nodes by adaptive cache diffusion because of its knapsack property. Through performance evaluation in trace-driven simulations, experimental results demonstrate the efficiency and scalability of C2.
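C2's full protocol is richer than this, but the sketch below captures the two adaptive-cache-diffusion moves named above: an overloaded node sheds its hottest cached metadata to its least-loaded peer, and an underloaded node steals the hottest item from the most-loaded peer. Thresholds and data structures are assumptions.

```python
HIGH, LOW = 100, 20   # illustrative load thresholds

class MDS:
    """Toy metadata server: cache maps item -> hit count; load is total hits."""
    def __init__(self, name):
        self.name, self.cache = name, {}

    @property
    def load(self):
        return sum(self.cache.values())

def shed(node, peers):
    """Load-shedding: an overloaded node pushes hot items to the least-loaded peer."""
    target = min(peers, key=lambda p: p.load)
    while node.load > HIGH and node.cache:
        item = max(node.cache, key=node.cache.get)
        target.cache[item] = node.cache.pop(item)

def steal(node, peers):
    """Load-stealing: an underloaded node pulls the hottest item from the busiest peer."""
    source = max(peers, key=lambda p: p.load)
    if node.load < LOW and source.cache:
        item = max(source.cache, key=source.cache.get)
        node.cache[item] = source.cache.pop(item)

a, b = MDS("a"), MDS("b")
a.cache = {"/home": 90, "/tmp": 40, "/var": 10}
shed(a, [b])           # a.load 140 > HIGH: the hottest item /home moves to b
print(a.load, b.load)  # 50 90
```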

10.
Welling, G.; Ott, M.; Mathur, S. IEEE Micro, 2001, 21(1):16-25
The Clara prototype architecture collocates routing and computational functionality within a network, providing a scalable, high-performance computing switch router for computational services. Multiple off-the-shelf PCs provide Clara with the computational power to, for example, perform real-time transcoding of video with minimal overhead.

11.
The GVIP (geometric and TV image processor) graphics processor, which creates and synthesizes computer graphics and TV images and meets the requirements of multimedia systems, is described. The hardware modules that make up this graphics processor include: a 32-bit embedded RISC processor, a Phong and Gouraud shading processor, a texture mapping processor, a hidden surface removal processor, an HDTV video image processor, a BitBlt processor, an image-processing module, and an outline font fill generator. These hardware modules, fabricated using 0.8 µm CMOS standard cells, have been placed in three integrated circuit chips. The total number of gates used for one set of chips is approximately 350,000.

12.
In this paper, we present ICICLE (Image ChainNet and Incremental Clustering Engine), a prototype system that we have developed to efficiently and effectively retrieve WWW images based on image semantics. ICICLE has two distinguishing features. First, it employs a novel image representation model called Weight ChainNet to capture the semantics of the image content. A new formula, called list space model, for computing semantic similarities is also introduced. Second, to speed up retrieval, ICICLE employs an incremental clustering mechanism, ICC (Incremental Clustering on ChainNet), to cluster images with similar semantics into the same partition. Each cluster has a summary representative and all clusters' representatives are further summarized into a balanced and full binary tree structure. We conducted an extensive performance study to evaluate ICICLE. Compared with some recently proposed methods, our results show that ICICLE provides better recall and precision. Our clustering technique ICC facilitates speedy retrieval of images without sacrificing recall and precision significantly.
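ICC's specifics (ChainNet representations, the binary summary tree) are beyond this sketch, but the incremental step it relies on can be illustrated simply: assign each new item to the nearest cluster representative if it is similar enough, otherwise found a new cluster. The similarity threshold and feature vectors are illustrative.

```python
import numpy as np

THRESHOLD = 0.8   # minimum cosine similarity to join an existing cluster

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def incremental_cluster(vectors, threshold=THRESHOLD):
    """One pass: each vector joins the closest cluster or founds a new one.
    A cluster's representative is the running mean of its members."""
    reps, members = [], []
    for v in vectors:
        sims = [cosine(v, r) for r in reps]
        if sims and max(sims) >= threshold:
            i = int(np.argmax(sims))
            members[i].append(v)
            reps[i] = np.mean(members[i], axis=0)   # refresh the representative
        else:
            reps.append(v.astype(float))
            members.append([v])
    return members

rng = np.random.default_rng(0)
vecs = [rng.random(16) for _ in range(20)]
clusters = incremental_cluster(vecs)
print(len(clusters), [len(c) for c in clusters])
```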

13.
The dominant paradigm for searching and browsing large data stores is text-based: presenting a scrollable list of search results in response to textual search-term input. While this works well for the Web, there is opportunity for improvement in the domain of personal information stores, which tend to have more heterogeneous data and richer metadata. In this paper, we introduce FacetMap, an interactive, query-driven visualization, generalizable to a wide range of metadata-rich data stores. FacetMap uses a visual metaphor for both input (selection of metadata facets as filters) and output. Results of a user study provide insight into the tradeoffs between FacetMap's graphical approach and the traditional text-oriented approach.
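FacetMap's visualization cannot be reproduced in a few lines, but the input side it describes, selecting metadata facets as conjunctive filters, reduces to a simple query over faceted records. Field names and records here are illustrative.

```python
items = [
    {"type": "photo", "year": 2005, "person": "Alice"},
    {"type": "email", "year": 2005, "person": "Bob"},
    {"type": "photo", "year": 2004, "person": "Bob"},
]

def facet_filter(items, **selected):
    """Keep items matching every selected facet value (conjunctive filtering)."""
    return [it for it in items
            if all(it.get(f) == v for f, v in selected.items())]

def facet_counts(items, facet):
    """Counts a facet display might show: results remaining per facet value."""
    counts = {}
    for it in items:
        counts[it[facet]] = counts.get(it[facet], 0) + 1
    return counts

print(facet_filter(items, type="photo", year=2005))  # one matching record
print(facet_counts(items, "type"))                   # {'photo': 2, 'email': 1}
```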

14.
Mercator: A scalable, extensible Web crawler (cited 23 times: 0 self-citations, 23 by others)
Heydon, Allan; Najork, Marc. World Wide Web, 1999, 2(4):219-229
This paper describes Mercator, a scalable, extensible Web crawler written entirely in Java. Scalable Web crawlers are an important component of many Web services, but their design is not well documented in the literature. We enumerate the major components of any scalable Web crawler, comment on alternatives and tradeoffs in their design, and describe the particular components used in Mercator. We also describe Mercator's support for extensibility and customizability. Finally, we comment on Mercator's performance, which we have found to be comparable to that of other crawlers for which performance numbers have been published.
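Mercator itself is a Java system; as a language-neutral illustration of the components every scalable crawler shares (a URL frontier, a seen-URL set, fetching, link extraction), here is a minimal single-threaded sketch using only the Python standard library. The regex link extractor is deliberately crude.

```python
import re
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

LINK_RE = re.compile(r'href="(http[^"]+)"')   # crude link extractor for the sketch

def crawl(seed, max_pages=10):
    frontier, seen = deque([seed]), {seed}    # URL frontier + duplicate filter
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue                          # skip unreachable pages
        print("fetched", url)
        for link in LINK_RE.findall(html):
            link = urljoin(url, link)
            if link not in seen:              # URL-seen test only, in this sketch
                seen.add(link)
                frontier.append(link)

crawl("https://example.com/")
```

A production crawler replaces each piece with a scalable version: a disk-backed frontier, a fingerprint-based content-seen test, a robust parser, and many worker threads, which is exactly the design space the paper enumerates.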

15.
Addressing the security problems of WWW services and the practical situation of the Science and Technology Department of the Guangxi Zhuang Autonomous Region, this paper proposes a PKI-based S-HTTP implementation scheme suitable for Windows IIS systems. The scheme also implements an access-control mechanism.

16.
Clustering, while systematically applied in anomaly detection, has a direct impact on the accuracy of the detection methods. Existing cluster-based anomaly detection methods are mainly based on spherical-shape clustering. In this paper, we focus on arbitrary-shape clustering methods to increase the accuracy of anomaly detection. However, since the main drawback of arbitrary-shape clustering is its high memory complexity, we propose to summarize clusters first. For this, we design an algorithm, called Summarization based on Gaussian Mixture Model (SGMM), to summarize clusters and represent them as Gaussian Mixture Models (GMMs). After GMMs are constructed, incoming new samples are presented to the GMMs, and their membership values are calculated, based on which the new samples are labeled as "normal" or "anomaly." Additionally, to address the issue of noise in the data, instead of labeling samples individually, they are clustered first, and then each cluster is labeled collectively. For this, we present a new approach, called Collective Probabilistic Anomaly Detection (CPAD), in which the distance between the incoming new samples and the existing SGMMs is calculated, and the new cluster is labeled the same as the closest cluster. To measure the distance between two GMM-based clusters, we propose a modified version of the Kullback–Leibler measure. We run several experiments to evaluate the performance of the proposed SGMM and CPAD methods and compare them against some well-known algorithms including ABACUS, local outlier factor (LOF), and one-class support vector machine (SVM). The performance of SGMM is compared with ABACUS using the Dunn and DB metrics, and the results indicate that SGMM performs better at summarizing clusters. Moreover, the proposed CPAD method is compared with LOF and one-class SVM on the performance criteria of (a) false alarm rate, (b) detection rate, and (c) memory efficiency. The experimental results show that the CPAD method is noise resilient, memory efficient, and more accurate than the other methods.
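SGMM and CPAD are not published as code here; the sketch below only shows the core move the abstract describes: summarize a cluster as a GMM, then label incoming samples by their likelihood under that summary. The likelihood threshold is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal_cluster = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # training data

# Summarize the cluster as a small Gaussian Mixture Model.
gmm = GaussianMixture(n_components=3, random_state=1).fit(normal_cluster)

# Label new samples: low log-likelihood under the summary GMM -> "anomaly".
THRESH = np.percentile(gmm.score_samples(normal_cluster), 1)     # assumed cutoff

new_samples = np.array([[0.2, -0.5], [8.0, 8.0]])
labels = ["normal" if s >= THRESH else "anomaly"
          for s in gmm.score_samples(new_samples)]
print(labels)   # ['normal', 'anomaly']
```

The memory win is that the GMM's few means and covariances stand in for the cluster's raw points, which is what makes arbitrary-shape clustering affordable in this setting.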

17.
Argumentation mining is a recent challenge concerning the automatic extraction of arguments from unstructured textual corpora. Argumentation mining technologies are rapidly evolving and show a clear potential for application in diverse areas such as recommender systems, policy-making and the legal domain. There is a long-recognised need for tools that enable users to browse, visualise, search, and manipulate arguments and argument structures. There is, however, a lack of widely accessible tools. In this article we describe the technology behind MARGOT, the first online argumentation mining system designed to reach out to the wider community of potential users of these new technologies. We evaluate its performance and discuss its possible application in the analysis of content from various domains.

18.
Web2DB: A Generic WWW Database Interface System (cited 2 times: 0 self-citations, 2 by others)
This paper describes the design and implementation of Web2DB, a generic interface system for connecting the WWW with database systems.

19.
The literature describes two high-performance concurrent stack algorithms based on combining funnels and elimination trees. Unfortunately, the funnels are linearizable but blocking, and the elimination trees are non-blocking but not linearizable. Neither is used in practice since they perform well only at exceptionally high loads. The literature also describes a simple lock-free linearizable stack algorithm that works at low loads but does not scale as the load increases. The question of designing a stack algorithm that is non-blocking, linearizable, and scales well throughout the concurrency range has thus remained open.
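For reference, the "simple lock-free linearizable stack" alluded to above is usually Treiber's CAS-based push/pop loop. Python has no hardware compare-and-swap, so the sketch below emulates CAS with a lock purely to show the retry structure; a real implementation would use an atomic primitive such as Java's AtomicReference.

```python
import threading

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next):
        self.value, self.next = value, next

class TreiberStack:
    """Treiber stack: push/pop retry a compare-and-swap on the head pointer.
    The lock here only emulates hardware CAS for illustration."""
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()

    def _cas(self, expected, new):
        with self._cas_lock:                 # stands in for an atomic CAS
            if self.head is expected:
                self.head = new
                return True
            return False

    def push(self, value):
        while True:
            old = self.head
            if self._cas(old, Node(value, old)):
                return

    def pop(self):
        while True:
            old = self.head
            if old is None:
                return None                  # empty stack
            if self._cas(old, old.next):
                return old.value

s = TreiberStack()
s.push(1); s.push(2)
print(s.pop(), s.pop(), s.pop())   # 2 1 None
```

The single head pointer is also why this design stops scaling: every thread contends on one CAS target, which is the gap the combining and elimination approaches tried to close.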

20.
This paper describes nagging, a technique for parallelizing search in a heterogeneous distributed computing environment. Nagging exploits the speedup anomaly often observed when parallelizing problems by playing multiple reformulations of the problem, or portions of the problem, against each other. Nagging is both fault-tolerant and robust to long message latencies. In this paper, we show how nagging can be used to parallelize several different algorithms drawn from the artificial intelligence literature, and describe how nagging can be combined with partitioning, the more traditional search parallelization strategy. We present a theoretical analysis of the advantage of nagging with respect to partitioning, and give empirical results obtained on a cluster of 64 processors that demonstrate nagging's effectiveness and scalability as applied to A* search, αβ minimax game tree search, and the Davis–Putnam algorithm.
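The essence of nagging, racing several reformulations of the same problem and keeping the first answer, can be sketched with a process pool; the solver functions below are stand-ins, not the paper's search algorithms.

```python
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def solve_original(n):      # stand-in solver
    return sum(range(n))

def solve_reformulated(n):  # stand-in reformulation of the same problem
    return n * (n - 1) // 2

def nag(problem, solvers):
    """Race reformulations; take the first result and abandon the rest."""
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(s, problem) for s in solvers]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()              # naggers whose work is now moot
        return next(iter(done)).result()

if __name__ == "__main__":
    print(nag(10_000_000, [solve_original, solve_reformulated]))
```

Because every reformulation attacks the whole problem, a straggling or failed worker costs nothing but wasted cycles, which is the fault tolerance the abstract claims.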
