Found 20 similar documents (search time: 10 ms)
1.
Hussein Suleman Christopher Parker Muammar Omar 《International Journal on Digital Libraries》2008,9(2):115-124
Digital libraries and information management systems are increasingly being developed according to component models with well-defined
APIs and often with Web-accessible interfaces. In parallel with metadata access and harvesting, Web 2.0 mashups have demonstrated
the flexibility of developing systems as independent distributed components. Arguably, such distributed components
can also enable scalable service provision in medium to large systems. To test this premise, this article
discusses how an existing component framework was modified to include support for scalability. A set of lightweight services
and extensions were created to migrate and replicate services as the load changes. Experiments with the prototype system confirm
that this system can in fact be quite effective as an enabler of transparent and efficient scalability, without the need to
resort to complex middleware or substantial system reengineering. Finally, specific problem areas have been identified as
future avenues for exploration at the crucial intersection of digital libraries and high-performance computing.
2.
SMT (Station Management) is an important component of the FDDI protocol. It provides FDDI's station management functions and is responsible for network initialization, error detection, fault tracing and recovery, and the collection of status-information frame statistics. It consists of two parts: frame services and state machines. This paper introduces and analyzes their concepts and operating principles.
3.
Jose L. Bosque Oscar D. Robles Pablo Toharia Luis Pastor 《The Journal of supercomputing》2011,58(3):367-375
This paper presents a new expression for an isoefficiency function which can be applied to both homogeneous and heterogeneous systems. Using this new function, called H-isoefficiency, it is now possible to analyze the scalability of heterogeneous clusters. To show how this new metric can be used, a theoretical a priori analysis of the scalability of a Gaussian elimination algorithm is presented, together with a model evaluation which demonstrates the correlation between the theoretical analysis and the experimental results.
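The isoefficiency idea underlying this abstract can be illustrated with a small numeric sketch. The cost model below (parallel sum with logarithmic communication overhead) is a hypothetical toy example, not the paper's H-isoefficiency formula: it asks how the problem size W must grow with the processor count p to hold efficiency constant.

```python
# Isoefficiency sketch: find how problem size W must grow with processor
# count p so that parallel efficiency stays at a fixed target.
# Toy cost model (an assumption, not from the paper): T_1 = W and
# T_p = W/p + log2(p), i.e. perfect work splitting plus log-cost overhead.

import math

def efficiency(W, p):
    t1 = W                      # serial time under the toy model
    tp = W / p + math.log2(p)   # parallel time: computation + overhead
    return t1 / (p * tp)

def isoefficiency_W(p, target_E=0.8):
    """Smallest W (doubling search + bisection) with efficiency >= target."""
    lo, hi = 1.0, 2.0
    while efficiency(hi, p) < target_E:   # efficiency rises with W
        hi *= 2
    for _ in range(60):                   # bisect to high precision
        mid = (lo + hi) / 2
        if efficiency(mid, p) >= target_E:
            hi = mid
        else:
            lo = mid
    return hi

for p in (2, 4, 8, 16):
    print(p, round(isoefficiency_W(p), 1))
```

Under this model the required W works out to E/(1-E) * p * log2(p), i.e. the isoefficiency function grows as Theta(p log p); a flatter growth curve would indicate a more scalable algorithm-machine combination.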
4.
Previous research on scalability has not adequately accounted for parallel execution time, leaving the relationship between scalability and parallel execution time unclear. This paper studies the relationship between latency scalability and parallel execution time thoroughly and comprehensively, deriving and proving several important conclusions, among them: when different algorithm-machine combinations are latency-scaled from the same initial state, if the faster combination also has better latency scalability, then it remains faster after scaling. These conclusions enrich the study of the relationship between scalability and parallel execution time and provide a theoretical basis for achieving ideal scaled performance in latency-scaled parallel computing. Finally, scaling experiments on different algorithm-machine combinations further validate the conclusions.
5.
As a new differentiated-services QoS architecture for integrated services, the flow-aware network provides end-to-end performance guarantees for critical traffic. Because it must maintain per-flow state and enforce policy control on individual flows, scalability is fundamental to its deployment. The scalability problem of flow-aware networks can be decomposed into two aspects: per-flow scheduling scalability and flow-population control policy. The core of the analysis is proving that the per-flow scheduling policy is independent of link speed and that the number of scheduled flows is bounded; the flow-population control policy shows that a flow-aware network performs control and management over a bounded set of flows, so the architecture is scalable in practice.
6.
Scalability is a key factor in the design of distributed systems and of parallel algorithms and machines. However, conventional scalability metrics are designed for homogeneous parallel processing; there is no suitable and commonly accepted definition of a scalability metric for heterogeneous systems. Isospeed scalability is a well-defined metric for homogeneous computing. This study extends the isospeed scalability metric to general heterogeneous computing systems. The proposed isospeed-efficiency model is suitable for both homogeneous and heterogeneous computing. Through theoretical analyses, we derive methodologies for scalability measurement and prediction for heterogeneous systems. Experimental results have verified the analytical results and confirmed that the proposed isospeed-efficiency scalability metric works well in both homogeneous and heterogeneous environments.
7.
Scalability has become an attribute of paramount importance for computer systems used in business, scientific and engineering
applications. Although scalability has been widely discussed, especially for pure parallel computer systems, it conveniently
focuses on improving performance when increasing the number of computing processors. In fact, the term “scalable” is so much
abused that it has become a marketing tool for computer vendors independent of the system’s technical qualifications. Since
the primary objective of scalability analysis is to determine how well a system can work on larger problems with an increase
in its size, we introduce here a generic definition of scalability. For illustrative purposes only, we apply this definition
to PC clusters, a rather difficult subject due to their long communication latencies. Since scalability does not solely depend
on the system architecture but also on the application programs and their actual management by the run-time environment, for
the sake of illustration, we evaluate scalability for programs developed under the super-programming model (SPM) (Jin and
Ziavras in IEEE Trans. Parallel Distrib. Syst. 15(9):783–794, 2004; J. Parallel Distrib. Comput. 65(10):1281–1289, 2005; IEICE Trans. Inf. Syst. E87-D(7):1774–1781, 2004).
8.
Jurisica I. Rogers P. Glasgow J.I. Collins R.J. Wolfley J.R. Luft J.R. DeTitta G.T. 《Intelligent Systems, IEEE》2001,16(6):26-34
This article describes the application of image analysis techniques to protein crystallization experiment classification. By applying knowledge discovery techniques to the analysis results, we can extract important crystallographic knowledge.
9.
Larger, higher resolution displays can be used to increase the scalability of information visualizations. But just how much can scalability increase using larger displays before hitting human perceptual or cognitive limits? Are the same visualization techniques that are good on a single monitor also the techniques that are best when they are scaled up using large, high-resolution displays? To answer these questions we performed a controlled experiment on user performance time, accuracy, and subjective workload when scaling up data quantity with different space-time-attribute visualizations using a large, tiled display. Twelve college students used small multiples, embedded bar matrices, and embedded time-series graphs either on a 2 megapixel (Mp) display or with data scaled up using a 32 Mp tiled display. Participants performed various overview and detail tasks on geospatially-referenced multidimensional time-series data. Results showed that current designs are perceptually scalable because they result in a decrease in task completion time when normalized per number of data attributes along with no decrease in accuracy. It appears that, for the visualizations selected for this study, the relative comparison between designs is generally consistent between display sizes. However, results also suggest that encoding is more important on a smaller display while spatial grouping is more important on a larger display. Some suggestions for designers are provided based on our experience designing visualizations for large displays.
10.
Running real-time applications with a variable-speed processor can result in scheduling anomalies and permanent overloads. A proposed computational model varies task response times continuously with processor speed, enabling the system to predictably scale its performance during voltage changes. Mutually exclusive resources and nonpreemptive code can generate scheduling anomalies in a processor with dynamic voltage scaling, causing tasks to increase their response times when the processor runs at higher speeds. Even worse, decreasing the speed can cause a permanent overload that degrades system performance in an uncontrolled fashion. Such problems can be efficiently handled through a set of kernel mechanisms, including cyclic asynchronous buffers and elastic scheduling that let system designers scale the performance of real-time applications as a function of processor speed. As successfully done in the SHaRK kernel, both CABs and elastic scheduling can be easily implemented on top of any real-time operating system, as a middleware layer, and they should be included in current standards to develop embedded systems with real-time and energy requirements.
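The cyclic asynchronous buffer (CAB) mentioned in this abstract can be sketched as follows. This is a minimal Python illustration of the non-blocking semantics only, not the SHaRK kernel's implementation; the slot count and the `write`/`read` API are assumptions. The defining property is that a writer always finds a free slot and never blocks, while a reader always receives the most recently completed message.

```python
import threading

class CAB:
    """Cyclic asynchronous buffer sketch (hypothetical API).

    A writer deposits each new message into a free slot and then publishes
    it as the latest; a reader always gets the most recent complete message.
    With enough slots (readers + writers + 1), a free slot always exists,
    so neither side ever blocks waiting for the other.
    """

    def __init__(self, nslots=3):
        self.slots = [None] * nslots
        self.latest = None          # index of the most recent message
        self.in_use = [0] * nslots  # readers currently holding each slot
        self.lock = threading.Lock()

    def write(self, msg):
        with self.lock:
            # pick a slot that is neither the latest nor held by a reader
            free = next(i for i, n in enumerate(self.in_use)
                        if n == 0 and i != self.latest)
            self.slots[free] = msg
            self.latest = free      # publish atomically under the lock

    def read(self):
        with self.lock:
            if self.latest is None:
                return None
            self.in_use[self.latest] += 1
            idx, msg = self.latest, self.slots[self.latest]
        # ... a real reader would use msg here without holding the lock ...
        with self.lock:
            self.in_use[idx] -= 1
        return msg

cab = CAB()
cab.write("v1")
cab.write("v2")
print(cab.read())   # the newest value, "v2"; older messages are overwritten
```

Because stale messages are simply overwritten, a CAB trades loss of intermediate values for guaranteed freshness and bounded blocking, which is what makes it suitable for periodic real-time tasks exchanging state.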
11.
Scott Joseph Niemetz Aina Preiner Mathias Nejati Saeed Ganesh Vijay 《International Journal on Software Tools for Technology Transfer (STTT)》2023,25(2):219-239
This paper presents MachSMT, an algorithm selection tool for Satisfiability Modulo Theories (SMT) solvers. MachSMT supports the...
12.
In the rollback recovery of large‐scale long‐running applications in a distributed environment, pessimistic message logging protocols enable failed processes to recover independently, though at the expense of logging every message synchronously during fault‐free execution. In contrast, coordinated checkpointing protocols avoid message logging, but they are poor in scalability with a sharply increased coordinating overhead as the system grows. With the aim of achieving efficient rollback recovery by trading off logging overhead and coordinating overhead, this paper suggests a partitioning of the system into clusters, and then presents a scheme to implement the conversion between these overheads. Using the proposed conversion, coordination can be introduced to reduce the unbearable logging overhead found in some systems, whereas proper logging can be employed to alleviate the unacceptable coordinating overhead in others. Furthermore, heuristics are introduced to address the issue of how to partition the system into clusters in order to speed up the recovery process and to improve recovery efficiency. Performance evaluation results indicate that our scheme can lower the overall system overhead effectively. Copyright © 2008 John Wiley & Sons, Ltd.
13.
14.
In this paper, SNR-scalable representations of video signals are studied. The investigated codecs are well suited for communications applications because they are all based on backward motion-compensated predictive coding, which provides the necessary low-delay property. In a very-low bit rate (VLBR) context, the matching pursuits (MP) signal representation algorithm is used to represent the displaced frame difference (DFD) of each layer of a multilevel decomposition of the video signal. A number of conventional prediction schemes that can be generalized to any DFD representation technique are considered. They are compared with an original, MP-specific DFD prediction method. Two scenarios have been considered. In the first scenario, an enhancement layer is built on a base layer that has been encoded using a classical, i.e., nonscalable, scheme. In that case, all methods appear to be comparable. In the second scenario, the base layer is built taking into account the fact that it will serve as a reference for an enhancement layer. In that case, the proposed MP prediction method clearly outperforms all other conventional approaches. Additional lessons can be drawn from this work: the same motion vectors can be used in both SNR layers, and DFD prediction between layers improves coding efficiency. Moreover, the MP representation of the signal enables us to measure the predictability of the high-SNR-layer DFD from the low-SNR-layer DFD, i.e., to quantify the part of the low-SNR-layer information that also belongs to the high SNR layer.
15.
16.
Evaluating the scalability of distributed systems
Many distributed systems must be scalable, meaning that they must be economically deployable in a wide range of sizes and configurations. This paper presents a scalability metric based on cost-effectiveness, where the effectiveness is a function of the system's throughput and its quality of service. It is part of a framework which also includes a scaling strategy for introducing changes as a function of a scale factor, and an automated virtual design optimization at each scale factor. This is an adaptation of concepts for scalability measures in parallel computing. Scalability is measured by the range of scale factors that give a satisfactory value of the metric, and good scalability is a joint property of the initial design and the scaling strategy. The results give insight into the scaling capacity of the designs, and into how to improve the design. A rapid simple bound on the metric is also described. The metric is demonstrated by applying it to some well-known idealized systems and to real prototypes of communications software.
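A cost-effectiveness metric of this kind can be sketched as productivity per unit cost: a value function of throughput and quality of service, divided by the cost of the configuration, compared across scale factors. The response-time value function and all numbers below are hypothetical illustrations, not the paper's definitions.

```python
# Cost-effectiveness scalability sketch: productivity F = throughput * QoS
# value, and the metric psi = F / cost.  A design scales well over a range
# of scale factors k if psi(k) stays close to (or above) psi at k = 1.
# The QoS value function below is an assumed toy choice: value degrades
# as the mean response time exceeds a target.

def qos_value(resp_time, target=1.0):
    return 1.0 / (1.0 + (resp_time / target) ** 2)

def psi(throughput, resp_time, cost):
    return throughput * qos_value(resp_time) / cost

# hypothetical measurements at scale factors k = 1, 2, 4:
# (throughput in jobs/s, mean response time in s, cost in $/s)
configs = {1: (100.0, 0.5, 10.0),
           2: (190.0, 0.6, 20.0),
           4: (340.0, 0.9, 40.0)}

base = psi(*configs[1])
for k, m in configs.items():
    print(f"k={k}: relative cost-effectiveness = {psi(*m) / base:.3f}")
```

In this toy data the ratio falls below 1 as k grows, i.e. throughput gains do not keep pace with rising cost and response time, so the design would be judged poorly scalable over that range.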
17.
To address the problem that the complexity of H.264/AVC cannot accommodate the heterogeneity of terminal devices, a new inter-frame mode selection algorithm is proposed. It exploits the spatial and temporal correlation of neighboring macroblocks to predict coding modes, and uses a complexity scaling factor to control the number of candidate modes, allowing the complexity to vary flexibly between 20% and 100%. Experimental results show that the algorithm achieves a good trade-off between video quality and complexity, accommodating the differences in computing power from high-end to low-end devices.
18.
P2P (peer-to-peer) technology has proved itself an efficient and cost-effective solution for supporting large-scale multimedia streaming. Unlike traditional P2P applications, the quality of P2P streaming is strictly determined by performance metrics such as streaming delay. To meet these requirements, previous studies resorted to intuitions and heuristics to construct peer selection solutions incorporating topology and proximity concerns. However, the impact of proximity-aware methodology and the delay tolerance of peers on the scalability of a P2P system remains an unanswered question. In this paper, we study this problem via an analytical approach. To address the challenge of incorporating Internet topology into P2P streaming analysis, we construct an H-sphere network model which maps the network topology from the space of discrete graphs to a continuous geometric domain while capturing the power-law property of the Internet. Based on this model, we analyze a series of peer selection methods by evaluating their performance via key scalability metrics. Our analytical observations are further verified via simulation on Internet topologies.
19.
Michael Patra Emma Falck Ilpo Vattulainen Mikko Karttunen 《Computer Physics Communications》2007,176(1):14-22
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested scalability on four different networks: Infiniband, GigaBit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512, and 2048 lipid molecules were used as test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs, and should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
20.
Glenn R. Luecke Marina Kraeva Jing Yuan Silvia Spanoyannis 《Concurrency and Computation》2004,16(1):79-107
The purpose of this paper is to compare the communication performance and scalability of MPI communication routines on a Windows Cluster, a Linux Cluster, a Cray T3E‐600, and an SGI Origin 2000. All tests in this paper were run using various numbers of processors and two message sizes. In spite of the fact that the Cray T3E‐600 is about 7 years old, it performed best of all machines for most of the tests. The Linux Cluster with the Myrinet interconnect and Myricom's MPI performed and scaled quite well and, in most cases, performed better than the Origin 2000, and in some cases better than the T3E. The Windows Cluster using the Giganet Full Interconnect and MPI/Pro's MPI performed and scaled poorly for small messages compared with all of the other machines. Copyright © 2004 John Wiley & Sons, Ltd.