Similar Articles
 20 similar articles found.
1.
The idea of preplanning strings on disks which are merged together is investigated from a performance point of view. Schemes of internal buffer allocation, initial string creation by an internal sort, and string distribution on disks are evaluated. An algorithm is given for the construction of suboptimal merge trees called plannable merge trees. A cost model is presented for accurate preplanning, consisting of detailed assumptions on disk allocation for k input disks and r-way merge planning. Timing considerations for sort and merge, including hardware characteristics of movable-head disks, show a significant gain in time compared to widely used sort/merge applications.
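As a rough illustration of the kind of cost model such preplanning rests on (a minimal sketch, not the paper's model; the function name and the transfer/seek parameters are hypothetical), an r-way merge cuts the string count by a factor of r per pass, and each pass reads and writes the full data volume once:

```python
import math

def merge_time(num_strings, r, volume_bytes, transfer_rate, seeks_per_pass, avg_seek):
    """Rough r-way merge cost: each pass reduces the string count by a
    factor of r and reads plus writes the whole volume once; seeks of
    the movable-head disks are charged per pass."""
    passes = math.ceil(math.log(num_strings, r)) if num_strings > 1 else 0
    transfer = passes * 2 * volume_bytes / transfer_rate   # read + write
    seeking = passes * seeks_per_pass * avg_seek
    return transfer + seeking

# E.g. 100 initial strings merged 8-way over 1 GB at 100 MB/s:
print(merge_time(100, 8, 1 << 30, 100e6, 2000, 0.008))  # seconds
```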

2.
Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping, and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication and thus keeps updates fast, yet is able to accelerate all these foreign key joins, efficiently supports grouping, and pushes down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, thereby creating foreign-key-connected tables with partially shared clusterings. These are later used to accelerate any join between two tables that have some dimension in common, and additionally permit selections to be pushed down and propagated (reducing I/O) and accelerate aggregation and ordering operations. Besides the general framework, we describe an algorithm to derive such a physical co-clustering database automatically, and describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance and at the same time significantly reduce the memory consumption of the system.
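One plausible reading of clustering a table "on a mix of dimensions" is a bit-interleaved clustering key over per-dimension bucket numbers (a sketch under that assumption; the bucket numbering and bit widths are illustrative, not the paper's exact encoding):

```python
def bdcc_key(dim_buckets, bits_per_dim):
    """Interleave the bits of each dimension's bucket number, most
    significant bit first, into one clustering key; sorting a table on
    this key co-clusters it on all dimensions at once, and two tables
    sharing a dimension share key order on that dimension's bits."""
    key = 0
    for bit in range(max(bits_per_dim) - 1, -1, -1):
        for bucket, nbits in zip(dim_buckets, bits_per_dim):
            if bit < nbits:
                key = (key << 1) | ((bucket >> bit) & 1)
    return key

# A fact row falling in date bucket 5 (3 bits) and region bucket 2 (2 bits):
print(bdcc_key([5, 2], [3, 2]))  # -> 22
```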

3.
Network traffic models are widely used to build test environments for networks and network services, and their accuracy directly affects the performance evaluation results of various services and their robustness in real network environments. With the spread of e-commerce and new network applications, bursty traffic has become one of the defining features of the modern Internet. Traditional traffic models designed for smooth, stationary traffic can no longer effectively describe the temporal structure and statistical properties of bursty traffic in modern networks, and thus fail to accurately reflect the behavior of modern network traffic. This paper therefore proposes a new structured two-layer hidden Markov model for simulating bursty traffic in real network environments, together with an efficient parameter-inference algorithm and a burst-traffic synthesis method. The model describes bursty traffic with a structured two-layer hidden Markov process and synthesizes it accordingly, so that the synthesized traffic reproduces the temporal structure, statistical properties, and self-similarity of real bursty traffic. Experiments show that the model can synthesize bursty traffic effectively.
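To make the layering concrete, here is a toy regime-switching generator in the same spirit (the paper's structured two-layer HMM and its inference algorithm are not reproduced; the transition matrix and per-regime rates below are assumed values, not inferred from real traces):

```python
import numpy as np

rng = np.random.default_rng(0)

# Upper layer: hidden regime chain (0 = calm, 1 = bursty); illustrative.
A_upper = np.array([[0.95, 0.05],
                    [0.20, 0.80]])
rates = np.array([5.0, 80.0])    # mean packets per time slot per regime

def synthesize(n_slots, regime=0):
    """Generate a synthetic packet-count series: the hidden upper chain
    switches regimes, and the lower layer emits counts for the active
    regime, yielding long calm stretches broken by bursts."""
    counts = np.empty(n_slots, dtype=int)
    for t in range(n_slots):
        regime = rng.choice(2, p=A_upper[regime])
        counts[t] = rng.poisson(rates[regime])
    return counts

series = synthesize(1000)
```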

4.
The trends in parallel processing system design and deployment have been toward networked distributed systems such as cluster computing systems. Since the overall performance of such distributed systems often depends on the efficiency of their communication networks, performance analysis of their interconnection networks is paramount. In this paper, we develop an analytical model, under non-uniform traffic and in the presence of communication locality, for the m-port n-tree family of interconnection networks commonly employed in large-scale cluster computing systems. We use the proposed model to study two widely used flow-control mechanisms, namely wormhole and store-and-forward. The proposed analytical model is validated through comprehensive simulation, which demonstrates that the model exhibits a good degree of accuracy for various system organizations and under different working conditions.

5.
A comprehensive model for software rejuvenation
Recently, the phenomenon of software aging, in which the state of a software system degrades with time, has been reported. This phenomenon, which may eventually lead to system performance degradation and/or crash/hang failure, is the result of exhaustion of operating system resources, data corruption, and numerical error accumulation. To counteract software aging, a technique called software rejuvenation has been proposed, which essentially involves occasionally terminating an application or a system, cleaning its internal state and/or its environment, and restarting it. Since rejuvenation incurs an overhead, an important research issue is to determine optimal times to initiate this action. In this paper, we first describe how to include faults attributed to software aging in the framework of Gray's software fault classification (deterministic and transient), and study the treatment and recovery strategies for each of the fault classes. We then construct a semi-Markov reward model based on workload and resource usage data collected from the UNIX operating system. We identify different workload states using statistical cluster analysis and estimate transition probabilities and sojourn-time distributions from the data. Corresponding to each resource, a reward function is then defined for the model based on the rate of resource depletion in each state. The model is then solved to obtain estimated times to exhaustion for each resource. The results from the semi-Markov reward model are then fed into a higher-level availability model that accounts for failure followed by reactive recovery, as well as proactive recovery. This comprehensive model is then used to derive optimal rejuvenation schedules that maximize availability or minimize downtime cost.
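The reward-model step reduces, in its simplest form, to dividing the remaining resource by the long-run mean depletion rate over the workload states (a minimal sketch only; the state probabilities and per-state rates below are illustrative, not the paper's UNIX measurements):

```python
import numpy as np

def time_to_exhaustion(pi, depletion_rates, available):
    """Expected time until a resource runs out, assuming the workload
    spends a long-run fraction pi[s] of time in state s and depletes the
    resource there at depletion_rates[s] units per hour (the reward rate)."""
    mean_rate = float(np.dot(pi, depletion_rates))
    return float("inf") if mean_rate <= 0 else available / mean_rate

# Three workload states with steady-state probabilities and per-state
# memory depletion rates in MB/hour; all numbers are illustrative.
pi = np.array([0.6, 0.3, 0.1])
rates = np.array([1.0, 5.0, 20.0])
print(time_to_exhaustion(pi, rates, available=4096.0))   # hours to use 4 GB
```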

6.
The Spidergon Network-on-Chip (NoC) was proposed to address the demand for a fixed and optimized communication infrastructure for cost-effective multi-processor Systems-on-Chip (MPSoC) development. To deal with the increasing diversity in quality-of-service requirements of SoC applications, the performance of this architecture needs to be improved. Virtual channels have traditionally been employed to enhance the performance of interconnection networks. In this paper, we present analytical models to evaluate the message latency and network throughput of the Spidergon NoC and investigate the effect of employing virtual channels. Results obtained through simulation experiments show that the model exhibits a good degree of accuracy in predicting average message latency under various working conditions. Moreover, an FPGA implementation of the Spidergon has been developed to provide an accurate analysis of the cost of employing virtual channels in this architecture.

7.
Throughput computing is based on chip multithreading (CMT) processor design technology. In CMT technology, performance is defined by maximizing the amount of work accomplished per unit of time or other relevant resource, rather than by minimizing the time needed to complete a given task or set of tasks. By CMT standards, the best processor accomplishes the most work per second of time, per watt of expended power, per square millimeter of die area, and so on (that is, it operates most efficiently). The processor described is a member of Sun's first generation of CMT processors designed to efficiently execute network-facing workloads. Network-facing systems primarily service network clients and are often grouped together under the label "Web servers". The processor's dual-thread execution capability, compact die size, and minimal power consumption combine to produce high throughput performance per watt, per transistor, and per square millimeter of die area. Given the short design cycle Sun needed to create the processor, the result is a compelling early proof of the value of throughput computing.

8.
A comprehensive quality model for service-oriented systems
In a service-oriented system, a quality (or Quality of Service) model is used (i) by service requesters to specify the expected quality levels of service delivery; (ii) by service providers to advertise quality levels that their services achieve; and (iii) by service composers when selecting, among alternative services, those that are to participate in a service composition. Expressive quality models are needed to let requesters specify quality expectations, providers advertise service qualities, and composers finely compare alternative services. Having observed many similarities between various quality models proposed in the literature, we review these and integrate them into a single quality model, called QVDP. We highlight the need for integration of priority and dependency information within any quality model for services and propose precise submodels for doing so. Our intention is for the proposed model to serve as a reference point for further developments in quality models for service-oriented systems. To this end, we extend the part of the UML metamodel specialized for Quality of Service with QVDP concepts unavailable in UML.
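A flattened data-structure sketch of what a quality dimension carrying value, priority, and dependency information might look like (QVDP's actual metamodel is UML-based; the field names and example dimensions here are assumptions, not the paper's definitions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityDimension:
    """One dimension of a service-quality specification: its value, the
    requester's priority for it, and the dimensions it depends on."""
    name: str
    value: float
    unit: str
    priority: int                      # 1 = most important to the requester
    depends_on: List[str] = field(default_factory=list)

# A requester expectation and a provider advertisement can then be compared
# dimension by dimension, weighting mismatches by priority.
expected = [QualityDimension("latency", 200.0, "ms", priority=1),
            QualityDimension("throughput", 50.0, "req/s", priority=2,
                             depends_on=["latency"])]
```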

Ivan J. Jureta graduated summa cum laude and received a Master in Management and a Master of International Management from the Université de Louvain, Belgium, and the London School of Economics, respectively, both in 2005. He is currently completing his Ph.D. thesis at the University of Namur, Belgium, under the supervision of Prof. Stéphane Faulkner. His thesis focuses on quality management of adaptable and open service-oriented systems enabling the Semantic Web.
Caroline Herssens received a Master Degree in Computer Science in 2005 at the Université de Louvain. In 2006, she graduated with a Master in Business and Administration, with a supply-chain-management orientation, from the Université de Louvain. She is currently a teaching and research assistant and has started a Ph.D. thesis at the information systems research unit of the Université de Louvain. Her research interests comprise service-oriented computing, conceptual modeling, and information systems engineering.
Stéphane Faulkner is an Associate Professor in Technologies and Information Systems at the University of Namur (FUNDP) and an Invited Professor at the Louvain School of Management of the Université de Louvain (UCL). His current research interests revolve around requirements engineering and the development of modeling notations, systematic methods, and tool support for the development of multi-agent systems, databases, and information systems.

9.
Typical request processing systems, such as web servers and database servers, try to accommodate all requests as fast as possible, which can be described as a Best-Effort approach. However, different application items may have different quality-of-service (QoS) requirements, and this can be viewed as a concern orthogonal to the basic system functionality. In this paper we propose the QoS-Broker, a middleware for delivering QoS over servers and applications. We present its architecture, which supports contracts over varied targets including queries, transactions, services, and sessions, and also allows expressions on variables to be specified in those targets. We also discuss how the QoS-Broker implements basic strategies for QoS over workloads. Our experimental results illustrate the middleware by applying priority- and weighted-fair-queuing-based differentiation over clients and over transactions, as well as admission control, using a benchmark as a case study.
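A minimal sketch of the weighted-fair-queuing differentiation idea (a generic virtual-finish-time scheduler, not the QoS-Broker's actual implementation; client names and weights are invented for the example):

```python
import heapq
import itertools

class WeightedFairQueue:
    """Each arriving request is stamped with a virtual finish time equal
    to the client's previous finish time plus cost/weight, and requests
    are served in stamp order, so service shares track the weights."""
    def __init__(self, weights):
        self.weights = weights                  # client -> positive weight
        self.finish = {c: 0.0 for c in weights}
        self.heap, self.seq = [], itertools.count()

    def enqueue(self, client, request, cost=1.0):
        now = self.heap[0][0] if self.heap else 0.0   # crude virtual clock
        start = max(self.finish[client], now)
        self.finish[client] = start + cost / self.weights[client]
        heapq.heappush(self.heap, (self.finish[client], next(self.seq),
                                   client, request))

    def dequeue(self):
        _, _, client, request = heapq.heappop(self.heap)
        return client, request

wfq = WeightedFairQueue({"gold": 3.0, "bronze": 1.0})
wfq.enqueue("gold", "q1"); wfq.enqueue("bronze", "q2"); wfq.enqueue("gold", "q3")
print([wfq.dequeue() for _ in range(3)])   # gold requests drain first
```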

10.
Traditional approaches to storage device simulation have been based on detailed and analytic models. However, analytic models are difficult to obtain, and detailed models require a high computational cost which may not be affordable for large-scale simulations (e.g. detailed data center simulations). In current systems like large clusters, grids, or clouds, performance and energy studies are critical, and fast simulations play an important role in them.
A different approach is black-box statistical modeling, where the storage device, its interface, and the interconnection mechanisms are modeled as a single stochastic process, defining the request response time as a random variable with an unknown distribution. A random variate generator can be built and integrated into a bigger simulation model. This approach makes it possible to generate a simulation model for both real and synthetic complex workloads.
This article describes a novel methodology that aims to build fast simulation models for storage devices. Our method takes a workload as its starting point and produces a random variate generator which can easily be integrated into large-scale simulation models. A comparison between our variate generator and the widely known simulation tool DiskSim shows that our variate generator is faster and can be as accurate as DiskSim for both performance and energy consumption predictions.
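The simplest form of such a black-box variate generator resamples response times directly from the measured trace via the empirical inverse CDF (a baseline sketch under that assumption, not the article's methodology; the trace values are invented):

```python
import random

class EmpiricalVariate:
    """Black-box storage model: treat request response time as a random
    variable with unknown distribution and resample it from a measured
    trace via the empirical inverse CDF."""
    def __init__(self, observed_response_times):
        self.samples = sorted(observed_response_times)

    def draw(self):
        u = random.random()                       # uniform on [0, 1)
        return self.samples[int(u * len(self.samples))]

trace = [0.8, 1.1, 0.9, 5.2, 1.0, 0.7, 4.8]       # response times in ms
device = EmpiricalVariate(trace)
simulated = [device.draw() for _ in range(1000)]  # feed a larger simulation
```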

11.
Microsystem Technologies - Automation technology has been under development for many years, and Internet of Things technology in particular is now quite mature. However, in the Internet of...

12.
International Journal of Information Security - Nowadays, smart home devices like Amazon Echo and Google Home have reached mainstream popularity. Being in the homes of users, these devices are...

13.
To address the accuracy and time overhead of device trust evaluation in edge computing environments, a dynamic trust evaluation model for edge devices, DTEM, is proposed. First, a time-decay factor expresses the time-sensitivity of direct trust; a satisfaction function is introduced to modify the Bayesian equation, and, combined with an incentive mechanism, is used to evaluate direct trust between edge devices. Second, an improved grey relational analysis method determines the indicator weights, solving the problem of weighting recommender devices during indirect trust evaluation. Finally, ...
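Since the abstract is truncated, DTEM's exact equations are not available here; the following is only a generic sketch of a time-decayed Beta-Bayesian direct-trust estimate in the same spirit (the decay constant and Beta(1, 1) prior are assumptions, not DTEM's published parameters):

```python
import math

def direct_trust(successes, failures, elapsed, decay=0.1):
    """Beta-Bayesian trust with an exponential time-decay factor: older
    interaction evidence is discounted, so trust tracks recent behaviour."""
    w = math.exp(-decay * elapsed)      # time-decay factor
    a = 1.0 + w * successes             # Beta(1, 1) prior
    b = 1.0 + w * failures
    return a / (a + b)                  # expected success probability

print(direct_trust(30, 2, elapsed=0.0))    # fresh evidence: high trust
print(direct_trust(30, 2, elapsed=50.0))   # stale evidence decays toward 0.5
```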

14.
To improve the performance of application-layer multicast, a bus-type application-layer multicast model based on composite node performance (CPBM) is proposed. By introducing a hierarchical structure and the concept of composite node performance, CPBM uses a dynamic programming method based on composite node performance so that the system automatically adjusts each node's position as conditions change over time. Within each domain, nodes are chained into a bus in descending order of composite performance, and the node with the highest composite performance serves as the Leader. Simulation results show that, compared with the traditional NICE model, the proposed model achieves a higher data delivery rate and bandwidth utilization, improving the performance of application-layer multicast to a certain extent.
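A minimal sketch of the bus-building step (the composite score here is a plain weighted sum; the metric names and weights are illustrative assumptions, not CPBM's actual scoring function):

```python
def build_bus(nodes, weights):
    """Order a domain's nodes into a bus by descending composite
    performance score; the top-scoring node becomes the Leader."""
    def score(node):
        return sum(w * node["metrics"][k] for k, w in weights.items())
    bus = sorted(nodes, key=score, reverse=True)
    return bus[0], bus                       # (Leader, bus order)

nodes = [{"id": "a", "metrics": {"bw": 0.9, "cpu": 0.7, "uptime": 0.8}},
         {"id": "b", "metrics": {"bw": 0.5, "cpu": 0.9, "uptime": 0.6}},
         {"id": "c", "metrics": {"bw": 0.8, "cpu": 0.8, "uptime": 0.7}}]
leader, bus = build_bus(nodes, {"bw": 0.5, "cpu": 0.3, "uptime": 0.2})
print(leader["id"], [n["id"] for n in bus])
```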

15.
There is a notable characteristic of data access patterns: 80% of I/O requests access only 20% of the data. This feature gives rise to the concept of hotspot data, which refers to the data in the most frequently requested areas. Access to these hotspot data directly influences the performance of the storage system's applications. Therefore, how to predict hotspot data is a critical research focus in storage system optimization. In this paper, we propose a hotspot data prediction model based on a Zipf-like distribution, which can estimate and dynamically adjust its parameters according to the current statistics of I/O accesses. We classify the hotspot data from every trace and analyse the prediction rate through the characteristics of the classified hotspot data. We synthesize the analysis results across different time granularities and hotspot-data prediction queue lengths. Finally, we use block I/O traces to evaluate the effectiveness of this model. The discussion and analysis results indicate that the model can predict hotspot data efficiently.
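A sketch of the two ingredients such a model needs, fitting a Zipf-like exponent from window statistics and maintaining a hotspot prediction queue (illustrative only; the fitting method and counts are assumptions, not the paper's estimator):

```python
import numpy as np

def fit_zipf_exponent(access_counts):
    """Fit log(count) = c - s*log(rank) by least squares; under a
    Zipf-like law the slope s measures how concentrated accesses are,
    and can be re-estimated each window to adjust the model dynamically."""
    counts = np.sort(np.asarray(access_counts, dtype=float))[::-1]
    counts = counts[counts > 0]
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

def hotspot_queue(access_counts, queue_len):
    """Predict the next window's hotspots as the queue_len most
    frequently accessed blocks of the current window."""
    return np.argsort(access_counts)[::-1][:queue_len]

counts = [900, 420, 260, 190, 40, 30, 12, 8, 5, 2]   # per-block accesses
print(fit_zipf_exponent(counts), hotspot_queue(counts, 3))
```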

16.
A comprehensive model for evaluating crossbar networks is presented, in which the memory bandwidth and the processor acceptance probability are the primary measures considered. This analytical model includes all important network control policies, such as the bus arbitration and rejected-request handling policies, as well as the home memory concept. Computer simulation validates the correctness of the model. It is confirmed that the home memory concept and the dynamic bus arbitration policy improve network performance.
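For orientation, the classic closed-form crossbar estimate (a baseline sketch, not the paper's full model with arbitration policies and home memory) assumes each of n processors independently requests one of m memory modules with probability p per cycle:

```python
def crossbar_bandwidth(n, m, p):
    """Expected busy memory modules per cycle: a module is busy unless
    none of the n processors picks it, hence BW = m * (1 - (1 - p/m)^n)."""
    return m * (1.0 - (1.0 - p / m) ** n)

def acceptance_probability(n, m, p):
    """Fraction of issued requests that are accepted in a cycle."""
    return crossbar_bandwidth(n, m, p) / (n * p)

print(crossbar_bandwidth(16, 16, 1.0))       # ~10.3 of 16 modules busy
print(acceptance_probability(16, 16, 1.0))   # ~0.64
```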

17.
冉峰  薛同莲 《微计算机信息》2007,23(19):135-137
A new resistor-based simulation model for integrated Hall devices fabricated in a CMOS process is proposed. It can be used in the overall design of integrated Hall sensors, converting magnetic signals into electrical signals. The model is implemented in the Verilog-A HDL and executed with the Spectre circuit simulator, and supports a wide temperature range. The behavioral equations and parameters are based on the fundamental theory of Hall devices and on semiconductor physics, and have been verified by circuit simulation. To make simulation of integrated Hall sensors convenient, the necessary magnetic-field and Hall models are also implemented in Verilog-A HDL.
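The physical relation at the core of any such behavioral model is the ideal Hall voltage; a sketch of it in Python rather than Verilog-A (the numeric values are illustrative, and the paper's model adds geometry and temperature corrections on top of this relation):

```python
def hall_voltage(bias_current, b_field, carrier_density, thickness,
                 q=1.602e-19):
    """Ideal Hall voltage V_H = I*B / (q*n*t) for a plate of thickness t
    and carrier density n, with q the elementary charge."""
    return bias_current * b_field / (q * carrier_density * thickness)

# Illustrative values: 1 mA bias, 0.1 T field, n = 1e22 m^-3, t = 5 um.
print(hall_voltage(1e-3, 0.1, 1e22, 5e-6))   # ~12.5 mV
```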

18.
Risk management of a supply chain (SC) has a great influence on the stability of dynamic cooperation among SC partners and hence is very important for the performance of SC operations as a whole. A suitable decision-making model is the cornerstone of efficient SC risk management. We propose in this paper a decision-making model based on the internal triggering and interactive mechanisms in an SC risk system, which takes into account dual cycles: the operational process cycle (OPC) and the product life cycle (PLC). We explore the inter-relationships among the two cycles, SC organizational performance factors (OPF), and available risk operational practice (ROP), as well as the risk managerial elements in the OPC and PLC. In particular, three types of relationship, bilateral, unilateral, and inter-circulative, are analyzed and verified. We build this dynamic relation into SC risk managerial logic and design a corresponding decision-making path. Based on the analytic network process (ANP), a methodology is designed for an optimal selection of risk management methods and tools. A numerical example is provided as an operational guideline for applying it to tailor operational tactics in SC risk management. The results verify that this strategic decision model gives practitioners feasible access to suitable risk-management tactics.
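The computational core of an ANP selection is raising the weighted supermatrix to its limit to obtain global priorities; a generic sketch under that standard procedure (the 3x3 matrix is a toy example, not the paper's network):

```python
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=64):
    """Square a column-stochastic ANP supermatrix until it converges; the
    limit's columns hold the global priorities used to rank alternative
    risk-management methods and tools."""
    M = np.asarray(W, dtype=float)
    for _ in range(max_iter):
        M2 = M @ M
        if np.max(np.abs(M2 - M)) < tol:
            break
        M = M2
    return M2

# Toy 3-element network (columns sum to 1); priorities emerge in the limit.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
print(limit_supermatrix(W)[:, 0])   # global priority vector
```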

19.
The Journal of Supercomputing - Although Cloud computing is gaining popularity by supporting data analysis in an outsourced and cost-effective way, it brings serious privacy issues when sending the...

20.
Since traditional home multimedia still leaves room for improvement, a variety of multimedia devices are used to play media content. With the advancement of modern science and technology, various disc formats exist for storing and playing multimedia content, such as VCD, DVD, portable disks, and, most recently, Blu-ray disc. However, it is difficult for these devices to share content without any configuration. To solve this playback problem effectively, we propose a portable UPnP-based high-performance content sharing system for multimedia devices, which includes a content sharing server and media players. The content sharing server realizes sharing services and file control for portable disks, iPods, DVDs, digital TVs, and other devices, so that users no longer need to carry out complex software installation and configuration, and the media players allow users to play multimedia files on any media device.
