Similar Literature
Found 19 similar documents.
1.
Targeting big-data offline-analysis and interactive-query workloads, this work first analyzes their common behavior, extracts a set of shared operations, and groups and organizes them. The microarchitectural characteristics of these workloads are then measured while they run on a big-data platform, and PCA and the SimpleKMeans algorithm are applied to reduce the dimensionality of the architectural feature parameters and to cluster them. The experimental analysis shows that the workloads share common operations, such as Join and Cross Product, and that some workloads have similar properties; for example, Difference and Projection share the same microarchitectural characteristics. The results offer guidance for the design of hardware platforms such as processors and for application optimization, and provide a reference for the design of big-data benchmarking platforms.
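A minimal sketch of the dimensionality-reduction and clustering step described above, using scikit-learn's PCA and KMeans (standing in for SimpleKMeans); the feature matrix, component count, and cluster count are illustrative assumptions, not values from the paper.

```python
# Sketch: reduce microarchitectural feature vectors with PCA, then cluster.
# Assumes one row per workload, one column per hardware counter (e.g. IPC,
# cache miss rate); the data below is synthetic, for illustration only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 12))   # 20 workloads x 12 counters (synthetic)

X = StandardScaler().fit_transform(features)      # counters use different scales
X_reduced = PCA(n_components=3).fit_transform(X)  # keep the top 3 components

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)
print(labels)  # workloads with the same label share microarchitectural behavior
```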

2.
A spectral clustering method for large data sets based on accelerated iteration
The many advantages of the traditional spectral clustering algorithm hold only for small data sets. Exploiting the structure of the Laplacian matrix, a new Gram matrix is constructed; a small number of columns of this new matrix are taken as input, and an accelerated iteration method is then used to solve the eigen-feature extraction problem of spectral clustering on large data sets. As a result, on large data sets the spectral clustering algorithm attains very fast computation with only a small space complexity.
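A hedged sketch of extracting a spectral embedding by iteration rather than a full eigendecomposition, which is the flavor of acceleration the abstract describes; the paper's actual Gram-matrix construction from Laplacian columns is not reproduced here, and the affinity matrix below is synthetic.

```python
# Sketch: approximate the leading eigenvectors of a normalized affinity
# matrix by orthogonal (subspace) iteration; one matrix multiply plus a
# re-orthonormalization per step, no full eigendecomposition needed.
import numpy as np

def leading_eigvecs(A, k, iters=100):
    """Approximate the top-k eigenvectors of symmetric A by subspace iteration."""
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.normal(size=(A.shape[0], k)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)
    return Q

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
W = np.exp(-np.square(np.linalg.norm(X[:, None] - X[None, :], axis=-1)))  # RBF affinity
d = W.sum(axis=1)
A = W / np.sqrt(np.outer(d, d))      # symmetric normalization D^{-1/2} W D^{-1/2}
U = leading_eigvecs(A, k=3)          # spectral embedding; cluster U's rows with k-means
```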

3.
Semi-supervised clustering, which uses a small amount of labeled data to achieve high clustering accuracy, has been a research focus in data mining and machine learning in recent years. However, existing semi-supervised clustering algorithms achieve low accuracy when given extremely few labels or multi-density, imbalanced data sets. Based on active learning for selecting which data to label, a new semi-supervised clustering algorithm is proposed. The algorithm combines minimum-spanning-tree clustering with active learning, selects the most informative data points as labeled data, and propagates class labels with a KNN-like scheme. Tests on UCI standard data sets and synthetic data sets show that the proposed algorithm yields more accurate and more stable clustering results than other algorithms on multi-density, imbalanced data sets.
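A rough sketch, under stated assumptions, of the two ingredients named in the abstract: minimum-spanning-tree clustering (cutting the heaviest edges) and KNN-style propagation of a few queried seed labels. The seed indices and the cut rule are illustrative, not the paper's exact scheme.

```python
# Sketch: MST clustering by removing the k-1 heaviest edges, followed by
# nearest-labeled-neighbor propagation of a handful of seed labels.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

def mst_clusters(X, k):
    """Cut the k-1 heaviest MST edges to split the data into k components."""
    mst = minimum_spanning_tree(cdist(X, X)).tocoo()
    order = np.argsort(mst.data)[: len(mst.data) - (k - 1)]  # keep all but k-1 heaviest
    kept = csr_matrix((mst.data[order], (mst.row[order], mst.col[order])),
                      shape=mst.shape)
    return connected_components(kept, directed=False)[1]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0, 3, 6)])
labels = mst_clusters(X, k=3)

# KNN-style propagation of a few queried seed labels (indices assumed):
seeds = {0: "a", 30: "b", 60: "c"}
seed_idx = np.array(list(seeds))
nearest = seed_idx[cdist(X, X[seed_idx]).argmin(axis=1)]
propagated = [seeds[i] for i in nearest]
```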

4.
Kernel-based adaptive K-Medoid clustering
To address the weakness that the K-Medoid algorithm cannot effectively cluster large or high-dimensional data sets, kernel learning is introduced into K-Medoid, yielding a kernel-based adaptive K-Medoid algorithm. The algorithm uses a kernel function to map input-space samples into a high-dimensional feature space and performs K-Medoid clustering in that kernel space. During clustering, each data point adaptively joins the cluster that fits it best, and the result does not depend on the choice of the initial k medoids, so the algorithm can cluster large and high-dimensional data sets. Experimental results show that it achieves higher clustering accuracy than K-Medoid.
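A minimal sketch of K-Medoid with kernel-induced distances, using the identity ||φ(x_i) − φ(x_j)||² = K_ii − 2K_ij + K_jj; the RBF kernel, its γ, and the plain alternating update are assumptions, and the paper's adaptive assignment rule is not reproduced.

```python
# Sketch: K-Medoid where assignments use feature-space distances computed
# entirely from the kernel matrix (medoids are actual data points).
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_kmedoid(X, k, iters=20, gamma=0.5):
    K = rbf_kernel(X, gamma)
    # ||phi(x_i) - phi(x_j)||^2 = K_ii - 2 K_ij + K_jj  (feature-space distance)
    D = np.diag(K)[:, None] - 2 * K + np.diag(K)[None, :]
    rng = np.random.default_rng(0)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = D[:, medoids].argmin(axis=1)        # assign to the closest medoid
        for c in range(k):                           # move each medoid to the member
            members = np.where(labels == c)[0]       # minimizing total in-cluster
            if members.size:                         # distance
                medoids[c] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
    return labels, medoids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(25, 2)) for c in (0.0, 2.0)])
labels, medoids = kernel_kmedoid(X, k=2)
```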

5.
牛科  张小琴  贾郭军 《计算机工程》2015,41(1):207-210,244
The performance of an unsupervised clustering algorithm depends on the distance metric the user specifies over the input data set; this metric directly determines how similarity between data samples is computed, so different distance metrics often have a substantial effect on the clustering result. Addressing the choice of distance metric in spectral clustering, a spectral clustering algorithm based on distance metric learning from side information is proposed. The algorithm uses side information inherent in the data set itself, namely whether pairs of data samples drawn from the data set are similar, to learn a distance metric; the learned metric is then applied in the similarity function of spectral clustering and used to construct the similarity matrix. Experiments on UCI standard data sets show that, compared with standard spectral clustering, the algorithm clearly improves prediction accuracy.
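A hedged sketch of the overall pipeline: learn a metric from pairwise side information, then use it in the affinity function of spectral clustering. The crude per-dimension weighting below stands in for the paper's metric learner; the pair lists and data are synthetic.

```python
# Sketch: learn a diagonal Mahalanobis-style metric from must-link /
# cannot-link pairs and plug it into the RBF similarity that spectral
# clustering uses to build its similarity matrix.
import numpy as np

def learn_diag_metric(X, similar_pairs, dissimilar_pairs, eps=1e-9):
    """Weight each dimension by how well it separates dissimilar from similar pairs."""
    sim = np.array([(X[i] - X[j]) ** 2 for i, j in similar_pairs])
    dis = np.array([(X[i] - X[j]) ** 2 for i, j in dissimilar_pairs])
    return dis.mean(axis=0) / (sim.mean(axis=0) + eps)

def similarity_matrix(X, w, sigma=1.0):
    diff = X[:, None, :] - X[None, :, :]
    d2 = (diff ** 2 * w).sum(axis=-1)          # learned weighted squared distance
    return np.exp(-d2 / (2 * sigma ** 2))      # affinity for spectral clustering

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
w = learn_diag_metric(X, [(0, 1), (2, 3)], [(0, 10), (5, 40)])  # pairs assumed
W = similarity_matrix(X, w)
```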

6.
Current clustering algorithms make little use of input knowledge, which hinders the learning and accumulation of knowledge. For high-dimensional data sets, it is argued that properly exploiting input knowledge allows clusters to be discovered more accurately and effectively. The concept of a cluster's relevant dimension set is introduced, the characteristics of input knowledge are analyzed, and high-dimensional clustering algorithms that use input knowledge to guide the clustering process are studied.

7.
For complex data with many features, a clustering ensemble that feeds each member a subset of the data and combines members by weighted voting can balance the quality of different members and improve clustering accuracy and stability. Addressing how the subsets are selected and how weights are computed, a minimum-correlation feature subset selection method is proposed, and five weighting schemes for ensemble members are compared based on feature-relation analysis. Experimental results show that selecting each member's input data with the minimum-correlation feature method yields higher ensemble accuracy than random sampling. Ensembles based on all five weighting schemes are more accurate than a single clustering, though their running times differ markedly.
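A minimal sketch of a weighted-voting clustering ensemble via a co-association matrix: each member clusters a different feature subset, and its vote is scaled by an assumed member weight. Both the subsets and the weights are illustrative, not the minimum-correlation selection or the five schemes compared in the paper.

```python
# Sketch: combine clustering members by weighted voting. Each pairwise
# "same cluster" vote is scaled by the member's weight; the final labels
# come from clustering the resulting co-association matrix.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
subsets = [[0, 1, 2], [3, 4, 5], [5, 6, 7]]   # feature subsets per member (assumed)
weights = [0.5, 0.3, 0.2]                      # member weights (assumed)

coassoc = np.zeros((len(X), len(X)))
for cols, w in zip(subsets, weights):
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, cols])
    coassoc += w * (labels[:, None] == labels[None, :])   # weighted vote

final = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                linkage="average").fit_predict(1 - coassoc)
```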

8.
K-means partitions data points by Euclidean distance, which cannot accurately capture the characteristics of a data set, and its random choice of initial cluster centers also fails to produce good clustering results. To address this, a clustering algorithm is proposed that fuses data-field potential competition with K-means. The algorithm defines the concept of a data field, lets points compete for aggregation potential via local minimum distances, extracts the optimal cutoff distance from the data distribution using potential entropy, determines the cluster centers from the cutoff distance and the slope, and then performs K-means clustering. Tests on UCI data sets show that the fused algorithm produces better clustering results.
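A hedged sketch of seeding K-means from a data-field-style potential: points with high potential that are far from any higher-potential point become the initial centers (a density-peaks-like rule). The cutoff distance dc is assumed here; the paper derives it from potential entropy.

```python
# Sketch: pick cluster centers from a Gaussian-kernel potential field with
# cutoff distance dc, then seed K-means with them instead of random centers.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.4, size=(40, 2)) for c in (0, 3, 6)])

D = cdist(X, X)
dc = 0.8                                         # cutoff distance (assumed)
potential = np.exp(-(D / dc) ** 2).sum(axis=1)   # data-field potential per point
# distance to the nearest point of higher potential (density-peaks style):
delta = np.array([D[i][potential > potential[i]].min()
                  if (potential > potential[i]).any() else D[i].max()
                  for i in range(len(X))])
centers = X[np.argsort(potential * delta)[-3:]]  # top-3 scores as initial centers

labels = KMeans(n_clusters=3, init=centers, n_init=1).fit_predict(X)
```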

9.
To overcome the memory limitations of clustering large-scale data sets, a fast clustering algorithm constrained by the number of clusters is proposed. It scans the original data set only once, with a radius threshold that changes dynamically as clustering proceeds. An inter-cluster dissimilarity measure that incorporates the value frequencies of categorical attributes is also defined, so the algorithm applies to mixed-attribute data sets; its time and space complexity are approximately linear in the size of the data set and the number of attributes. Experiments on the KDD Cup 99 data set show that the proposed algorithm requires few input parameters, has good clustering behavior, and is applicable to large-scale data sets.
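A rough sketch, under stated assumptions, of one-pass clustering with a cluster-count constraint: each point joins the nearest cluster within the current radius, and when the limit is reached the radius grows and the two closest clusters merge. The growth factor and merge rule are simplifications, and the categorical-attribute dissimilarity measure is omitted.

```python
# Sketch: single-scan "leader"-style clustering with a dynamic radius
# threshold and a hard cap k_max on the number of clusters.
import numpy as np

def one_pass_cluster(X, k_max, radius=1.0, grow=1.2):
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            j = int(d.argmin())
            if d[j] <= radius:                   # absorb into the nearest cluster
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]   # running mean update
                continue
        if len(centers) == k_max:                # at the limit: widen the radius
            radius *= grow                       # and merge the closest pair
            C = np.array(centers)
            dists = np.linalg.norm(C[:, None] - C[None, :], axis=-1)
            np.fill_diagonal(dists, np.inf)
            a, b = np.unravel_index(dists.argmin(), dists.shape)
            centers[a] = (counts[a]*centers[a] + counts[b]*centers[b]) / (counts[a]+counts[b])
            counts[a] += counts[b]
            del centers[b], counts[b]
        centers.append(x.astype(float))          # start a new cluster at x
        counts.append(1)
    return centers

rng = np.random.default_rng(0)
centers = one_pass_cluster(rng.normal(size=(500, 2)), k_max=5)
```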

10.
To reduce the excessive time cost that current clustering algorithms incur on large data sets, a data-set compression algorithm based on nearest-neighbor similarity is proposed. By grouping several mutually similar nearest-neighbor points into a data cluster and randomly choosing a cluster head from each to form a new data set, the data size is greatly reduced. The compressed data set is then clustered with the k-means algorithm and with the AP algorithm. Experimental results show that, compared with clustering the original data set, clustering the compressed data set keeps accuracy essentially unchanged while markedly reducing the clustering time, improving clustering performance and demonstrating the effectiveness and reliability of the compression algorithm.
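A minimal sketch of the compression idea: collapse groups of nearest neighbors into randomly chosen cluster heads and cluster only the heads. The greedy k-nearest-neighbor grouping is an assumption about the grouping rule; AP could be swapped in for KMeans on the compressed set.

```python
# Sketch: compress a data set by collapsing each unused group of nearest
# neighbors into a randomly chosen "head", then cluster the heads only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))

nn = NearestNeighbors(n_neighbors=5).fit(X)
_, idx = nn.kneighbors(X)                 # each row: a point plus its 4 neighbors
used, heads = np.zeros(len(X), bool), []
for group in idx:
    group = group[~used[group]]
    if group.size:                        # collapse the unused part of the group
        used[group] = True
        heads.append(rng.choice(group))   # a random head represents the group

compressed = X[np.array(heads)]           # much smaller surrogate data set
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(compressed)
```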

11.
Industry, academia, and end users urgently need a big-data benchmark to evaluate existing big-data systems, improve current techniques, and develop new ones. This paper reviews the main work on big-data benchmark development in recent years and compares their features and shortcomings. On this basis, a series of considerations for developing a new big-data benchmark is proposed: 1) to evaluate both the individual sub-tools of a big-data platform and the platform as a whole, component-oriented benchmarks and whole-platform benchmarks are both needed, the latter being an organic combination of the former; 2) beyond SQL queries, the workloads must include the full range of complex analysis functions required by big-data analytics, covering all classes of application needs; 3) besides the performance metrics (response time and throughput), other metrics should also be evaluated, including system scalability, fault tolerance, energy efficiency, and security.

12.
With the rapid development of cloud computing, cloud file systems play an increasingly important role in cloud infrastructure. Although many performance evaluation tools for cloud file systems already exist, most of them focus only on traditional system performance metrics, such as IOPS and throughput, and cannot assess the performance isolation of a cloud file system in multi-tenant environments. The dynamic, heterogeneous I/O loads of cloud environments make accurately evaluating isolation even more challenging. A new isolation measurement model for cloud file systems is proposed and implemented in a benchmark tool named Porcupine. By simulating I/O requests with the characteristics of real workloads, Porcupine accurately reproduces load and performance and improves the efficiency of file-system testing. Experiments on the Ceph file system validate the effectiveness and accuracy of the proposed isolation measurement model.

13.
To evaluate the performance of database applications and database management systems (DBMSs), we usually execute workloads of queries on generated databases of different sizes and then benchmark various measures such as response time and throughput. This paper introduces MyBenchmark, a parallel data generation tool that takes a set of queries as input and generates database instances. Users of MyBenchmark can control the characteristics of the generated data as well as the characteristics of the resulting workload. Applications of MyBenchmark include DBMS testing, database application testing, and application-driven benchmarking. In this paper, we present the architecture and the implementation algorithms of MyBenchmark. Experimental results show that MyBenchmark is able to generate workload-aware databases for a variety of workloads including query workloads extracted from TPC-C, TPC-E, TPC-H, and TPC-W benchmarks.

14.
The Internet of Things (IoT) is an emerging technology paradigm where millions of sensors and actuators help monitor and manage physical, environmental, and human systems in real time. The inherent closed-loop responsiveness and decision making of IoT applications make them ideal candidates for using low latency and scalable stream processing platforms. Distributed stream processing systems (DSPS) hosted in cloud data centers are becoming the vital engine for real-time data processing and analytics in any IoT software architecture. But the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT applications and data streams. Here, we propose RIoTBench, a real-time IoT benchmark suite, along with performance metrics, to evaluate DSPS for streaming IoT applications. The benchmark includes 27 common IoT tasks classified across various functional categories and implemented as modular microbenchmarks. Further, we define four IoT application benchmarks composed from these tasks based on common patterns of data preprocessing, statistical summarization, and predictive analytics that are intrinsic to the closed-loop IoT decision-making life cycle. These are coupled with four stream workloads sourced from real IoT observations on smart cities and smart health, with peak stream rates that range from 500 to 10,000 messages/second from up to 3 million sensors. We validate the RIoTBench suite for the popular Apache Storm DSPS on the Microsoft Azure public cloud and present empirical observations. This suite can be used by DSPS researchers for performance analysis and resource scheduling, by IoT practitioners to evaluate DSPS platforms, and even reused within IoT solutions.

15.
With the explosive growth of information, more and more organizations are deploying private cloud systems or renting public cloud systems to process big data. However, there is no existing benchmark suite for evaluating cloud performance on the whole system level. To the best of our knowledge, this paper proposes the first benchmark suite CloudRank-D to benchmark and rank cloud computing systems that are shared for running big data applications. We analyze the limitations of previous metrics, e.g., floating point operations, for evaluating a cloud computing system, and propose two simple metrics: data processed per second and data processed per Joule as two complementary metrics for evaluating cloud computing systems. We detail the design of CloudRank-D that considers representative applications, diversity of data characteristics, and dynamic behaviors of both applications and system software platforms. Through experiments, we demonstrate the advantages of our proposed metrics. In several case studies, we evaluate two small-scale deployments of cloud computing systems using CloudRank-D.
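A small sketch of the two proposed metrics, data processed per second and data processed per Joule, as they might be computed from a finished benchmark run; the run statistics below are made up for illustration.

```python
# Sketch: CloudRank-D style metrics for a completed run.
def dps(bytes_processed: float, runtime_s: float) -> float:
    """Data processed per second, in bytes/s."""
    return bytes_processed / runtime_s

def dpj(bytes_processed: float, energy_joules: float) -> float:
    """Data processed per Joule, in bytes/J."""
    return bytes_processed / energy_joules

run = {"bytes": 500e9, "runtime_s": 1800.0, "energy_j": 2.7e6}  # assumed run stats
print(f"DPS = {dps(run['bytes'], run['runtime_s']):.3e} B/s")
print(f"DPJ = {dpj(run['bytes'], run['energy_j']):.3e} B/J")
```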

16.
Big data analytics applications are increasingly deployed on cloud computing infrastructures, and it is still a big challenge to pick the optimal cloud configurations in a cost-effective way. In this paper, we address this problem with high accuracy and low overhead. We propose Apollo, a data-driven approach that can rapidly pick the optimal cloud configurations by reusing data from similar workloads. We first classify 12 typical workloads in BigDataBench by characterizing pairwise correlations in our offline benchmarks. When a new workload comes, we run it with several small datasets to rank its key characteristics and get its similar workloads. Based on the rank, we then limit the search space of cloud configurations through a classification mechanism. At last, we leverage a hierarchical regression model to measure which cluster is more suitable and use a local search strategy to pick the optimal cloud configurations in a few extra tests. Our evaluation on 12 typical workloads in HiBench shows that compared with state-of-the-art approaches, Apollo can improve up to 30% search accuracy, while reducing as much as 50% overhead for picking the optimal cloud configurations.

17.
Hybrid transactional/analytical processing (HTAP) is a technique for handling both transactional requests and analytical query requests on a single, one-stop architecture. HTAP not only eliminates the extract-transform-load pipeline from relational transactional databases to data warehouses, but also supports real-time analysis of the latest transactional data. However, to serve OLTP and OLAP simultaneously, an HTAP system must also trade system performance against the freshness of the data available for analysis, mainly because high-concurrency, low-latency OLTP and bandwidth-intensive, high-latency OLAP have different, mutually interfering access patterns. Mainstream HTAP databases currently support hybrid transactional and analytical processing chiefly through coexisting row and column stores, but because these databases target different business scenarios, their storage architectures and processing techniques vary. This article first surveys HTAP databases comprehensively, summarizes their main application scenarios and their strengths and weaknesses, and classifies, summarizes, and compares them by storage architecture. Existing surveys focus on HTAP databases with single-format row or column storage and on loosely coupled Spark-based HTAP systems, whereas this survey focuses on real-time HTAP databases with coexisting row and column stores. In particular, it distills the key techniques of mainstream HTAP databases in four areas: data organization, data synchronization, query optimization, and resource scheduling. It also summarizes and analyzes HTAP database...

18.
Computer system design studies traditionally involve only small collections of benchmarks. Detailed benchmark analysis is extremely time consuming and requires a large amount of human and machine resources. Therefore, it is essential that the benchmark collection be representative of the customer workloads for which an architecture is developed. In recent work, interworkload distances have been proposed as a way of characterizing workload similarity. These distances are based on measurable/computable program characteristics, such as instruction mix or dependence distance. In the literature, these characteristics enter the distances symmetrically. We observe that the program behavior impact of different characteristics varies significantly. We propose a method of estimating the program behavior impact via a regression model. Its components then enter the distance definition directly, thus emphasizing high-impact characteristics. We also propose a data collection methodology that can be deployed at a customer site without requiring code instrumentation and/or a detailed simulation setup. We build a dataset consisting of 84 program characteristics for each of the 106 workloads and apply the proposed distance methodology to it.
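A hedged sketch of an impact-weighted inter-workload distance: fit a regression from program characteristics to a behavior metric, then weight each characteristic by the magnitude of its coefficient inside the distance. The synthetic data matches the abstract's 106 workloads and 84 characteristics; the |coefficient| weighting is an illustrative reading of the method, not the paper's exact formulation.

```python
# Sketch: weight each program characteristic by its regression-estimated
# impact on a behavior metric (e.g. CPI), then use the weights in a
# Euclidean inter-workload distance. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
C = rng.normal(size=(106, 84))          # 106 workloads x 84 characteristics
cpi = C @ rng.normal(size=84) * 0.1 + rng.normal(size=106) * 0.01  # synthetic target

impact = np.abs(LinearRegression().fit(C, cpi).coef_)   # per-characteristic impact

def workload_distance(a, b, w=impact):
    """Euclidean distance with high-impact characteristics emphasized."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

print(workload_distance(C[0], C[1]))
```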

19.
The importance of reporting is ever increasing in today’s fast-paced market environments and the availability of up-to-date information for reporting has become indispensable. Current reporting systems are separated from the online transaction processing systems (OLTP) with periodic updates pushed in. A pre-defined and aggregated subset of the OLTP data, however, does not provide the flexibility, detail, and timeliness needed for today’s operational reporting. As technology advances, this separation has to be re-evaluated and means to study and evaluate new trends in data storage management have to be provided. This article proposes a benchmark for combined OLTP and operational reporting, providing means to evaluate the performance of enterprise data management systems for mixed workloads of OLTP and operational reporting queries. Such systems offer up-to-date information and the flexibility of the entire data set for reporting. We describe how the benchmark provokes the conflicts that are the reason for separating the two workloads on different systems. In this article, we introduce the concepts, logical data schema, transactions and queries of the benchmark, which are entirely based on the original data sets and real workloads of existing, globally operating enterprises.
