Similar Literature
20 similar documents found.
1.
Time series motifs are approximately repeated subsequences found within a longer time series. They have been in the literature since 2002, but recently they have begun to receive significant attention in research and industrial communities. This is perhaps due to the growing realization that they implicitly offer solutions to a host of time series problems, including rule discovery, anomaly detection, density estimation, semantic segmentation, summarization, etc. Recent work has improved the scalability so exact motifs can be computed on datasets with up to a million data points in tenable time. However, in some domains, for example seismology or climatology, there is an immediate need to address even larger datasets. In this work, we demonstrate that a combination of a novel algorithm and a high-performance GPU allows us to significantly improve the scalability of motif discovery. We demonstrate the scalability of our ideas by finding the full set of exact motifs on a dataset with one hundred and forty-three million subsequences, which is by far the largest dataset ever mined for time series motifs/joins; it requires ten quadrillion pairwise comparisons. Furthermore, we demonstrate that our algorithm can produce actionable insights into seismology and ethology.
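As a point of reference for what is being scaled here, the sketch below (Python, illustrative only; it is not the GPU algorithm of this paper) spells out the underlying computation: the exact motif is the closest pair of non-overlapping, z-normalized subsequences, found by brute-force pairwise comparison. The window length and test series are arbitrary.

```python
import numpy as np

def znorm(x):
    """Z-normalize a subsequence (guard against flat segments)."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def top_motif(ts, m):
    """Brute-force exact motif: the closest pair of non-overlapping,
    z-normalized, length-m subsequences under Euclidean distance.
    O(n^2 * m), which is exactly what limits scalability."""
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    best = (np.inf, None, None)
    for i in range(n):
        for j in range(i + m, n):            # skip trivial (overlapping) matches
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best[0]:
                best = (d, i, j)
    return best                              # (distance, start_1, start_2)

ts = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
print(top_motif(ts, m=50))
```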

2.
Existing variable-length motif discovery algorithms are slow, scale poorly, and return meaningless motifs such as overly short, overly long, and trivially matching ones. This paper proposes a Matrix Profile based algorithm for mining variable-length motifs in time series. The algorithm uses STOMP as a subroutine and accelerates candidate motif extraction with a lower-bound distance combined with incremental computation; it removes meaningless motifs (overly short, overly long, and trivial matches) using a length-similarity condition and a motif-grouping equivalence-class method. Experiments on the UCR datasets show that the proposed algorithm effectively filters out meaningless motifs when discovering variable-length motifs and achieves high efficiency and accuracy.

3.
The detection of very similar patterns in a time series, commonly called motifs, has received continuous and increasing attention from diverse scientific communities. In particular, recent approaches for discovering similar motifs of different lengths have been proposed. In this work, we show that such variable-length similarity-based motifs cannot be directly compared, and hence ranked, by their normalized dissimilarities. Specifically, we find that length-normalized motif dissimilarities still have intrinsic dependencies on the motif length, and that lowest dissimilarities are particularly affected by this dependency. Moreover, we find that such dependencies are generally non-linear and change with the considered data set and dissimilarity measure. Based on these findings, we propose a solution to rank (previously obtained) motifs of different lengths and measure their significance. This solution relies on a compact but accurate model of the dissimilarity space, using a beta distribution with three parameters that depend on the motif length in a non-linear way. We believe the incomparability of variable-length dissimilarities could have an impact beyond the field of time series, and that similar modeling strategies as the one used here could be of help in a more broad context and in diverse application scenarios.
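A minimal sketch of the ranking idea, assuming SciPy and an already-computed set of dissimilarities for one motif length; the paper's three-parameter, length-dependent beta model is not reproduced, only the general strategy of scoring a motif by its tail probability under a fitted beta distribution.

```python
import numpy as np
from scipy import stats

def motif_tail_probability(dissims, motif_dissim):
    """Fit a beta distribution to rescaled pairwise dissimilarities observed
    at one motif length and return the left-tail probability of the motif's
    dissimilarity; smaller values indicate a more significant motif."""
    d = np.asarray(dissims, dtype=float)
    hi = d.max() * 1.01                      # keep samples strictly inside (0, 1)
    x = np.clip(d / hi, 1e-9, 1 - 1e-9)
    a, b, loc, scale = stats.beta.fit(x, floc=0.0, fscale=1.0)
    return stats.beta.cdf(motif_dissim / hi, a, b, loc=loc, scale=scale)

# Motifs of different lengths are then ranked by this tail probability
# rather than by their raw length-normalized dissimilarities.
```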

4.
A streaming time series is a continuous and unbounded group of chronological observations that are found in many scientific and business applications. Motifs that are frequent subsequences are highly representative for the time series and play an important role in time series mining. Discovering motifs in time series has received much attention during recent years, and several algorithms have been proposed to solve this problem. However, these algorithms can only find motifs with a predefined length, which greatly affects their performance and practicality. Recent algorithms can discover motifs with different lengths, but require multiple scanning of the time series and are thus not applicable to streaming time series. In addition, it is difficult to determine the optimal length of interesting motifs; a suboptimal choice results in missing the key motifs or having too many redundant motifs. To overcome this challenge, we introduce the notion of a closed motif; a motif is closed if there is no motif with a longer length having the same number of occurrences. We propose a novel algorithm closedMotif to discover closed motifs in a single scan for streaming time series. We also use the nearest neighbor classifier with the most distinctive closed motifs to validate their potential in time series classification. Extensive experiments show that our approach can efficiently discover motifs with different lengths. In addition, our closed-motif-based classifier is shown to be more accurate than Logical-Shapelet, a state-of-the-art time series classifier. Finally, we demonstrate the scalability of closedMotif on several large datasets in diverse domains like video surveillance, sensor networks, and biometrics.
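A small sketch of the closed-motif condition stated above, assuming motifs are available as (length, occurrence-count) records; the streaming closedMotif algorithm itself is not reproduced.

```python
def closed_motifs(motifs):
    """Keep only motifs for which no strictly longer motif has the same
    number of occurrences, i.e. the 'closed' condition stated above.
    `motifs` is a list of dicts such as {"length": 32, "count": 7}."""
    return [m for m in motifs
            if not any(o["length"] > m["length"] and o["count"] == m["count"]
                       for o in motifs)]

print(closed_motifs([{"length": 16, "count": 7},
                     {"length": 32, "count": 7},   # closes the length-16 motif
                     {"length": 32, "count": 3}]))
```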

5.
The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques cannot locate where the anomalies are within anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, it is difficult to detect anomalies due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may be different in different dimensions. Third, some anomalies may look normal in each individual dimension but anomalous when combinations of dimensions are considered. To mitigate these problems, we introduce a multivariate anomaly detection algorithm which detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing datasets from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.

6.
The goal of analyzing a time series database is to find whether, and how frequently, a periodic pattern is repeated within the series; periodic pattern mining is the problem concerned with this temporal regularity. However, most existing algorithms have a major limitation in mining patterns of user interest: they can only mine patterns of a specific length in which all events follow one another in exact positions. Yet in certain scenarios a pattern can be flexible, that is, it may still be interesting when any number of unimportant events between the important ones are ignored and the pattern length is allowed to vary. Moreover, existing algorithms can detect only specific types of periodicity in time series databases and require user interaction to determine the periodicity. In this paper, we propose an algorithm for periodic pattern mining in time series databases that does not rely on the user for the period value or period type and can detect all types of periodic patterns at the same time; these flexibilities are missing in existing algorithms. The proposed algorithm lets the user generate different kinds of patterns by skipping intermediate events in a time series database and then finds the periodicity of those patterns within the database. It improves on pattern generation using suffix trees, because suffix-tree-based algorithms are weak in this particular area. Compared with existing algorithms, the proposed algorithm generates a wider range of interesting patterns and detects whether each generated pattern is periodic. We have tested the algorithm on both synthetic and real-life data from different domains, found a large number of interesting event sequences missed by existing algorithms, and found the algorithm efficient at generating flexible patterns and detecting their periodicity on both types of data.
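As a rough illustration of what "flexible pattern" and "periodicity detection" mean here (not the proposed algorithm), the following sketch scores how consistently a pattern with single-event wildcards repeats at a candidate period; the names and the scoring rule are assumptions.

```python
import re

def periodicity_confidence(sequence, pattern, period):
    """Confidence that `pattern` ('*' matches any single event) repeats with
    the given period: the fraction of period-aligned start positions at
    which the pattern actually occurs in the event sequence."""
    pat = "".join("." if c == "*" else re.escape(c) for c in pattern)
    hits = {m.start() for m in re.finditer(f"(?=({pat}))", sequence)}
    starts = range(0, len(sequence) - len(pattern) + 1, period)
    return sum(1 for s in starts if s in hits) / len(starts) if starts else 0.0

print(periodicity_confidence("abcabdabcabcabf", "ab*", period=3))   # -> 1.0
```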

7.
This paper presents a new parallel transmission framework for reliable multimedia data transmission over spectrally shaped channels using multicarrier modulation. We propose to transmit source data layers of different perceptual importance in parallel, each occupying a number of subchannels. New loading algorithms are developed to efficiently allocate the available resources, e.g., transmitted power and bit rate, to the subchannels according to the source layers they transmit. Instead of making the bit error rate of all the subchannels equal as in most existing loading algorithms, the proposed algorithm assigns different error performance to the subchannels to achieve unequal error protection for different layers. The channel-induced distortion in the mean-square sense is minimized. We show that the proposed system can be applied nicely to both fixed-length coding and variable-length coding. Asymptotic gains with respect to channel distortion are also derived. Numerical examples show that the proposed algorithm achieves significant performance improvement compared to the existing work, especially for the spectrally shaped channels commonly used in ADSL systems.
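For context on what a loading algorithm does, the Python sketch below implements a standard greedy (Hughes-Hartogs style) bit-loading baseline, not the proposed unequal-error-protection algorithm; the SNR-gap constant and subchannel gains are illustrative assumptions.

```python
def greedy_bit_loading(channel_gains, total_bits, gamma=9.8, max_bits=10):
    """Hughes-Hartogs style greedy loading: repeatedly give one more bit to
    the subchannel whose incremental power cost is smallest. The cost of
    going from b to b+1 bits on a subchannel with gain g is (gamma/g)*2**b,
    where gamma is an illustrative SNR-gap constant."""
    n = len(channel_gains)
    bits, power = [0] * n, [0.0] * n
    for _ in range(total_bits):
        costs = [(gamma / g) * (2 ** bits[i]) if bits[i] < max_bits else float("inf")
                 for i, g in enumerate(channel_gains)]
        k = min(range(n), key=costs.__getitem__)
        bits[k] += 1
        power[k] += costs[k]
    return bits, power

print(greedy_bit_loading([1.0, 0.6, 0.2, 0.05], total_bits=12))
```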

8.
We propose a method for the data-driven inference of temporal evolutions of physical functions with deep learning. More specifically, we target fluid flow problems, and we propose a novel LSTM-based approach to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. We demonstrate for the first time that dense 3D+time functions of a physics system can be predicted within the latent spaces of neural networks, and we arrive at a neural-network based simulation algorithm with significant practical speed-ups. We highlight the capabilities of our method with a series of complex liquid simulations, and with a set of single-phase buoyancy simulations. With a set of trained networks, our method is more than two orders of magnitude faster than a traditional pressure solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
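A minimal sketch of the latent-space prediction idea, assuming PyTorch and toy layer sizes: each pressure field is encoded to a small latent vector, an LSTM predicts the latent trajectory, and a decoder maps back to a field. It is not the paper's architecture.

```python
import torch
import torch.nn as nn

class LatentSpaceSim(nn.Module):
    """Encode each pressure field to a small latent vector, run an LSTM over
    the latent sequence, decode the prediction back to a field."""
    def __init__(self, field_dim=16 ** 3, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(field_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, field_dim))

    def forward(self, fields):               # fields: (batch, time, field_dim)
        z = self.enc(fields)                 # per-frame latent codes
        out, _ = self.lstm(z)                # latent-space time evolution
        return self.dec(out[:, -1])          # predicted next field (flattened)

model = LatentSpaceSim()
print(model(torch.randn(2, 6, 16 ** 3)).shape)   # torch.Size([2, 4096])
```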

9.
Li Mei, Gao Qing, Ma Sen, Zhang Shikun, Hu Wenhui, Zhang Xingming. Journal of Software, 2021, 32(7): 2242-2259
Code similarity detection is one of the fundamental tasks in software engineering, and it plays an important role in plagiarism detection, license violation detection, software reuse analysis, and vulnerability discovery. With the spread of open-source software and the rapid growth of open-source code, open-source code is used ever more frequently across domains, posing new challenges to traditional code similarity detection methods. Existing lexical, syntactic, and semantic detection methods suffer from complex algorithms, dependence on parsing tools, high resource consumption, poor portability, and large numbers of candidate comparison pairs, which limits them on large-scale code bases. Simhash-fingerprint-based code similarity detection reduces a code file to a single fingerprint, enables fast retrieval of similar files on large datasets, and controls the similarity range of the matched results through a Hamming distance threshold. Experiments on the existing line-granularity simhash algorithm show that it suffers from a line-coverage problem on large datasets, i.e., high-frequency line features overwhelm low-frequency line features, which lowers the precision of the results. Inspired by the idea of TF-IDF, this paper proposes a per-language line filtering optimization: line filters for each programming language are applied to the line sequence of a code file, eliminating the influence of lines that occur frequently but carry little semantic information. A series of comparative experiments between the original and improved methods shows that the improved method retrieves similar file pairs with high precision for Hamming distance thresholds from 0 to 8; at a threshold of 8, its precision on two datasets is 98.6% and 52.2% higher, respectively, than that of the original method. Experiments on a large-scale code base built for this paper, containing 1.3 million open-source projects and 386,486,112 project files, verify that the method quickly finds similar files for a file under test, with an average detection time of 0.43 s per file and a detection precision above 97%.
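A minimal sketch of the line-granularity simhash scheme described above, assuming Python's hashlib: each retained line votes on 64 fingerprint bits, and two files are compared by the Hamming distance of their fingerprints. The `keep` predicate stands in for the per-language line filter.

```python
import hashlib

def line_simhash(lines, bits=64, keep=lambda ln: True):
    """64-bit simhash of a code file at line granularity. `keep` stands in
    for the per-language line filter that drops high-frequency lines that
    carry little semantic information (braces, imports, boilerplate)."""
    votes = [0] * bits
    for ln in lines:
        ln = ln.strip()
        if not ln or not keep(ln):
            continue
        h = int.from_bytes(hashlib.md5(ln.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Two files are reported as a similar pair when the Hamming distance between
# their fingerprints is at most a threshold (0-8 in the experiments above).
```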

10.
Time series motifs are sets of very similar subsequences of a long time series. They are of interest in their own right, and are also used as inputs in several higher-level data mining algorithms including classification, clustering, rule-discovery and summarization. In spite of extensive research in recent years, finding time series motifs exactly in massive databases is an open problem. Previous efforts either found approximate motifs or considered relatively small datasets residing in main memory. In this work, we leverage off previous work on pivot-based indexing to introduce a disk-aware algorithm to find time series motifs exactly in multi-gigabyte databases which contain on the order of tens of millions of time series. We have evaluated our algorithm on datasets from diverse areas including medicine, anthropology, computer networking and image processing and show that we can find interesting and meaningful motifs in datasets that are many orders of magnitude larger than anything considered before.

11.
This paper describes Seeker, a platform for large-scale text analytics, and SemTag, an application written on the platform to perform automated semantic tagging of large corpora. We apply SemTag to a collection of approximately 264 million web pages, and generate approximately 434 million automatically disambiguated semantic tags, published to the web as a label bureau providing metadata regarding the 434 million annotations. To our knowledge, this is the largest scale semantic tagging effort to date. We describe the Seeker platform, discuss the architecture of the SemTag application, describe a new disambiguation algorithm specialized to support ontological disambiguation of large-scale data, evaluate the algorithm, and present our final results with information about acquiring and making use of the semantic tags. We argue that automated large-scale semantic tagging of ambiguous content can bootstrap and accelerate the creation of the semantic web.

12.
To address the high computational complexity of time series motif discovery algorithms and their inability to find motifs with multiple instances, a time series motif discovery algorithm based on subsequence all-pairs join and maximum cliques (TSSJMC) is proposed. First, a fast subsequence all-pairs join algorithm computes the distances between all subsequences to produce a distance matrix. Then, a similarity threshold converts the distance matrix into an adjacency matrix, from which a subsequence similarity graph is constructed. Finally, a maximum clique search algorithm finds the maximum clique in the similarity graph; the subsequences corresponding to the vertices of the maximum clique form the motif with the most instances. Experiments were conducted on public time series datasets against the existing Brute Force and Random Projection algorithms, which can also discover multi-instance motifs, and TSSJMC was analyzed objectively in terms of accuracy, efficiency, scalability, and robustness. The results show that, compared with Random Projection, TSSJMC has clear advantages in efficiency, scalability, and robustness; compared with Brute Force, TSSJMC finds slightly fewer motif instances but is more efficient and more scalable. TSSJMC therefore balances quality and efficiency.
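A small sketch of the TSSJMC pipeline as described, assuming NumPy and NetworkX: pairwise subsequence distances, a similarity threshold that yields an adjacency structure, and a clique search whose largest clique gives the motif with the most instances. The fast all-pairs join is replaced by brute force here.

```python
import numpy as np
import networkx as nx

def tssjmc_like(ts, m, threshold):
    """Distance matrix over all length-m subsequences -> similarity graph
    -> largest clique = motif with the most instances. Brute-force distances
    stand in for the fast all-pairs join; clique enumeration is exponential
    in the worst case, so this only illustrates the pipeline."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)], dtype=float)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + m, n):            # ignore overlapping subsequences
            if np.linalg.norm(subs[i] - subs[j]) <= threshold:
                g.add_edge(i, j)
    return sorted(max(nx.find_cliques(g), key=len))   # starts of motif instances
```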

13.
14.
15.
Time series prediction for higher future horizons is of great importance and has increasingly aroused interest among both scholars and practitioners. Compared to one-step-ahead prediction, multi-step-ahead prediction encounters a higher degree of uncertainty arising from various facets, including accumulation of errors and lack of information. Many existing studies draw attention to the former issue, while relatively overlooking the latter. Inspired by this observation, a new multi-task learning algorithm, called the MultiTL-KELM algorithm for short, is proposed for multi-step-ahead time series prediction in this work, where long-ago data is utilized to provide more information for the current prediction task. The time-varying quality of time-series data usually gives rise to a wide variability between data over a long time span, making it difficult to ensure the assumption of identical distribution. How to make the most of, rather than discard, the abundant old data and transfer more useful knowledge to the current prediction is one of the main concerns of the proposed MultiTL-KELM algorithm. Besides, unlike typical iterated or direct strategies, MultiTL-KELM regards predictions of different horizons as different tasks. Knowledge from one task can benefit others, enabling it to explore the relatedness among horizons. Based upon its design scheme, MultiTL-KELM alleviates the error accumulation problem of iterated strategies and the high computational cost of direct strategies. The proposed MultiTL-KELM algorithm has been compared with several other state-of-the-art algorithms, and its effectiveness has been numerically confirmed by experiments conducted on four synthetic and two real-world benchmark time series datasets.

16.
Community structure has become one of the central studies of the topological structure of complex networks in the past decades. Although many advanced approaches have been proposed to identify community structure, those state-of-the-art methods still lack efficiency in terms of a balance between stability, accuracy and computation time. Here, we propose an algorithm with different stages, called TJA-net, to efficiently identify communities in a large network with a good balance between accuracy, stability and computation time. First, we propose an initial labeling algorithm, called ILPA, combining K-nearest neighbor (KNN) and label propagation algorithm (LPA). To produce a number of sub-communities automatically, ILPA iteratively labels a node in a network using the labels of its adjacent nodes and their index of closeness. Next, we merge sub-communities using the mutual membership of two communities. Finally, a refinement strategy is designed for modifying the label of the wrongly clustered nodes at boundaries. In our approach, we propose and use modularity density as the objective function rather than the commonly used modularity. This can deal with the issue of the resolution limit for different network structures, enhancing the result precision. We present a series of experiments with artificial and real data sets and compare the results obtained by our proposed algorithm with the ones obtained by the state-of-the-art algorithms, which shows the effectiveness of our proposed approach. The experimental results on large-scale artificial networks and real networks illustrate the superiority of our algorithm.
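A plain label-propagation step, the building block that ILPA extends with K-nearest-neighbor closeness weighting; this sketch assumes an adjacency-dict graph representation and is not TJA-net itself.

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=20, seed=0):
    """Plain LPA on an adjacency dict {node: set(neighbors)}: each node
    repeatedly adopts the most frequent label among its neighbors until no
    label changes. ILPA additionally weights neighbors by a closeness index."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            best, _ = Counter(labels[u] for u in adj[v]).most_common(1)[0]
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels
```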

17.
Catalogs of periodic variable stars contain large numbers of periodic light-curves (photometric time series data from the astrophysics domain). Separating anomalous objects from well-known classes is an important step towards the discovery of new classes of astronomical objects. Most anomaly detection methods for time series data assume either a single continuous time series or a set of time series whose periods are aligned. Light-curve data precludes the use of these methods as the periods of any given pair of light-curves may be out of sync. One may use an existing anomaly detection method if, prior to similarity calculation, one performs the costly act of aligning two light-curves, an operation that scales poorly to massive data sets. This paper presents PCAD, an unsupervised anomaly detection method for large sets of unsynchronized periodic time-series data, that outputs a ranked list of both global and local anomalies. It calculates its anomaly score for each light-curve in relation to a set of centroids produced by a modified k-means clustering algorithm. Our method is able to scale to large data sets through the use of sampling. We validate our method on both light-curve data and other time series data sets. We demonstrate its effectiveness at finding known anomalies, and discuss the effect of sample size and number of centroids on our results. We compare our method to naive solutions and existing time series anomaly detection methods for unphased data, and show that PCAD’s reported anomalies are comparable to or better than all other methods. Finally, astrophysicists on our team have verified that PCAD finds true anomalies that might be indicative of novel astrophysical phenomena.
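A sketch of the centroid-based scoring idea, assuming scikit-learn and light-curves that are already phased and resampled to equal length; PCAD's modified k-means for unsynchronized periods is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def centroid_anomaly_scores(curves, k=5, seed=0):
    """Score each (already phased, equal-length) light-curve by its distance
    to the nearest of k centroids; the largest scores are the anomalies."""
    X = np.asarray(curves, dtype=float)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return np.argsort(-dists), dists         # indices ranked most anomalous first
```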

18.
The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques which consider genomes to be rigid inflexible arrays and where some or all of the crossover points are randomly selected, the SVLC algorithm considers genomes to be flexible and chooses nonrandom crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic subsequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring with only the genetic differences being exchanged or removed, independent of the length of such differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high-fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
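A hedged approximation of the behavior described above (not the SVLC synapsing procedure itself): anchor the child on a longest common subsequence of the parents and exchange only the differing segments between anchors.

```python
import random

def lcs_matches(a, b):
    """Index pairs of one longest common subsequence of a and b (standard DP)."""
    t = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            t[i][j] = t[i-1][j-1] + 1 if a[i-1] == b[j-1] else max(t[i-1][j], t[i][j-1])
    i, j, pairs = len(a), len(b), []
    while i and j:
        if a[i-1] == b[j-1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif t[i-1][j] >= t[i][j-1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

def lcs_anchored_crossover(a, b, rng=random):
    """Child keeps the symbols common to both parents; in each gap between
    them it inherits the differing segment from one parent chosen at random,
    so only the differences are exchanged or dropped."""
    child, pa, pb = [], 0, 0
    for i, j in lcs_matches(a, b):
        child += list(a[pa:i] if rng.random() < 0.5 else b[pb:j])
        child.append(a[i])
        pa, pb = i + 1, j + 1
    child += list(a[pa:] if rng.random() < 0.5 else b[pb:])
    return child

print("".join(lcs_anchored_crossover("ACGGTTAC", "AGGTCAC")))
```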

19.
Grammatical evolution
We present grammatical evolution, an evolutionary algorithm that can evolve complete programs in an arbitrary language using a variable-length binary string. The binary genome determines which production rules in a Backus-Naur form grammar definition are used in a genotype-to-phenotype mapping process to a program. We demonstrate how expressions and programs of arbitrary complexity may be evolved and compare its performance to genetic programming.
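A minimal sketch of the genotype-to-phenotype mapping, assuming a toy BNF grammar: 8-bit codons are read from the binary genome, and each selects a production rule modulo the number of choices for the leftmost non-terminal, wrapping around the genome when the codons run out.

```python
GRAMMAR = {                              # toy BNF grammar, assumed for illustration
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def ge_map(bits, start="<expr>", max_steps=100):
    """Grammatical-evolution style mapping: each 8-bit codon selects the
    (codon mod #choices)-th production for the leftmost non-terminal,
    wrapping around the genome when the codons run out."""
    codons = [int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8)]
    seq, step = [start], 0
    while step < max_steps and any(s in GRAMMAR for s in seq):
        i = next(k for k, s in enumerate(seq) if s in GRAMMAR)
        rules = GRAMMAR[seq[i]]
        seq[i:i + 1] = rules[codons[step % len(codons)] % len(rules)]
        step += 1
    return " ".join(seq)

print(ge_map("00000011" "00000001" "00000010" "00000010"))   # -> "x + 1"
```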

20.
The forecasting process of real-world time series has to deal with unexpected values, commonly known as outliers. Outliers in time series can lead to unreliable modeling and poor forecasts. Therefore, the identification of future outlier occurrence is an essential task in time series analysis to reduce the average forecasting error. The main goal of this work is to predict the occurrence of outliers in time series, based on the discovery of motifs. In this sense, motifs will be those pattern sequences preceding certain data marked as anomalous by the proposed metaheuristic in a training set. Once the motifs are discovered, if data to be predicted are preceded by any of them, such data are identified as outliers, and treated separately from the rest of regular data. The forecasting of outlier occurrence has been added as an additional step in an existing time series forecasting algorithm (PSF), which was based on pattern sequence similarities. Robust statistical methods have been used to evaluate the accuracy of the proposed approach regarding the forecasting of both occurrence of outliers and their corresponding values. Finally, the methodology has been tested on six electricity-related time series, in which most of the outliers were properly found and forecasted.
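A small sketch of the prediction rule described above, assuming Euclidean matching and an illustrative tolerance: windows preceding known training outliers are stored as motifs, and a future value is flagged when the current window matches any of them.

```python
import numpy as np

def precursor_motifs(train, outlier_idx, w):
    """Collect the length-w windows that immediately precede each training
    point labeled as an outlier; these play the role of the motifs."""
    return [np.asarray(train[i - w:i], dtype=float) for i in outlier_idx if i >= w]

def outlier_expected(window, motifs, tol):
    """True if the current window is within `tol` (Euclidean) of any stored
    precursor motif, i.e. the next value should be handled by the separate
    outlier forecasting step rather than the regular PSF forecast."""
    w = np.asarray(window, dtype=float)
    return any(np.linalg.norm(w - m) <= tol for m in motifs)
```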
