Similar Documents
20 similar documents found.
1.
Principal component analysis (PCA) is often applied to dimensionality reduction in time series data mining. However, PCA is based on the synchronous covariance, which is not very effective when the series are not synchronized in time. In this paper, an asynchronism-based principal component analysis (APCA) is proposed to reduce the dimensionality of univariate time series. In APCA, an asynchronous method based on dynamic time warping (DTW) is developed to obtain interpolated time series derived from the original ones. The correlation coefficient or covariance between the interpolated time series represents the correlation between the original ones. In this way, a principal component analysis based on the asynchronous covariance is achieved for dimensionality reduction. The results of several experiments demonstrate that the proposed APCA outperforms PCA for dimensionality reduction in time series data mining.
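A minimal sketch of the asynchronous-covariance idea, not the authors' implementation: the function names and the choice of plain squared-error DTW are assumptions. Two series are aligned with DTW, and the covariance of the aligned (interpolated) copies stands in for the synchronous covariance before a standard PCA is applied.

```python
# Sketch only: DTW-align two univariate series, then compute the covariance
# of the aligned copies ("asynchronous covariance").
import numpy as np

def dtw_path(x, y):
    """Classic O(n*m) DTW; returns the optimal warping path as index pairs."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack to recover the path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def asynchronous_covariance(x, y):
    """Covariance of the DTW-aligned (interpolated) copies of x and y."""
    path = dtw_path(x, y)
    xi = np.array([x[i] for i, _ in path])
    yi = np.array([y[j] for _, j in path])
    return np.cov(xi, yi)[0, 1]
```

Pairwise asynchronous covariances assembled into a matrix can then be fed to an ordinary eigendecomposition, exactly as in standard PCA.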

2.
Catalogs of periodic variable stars contain large numbers of periodic light-curves (photometric time series data from the astrophysics domain). Separating anomalous objects from well-known classes is an important step towards the discovery of new classes of astronomical objects. Most anomaly detection methods for time series data assume either a single continuous time series or a set of time series whose periods are aligned. Light-curve data precludes the use of these methods as the periods of any given pair of light-curves may be out of sync. One may use an existing anomaly detection method if, prior to similarity calculation, one performs the costly act of aligning two light-curves, an operation that scales poorly to massive data sets. This paper presents PCAD, an unsupervised anomaly detection method for large sets of unsynchronized periodic time-series data, that outputs a ranked list of both global and local anomalies. It calculates its anomaly score for each light-curve in relation to a set of centroids produced by a modified k-means clustering algorithm. Our method is able to scale to large data sets through the use of sampling. We validate our method on both light-curve data and other time series data sets. We demonstrate its effectiveness at finding known anomalies, and discuss the effect of sample size and number of centroids on our results. We compare our method to naive solutions and existing time series anomaly detection methods for unphased data, and show that PCAD’s reported anomalies are comparable to or better than all other methods. Finally, astrophysicists on our team have verified that PCAD finds true anomalies that might be indicative of novel astrophysical phenomena.
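A minimal sketch of the centroid-based scoring step, under stated assumptions: PCAD uses a phase-adjusted similarity inside a modified k-means, whereas here plain k-means and Euclidean distance stand in, purely to illustrate how a ranked anomaly list can be produced from distances to centroids fitted on a sample.

```python
# Sketch only: score each (already aligned) light-curve by its distance to the
# nearest cluster centroid; high scores are reported as global anomalies.
import numpy as np
from sklearn.cluster import KMeans

def anomaly_scores(curves, n_centroids=5, sample_size=None, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(curves, dtype=float)
    # Fit centroids on a sample to keep the clustering step scalable.
    fit_set = X if sample_size is None else X[rng.choice(len(X), sample_size, replace=False)]
    km = KMeans(n_clusters=n_centroids, n_init=10, random_state=seed).fit(fit_set)
    # Distance of every curve to its nearest centroid.
    d = np.min(np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2), axis=1)
    return d  # sort descending to obtain the ranked anomaly list
```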

3.
A periodic time series analysis is explored in the context of unobserved components time series models that include stochastic time functions for trend, seasonal and irregular effects. Periodic time series models allow dynamic characteristics (autocovariances) to depend on the period of the year, month, week or day. In the standard multivariate approach one can interpret a periodic time series analysis as a simultaneous treatment of typically yearly time series where each series is related to a particular season. Here, the periodic analysis applies to a vector of monthly time series related to each day of the month. Particular focus is on the forecasting performance and therefore on the underlying periodic forecast function, defined by the in-sample observation weights for producing (multi-step) forecasts. These weight patterns facilitate the interpretation of periodic model extensions. A statistical state space approach is used to estimate the model and allows for irregularly spaced observations in daily time series. Recent algorithms are adopted for the computation of observation weights for forecasting based on state space models with regressor variables. The methodology is illustrated for daily Dutch tax revenues that appear to have periodic dynamic properties. The dimension of our periodic unobserved components model is relatively large as we allow each element (day) of the vector of monthly time series to have a changing seasonal pattern. Nevertheless, even with only five years of data we find that the increased periodic flexibility can help in out-of-sample forecasting for two extra years of data.
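A minimal sketch of the "standard multivariate" reading described above, under stated assumptions: the data layout (a daily series split into one monthly series per day of the month) is hypothetical, and fitting one independent unobserved-components model per day with statsmodels only approximates the paper's joint periodic model and its observation-weight analysis.

```python
# Sketch only: one unobserved-components model per day-of-month series.
# Missing months can be passed as NaN; the state space filter handles them.
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

def periodic_forecasts(daily: pd.Series, steps=24):
    """daily: values indexed by a DatetimeIndex. Returns a multi-step monthly
    forecast for each day of the month."""
    forecasts = {}
    for day, grp in daily.groupby(daily.index.day):
        y = grp.to_numpy(dtype=float)  # monthly series belonging to this day
        model = UnobservedComponents(y, level="local linear trend", seasonal=12)
        res = model.fit(disp=False)
        forecasts[day] = res.forecast(steps=steps)
    return forecasts
```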

4.
In this work, gene expression time series models are constructed using principal component analysis (PCA) and a neural network (NN). The main contribution of this paper is a methodology for modeling numerical gene expression time series. The PCA-NN prediction models are compared with other popular continuous prediction methods. The proposed model yields the features extracted from the gene expression time series and the ordering of the prediction accuracies. The model can therefore help practitioners gain a better understanding of the cell cycle and find dependencies among genes, which is useful for drug discovery. On two public real datasets, the PCA-NN method outperforms the other continuous prediction methods. In the time series model, Akaike's information criterion (AIC) and cross-validation are used to select a suitable NN model and avoid overparameterization.
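A minimal sketch of a PCA-NN style predictor, under stated assumptions: the data layout (genes as columns, time points as rows), the network size, and the one-step-ahead formulation are illustrative choices; the paper's architecture and AIC-based model selection are not reproduced.

```python
# Sketch only: PCA extracts low-dimensional features from the expression
# matrix, and a small MLP learns one-step-ahead dynamics of the PC scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def fit_pca_nn(expr, n_components=3, hidden=(8,)):
    """expr: array of shape (time_points, genes)."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(expr)          # (time, components)
    X, y = scores[:-1], scores[1:]            # one-step-ahead training pairs
    nn = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000,
                      random_state=0).fit(X, y)
    return pca, nn
```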

5.
The traffic density situation in a traffic network, especially traffic congestion, exhibits characteristics similar to thermodynamic heat conduction; for example, congestion in one section can be conducted sequentially to adjacent sections of the network. Analyzing this conduction facilitates forecasting of the future traffic situation, so a navigation system can reduce traffic congestion and improve transportation mobility. This study describes a methodology for traffic conduction analysis modeling based on extracting important time-related conduction rules with a type of evolutionary algorithm named Genetic Network Programming (GNP). The extracted rules construct a useful model for forecasting future traffic situations and analyzing traffic conduction. The proposed methodology was implemented and experimentally evaluated using a large-scale real-time traffic simulator, SOUND/4U.

6.
To overcome the overly numerous and overly strict constraints of existing algorithms for mining periodic patterns in time series data, this paper proposes a weakly constrained mining algorithm for periodic patterns that does not require the target pattern or the period length to be specified in advance and that borrows ideas from ant colony optimization. The algorithm reduces the number of candidate periods that must be verified and thereby improves efficiency.

7.
Reducing the false-negative and false-positive rates is one of the difficult problems in network traffic anomaly detection. This paper proposes a multi-time-series data mining method for analyzing the characteristics of traffic anomalies in large-scale communication networks. The time series formed by multiple traffic feature parameters are analyzed as a whole, and multi-time-series data mining produces effective association rules related to traffic anomalies, accurately describing the security threats to the entire communication network. The method is validated on Abilene network data.

8.
In this paper, we define time series query filtering, the problem of monitoring the streaming time series for a set of predefined patterns. This problem is of great practical importance given the massive volume of streaming time series available through sensors, medical patient records, financial indices and space telemetry. Since the data may arrive at a high rate and the number of predefined patterns can be relatively large, it may be impossible for the comparison algorithm to keep up. We propose a novel technique that exploits the commonality among the predefined patterns to allow monitoring at higher bandwidths, while maintaining a guarantee of no false dismissals. Our approach is based on the widely used envelope-based lower-bounding technique. As we will demonstrate on extensive experiments in diverse domains, our approach achieves tremendous improvements in performance in the offline case, and significant improvements in the fastest possible arrival rate of the data stream that can be processed with guaranteed no false dismissals. As a further demonstration of the utility of our approach, we demonstrate that it can make semisupervised learning of time series classifiers tractable.
Li Wei is a Ph.D. candidate in the Department of Computer Science & Engineering at the University of California, Riverside. She received her B.S. and M.S. degrees from Fudan University, China. Her research interests include data mining and information retrieval.
Eamonn Keogh is an Assistant Professor of computer science at the University of California, Riverside. His research interests include data mining, machine learning and information retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for “Efficient Discovery of Previously Unknown Patterns and Relationships in Massive Time Series Databases”.
Helga Van Herle is an Assistant Clinical Professor of medicine at the Division of Cardiology of the Geffen School of Medicine at UCLA. She received her M.D. from UCLA in 1993; completed her residency in internal medicine at the New York Hospital (Cornell University; 1993–1996) and her cardiology fellowship at UCLA (1997–2001). Dr. Van Herle holds an M.Sc. in bioengineering from Columbia University (1987) and a B.Sc. in chemical engineering from UCLA (1985).
Agenor Mafra-Neto, Ph.D., is the CEO of ISCA Technologies, Inc., in California and the founder of ISCA Technologies, LTDA, in Brazil. His research interests include the analysis of insect behavior and communication systems, the manipulation of insect behavior, and the automation of pest monitoring and pest control. Dr. Mafra-Neto is currently coordinating the deployment of area-wide smart sensor and effector networks to micromanage agricultural and public health pests in the field in an automatic fashion.
Russell J. Abbott is a Professor of computer science at California State University, Los Angeles, and a member of the staff at the Aerospace Corporation, El Segundo, CA. His primary interests are in the field of complex systems. He is currently organizing a workshop to bring together people working in the fields of complex systems and systems engineering.
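A minimal sketch of the envelope-based lower-bounding technique mentioned in the abstract above (commonly known as LB_Keogh), under stated assumptions: it is illustrative only and does not show the paper's contribution of exploiting commonality across many predefined patterns.

```python
# Sketch only: build warping envelopes around a query, then prune candidates
# whose lower bound already exceeds the best-so-far distance. Because the
# bound never overestimates the true DTW distance, no false dismissals occur.
import numpy as np

def envelope(q, r):
    """Upper/lower envelopes of query q under a warping window of radius r."""
    n = len(q)
    U = np.array([max(q[max(0, i - r):min(n, i + r + 1)]) for i in range(n)])
    L = np.array([min(q[max(0, i - r):min(n, i + r + 1)]) for i in range(n)])
    return U, L

def lb_keogh(c, U, L):
    """Lower bound between candidate c and the enveloped query."""
    c = np.asarray(c, dtype=float)
    above = np.clip(c - U, 0, None)   # parts of c above the upper envelope
    below = np.clip(L - c, 0, None)   # parts of c below the lower envelope
    return np.sqrt(np.sum(above ** 2 + below ** 2))
```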

9.
Time series clustering has been shown to be effective in providing useful information in various domains, and there is increasing interest in it as part of temporal data mining research. To provide an overview, this paper surveys and summarizes previous work on clustering time series data in various application domains. The basics of time series clustering are presented, including the general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of clustering results, and the measures used to determine the similarity/dissimilarity between two time series, whether compared as raw data, extracted features, or model parameters. Past research is organized into three groups depending on whether it works directly with the raw data (in the time or frequency domain), indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitations of previous research are discussed and several possible topics for future research are identified. Moreover, the areas to which time series clustering has been applied are summarized, including the sources of data used. It is hoped that this review will serve as a stepping stone for those interested in advancing this area of research.

10.
In this work we introduce the new problem of finding time series discords. Time series discords are subsequences of longer time series that are maximally different from all the rest of the time series subsequences. They thus capture the sense of the most unusual subsequence within a time series. While discords have many uses for data mining, they are particularly attractive as anomaly detectors because they require only one intuitive parameter (the length of the subsequence), unlike most anomaly detection algorithms, which typically require many parameters. While the brute force algorithm to discover time series discords is quadratic in the length of the time series, we show a simple algorithm that is three to four orders of magnitude faster than brute force while guaranteed to produce identical results. We evaluate our work with a comprehensive set of experiments on diverse data sources including electrocardiograms, space telemetry, respiration physiology, anthropological and video datasets.
Eamonn Keogh is an Assistant Professor of computer science at the University of California, Riverside. His research interests include data mining, machine learning and information retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for “Efficient discovery of previously unknown patterns and relationships in massive time series databases.”
Jessica Lin is an Assistant Professor of information and software engineering at George Mason University. She received her Ph.D. from the University of California, Riverside. Her research interests include data mining and information retrieval.
Sang-Hee Lee is a paleoanthropologist at the University of California, Riverside. Her research interests include the evolution of human morphological variation and how different mechanisms (such as taxonomy, sex, age, and time) explain what is observed in fossil data. Dr. Lee obtained her Ph.D. in anthropology from the University of Michigan in 1999.
Helga Van Herle is an Assistant Clinical Professor of medicine at the Division of Cardiology of the Geffen School of Medicine at UCLA. She received her M.D. from UCLA in 1993; completed her residency in internal medicine at the New York Hospital (Cornell University, 1993–1996) and her cardiology fellowship at UCLA (1997–2001). Dr. Van Herle holds an M.Sc. in bioengineering from Columbia University (1987) and a B.Sc. in chemical engineering from UCLA (1985).
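A minimal sketch of the brute-force discord definition described above, under stated assumptions: it is illustrative only, and the paper's contribution is precisely the ordering heuristics and early abandoning that make this quadratic search three to four orders of magnitude faster while returning the identical answer.

```python
# Sketch only: the discord is the length-m subsequence whose distance to its
# nearest NON-overlapping subsequence is largest.
import numpy as np

def discord(ts, m):
    """Return (position, score) of the top discord of length m."""
    ts = np.asarray(ts, dtype=float)
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    best_pos, best_score = -1, -np.inf
    for i in range(n):
        nn = np.inf
        for j in range(n):
            if abs(i - j) < m:          # skip trivial (overlapping) matches
                continue
            nn = min(nn, np.linalg.norm(subs[i] - subs[j]))
        if nn > best_score:
            best_pos, best_score = i, nn
    return best_pos, best_score
```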

11.
iSAX: disk-aware mining and indexing of massive time series datasets
Current research in indexing and mining time series data has produced many interesting algorithms and representations. However, the algorithms and the sizes of data considered have generally not been representative of the increasingly massive datasets encountered in science, engineering, and business domains. In this work, we introduce a novel multi-resolution symbolic representation which can be used to index datasets that are several orders of magnitude larger than anything else considered in the literature. To demonstrate the utility of this representation, we constructed a simple tree-based index structure which facilitates fast exact search and orders-of-magnitude-faster approximate search. For example, with a database of one hundred million time series, the approximate search can retrieve high-quality nearest neighbors in slightly over a second, whereas a sequential scan would take tens of minutes. Our experimental evaluation demonstrates that our representation allows index performance to scale well with increasing dataset sizes. Additionally, we provide analysis concerning parameter sensitivity, approximate search effectiveness, and lower-bound comparisons between time series representations in a bit-constrained environment. We further show how to exploit the combination of exact and approximate search as subroutines in data mining algorithms, allowing for the exact mining of truly massive real-world datasets containing tens of millions of time series.
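A minimal sketch of the multi-resolution symbol idea, under stated assumptions: the helper names are hypothetical and the full iSAX word construction and indexing are not shown. Each PAA segment maps to a binary symbol under equiprobable Gaussian breakpoints; because the breakpoints are nested, dropping low-order bits lowers the cardinality, which is what lets a coarse word act as a prefix of a finer one.

```python
# Sketch only: one segment's symbol at a chosen cardinality, and its
# "promotion" to a coarser resolution by dropping low-order bits.
import numpy as np
from scipy.stats import norm

def isax_symbol(paa_value, bits):
    """Index of the breakpoint interval containing paa_value at 2**bits cardinality."""
    card = 2 ** bits
    breakpoints = norm.ppf(np.arange(1, card) / card)   # card-1 nested breakpoints
    return int(np.searchsorted(breakpoints, paa_value))

def promote(symbol, from_bits, to_bits):
    """Reduce a symbol's cardinality by dropping its low-order bits."""
    return symbol >> (from_bits - to_bits)
```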

12.
Experiencing SAX: a novel symbolic representation of time series
Many high-level representations of time series have been proposed for data mining, including Fourier transforms, wavelets, eigenwaves, piecewise polynomial models, etc. Many researchers have also considered symbolic representations of time series, noting that such representations would potentially allow researchers to draw on the wealth of data structures and algorithms from the text processing and bioinformatics communities. While many symbolic representations of time series have been introduced over the past decades, they all suffer from two fatal flaws. First, the dimensionality of the symbolic representation is the same as that of the original data, and virtually all data mining algorithms scale poorly with dimensionality. Second, although distance measures can be defined on the symbolic approaches, these distance measures have little correlation with distance measures defined on the original time series. In this work we formulate a new symbolic representation of time series. Our representation is unique in that it allows dimensionality/numerosity reduction, and it also allows distance measures to be defined on the symbolic approach that lower bound corresponding distance measures defined on the original series. As we shall demonstrate, this latter feature is particularly exciting because it allows one to run certain data mining algorithms on the efficiently manipulated symbolic representation while producing identical results to the algorithms that operate on the original data. In particular, we demonstrate the utility of our representation on the data mining tasks of clustering, classification, query by content, anomaly detection, motif discovery, and visualization.
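A minimal sketch of the SAX transform described above, under stated assumptions: the word length and alphabet size are arbitrary illustrative choices, and the series length is assumed divisible by the word length. A z-normalized series is reduced with piecewise aggregate approximation (PAA), and each segment mean is mapped to a letter via equiprobable Gaussian breakpoints.

```python
# Sketch only: z-normalize, PAA, then discretize with Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def sax(ts, word_length=8, alphabet_size=4):
    ts = np.asarray(ts, dtype=float)
    z = (ts - ts.mean()) / ts.std()                      # z-normalize
    paa = z.reshape(word_length, -1).mean(axis=1)        # assumes len(ts) % word_length == 0
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    symbols = np.searchsorted(breakpoints, paa)          # integers 0..alphabet_size-1
    return "".join(chr(ord("a") + s) for s in symbols)
```

The accompanying MINDIST measure on these words is what provides the lower bound on the Euclidean distance between the original series.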

13.
李海林 《控制与决策》2015,30(3):441-447
To address the impact of high dimensionality on the process and results of multivariate time series data mining, and the limitations of traditional principal component analysis for representing multivariate time series features, a feature representation method for multivariate time series based on variable correlation is proposed. A covariance matrix is used to describe the distributional characteristics and variable correlations of each multivariate time series, and principal component analysis is applied to the combined covariance matrix, thereby achieving dimensionality reduction and feature representation of the multivariate time series. Experimental results show that the proposed method not only improves the quality of multivariate time series data mining but can also mine multivariate time series of unequal length quickly and effectively.
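A minimal sketch of the covariance-based idea above, under stated assumptions: the exact construction of the combined covariance matrix and the feature extraction in the paper may differ; here the per-series covariance matrices are simply averaged and their leading eigenvectors give a shared projection, which works even when series lengths differ.

```python
# Sketch only: summarize each MTS by its variable covariance matrix, combine,
# then project every series onto the leading eigenvectors of the combination.
import numpy as np

def covariance_features(mts_list, n_components=2):
    """mts_list: list of arrays of shape (length_i, n_variables); lengths may differ."""
    covs = [np.cov(m, rowvar=False) for m in mts_list]   # one matrix per series
    combined = np.mean(covs, axis=0)
    eigvals, eigvecs = np.linalg.eigh(combined)
    proj = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return [m @ proj for m in mts_list]                  # reduced-dimension series
```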

14.
A review on time series data mining
Time series are an important class of temporal data objects and can easily be obtained from scientific and financial applications. A time series is a collection of observations made chronologically. Time series data are typically large in size, high in dimensionality, and must be updated continuously. Moreover, time series data, characterized by their numerical and continuous nature, are always considered as a whole rather than as individual numerical fields. The increasing use of time series data has initiated a great deal of research and development in the field of data mining. The abundance of research on time series data mining in the last decade can hamper the entry of interested researchers because of its complexity. In this paper, a comprehensive review of the existing time series data mining research is given. The work is generally categorized into representation and indexing, similarity measures, segmentation, visualization, and mining. State-of-the-art research issues are also highlighted. The primary objective of this paper is to serve as a glossary for interested researchers, to give an overall picture of current time series data mining development, and to help identify potential directions for further investigation.

15.
Detecting anomalies in time series in real time can be challenging, in particular when anomalies can manifest themselves at different time scales and need to be detected with minimal latency. The need for lightweight real-time algorithms has risen in the context of Cloud computing, where thousands of devices are monitored and deviations from normal behaviour must be detected to prevent incidents. However, this need has yet to be addressed in a way that actually scales to the size of today’s network infrastructures. Time series generated by human activity often exhibit daily and weekly patterns, creating long-term dependencies that are difficult to process. In such cases, the Euclidean distance between subsequences of the time series, or Euclidean anomaly score, can be a very effective tool for achieving good detection within constrained latency; however, this computation has quadratic complexity and a computational footprint too high for any realistic application. In this paper, we propose SCHEDA (Sampled Causal Heuristics for Euclidean Distance Approximation), a collection of three heuristics designed to approximate the Euclidean anomaly score with a low computational footprint in time series with long-term dependencies. Our design goals are a low computational cost, the possibility of real-time operation and the absence of tuning parameters. We benchmark SCHEDA against ARIMA and the Euclidean distance and show that in typical monitoring scenarios it outperforms both at only a fraction of the computational cost.
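A minimal causal variant of the exact Euclidean anomaly score discussed above, under stated assumptions: this is not SCHEDA itself, only the quadratic baseline it approximates. Each incoming window is compared against all non-overlapping past windows, and the loop over the past is what makes the exact computation too expensive at scale.

```python
# Sketch only: exact, causal Euclidean anomaly score (the quadratic baseline).
import numpy as np

def causal_euclidean_scores(ts, m):
    ts = np.asarray(ts, dtype=float)
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    scores = np.full(n, np.nan)
    for i in range(m, n):                       # need at least one full past window
        past = subs[:i - m + 1]                 # only non-overlapping past windows
        scores[i] = np.linalg.norm(past - subs[i], axis=1).min()
    return scores                               # high values flag anomalies
```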

16.
E.  T.D.  R. de J.  M.P.  N. 《Digital Signal Processing》2008,18(6):1029-1044
This paper presents a novel algorithm, called the DFSWT, and its FPGA-based hardware processing unit for estimating the frequency of a time series' main periodic component. Since the DFSWT uses only additions and subtractions, it is simpler to compute than the FFT, and since its spectrum is a function of frequency, it is more intuitive than the Walsh transform. The results show that the proposed algorithm is very efficient at detecting the frequency of the main periodic component, even at low SNR. The proposed hardware processing unit is three orders of magnitude faster than its software implementation and offers advantages in power consumption, footprint, and computation speed over highly optimized, commercially available FFT cores.

17.
Time series data mining (TSDM) techniques permit exploring large amounts of time series data in search of consistent patterns and/or interesting relationships between variables. TSDM is becoming increasingly important as a knowledge management tool, where it is expected to reveal knowledge structures that can guide decision making under limited certainty. Human decision making in problems involving the analysis of time series databases is usually based on perceptions such as “end of the day”, “high temperature”, “quickly increasing”, “possible”, etc. Although many effective TSDM algorithms have been developed, their integration with human decision-making procedures is still an open problem. In this paper, we consider the architecture of a perception-based decision-making system for time series database domains that integrates perception-based TSDM, computing with words and perceptions, and expert knowledge. The new tasks that perception-based TSDM methods should solve to enable their integration into such systems are discussed. These tasks include precisiation of perceptions, shape pattern identification, and pattern retranslation. We show how different methods developed so far in TSDM for manipulating perception-based information can be used to develop a fuzzy perception-based TSDM approach. This approach is grounded in computing with words and perceptions, which permits the formalization of human perception-based inference mechanisms. The discussion is illustrated with examples from economics, finance, meteorology, medicine, etc.

18.
To address the high computational complexity and poor scalability of partial periodic pattern mining over dynamic time series data, a partial periodic pattern mining algorithm for time series that incorporates multi-scale theory (MSI-PPPGrowth) is proposed. The algorithm exploits the inherent temporal multi-scale nature of time series data and introduces multi-scale theory into the partial periodic pattern mining process. First, the scale-partitioned original data and the incremental time series data are mined independently as finer-grained benchmark-scale datasets; then, the correlations between data at different scales are used for scale conversion, so that the global frequent patterns of the dynamically updated dataset are obtained indirectly, avoiding repeated scans of the original dataset and continual adjustment of the tree structure. In addition, a new frequent missing-count estimation model (PJK-EstimateCount), based on Kriging and accounting for the periodicity of the time series, is designed to estimate the support counts of items missing during scale conversion. Experimental results show that MSI-PPPGrowth has good scalability and real-time performance, with a particularly pronounced advantage on dense datasets.

19.
An efficient multivariate time series clustering algorithm
Time series clustering is an important topic in data mining research. Most existing clustering algorithms apply k-means to low-dimensional data and cannot cluster high-dimensional multivariate time series (MTS) data effectively. An efficient multivariate time series clustering algorithm, PCA-CLUSTER, is proposed: principal component analysis is first used to reduce the dimensionality of the MTS data, and the principal component series of the MTS data are then selected for k-nearest-neighbor clustering analysis. Theoretical analysis and experimental results show that the algorithm can effectively solve the MTS clustering problem.
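A minimal sketch of the PCA-then-cluster idea above, under stated assumptions: plain agglomerative clustering stands in for the paper's k-nearest-neighbor clustering step, only the first principal component series is kept, and all series are assumed to have equal length.

```python
# Sketch only: reduce each MTS to its first principal-component series via
# SVD, then cluster the reduced series.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def pca_cluster(mts_list, n_clusters=3):
    """mts_list: arrays of shape (length, n_variables), all of equal length."""
    firsts = []
    for m in mts_list:
        x = m - m.mean(axis=0)
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        firsts.append(x @ vt[0])                 # first principal-component series
    X = np.vstack(firsts)
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
```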

20.
Discrimination of locally stationary time series using wavelets
Time series are sometimes generated by processes that change suddenly from one stationary regime to another, with no intervening periods of transition of any significant duration. A good example of this is provided by seismic data, namely, waveforms of earthquakes and explosions. In order to classify an unknown event as either an earthquake or an explosion, statistical analysts might be helped by having at their disposal an automatic means of identifying, at any time, which pattern prevails. Several authors have proposed methods to tackle this problem by combining the techniques of spectral analysis with those of discriminant analysis. The goal is to develop a discriminant scheme for locally stationary time series such as earthquake and explosion waveforms, by combining the techniques of wavelet analysis with those of discriminant analysis.
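A minimal sketch of a wavelet-based discriminant along the lines described above, under stated assumptions: scale-wise wavelet energies feed a standard linear discriminant, which is a simplification of the authors' locally stationary scheme; the wavelet family and decomposition level are arbitrary illustrative choices.

```python
# Sketch only: energy per wavelet scale as features, linear discriminant as
# the classifier (e.g., earthquake vs. explosion waveforms).
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_energy_features(ts, wavelet="db4", level=4):
    coeffs = pywt.wavedec(np.asarray(ts, dtype=float), wavelet, level=level)
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])  # energy per scale

def fit_discriminator(waveforms, labels):
    X = np.vstack([wavelet_energy_features(w) for w in waveforms])
    return LinearDiscriminantAnalysis().fit(X, labels)
```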
