Similar Documents
20 similar documents found (search time: 31 ms)
1.
Traditional financial time series prediction methods take precise input data as their object of study and, on the basis of a regression model, make single-step or multi-step predictions whose results are one or more specific values. Because of the complexity of financial markets, such methods have low reliability. This paper proposes granulating a financial time series into a sequence of fuzzy particles, regressing the upper and lower bounds of the particles with support vector machines, and then using the resulting models to make single-step predictions of each bound, thereby confining the prediction to an interval. This is a genuinely new line of thinking. Experiments on the weekly closing values of the Shanghai Composite Index demonstrate the effectiveness of the method.
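The granulation step can be sketched in a few lines. The window size and the min/mean/max summary used here are illustrative assumptions; the subsequent SVM regression on the bounds (e.g. with `sklearn.svm.SVR`) is omitted:

```python
def granulate(series, window):
    """Fuzzy-granulate a series into triangular particles (low, mid, high).

    Each non-overlapping window is summarized by its minimum (lower
    support), mean (core) and maximum (upper support), so a later model
    can predict a range instead of a single point value.
    """
    particles = []
    for i in range(0, len(series) - window + 1, window):
        chunk = series[i:i + window]
        particles.append((min(chunk), sum(chunk) / len(chunk), max(chunk)))
    return particles

weekly_close = [10, 12, 11, 9, 13, 14, 12, 15]  # hypothetical index values
print(granulate(weekly_close, 4))  # -> [(9, 10.5, 12), (12, 13.5, 15)]
```

Each particle's lower and upper bounds then form two derived series on which separate regressors are trained.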

2.
This paper proposes a novel technique for clustering and classification of object trajectory-based video motion clips using spatiotemporal function approximations. Assuming the clusters of trajectory points are distributed normally in the coefficient feature space, we propose a Mahalanobis classifier for the detection of anomalous trajectories. Motion trajectories are considered as time series and modelled using orthogonal basis function representations. We have compared three different function approximations: least squares polynomials, Chebyshev polynomials and Fourier series obtained by the Discrete Fourier Transform (DFT). Trajectory clustering is then carried out in the chosen coefficient feature space to discover patterns of similar object motions. The coefficients of the basis functions are used as input feature vectors to a Self-Organising Map, which can learn similarities between object trajectories in an unsupervised manner. Encoding trajectories in this way leads to efficiency gains over existing approaches that use discrete point-based flow vectors to represent the whole trajectory. Our proposed techniques are validated on three different datasets: Australian sign language, hand-labelled object trajectories from video surveillance footage, and real-time tracking data obtained in the laboratory. Applications to event detection and motion data mining for multimedia video surveillance systems are envisaged.
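The DFT variant of the coefficient feature space can be sketched as follows; the toy trajectory, the number of coefficients kept and the normalization are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def dft_features(trajectory, n_coeff):
    """Summarize a 1-D trajectory by the magnitudes of its first n_coeff
    DFT coefficients: a fixed-length feature vector (suitable e.g. as
    SOM input) regardless of the trajectory's original length."""
    spectrum = np.fft.fft(np.asarray(trajectory, dtype=float))
    return np.abs(spectrum[:n_coeff]) / len(trajectory)

# A toy "trajectory": two full sine periods sampled at 64 points.
traj = np.sin(2 * np.pi * 2 * np.arange(64) / 64)
feats = dft_features(traj, 4)
print(feats.round(3))  # energy concentrates in coefficient 2
```

Trajectories of different lengths map to vectors of identical dimension, which is what makes clustering in the coefficient space straightforward.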

3.
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.

4.
In this paper, an integrated model based on efficient extreme learning machine (EELM) and differential evolution (DE) is proposed to predict chaotic time series. In the proposed model, a novel learning algorithm called EELM is presented and used to model the chaotic time series. The EELM inherits the basic idea of extreme learning machine (ELM) in training single hidden layer feedforward networks, but replaces the commonly used singular value decomposition with a reduced complete orthogonal decomposition to calculate the output weights, which can achieve a much faster learning speed than ELM. Moreover, in order to obtain a more accurate and more stable prediction performance for chaotic time series prediction, this model abandons the traditional two-stage modeling approach and adopts an integrated parameter selection strategy which employs a modified DE algorithm to optimize the phase space reconstruction parameters of chaotic time series and the model parameter of EELM simultaneously based on a hybrid validation criterion. Experimental results show that the proposed integrated prediction model can not only provide stable prediction performances with high efficiency but also achieve much more accurate prediction results than its counterparts for chaotic time series prediction.
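A basic ELM in the spirit of this entry can be sketched as below; it uses ordinary least squares in place of the paper's reduced complete orthogonal decomposition, and the sine-embedding task, hidden-layer size and train/test split are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=30):
    """Basic ELM: input weights and biases are random and stay fixed;
    only the output weights are solved, here by ordinary least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Phase-space-style embedding of a noiseless sine: predict x(t) from x(t-3..t-1).
t = np.arange(203)
s = np.sin(0.3 * t)
X = np.stack([s[i:i + 200] for i in range(3)], axis=1)
y = s[3:203]
model = elm_fit(X[:150], y[:150])
mse = np.mean((elm_predict(model, X[150:]) - y[150:]) ** 2)
print(round(float(mse), 6))
```

The DE step of the paper would wrap this fit, jointly searching the embedding dimension/delay and the hidden-layer size against a validation criterion.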

5.
Techniques for understanding video object motion activity are becoming increasingly important with the widespread adoption of CCTV surveillance systems. Motion trajectories provide rich spatiotemporal information about an object's activity. This paper presents a novel technique for clustering of object trajectory-based video motion clips using basis function approximations. Motion cues can be extracted using a tracking algorithm on video streams from video cameras. In the proposed system, trajectories are treated as time series and modelled using orthogonal basis function representation. Various function approximations have been compared, including least squares polynomial, Chebyshev polynomials, piecewise aggregate approximation, discrete Fourier transform (DFT), and modified DFT (DFT-MOD). A novel framework, namely iterative hierarchical semi-agglomerative clustering using learning vector quantization (Iterative HSACT-LVQ), is proposed for learning of patterns in the presence of a significant number of anomalies in training data. In this context, anomalies are defined as atypical behavior patterns that are not represented by sufficient samples in training data and are infrequently occurring or unusual. The proposed algorithm does not require any prior knowledge about the number of patterns hidden in the unclassified dataset. Experiments using complex real-life trajectory datasets demonstrate the superiority of our proposed Iterative HSACT-LVQ-based motion learning technique compared to other recent approaches.

6.
7.
Prediction is the technique of estimating, by scientific methods and means, the development trend and future state of things. To remedy the shortcomings of traditional methods and techniques, a growing variety of machine learning techniques is being applied to prediction research. This paper discusses prediction techniques in the specific domain of risk prediction that apply machine learning methods such as case-based reasoning (CBR), support vector machines (SVM) and artificial neural networks (ANN). Building on our own work, it also describes in detail the application of machine-learning prediction models to credit risk prediction and construction bid evaluation.

8.
Real-time prediction of furnace temperature is of great significance for blast furnace operation. In blast furnace ironmaking, the thermal state of the furnace is usually characterized by the silicon content of the hot metal. To address the limited efficiency and accuracy of silicon-content prediction, a method combining principal component analysis (PCA) with a particle-swarm-improved extreme learning machine is proposed for predicting the silicon content of blast furnace hot metal. Because silicon content is influenced by many mutually interacting factors, PCA is used to reduce the dimensionality of the input variables. Particle swarm optimization is used to optimize the weights and thresholds of the extreme learning machine, and a prediction model is built with root mean square error as the fitness function. The extracted principal components serve as model inputs and hot-metal silicon content as the model output. Finally, the plain extreme learning machine is compared with the particle-swarm-improved version; experimental results show that the improved model predicts silicon content more accurately, and the method can serve as a reference for blast furnace operation.
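The PCA step can be sketched with a plain SVD; the synthetic "furnace variables" and the number of retained components are illustrative assumptions, and the PSO-tuned extreme learning machine that would consume the components is omitted:

```python
import numpy as np

def pca_reduce(X, k):
    """Project mean-centered inputs onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(2)
# Hypothetical furnace inputs: two independent drivers plus correlated mixtures,
# mimicking many mutually dependent process variables.
base = rng.normal(size=(100, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 6))])  # 8 correlated inputs
Z = pca_reduce(X, 2)
# Two components capture essentially all variance of the 8 raw variables.
var_ratio = Z.var(axis=0).sum() / (X - X.mean(0)).var(axis=0).sum()
print(round(float(var_ratio), 3))
```

Feeding `Z` instead of `X` to the downstream model reduces both its input dimension and the collinearity among inputs.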

9.
In recent years, deep learning algorithms have achieved outstanding results on many supervised learning problems, far surpassing traditional machine learning algorithms in accuracy, efficiency and degree of automation, in some cases even exceeding human performance. The interest of deep learning researchers is now gradually shifting from supervised learning toward reinforcement learning, semi-supervised learning and unsupervised learning. Video prediction algorithms, which can exploit massive amounts of unlabelled natural data to learn intrinsic video representations and have broad application value in robot decision-making, autonomous driving and video understanding, have developed rapidly over the past two years. This paper reviews the background of video prediction algorithms and the history of deep learning, briefly introduces the prediction of human actions, object motion and movement trajectories, focuses on the mainstream deep-learning-based video prediction methods and models, and concludes with the open problems and development prospects of the field.

10.
Sparse Bayesian learning for efficient visual tracking
This paper extends the use of statistical learning algorithms for object localization. It has been shown that object recognizers using kernel-SVMs can be elegantly adapted to localization by means of spatial perturbation of the SVM. While this SVM applies to each frame of a video independently of other frames, the benefits of temporal fusion of data are well-known. This is addressed here by using a fully probabilistic relevance vector machine (RVM) to generate observations with Gaussian distributions that can be fused over time. Rather than adapting a recognizer, we build a displacement expert which directly estimates displacement from the target region. An object detector is used in tandem, for object verification, providing the capability for automatic initialization and recovery. This approach is demonstrated in real-time tracking systems where the sparsity of the RVM means that only a fraction of CPU time is required to track at frame rate. An experimental evaluation compares this approach to the state of the art showing it to be a viable method for long-term region tracking.

11.
This paper investigates the problem of decentralized model reference adaptive control (MRAC) for a class of large-scale interconnected systems with both state and input delays. The upper bounds of the interconnection terms are considered to be unknown. The time-varying delays in the nonlinear interconnection terms are bounded, nonnegative continuous functions, and their derivatives are not necessarily less than one. For exact prediction, a decentralized adaptive state observer is designed and a nested-predictor-based approach is established to predict the future states for input-delay compensation. It is shown that the solutions of uncertain large-scale time-delay interconnected systems converge uniformly exponentially to a desired small ball. The effectiveness of the proposed approaches is illustrated by two examples.

12.
Time Series Prediction Using Support Vector Machines: A Survey
Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting, weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a priori. SVMs have also been proven to outperform other non-linear techniques, including neural-network based non-linear prediction techniques such as multi-layer perceptrons. The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.
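A prerequisite step common to all these applications is converting a series into lagged input/target pairs that an SVM regressor can consume; this sketch shows only that conversion (the choice of three lags is arbitrary):

```python
import numpy as np

def make_windows(series, n_lags):
    """Turn a series into supervised (X, y) pairs: each row of X holds
    the previous n_lags values and y the next value to predict, which is
    the form SVM regressors such as sklearn.svm.SVR expect."""
    s = np.asarray(series, dtype=float)
    X = np.stack([s[i:len(s) - n_lags + i] for i in range(n_lags)], axis=1)
    return X, s[n_lags:]

X, y = make_windows([1, 2, 3, 4, 5, 6], 3)
print(X.tolist())  # [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(y.tolist())  # [4, 5, 6]
```

Once in this form, the prediction task is an ordinary regression problem and any kernel regressor can be plugged in.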

13.
Incorporating the quantity and variety of observations in atmospheric and oceanographic assimilation and prediction models has become an increasingly complex task. Data assimilation allows for uneven spatial and temporal data distribution and redundancy to be addressed so that the models can ingest massive data sets. Traditional data assimilation methods introduce Kalman filters and variational approaches. This study introduces a family of algorithms, motivated by advances in machine learning. These algorithms provide an alternative approach to incorporating new observations into the analysis forecast cycle. The application of kernel methods to processing the states of a quasi-geostrophic numerical model is intended to demonstrate the feasibility of the method as a proof-of-concept. The speed, efficiency, accuracy and scalability in recovering unperturbed state trajectories establishes the viability of machine learning for data assimilation.

14.
In this article, the brain emotional learning-based pattern recognizer (BELPR) is proposed to solve multiple input–multiple output classification and chaotic time series prediction problems. BELPR is based on an extended computational model of the human brain limbic system that consists of an emotional stimuli processor. The BELPR is model free and learns the patterns in a supervised manner and evaluates the output(s) using the activation function tansig. In the numerical studies, various comparisons are made between BELPR and a multilayer perceptron (MLP) with a back-propagation learning algorithm. The methods are tested to classify 12 UCI (University of California, Irvine) machine learning data sets and to predict activity indices of the Earth's magnetosphere. The main features of BELPR are higher accuracy, decreased time and spatial complexity, and faster training.

15.
Using the data mining option in the Oracle database (Oracle Data Mining, ODM) and time series data stored in the database, a support vector machine (SVM) model can be built to predict future values of a time series. Modelling requires removing the trend from the time series, standardizing the target attribute, determining the size of the lag-variable window, and then learning the SVM prediction model from the historical time series data by machine learning methods. Compared with traditional time series prediction models, the SVM model can capture the nonlinearity, non-stationarity and randomness of a time series and therefore achieves higher prediction accuracy.
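The three preparation steps listed above (detrending, standardizing the target, building the lag window) can be sketched outside Oracle in plain Python; the series and window size are illustrative assumptions:

```python
import numpy as np

def preprocess(series, n_lags):
    """Sketch of the abstract's preparation steps: subtract a linear
    trend, z-score the residual, then build lag-window pairs."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    slope, intercept = np.polyfit(t, s, 1)        # 1: remove linear trend
    resid = s - (slope * t + intercept)
    z = (resid - resid.mean()) / resid.std()      # 2: standardize target
    X = np.stack([z[i:len(z) - n_lags + i] for i in range(n_lags)], axis=1)
    return X, z[n_lags:]                          # 3: lag-window pairs

series = 0.5 * np.arange(30) + np.sin(np.arange(30))  # trend + oscillation
X, y = preprocess(series, 4)
print(X.shape, y.shape)  # (26, 4) (26,)
```

The resulting `(X, y)` pairs are what an SVM regressor (in ODM or elsewhere) is trained on; predictions must be de-standardized and re-trended on the way out.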

16.
To tackle the prediction of complex time series, and noting that existing process neural networks take several continuous time-varying functions as input while many practical problems present discrete values from several sequences, a process neural network model with discrete inputs and its learning algorithm are proposed. The model is applied to predicting the sunspot-number time series using real sunspot data; simulation results show that the model has good approximation and prediction ability.

17.
潘宇雄  任章  李清东 《控制与决策》2014,29(12):2297-2300
To achieve real-time, high-precision prediction of the operating-parameter variations of a turbofan engine, a time series prediction algorithm based on a dynamic Bayesian least squares support vector machine (LS-SVM) is proposed. The algorithm uses the Bayesian evidence framework to infer the initial model parameters of the LS-SVM, and then adjusts the LS-SVM parameters dynamically through a sample incremental-decremental iterative learning scheme. The method is applied to dynamic prediction of the friction-torque time series of a turbofan engine and compared with the predictions of a dynamic LS-SVM model; the results show that the dynamic Bayesian LS-SVM achieves better prediction accuracy.
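The LS-SVM core, which replaces the SVM quadratic program with a single linear system, can be sketched as follows; the RBF kernel, hyperparameters and toy sine data are illustrative assumptions, and the Bayesian evidence inference and incremental updates of the paper are omitted:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma2=0.5):
    """LS-SVM regression: the dual reduces to one linear system
    [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y], which is what makes
    incremental (sample add/remove) updates cheap."""
    d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma2))
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return X, sol[1:], sol[0], sigma2

def lssvm_predict(model, Xq):
    Xt, a, b, sigma2 = model
    d2 = np.sum((Xq[:, None] - Xt[None]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma2)) @ a + b

X = np.linspace(0, 6, 40).reshape(-1, 1)   # toy stand-in for engine data
y = np.sin(X[:, 0])
model = lssvm_fit(X, y)
err = np.max(np.abs(lssvm_predict(model, X) - y))
print(round(float(err), 4))
```

Because fitting is a linear solve rather than a QP, adding or dropping one sample can be done with rank-one updates, which is the basis of the dynamic variant described above.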

18.

This paper proposes a novel method for robust visual tracking of arbitrary objects, based on the combination of image-based prediction and position refinement by weighted correlation. The effectiveness of the proposed approach is demonstrated on a challenging set of dynamic video sequences, extracted from the final of the triple jump at the London 2012 Summer Olympics. A comparison is made against five baseline tracking systems. The novel system shows remarkably superior performance with respect to the other methods in all considered cases, characterized by changing backgrounds and a large variety of articulated motions. The novel architecture, from here onward named 2D Recurrent Neural Network (2D-RNN), is derived from the well-known recurrent neural network model and adopts nearest-neighborhood connections between the input and context layers in order to store the temporal information content of the video. Starting from the selection of the object of interest in the first frame, neural computation is applied to predict the position of the target in each video frame. Normalized cross-correlation is then applied to refine the predicted target position. 2D-RNN ensures limited complexity, great adaptability and a very fast learning time. At the same time, it shows fast execution times and very good accuracy on the considered dataset, making this approach an excellent candidate for automated analysis of complex video streams.


19.
Techniques for understanding video object motion activity are becoming increasingly important with the widespread adoption of CCTV surveillance systems. Motion trajectories provide rich spatiotemporal information about an object's activity. This paper presents a novel technique for clustering and classification of motion. In the proposed motion learning system, trajectories are treated as time series and modelled using a modified DFT (discrete Fourier transform)-based coefficient feature space representation. A framework (iterative HSACT-LVQ (hierarchical semi-agglomerative clustering-learning vector quantization)) is proposed for learning of patterns in the presence of a significant number of anomalies in training data. A novel modelling technique, referred to as m-Mediods, is also proposed that models a class containing n members with m Mediods. Once the m-Mediods-based models for all the classes have been learnt, the classification of new trajectories and anomaly detection can be performed by checking the closeness of a given trajectory to the models of known classes. A mechanism based on an agglomerative approach is proposed for anomaly detection. Our proposed techniques are validated using a variety of simulated and complex real-life trajectory data sets.
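The closeness test against per-class m-Mediods models can be sketched as below; the two-class feature-space medoids and the anomaly threshold are hypothetical values, and the learning of the medoids themselves is omitted:

```python
import numpy as np

def classify(models, x, threshold):
    """Assign feature vector x to the class whose nearest medoid is
    closest; report an anomaly if even that distance exceeds threshold."""
    best_label, best_d = None, np.inf
    for label, medoids in models.items():
        d = min(np.linalg.norm(x - m) for m in medoids)
        if d < best_d:
            best_label, best_d = label, d
    return (best_label, best_d) if best_d <= threshold else ("anomaly", best_d)

models = {  # hypothetical m=2 medoids per motion class in coefficient space
    "enter": [np.array([0.0, 0.0]), np.array([0.2, 0.1])],
    "exit":  [np.array([5.0, 5.0]), np.array([4.8, 5.2])],
}
print(classify(models, np.array([0.1, 0.1]), threshold=1.0)[0])  # enter
print(classify(models, np.array([2.5, 2.5]), threshold=1.0)[0])  # anomaly
```

Representing each class by m medoids (rather than one centroid) lets a class cover multi-modal motion patterns while keeping the closeness test cheap.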

20.

Recent advances in robotics and measurement technologies have enabled biologists to record the trajectories created by animal movements. In this paper, we convert time series of animal trajectories into sequences of finite symbols, and then propose a machine learning method for gaining biological insight from the trajectory data in the form of symbol sequences. The proposed method is used for training a classifier which differentiates between the trajectories of two groups of animals such as male and female. The classifier is represented in the form of a sparse linear combination of subsequence patterns, and we call the classifier an S3P-classifier. The trained S3P-classifier is easy to interpret because each coefficient represents the specificity of the subsequence patterns in either of the two classes of animal trajectories. However, fitting an S3P-classifier is computationally challenging because the number of subsequence patterns is extremely large. The main technical contribution in this paper is the development of a novel algorithm for overcoming this computational difficulty by combining a sequential mining technique with a recently developed convex optimization technique called safe screening. We demonstrate the effectiveness of the proposed method by applying it to three animal trajectory data analysis tasks.
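The scoring side of an S3P-style classifier can be sketched by combining pattern counts linearly; for simplicity this counts contiguous patterns rather than general subsequences, and the weights and symbols are hypothetical, not trained values:

```python
def pattern_count(seq, pattern):
    """Count (possibly overlapping) occurrences of a contiguous symbol
    pattern; the real S3P-classifier scores general subsequences."""
    return sum(seq[i:i + len(pattern)] == pattern
               for i in range(len(seq) - len(pattern) + 1))

def s3p_score(seq, weights):
    """Sparse linear combination of pattern counts: the sign gives the
    class, and each weight shows which group the pattern is specific to."""
    return sum(w * pattern_count(seq, p) for p, w in weights.items())

# Hypothetical trained weights: "AB" typical of class +1, "CC" of class -1.
weights = {"AB": 1.5, "CC": -2.0}
print(s3p_score("ABABCC", weights))  # 2*1.5 - 1*2.0 = 1.0
print(s3p_score("CCCC", weights))    # 0*1.5 - 3*2.0 = -6.0
```

The hard part the paper addresses is not this scoring but selecting, out of the combinatorially many candidate patterns, the few that receive nonzero weights, via sequential mining plus safe screening.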


Copyright©北京勤云科技发展有限公司  京ICP备09084417号