Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
Statistical models for spatio-temporal data are increasingly used in environmetrics, climate change, epidemiology, remote sensing and dynamical risk mapping. Owing to the complexity of the relationships among the involved variables and the dimensionality of the parameter set to be estimated, techniques for model definition and estimation that can be worked out stepwise are welcome. In this context, hierarchical models are a suitable solution since they make it possible to define the joint dynamics and the full likelihood starting from simpler conditional submodels. Moreover, for a large class of hierarchical models, the maximum likelihood estimation procedure can be simplified using the Expectation–Maximization (EM) algorithm. In this paper, we define the EM algorithm for a rather general three-stage spatio-temporal hierarchical model, which also includes spatio-temporal covariates. In particular, we show that most of the parameters are updated in closed form, which guarantees the stability of the algorithm, unlike classical optimization techniques of the Newton–Raphson type for maximizing the full likelihood function. Moreover, we illustrate how the EM algorithm can be combined with a spatio-temporal parametric bootstrap for evaluating parameter accuracy through standard errors and non-Gaussian confidence intervals. To this end, a new software library in the form of a standard R package has been developed. Realistic simulations on a distributed computing environment allow us to discuss the algorithm's properties and performance, also in terms of convergence iterations and computing times.
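The closed-form M-step updates that make EM numerically stable can be seen even in a minimal example. The sketch below runs EM on a two-component 1-D Gaussian mixture; it is not the paper's three-stage spatio-temporal model, and all names are illustrative.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture.

    Illustrates how every M-step parameter update is available in closed
    form, which is what keeps EM stable compared with Newton-Raphson
    steps on the full likelihood.
    """
    # Deterministic, well-separated initialization.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    order = np.argsort(mu)
    return mu[order], var[order], w[order]
```

Each iteration only evaluates densities and weighted averages, so no step can overshoot the way a Newton update can.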

3.
The critical dimensions in describing space–time activities are "what", "where", "when", and "who", which are frequently applied to collect data about the basic functions people perform in space in the course of a day. Collecting data about these dimensions using activity-based surveys has presented researchers with a number of technical and social limitations, ranging from the restricted period of time participants have to record their activities to the level of accuracy with which participants complete a survey. This paper proposes a new streaming data processing workflow for querying space–time activities (STA) as a by-product of microblogging communication. It allows exploring a large volume of geotagged tweets to discover STA patterns of daily life in a systematic manner. A sequence of tasks has been implemented using different cloud-based computing resources for handling over one million daily geotagged tweets from Canada over a period of six months. The STA patterns have revealed activity choices that might be attributable to personal motivations for communicating an activity in social networks.
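The core aggregation step can be sketched as binning geotagged messages by grid cell and hour of day to surface space–time activity counts. The tuple layout and the 0.5-degree cell size are assumptions for illustration, not the paper's cloud-based workflow.

```python
from collections import Counter
from datetime import datetime

def sta_bins(tweets, cell=0.5):
    """Bin geotagged messages into space-time activity (STA) counts.

    Each tweet is (iso_timestamp, lat, lon); space is discretised into
    `cell`-degree grid cells and time into hour of day, answering the
    "where" and "when" dimensions jointly.
    """
    counts = Counter()
    for ts, lat, lon in tweets:
        hour = datetime.fromisoformat(ts).hour
        key = (lat // cell * cell, lon // cell * cell, hour)
        counts[key] += 1
    return counts
```

In a streaming setting the same `Counter` update would run incrementally over the live tweet feed.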

4.
5.
The increasing demand for higher-resolution images and higher-frame-rate videos will always pose a challenge to computational power when real-time performance is required to solve the stereo-matching problem in 3D reconstruction applications. Therefore, asymptotic analysis is necessary to measure the time and space performance of stereo-matching algorithms regardless of the size of the input and of the computational power available. In this paper, we survey several classic stereo-matching algorithms with regard to time–space complexity. We also report running-time experiments for several algorithms that are consistent with our complexity analysis. We present a new dense stereo-matching algorithm based on a greedy heuristic path computation in disparity space. A procedure that improves disparity maps in depth-discontinuity regions is introduced; it works as a post-processing step for any technique that solves the dense stereo-matching problem. We prove that our algorithm and post-processing procedure have optimal O(n) time–space complexity, where n is the size of a stereo image. Our algorithm performs only a constant number of computations per pixel, since it avoids a brute-force search over the disparity range. Hence, our algorithm is faster than "real-time" techniques while producing comparable results when evaluated with ground-truth benchmarks. The correctness of our algorithm is demonstrated with experiments on real and synthetic data.
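The constant-work-per-pixel idea can be illustrated with a toy 1-D scanline matcher: each pixel only refines the previous pixel's disparity by -1/0/+1 rather than scanning the whole disparity range. This is only a sketch of the greedy-path principle under simplified assumptions, not the paper's algorithm.

```python
import numpy as np

def greedy_scanline_disparity(left, right, d_init=0, d_max=16):
    """Toy greedy disparity estimation along one scanline.

    Each pixel tests just three candidate disparities around the previous
    pixel's estimate, so the work per pixel is constant and a scanline of
    n pixels costs O(n), avoiding a brute-force disparity search.
    """
    disp = np.zeros(len(left), dtype=int)
    d = d_init
    for x in range(len(left)):
        best_cost, best_d = float("inf"), d
        for cand in (d - 1, d, d + 1):
            if 0 <= cand <= d_max and x - cand >= 0:
                cost = abs(float(left[x]) - float(right[x - cand]))
                if cost < best_cost:
                    best_cost, best_d = cost, cand
        disp[x] = d = best_d
    return disp
```

A real implementation would use window-based matching costs and the paper's post-processing near depth discontinuities; here `d_init` simply seeds the greedy path.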

6.
This paper considers the problem of suppressing complex jamming, which comprises sidelobe blanket jammings (SLJs), multiple near-mainlobe blanket jammings (multiple-NMLJs) and self-defensive false target jamming (SDJ). We propose a blind source separation (BSS)-based space–time multi-channel algorithm for complex-jamming suppression. The space–time multi-channel consists of multiple spatial beams and multiple adjacent temporal pulse repetition intervals (PRIs). The source signals can be separated by BSS owing to their statistical independence. The real target and the SDJ can then be obtained by pulse compression and simultaneously distinguished by echo identification. A remarkable feature of the proposed approach is that it requires no prior knowledge about the real target or the jammings, and it is easy to implement in engineering applications.
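The BSS step exploits exactly the statistical independence mentioned above. A bare-bones stand-in for it, assuming just two channels and two deterministic sources: whiten the mixture, then search the residual rotation that maximises a kurtosis contrast. The space–time multi-channel structure of the paper is not modelled here.

```python
import numpy as np

def separate_two_sources(mixed, grid=360):
    """Blind separation of a 2-channel mixture: whitening followed by a
    rotation search maximising the summed |excess kurtosis| of the rows,
    a minimal kurtosis-contrast ICA."""
    x = mixed - mixed.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    white = vecs @ np.diag(vals ** -0.5) @ vecs.T @ x  # decorrelated, unit variance
    best_score, best_y = -np.inf, white
    for theta in np.linspace(0.0, np.pi / 2, grid, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        y = np.array([[c, -s], [s, c]]) @ white
        kurt = np.mean(y ** 4, axis=1) - 3.0  # excess kurtosis per row
        score = np.abs(kurt).sum()
        if score > best_score:
            best_score, best_y = score, y
    return best_y
```

Separation is only up to permutation, sign and scale, which is why a subsequent echo-identification step is needed to tell the real target from the false one.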

7.

In this study, a new hybrid forecasting method is proposed, called the autoregressive adaptive network fuzzy inference system (AR–ANFIS). AR–ANFIS can be represented as a network structure whose architecture has two parts: the first is an ANFIS structure and the second is a linear AR model structure. In the literature, AR models and ANFIS are widely used in time series forecasting. Linear AR models follow a model-based strategy, while ANFIS provides a nonlinear, data-based modeling system akin to an artificial neural network. In this study, a combined linear and nonlinear forecasting model is proposed by hybridizing AR and ANFIS, so the new method has the advantages of both data-based and model-based approaches. AR–ANFIS is trained using particle swarm optimization, and fuzzification is performed with the fuzzy C-means method. The AR–ANFIS method is examined on several real-life time series data sets and compared with other time series forecasting methods. The applications show that the proposed method can produce accurate forecasts.
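The linear half of the hybrid is an ordinary AR(p) model, which can be fitted by least squares in a few lines. This sketch covers only that half; the ANFIS part, the fuzzy C-means fuzzification and the particle swarm training are omitted.

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares fit of the linear AR(p) model
    y_t = c + a_1 * y_(t-1) + ... + a_p * y_(t-p).

    Returns the coefficient vector [c, a_1, ..., a_p].
    """
    n = len(y)
    # Column of ones for the intercept, then one column per lag.
    cols = [np.ones(n - p)] + [y[p - k: n - k] for k in range(1, p + 1)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef
```

In the hybrid scheme, a nonlinear component would then model what this linear part leaves unexplained.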

  相似文献   

8.
As one of the four major industrial raw materials in the world, natural rubber is closely related to the national economy and people's livelihood. Analysis of the natural rubber price and its volatility can give hedging guidance to manufacturers and provide investors with uncertainty and risk information to reduce investment losses. To effectively analyse and forecast the natural rubber price and volatility, this paper constructs a hybrid model that integrates the bidirectional gated recurrent unit and variational mode decomposition for short-term prediction of natural rubber futures on the Shanghai Futures Exchange. In the data preprocessing stage, the time series is decomposed by variational mode decomposition to capture tendency and mutability information. The bidirectional gated recurrent unit is introduced to produce one-day-ahead predictions of the closing price and the 7-day volatility of the natural rubber futures. The experimental results demonstrate that: (a) variational mode decomposition is an effective method for time series analysis, which can capture the information closely related to market fluctuations; (b) the bidirectional neural network structure can significantly improve model performance both in fitting and in trend prediction; (c) a correspondence exists between the predicted targets, i.e., the price and volatility, and the intrinsic modes, which manifests as the impact of long-term and short-term characteristics on the targets at different time scales. As the time scale of the forecasting targets changed, the degree of matching between the forecasting target and the mode sub-sequences also varied.
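One of the two forecast targets, the 7-day volatility, can be computed directly from closing prices. A common definition, assumed here, is the rolling standard deviation of log returns over a trailing window.

```python
import numpy as np

def rolling_volatility(prices, window=7):
    """Rolling realised volatility of a price series: the standard
    deviation of log returns over the trailing `window` observations."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return np.array([r[i - window + 1: i + 1].std(ddof=1)
                     for i in range(window - 1, len(r))])
```

The model would then be trained to predict the next value of this series one day ahead.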

9.
The fuzzy min–max neural network classifier is a supervised learning method that takes the hybrid neural-network and fuzzy-systems approach. All input variables in the network are required to be continuously valued, which can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with such variables is to replace the categorical values by numerical ones and treat them as if they were continuously valued, but this implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method that extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture, providing greater flexibility and wider applicability. The proposed method is then applied to missing data imputation in voting intention polls. The micro data (the set of the respondents' individual answers to the questions) of this type of poll are especially suited for evaluating the method, since they include a large number of numerical and categorical attributes.
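For the continuous case, the heart of the classifier is a hyperbox membership function. A simplified, Simpson-style version is sketched below; the sensitivity value and the final clipping are assumptions, and the paper's categorical extension adds new fuzzy sets on top of this.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy membership of point x in the hyperbox with min vertex v and
    max vertex w: exactly 1 inside the box and decaying with distance
    outside it, with decay rate controlled by gamma."""
    x, v, w = (np.asarray(a, dtype=float) for a in (x, v, w))

    def ramp(z):
        # Penalty growing linearly with the violation, capped at 1.
        return np.clip(gamma * z, 0.0, 1.0)

    return float(max(0.0, np.mean(1.0 - ramp(x - w) - ramp(v - x))))
```

Classification assigns a point to the class of the hyperbox in which its membership is largest; learning expands or contracts the boxes.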

10.
In this paper we develop a unified difference-spectral method for stably solving time–space fractional sub-diffusion and super-diffusion equations. Based on the equivalence between Volterra integral equations and fractional ordinary differential equations with initial conditions, the proposed method is constructed by combining the spectral Galerkin method in space with the fractional trapezoid formula in time. Numerical experiments are carried out to verify the effectiveness of the method and demonstrate that the unified method achieves spectral accuracy in space and second-order accuracy in time for both kinds of time–space fractional diffusion equations.
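The time discretisation rests on the fractional (product) trapezoidal formula for the Riemann–Liouville integral. A sketch of that quadrature alone, using the standard Diethelm-type weights rather than the paper's full difference-spectral scheme:

```python
import math
import numpy as np

def frac_integral_trapezoid(f, alpha, T, n):
    """Product trapezoidal approximation of the Riemann-Liouville
    integral I^alpha f(T) on a uniform grid with n steps.

    The rule integrates the piecewise-linear interpolant of f against
    the weakly singular kernel exactly, so it is exact for linear f and
    reduces to the classical trapezoidal rule when alpha = 1.
    """
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    a = np.empty(n + 1)
    a[0] = (n - 1.0) ** (alpha + 1) - (n - alpha - 1.0) * n ** alpha
    j = np.arange(1.0, n)
    a[1:n] = ((n - j + 1) ** (alpha + 1) - 2 * (n - j) ** (alpha + 1)
              + (n - j - 1) ** (alpha + 1))
    a[n] = 1.0
    return h ** alpha / math.gamma(alpha + 2) * (a @ f(t))
```

For smooth f this quadrature is second-order accurate in h, matching the temporal accuracy reported for the unified method.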

11.
We provide a space–time adaptation procedure for the approximation of the Shallow Water Equations (SWE). The approach relies on a recovery-based estimator for the global discretization error in which the space and time contributions are kept separate. In particular, we propose an ad hoc procedure for recovering the time derivative of the numerical solution and then employ this reconstruction to define the error estimator in time. For the space adaptation, we start from an anisotropic error estimator able to automatically identify the density, shape and orientation of the elements of the computational mesh. The proposed global error estimator turns out to share the good properties of each recovery-based error estimator. The whole adaptive procedure is then combined with a suitable stabilized finite element SWE solver. Finally, the reliability of the coupled solution–adaptation procedure is successfully assessed on two unsteady test cases of interest for hydraulics applications.
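The recovery idea can be shown in one dimension: recover a nodal gradient by averaging adjacent element slopes, then use its mismatch with the raw piecewise-constant gradient as an element error indicator. This is a generic Zienkiewicz–Zhu-style sketch, not the paper's anisotropic space–time estimator.

```python
import numpy as np

def recovery_indicator(x, u):
    """Element-wise recovery-based error indicator for piecewise-linear
    data u on the 1-D mesh x.

    The raw FE gradient is piecewise constant; the recovered gradient is
    its nodal average. Their weighted difference vanishes wherever u is
    globally linear, flagging elements where resolution is insufficient.
    """
    du = np.diff(u) / np.diff(x)           # raw element gradient
    g = np.empty(len(x))
    g[0], g[-1] = du[0], du[-1]
    g[1:-1] = 0.5 * (du[:-1] + du[1:])     # recovered nodal gradient
    gm = 0.5 * (g[:-1] + g[1:])            # recovered gradient per element
    return np.abs(gm - du) * np.sqrt(np.diff(x))
```

An adaptive loop would refine the elements with the largest indicator values and coarsen the rest.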

12.
Space–time receive beamforming is a promising technique to suppress co-channel multiuser interference in orthogonal frequency division multiplexing (OFDM) systems. However, the performance of existing receive beamforming methods is highly sensitive to the memory length of the time-domain equalizer. In this paper, we propose a new robust space–time receive beamforming solution to combat co-channel multiuser interference in OFDM systems. The proposed method performs subspace analysis of the received space–time snapshots from both the training block and a few data blocks, and constrains the space–time beamforming vectors to lie in the corresponding signal subspace. A few beamforming vectors are then carefully designed to suppress interference as well as to provide a potential multi-branch diversity gain. Numerical results corroborate the proposed studies: the proposed method not only outperforms existing competitors but also is quite insensitive to the memory length of the time-domain equalizer.
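The subspace constraint can be sketched as: estimate the signal subspace from the snapshot matrix via an SVD, then project the beamforming vector onto it. The matrix shapes are assumptions; the actual construction of the snapshots from training and data blocks is not shown.

```python
import numpy as np

def project_to_signal_subspace(snapshots, w, rank):
    """Constrain a beamforming vector to the estimated signal subspace.

    `snapshots` is a (channels x snapshots) space-time matrix; its leading
    `rank` left singular vectors span the signal subspace, and w is
    replaced by its orthogonal projection onto that span.
    """
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Us = U[:, :rank]
    return Us @ (Us.conj().T @ w)
```

Restricting the beamformer to this subspace is what removes the dependence on components the equalizer memory cannot represent.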

13.
Computing and Visualization in Science - We consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in...

14.
In this paper, we propose a fast algorithm for the efficient and accurate solution of space–time fractional diffusion equations defined on a rectangular domain. The spatial discretization uses the central finite difference scheme and the matrix transfer technique. Due to its nonlocality, numerical discretization of the spectral fractional Laplacian (−Δ)^{α/2} results in a large dense matrix. This causes considerable challenges not only for storing the matrix but also for computing matrix–vector products in practice. By exploiting the compact structure of the discrete system and the discrete sine transform, our algorithm avoids storing the large matrix arising from the discretized nonlocal operator and also significantly reduces the computational cost. We then use the Laplace transform method for time integration of the semi-discretized system, and a weighted trapezoidal method to numerically compute the convolutions needed in the resulting scheme. Various experiments demonstrate the efficiency and accuracy of our method.
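In one dimension the mechanism is easy to see: the discrete Dirichlet Laplacian is diagonalised by the sine basis, so (−Δ)^{α/2}u can be applied by a sine transform, a diagonal scaling and an inverse transform, without ever forming the dense matrix power. The sketch uses an explicit (slow) sine-transform matrix where a fast DST would be used in practice.

```python
import numpy as np

def fractional_laplacian_1d(u, alpha, h):
    """Apply (-Delta_h)^(alpha/2) of the 1-D Dirichlet finite-difference
    Laplacian to u via its sine eigenbasis.

    Eigenvalues of the tridiagonal (2, -1) Laplacian are
    (2 - 2 cos(k pi / (n+1))) / h^2 with sine eigenvectors, so a matrix
    power becomes a diagonal power in transform space.
    """
    n = len(u)
    k = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h ** 2
    # Orthogonal, symmetric DST-I matrix (a fast DST replaces this product).
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(k, k) / (n + 1))
    return S @ (lam ** (alpha / 2.0) * (S @ u))
```

With a fast transform this costs O(n log n) time and O(n) storage per application instead of O(n^2) for the dense matrix.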

15.
This paper is concerned with the analysis of a new stable space–time finite element method (FEM) for the numerical solution of parabolic evolution problems in moving spatial computational domains. The discrete bilinear form is elliptic on the FEM space with respect to a discrete energy norm. This property, together with a corresponding boundedness property, consistency, and approximation results for the FEM spaces, yields an a priori discretization error estimate with respect to the discrete norm. Finally, we confirm the theoretical results with numerical experiments in spatially moving domains.

16.
In this paper, a Galerkin finite element scheme to approximate the time–space fractional diffusion equation is studied. First, the fractional diffusion equation is transformed into a fractional Volterra integro-differential equation, and a second-order fractional trapezoidal formula is used to approximate the time fractional integral. A Galerkin finite element method is then introduced in the space direction, and the semi-discrete and fully discrete schemes are given separately. The stability of the semi-discrete scheme is analysed in detail, and convergence analyses of both the semi-discrete and fully discrete schemes are given in detail. Finally, two numerical examples are presented to demonstrate the effectiveness of the proposed method.

17.
Turning-point prediction schemes, which analyse future time series behaviour from past and present information, are widely employed in financial applications. In this research, a novel approach to identifying turning points of the trading signal using a fuzzy rule-based model is presented. The Takagi–Sugeno fuzzy rule-based model (the TS model) can accurately identify daily stock trading signals from sets of technical indicators, according to the trading signals learned by a support vector regression (SVR) technique. In addition, when new trading points are created, the structure and parameters of the TS model are inherited and updated. To verify the effectiveness of the proposed TS fuzzy rule-based modeling approach, we acquired stock trading data from the US stock market. The TS fuzzy approach with dynamic threshold control is compared with a conventional linear regression model and artificial neural networks. Our results indicate that the TS fuzzy model not only yields more profit than the other approaches but also enables stable dynamic identification of the complexities of the stock forecasting system.
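The targets such a model learns are turning points of the price series. A minimal labelling rule, assumed here as strict local extrema over a symmetric window, can be written as:

```python
def turning_points(prices, window=1):
    """Label each index of a price series: +1 for a strict local peak,
    -1 for a strict local trough, 0 otherwise, comparing against
    `window` neighbours on each side."""
    labels = [0] * len(prices)
    for i in range(window, len(prices) - window):
        neighbours = list(prices[i - window: i]) + list(prices[i + 1: i + window + 1])
        if all(prices[i] > p for p in neighbours):
            labels[i] = 1       # sell signal candidate
        elif all(prices[i] < p for p in neighbours):
            labels[i] = -1      # buy signal candidate
    return labels
```

In practice these labels would be smoothed by a dynamic threshold so that tiny oscillations do not generate trades.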

18.
Engineering with Computers - In this study, a new fractal-fractional (FF) derivative is defined by coupling the local conformable derivative and non-local Caputo fractional derivative. Using the...

19.
This article introduces four new modes of principal component analysis (PCA) to investigate space–time variability in an image time series. Using the concept of tensors, an image time series can be understood as a space–time cube and analysed in six different orientations by grouping the basic elements (voxels) of the cube across different dimensions. Voxels grouped across columns or rows of the cube produce vectors, called profiles; voxels grouped across different planes produce matrices, called slices. The traditional S-mode and T-mode PCA are thus the profile mode across time and the slice mode across space, respectively. This research introduces two profile-mode orientations, across longitude and across latitude, and two slice-mode orientations, across longitude–time and latitude–time. The research shows that a more complete understanding of the spatio-temporal variability in a data set can be achieved by investigating these different orientation modes, as individual modes can capture variability in a particular dimension of a spatio-temporal data set. A case study was carried out using weekly anomalies of the AVISO (Archiving, Validation and Interpretation of Satellite Oceanographic data) sea surface height product filtered for tropical instability waves (TIWs) over a three-year period (1997–1999) in the tropical Pacific region. The results show that the PCA modes using longitude and latitude–time as the dimensions of variability were able to capture TIW and barotropic Rossby wave propagation across the equatorial Pacific, while the other two orientation modes were able to detect the dominant latitudinal locations of TIWs.
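Any of the orientation modes reduces to the same operation: unfold the space–time cube along the chosen axis and run PCA on the unfolded matrix. A sketch using the SVD is below; the axis ordering and the column-centering convention are assumptions.

```python
import numpy as np

def mode_pca(cube, mode):
    """PCA of a 3-D space-time cube in one orientation mode.

    Unfolds the array along axis `mode` (rows = slices of that axis),
    centres the columns, and returns the first-PC scores together with
    all singular values (whose squares give the explained variances).
    """
    X = np.moveaxis(cube, mode, 0).reshape(cube.shape[mode], -1)
    X = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return s[0] * U[:, 0], s
```

Running the same function with `mode` set to each axis in turn yields the profile and slice modes the article compares.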

20.
International Journal of Computer Mathematics, 2012, 89(8): 1683–1712
Subdivision schemes are multi-resolution methods used in computer-aided geometric design to generate smooth curves or surfaces. We propose two new models for data analysis and compression based on subdivision schemes: (a) the ‘subdivision regression’ model, which can be viewed as a special multi-resolution decomposition, and (b) the ‘tree regression’ model, which allows the identification of certain patterns within the data. The paper focuses on analysis and mentions compression as a by-product. We suggest applying certain criteria to the output of these models as features for data analysis. Unlike existing multi-resolution analysis methods, these new models and criteria provide data features related to the schemes (the filters) themselves, based on a decomposition of the data into different resolution levels, and they also allow analysing data from non-smooth functions and working with varying-resolution subdivision rules. Finally, applications of these methods to music analysis and other potential uses are mentioned.
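A classic example of such a scheme is Chaikin's corner-cutting rule, shown below for an open 2-D polyline. This is a generic illustration of subdivision itself, not the paper's regression or tree models.

```python
def chaikin(points, levels=1):
    """Chaikin corner-cutting subdivision of an open 2-D polyline.

    Each edge contributes two new points at parameters 1/4 and 3/4, so
    the point count doubles per level; the limit curve is a smooth
    quadratic B-spline through the cut corners.
    """
    pts = [tuple(map(float, p)) for p in points]
    for _ in range(levels):
        refined = [pts[0]]                       # keep the first endpoint
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        refined.append(pts[-1])                  # keep the last endpoint
        pts = refined
    return pts
```

The rule is affine-invariant, so collinear input points stay on their line, which is also a convenient sanity check.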
