Similar literature
 20 similar records found (search time: 31 ms)
1.
This paper improved the quality of time-series data sets through three methods: double-star MODIS data combination, linear interpolation, and HANTS smoothing. In this study, we used random forest classification and analyzed the impact of time-series data set construction quality on classification accuracy through evaluating the accuracy of the classification results. Results showed that the double-star data improved the temporal resolution of the time-series data set, accurately depicted coverage change, and provided the basis for subsequent processing; linear interpolation improved the quality of pixel values and reduced the influence of cloud and rain; HANTS smoothing removed outliers, smoothed the data, highlighted curve features, and reduced classification complexity. After improving the quality of the time-series data set, the overall classification accuracy increased from 84.32% to 90.75%, and the Kappa coefficient increased from 79.86% to 88.16%. In short, when using time-series data for land-cover classification, the quality of the time-series data set should first be improved, so as to eliminate outliers, truly reflect the phenological features of the surface cover, and improve the classification accuracy of the results.
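The linear-interpolation step described above can be sketched as follows; this is a minimal illustration of gap-filling over cloud-contaminated samples, not the paper's actual pipeline (the function name, series values, and cloud mask are hypothetical):

```python
import numpy as np

def fill_gaps_linear(ndvi, valid):
    """Linearly interpolate cloud-contaminated samples in an NDVI time series.

    ndvi  : 1-D array of NDVI values (cloudy samples may hold any value)
    valid : boolean mask, True where the observation is cloud-free
    """
    t = np.arange(len(ndvi), dtype=float)
    # np.interp rebuilds every time step from the surrounding valid samples
    return np.interp(t, t[valid], ndvi[valid])

# Hypothetical 8-sample series with two cloudy observations (indices 2 and 5)
series = np.array([0.2, 0.3, -0.1, 0.5, 0.6, 0.0, 0.8, 0.7])
mask = np.array([True, True, False, True, True, False, True, True])
filled = fill_gaps_linear(series, mask)
```

The cloudy samples are replaced by the straight line between their cloud-free neighbours, while valid observations pass through unchanged; a HANTS-style harmonic fit would then be applied to the filled series.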

2.
In the simulation of dynamical systems exhibiting an ultraslow decay, differential equations of fractional order have been successfully proposed. In this paper we consider the problem of numerically solving fractional differential equations by means of a generalization of k-step Adams–Moulton multistep methods. Our investigation is focused on stability properties, and we determine intervals for the fractional order for which the methods are at least A(π/2)-stable. Moreover, we prove the A-stable character of the k-step methods for k = 0 and k = 1.

3.
We consider methods that are the inverses of explicit Runge–Kutta methods. Such methods have certain advantages, while their disadvantage is the low (first) stage order, which reduces the accuracy and the actual order when solving stiff and differential-algebraic equations. New methods possessing the properties of methods of higher stage order are proposed. Numerical experiments show that the proposed methods avoid this order reduction.

4.
Where are linear feature extraction methods applicable?
A fundamental problem in computer vision and pattern recognition is to determine where and, most importantly, why a given technique is applicable. This is necessary not only because it helps us decide which technique to apply in each case; knowing why current algorithms cannot be applied also facilitates the design of new algorithms robust to such problems. In this paper, we report on a theoretical study that demonstrates where and why generalized eigenvalue-based linear equations do not work. In particular, we show that when the smallest angle between the ith eigenvector given by the metric to be maximized and the first i eigenvectors given by the metric to be minimized is close to zero, the results are not guaranteed to be correct. Several properties of such models are also presented. For illustration, we concentrate on the classical applications of classification and feature extraction. We also show how our findings can be used to design more robust algorithms. We conclude with a discussion of the broader impact of our results.
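The "smallest angle" criterion can be illustrated numerically. Below is a minimal sketch that measures the angle between a vector and a subspace via orthogonal projection; the function name and the 3-D example are hypothetical, chosen only to show the two extreme regimes:

```python
import numpy as np

def min_angle_to_subspace(v, U):
    """Smallest angle (radians) between vector v and span(columns of U),
    computed from the norm of the orthogonal projection of v onto the span."""
    Q, _ = np.linalg.qr(U)
    proj = Q @ (Q.T @ v)
    cos_theta = np.linalg.norm(proj) / np.linalg.norm(v)
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))

# Hypothetical 3-D illustration: a vector inside span{e1, e2} is at angle 0,
# while e3 is orthogonal to that span (angle pi/2). Per the abstract, the
# near-zero regime is where the method's guarantees break down.
U = np.eye(3)[:, :2]
angle_in = min_angle_to_subspace(np.array([1.0, 2.0, 0.0]), U)
angle_out = min_angle_to_subspace(np.array([0.0, 0.0, 1.0]), U)
```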

5.
6.
The results of a numerical investigation of a number of techniques for stability analysis of non-linear systems are presented. The methods compared are the Ingwerson and Zubov procedures for construction of Liapounov functions, a number of methods for obtaining optimum Liapounov functions, tracking functions, the alternating extreme radius path, and the method of Luus and Lapidus.

7.
Formal methods have been seen as ways to improve the quality of the software development process. However, apart from the interesting idea of applying mathematics and logic directly to software development activities, there is an urgent need to create a scientific foundation on which today's software technology practice can be based. We discuss the progress in formal methods and their influence in forming a scientific foundation for software technology. We argue that logic and mathematics provide a scientific basis for software engineering, which leads to a deeper understanding of the development process and more powerful tool support. We point out how software engineering methods can be backed up by mathematics. We discuss the benefits of such a mathematical foundation, which go far beyond the application of so-called formal methods for the specification and verification of software.

8.
We consider the monotone composite variational inequality (CVI), where the underlying mapping is formed as the sum of two monotone mappings. We combine the forward–backward and descent-direction ideas, and thus present a unified algorithmic framework of forward–backward-based descent methods for solving the CVI. A new iterate of such a method is generated in a prediction–correction fashion: the predictor is yielded by the forward–backward method and is then corrected by a descent step. We derive some implementable forward–backward-based descent algorithms for concrete cases of the CVI, and verify their numerical efficiency via preliminary numerical experiments.
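A minimal instance of the forward–backward idea, for the special case where one mapping is the gradient of a smooth quadratic and the other is the subdifferential of the ℓ1 norm (so the backward step is soft-thresholding); this sketch omits the paper's descent-step correction, and all names and values are illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    # Backward (proximal) step for the l1 term
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(b, lam, step=0.5, iters=200):
    """Solve min_x 0.5*||x - b||^2 + lam*||x||_1 by forward-backward splitting."""
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = x - b                                      # forward step (smooth part)
        x = soft_threshold(x - step * grad, step * lam)   # backward step (l1 part)
    return x

x = forward_backward(np.array([3.0, 0.5, -2.0]), lam=1.0)
```

For this separable problem the exact minimizer is the soft-thresholded data, so the iteration can be checked against the closed form; a full CVI solver would replace the gradient by a general monotone mapping and append the descent correction.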

9.
Information Systems, 2002, 27(4): 245–275
Entity-relationship (ER) schemas include cardinality constraints that restrict the dependencies among entities within a relationship type. Cardinality constraints have a direct impact on application maintenance, since insertions or deletions of entities or relationships might affect related entities. Indeed, maintenance of a system or of a database can be strengthened to enforce consistency with respect to the cardinality constraints in a schema. Yet, once an ER schema is translated into a logical database schema, or translated within a system, the direct correlation between the cardinality constraints and maintenance transactions is lost, since the components of the ER schema might be decomposed among those of the logical database schema or the target system.

In this paper, a full solution to the enforcement of cardinality constraints in EER schemas is given. We extend the enhanced ER (EER) data model with structure-based update methods that are fully defined by the cardinality constraints. The structure methods are provably terminating and cardinality faithful, i.e., they do not insert new inconsistencies and can only decrease existing ones. A refined approach towards measuring the cardinality consistency of a database is introduced. The contribution of this paper lies in the automatic creation of update methods and in building the formal basis for proving their correctness.

10.
This paper focuses on modeling pronunciation variation in two different ways: data-derived and knowledge-based. The knowledge-based approach consists of using phonological rules to generate variants. The data-derived approach consists of performing phone recognition, followed by smoothing using decision trees (D-trees) to alleviate some of the errors in the phone recognition. Using phonological rules led to a small improvement in WER; a data-derived approach in which the phone recognition was smoothed using D-trees prior to lexicon generation led to larger improvements compared to the baseline. The lexicon was employed in two different recognition systems: a hybrid HMM/ANN system and a HMM-based system, to ascertain whether pronunciation variation was truly being modeled. This proved to be the case, as no significant differences were found between the results obtained with the two systems. A comparison between the knowledge-based and data-derived methods showed that 17% of variants generated by the phonological rules were also found using phone recognition, and this increases to 46% when the phone recognition output is smoothed by using D-trees.

11.
This paper discusses predictor–corrector (PC) iteration schemes based on direct collocation-based Runge–Kutta–Nyström (RKN) corrector methods for solving nonstiff initial-value problems (IVPs) for systems of special second-order differential equations y''(t) = f(y(t)). Our approach is to regard the well-known parallel-iterated RKN (PIRKN) methods as PC iteration processes in which the simple, low-order last-step-value predictors are replaced with high-order Adams-type predictors. Moreover, the parameters of the new direct collocation-based RKN corrector methods are chosen in such a way that the convergence rate of the PC iteration processes is optimized. In this way, we obtain parallel PC methods with fast convergence and highly accurate predictions. Application of the resulting parallel PC methods to a few widely used test problems reveals that the sequential costs are much reduced compared with the parallel and sequential explicit RKN methods from the literature.

12.
This paper deals with the convergence and stability of exponential Runge–Kutta methods of collocation type for delay differential equations. It is proved that these methods converge at least with their stage order. Moreover, a sufficient condition for numerical stability is provided. Finally, some numerical examples are presented to illustrate the main conclusions.

13.
M. Arioli, E. Noulard, A. Russo. Calcolo, 2001, 38(2): 97–112
We show that, when solving a linear system with an iterative method, it is necessary to measure the error in the space in which the residual lies. We present examples of linear systems which emanate from the finite element discretization of elliptic partial differential equations, and we show that, when we measure the residual in H^{-1}(Ω), we obtain a true evaluation of the error in the solution, whereas measuring the same residual with an algebraic norm can give misleading information about the convergence. We also state a theorem of functional compatibility that proves the existence of perturbations such that the approximate solution of a PDE is the exact solution of the same PDE perturbed. Received: March 2000 / Accepted: October 2000
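The point about misleading algebraic norms can be reproduced with a 1-D finite-difference analogue (a sketch, not the paper's finite element setting; grid size and perturbation amplitude are arbitrary): two approximate solutions with identical Euclidean solution error produce Euclidean residual norms that differ by roughly the condition number of the matrix.

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
# Tridiagonal discrete Laplacian (scaled), a stand-in for an elliptic operator
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Unit-norm smooth (mode 1) and oscillatory (mode n) eigenvectors of A
j = np.arange(1, n + 1)
low = np.sin(np.pi * 1 * j / (n + 1));  low /= np.linalg.norm(low)
high = np.sin(np.pi * n * j / (n + 1)); high /= np.linalg.norm(high)

# Perturb the solution by 0.1 along each mode: same Euclidean solution error,
# but the residual r = A e scales with the mode's eigenvalue
r_low = np.linalg.norm(A @ (0.1 * low))
r_high = np.linalg.norm(A @ (0.1 * high))
ratio = r_high / r_low   # ~ cond(A): the 2-norm of r says little about the error
```

The algebraic residual norm thus ranges over several orders of magnitude for the same solution error, which is exactly why the residual should be measured in the dual space where it lives.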

14.
The non-overlapping Domain Decomposition (DD) method is a powerful tool for coupling Finite Element (FE) and Boundary Element (BE) methods. Moreover, it provides a natural basis for constructing efficient parallel solvers. However, both the efficiency and the robustness of DD solvers depend heavily on the underlying decomposition of the domain of interest into subdomains. In this paper, we introduce the Adaptive Domain Decomposition Preprocessor ADDPre, which realizes an automatic decomposition of the computational domain into p subdomains, where p is the number of processors to be used. We discuss the codes involved, the algorithms on which they are based, and the data formats used for describing the decomposition of the problem. Numerical examples demonstrating the performance of the preprocessor are presented. Received: 20 January 1999 / Accepted: 12 May 1999

15.
Using the W-transformation of Hairer and Wanner, high-order implicit Runge–Kutta (RK) methods based on Lobatto quadrature are studied. The methods constructed and presented include and extend the existing Lobatto III type RK methods. General Lobatto III methods with two to four stages and three parameters α, β, γ are constructed, and a particular one-parameter family of Lobatto IIIC(γ) methods is also presented.
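For reference, the classical two-stage Lobatto IIIC tableau (which three-parameter families of this kind generalize) can be checked against the classical order conditions; the sketch below confirms order 2s − 2 = 2 for s = 2:

```python
import numpy as np

# Two-stage Lobatto IIIC Butcher tableau (standard reference values)
c = np.array([0.0, 1.0])
b = np.array([0.5, 0.5])
A = np.array([[0.5, -0.5],
              [0.5,  0.5]])

order1 = b.sum()          # should equal 1   (order-1 condition)
order2 = b @ c            # should equal 1/2 (order-2 condition)
order3 = b @ c**2         # should differ from 1/3: the method is order 2, not 3
row_sums = A.sum(axis=1)  # simplifying condition C(1): A·1 = c
```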

16.
This paper examines an analogue method for power spectral density estimation which employs an asymmetrical modulation. Approximate expressions for the expected value and for the dispersion of the estimate thus obtained are worked out by means of simplifying hypotheses.

A comparison is then drawn between the method under examination and another with symmetrical modulation; it is shown that in some cases the former is advantageous in that, for the same estimate dispersion, it requires a smaller number of components.

17.
X. Li, Z. Wang, B. Liu, L. Wu. 《智能系统学报》 (CAAI Transactions on Intelligent Systems), 2012 (收录汇总): 1143–1155
Mahjong and its many variants have complex rules; building a high-level Mahjong game artificial intelligence (AI) algorithm and its test environment is therefore challenging. Through an analysis of the relevant research literature on Mahjong games, this paper identifies two types of Mahjong AI construction methods: knowledge-based and data-based. The advantages and disadvantages of each typical method are analyzed, with emphasis on the construction of Suphx. The problems and challenges encountered in constructing Mahjong AI are identified, suggesting the need to apply experience replay, hierarchical reinforcement learning, curiosity models, opponent models, meta-learning, transfer learning, and curriculum learning to the AI algorithm optimization of Mahjong games, and to construct diversified Mahjong AI evaluation indicators, general competition platforms, and high-quality data sets. These are all promising research directions for the future. © 2023, Editorial Department of CAAI Transactions on Intelligent Systems. All rights reserved.
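Of the techniques listed, experience replay is the simplest to sketch; a minimal uniform-sampling buffer (class name and capacity are illustrative, not taken from Suphx or the survey) might look like:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer of the kind suggested above for
    Mahjong AI training (names and capacity are illustrative)."""
    def __init__(self, capacity=10000):
        # deque with maxlen silently evicts the oldest transitions
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of consecutive hands
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(150):
    buf.push(t, 0, 0.0, t + 1, False)
batch = buf.sample(32)
```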

18.
19.
The use of satellite remote sensing for mapping snow-cover characteristics has a long history reaching back to the 1960s. Because snow cover plays an important role in the Earth's climate system, it is necessary to map snow-cover extent and snow mass at both high temporal and high spatial resolution, a task that can only be achieved with remotely sensed data. Many different sensors have been used in the past decades, with various algorithms and respective accuracies. This article provides an overview of the most common methods. Their limitations, advantages, and drawbacks are illustrated, while error sources and strategies to ease their impact are reviewed. Beginning with a short summary of the physical and spectral properties of snow, we delineate methods to map snow extent from the reflective part of the spectrum, algorithms to estimate snow water equivalent (SWE) from passive microwave (PM) data, and the combination of both spectra. By the end, the reader should have an overarching overview of what is currently possible and of the difficulties that can occur in snow-cover mapping from the reflective and microwave parts of the spectrum.
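A common building block of the reflective-spectrum methods surveyed here is the Normalized Difference Snow Index (NDSI). A minimal sketch follows; the sample reflectances are hypothetical, and the 0.4 threshold is the commonly cited default that should be tuned per sensor:

```python
import numpy as np

def ndsi_snow_mask(green, swir, threshold=0.4):
    """Normalized Difference Snow Index: snow reflects strongly in the green
    band and absorbs in the shortwave infrared, so high NDSI indicates snow."""
    ndsi = (green - swir) / (green + swir + 1e-12)  # epsilon avoids 0/0
    return ndsi, ndsi > threshold

# Hypothetical reflectances: one snow pixel, one bare-ground pixel
green = np.array([0.8, 0.25])
swir = np.array([0.1, 0.20])
ndsi, mask = ndsi_snow_mask(green, swir)
```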

20.
Computers & Structures, 2001, 79(22–25): 2031–2037
The development of flow instabilities due to high-Reynolds-number flow in artificial heart-valve geometries induces high strain rates and stresses, and often leads to hemolysis and related, highly undesired effects. Geometric and functional optimization of artificial heart valves is therefore mandatory. In addition to experimental work in this field, it is meanwhile possible to obtain increasing insight into the flow dynamics by computer simulation of refined model problems. Here we present two-dimensional simulation results for the coupled fluid–structure problem defined by a model geometry of an artificial heart valve with moving leaflets, exposed to a channel flow driven by transient boundary conditions representing a physiologically relevant regime. A modified lattice-Boltzmann approach is used to solve the coupled problem.
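To illustrate the collision–streaming structure only, here is a minimal single-phase D2Q9 lattice-Boltzmann BGK step on a periodic grid; the modified approach of the paper, with moving leaflets and fluid–structure coupling, goes far beyond this sketch (grid size and relaxation time are arbitrary):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """BGK equilibrium distribution on the D2Q9 lattice."""
    feq = np.empty((9,) + rho.shape)
    usq = ux**2 + uy**2
    for i in range(9):
        eu = e[i, 0] * ux + e[i, 1] * uy
        feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

def lbm_step(f, tau=0.8):
    """One BGK collision + periodic streaming step; conserves total mass."""
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # collision
    for i in range(9):                                     # streaming
        f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
    return f

# Uniform density with a small rightward velocity on an 8x8 periodic grid
f = equilibrium(np.ones((8, 8)), 0.05 * np.ones((8, 8)), np.zeros((8, 8)))
f = lbm_step(f)
```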


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号