Similar Documents
A total of 10 similar documents were found (search time: 154 ms).
1.
Projection methods are an important class of model order reduction techniques: their computation is stable and they are easy to implement, yet good time-domain error estimates for them are rare in theory. This paper proposes a time-domain error estimation method for projection-based model order reduction built on a small-sample estimation procedure. The method first decomposes the error produced during the reduction into two parts and then estimates each part with the small-sample estimation technique. Small-sample error estimation analyses are carried out for both linear and nonlinear input-output systems. In addition, the method can analyze perturbation problems for linear systems, and further numerical examples verify its effectiveness.
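For illustration only, the following Python sketch reduces a small linear system with a projection basis and splits the time-domain error e(t) = x(t) - V x_r(t) into a component inside the reduced subspace and a component orthogonal to it; the paper's actual decomposition and its small-sample estimator are not reproduced, and the matrices, basis size, and input signal below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, r, T, dt = 50, 6, 400, 0.01
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable full-order system matrix (assumed)
B = rng.standard_normal((n, 1))

def u(t):
    return np.array([np.sin(t)])                      # scalar input signal (assumed)

# Snapshot-based (POD-like) orthonormal projection basis V.
X, x = np.zeros((n, T)), np.zeros(n)
for k in range(T):
    x = x + dt * (A @ x + B @ u(k * dt))
    X[:, k] = x
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]

Ar, Br = V.T @ A @ V, V.T @ B                         # Galerkin-reduced operators

x, xr = np.zeros(n), np.zeros(r)
err_inside, err_orth = [], []
for k in range(T):
    t = k * dt
    x = x + dt * (A @ x + B @ u(t))                   # full-order trajectory
    xr = xr + dt * (Ar @ xr + Br @ u(t))              # reduced-order trajectory
    e = x - V @ xr                                    # time-domain reduction error
    e_in = V @ (V.T @ e)
    err_inside.append(np.linalg.norm(e_in))           # one natural split: error inside span(V)
    err_orth.append(np.linalg.norm(e - e_in))         # and error orthogonal to span(V)

print(max(err_inside), max(err_orth))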

2.
With the rise in the popularity of web-based education, there is a pressing need for the design of web-based systems that are domain-specific. This need is particularly acute for the domain of second language education, where generic web-based systems fall short of fulfilling the potential of the Internet for meeting the particular challenges faced by language learners and teachers. A novel interactive online environment is described which integrates the potential of computers, the Internet, and linguistic analysis to address the highly specific needs of second language composition classes. The system accommodates learners, teachers, and researchers. A crucial consequence of the interactive nature of this system is that users actually create information through their use, and this information enables the system to improve with use. Specifically, the essays written by users and the comments given by teachers are archived in a searchable online database. Learners can do pinpoint searches of this data to understand their individual persistent difficulties. Teachers can do the same in order to discover these difficulties for individual learners and for a class as a whole. The architecture of the system makes possible a novel approach to corpus analysis of learner errors. Teachers' annotations of learner errors are exploited as bootstraps which can lead to the incremental uncovering of errors without the need to heavily error-tag the learner corpus. Error analysis then feeds the design of online help content.
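As a purely illustrative sketch of the pinpoint searches described above, the Python fragment below stores teacher annotations in a small in-memory archive and aggregates them per learner and for the whole class; the abstract does not give the system's data schema, so every field name and record here is hypothetical.

from collections import Counter

annotations = [  # (learner, essay_id, error_category, teacher_comment) -- hypothetical schema
    ("kim", 1, "article",    "missing 'the' before noun"),
    ("kim", 2, "article",    "unnecessary 'a'"),
    ("lee", 1, "verb-tense", "past tense needed"),
    ("kim", 3, "verb-tense", "tense shift mid-sentence"),
]

def persistent_difficulties(learner):
    """Count annotation categories for one learner (their recurring trouble spots)."""
    return Counter(cat for who, _, cat, _ in annotations if who == learner)

def class_profile():
    """Aggregate error categories over the whole class."""
    return Counter(cat for _, _, cat, _ in annotations)

print(persistent_difficulties("kim"))   # e.g. article errors dominate for this learner
print(class_profile())                  # class-wide distribution of error categories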

3.
Pareto evolutionary neural networks
For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimization. This is commonly the case irrespective of any other end user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimization. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case. This study constructs a novel methodology for implementing multiobjective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework, the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs, and degree of connectivity, and maintains a set of the Pareto-optimal ENNs that it discovers. New generalization methods to deal with the unique properties of multiobjective error minimization that are not apparent in the uni-objective case are presented and compared on synthetic data, with a novel method based on bootstrapping of the training data shown to significantly improve generalization ability. Finally, experimental evidence is presented demonstrating the general application potential of the framework by generating populations of ENNs for forecasting 37 different international stock indexes.
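The evolutionary operators themselves are specific to the paper, but the Pareto bookkeeping the method relies on can be sketched directly. The hedged Python fragment below maintains an archive of non-dominated models scored on several error measures; the two error measures and the model labels are illustrative stand-ins, not the paper's objectives.

import numpy as np

def dominates(a, b):
    """True if error vector a is no worse than b everywhere and strictly better somewhere."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate):
    """Insert candidate (errors, model) and drop any archive member it dominates."""
    errs, _ = candidate
    if any(dominates(e, errs) for e, _ in archive):
        return archive                                # candidate is dominated: discard it
    kept = [(e, m) for e, m in archive if not dominates(errs, e)]
    return kept + [candidate]

# Toy use: "models" are just labels; errors = (Euclidean error, second illustrative error measure).
archive = []
for label, errors in [("m1", (0.9, 0.4)), ("m2", (0.5, 0.6)), ("m3", (0.4, 0.7)),
                      ("m4", (0.6, 0.3)), ("m5", (0.7, 0.5))]:
    archive = update_archive(archive, (errors, label))
print([m for _, m in archive])   # estimated Pareto-optimal set: ['m2', 'm3', 'm4']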

4.
Non-conforming numerical approximations offer increased flexibility for applications that require high resolution in a localized area of the computational domain or near complex geometries. Two key properties for non-conforming methods to be applicable to real world applications are conservation and energy stability. The summation-by-parts (SBP) property, which certain finite-difference and discontinuous Galerkin methods have, finds success for the numerical approximation of hyperbolic conservation laws, because the proofs of energy stability and conservation can discretely mimic the continuous analysis of partial differential equations. In addition, SBP methods can be developed with high-order accuracy, which is useful for simulations that contain multiple spatial and temporal scales. However, existing non-conforming SBP schemes result in a reduction of the overall degree of the scheme, which leads to a reduction in the order of the solution error. This loss of degree is due to the particular interface coupling through a simultaneous-approximation-term (SAT). We present in this work a novel class of SBP–SAT operators that maintain conservation and energy stability and have no loss of the degree of the scheme for non-conforming approximations. The new degree-preserving discretizations require an ansatz that the norm matrix of the SBP operator is of degree \(\ge 2p\), in contrast to, for example, existing finite-difference SBP operators, where the norm matrix is \(2p-1\) accurate. We demonstrate the fundamental properties of the new scheme with rigorous mathematical analysis as well as numerical verification.
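For reference, the classical first-derivative summation-by-parts definition that this construction builds on can be stated as follows; the notation is the one commonly used in the SBP literature and is an assumption here, not necessarily the paper's exact symbols.

% Standard first-derivative SBP definition (common notation; finite-sample/boundary details omitted).
An operator $D \approx \partial/\partial x$ on a grid $x_0, \dots, x_N$ is an SBP operator if
\[
  D = H^{-1} Q, \qquad H = H^{T} > 0, \qquad Q + Q^{T} = B = \operatorname{diag}(-1, 0, \dots, 0, 1),
\]
so that the discrete analogue of integration by parts holds,
\[
  u^{T} H (D v) + (D u)^{T} H v = u^{T} B v = u_{N} v_{N} - u_{0} v_{0}.
\]
The norm matrix $H$ doubles as a quadrature rule; the degree-preserving couplings described above require this quadrature to be exact for polynomials of degree $\ge 2p$, rather than the $2p-1$ exactness of classical finite-difference SBP norms.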

5.
彭锦峰, 申德荣, 寇月, 聂铁铮. 《软件学报》 (Journal of Software), 2023, 34(3): 1049-1064
With the development of the information society, data keeps growing in both scale and variety. Today, data has become a key strategic resource for nations and enterprises and an important foundation of scientific management. However, as the data produced by everyday activity grows ever richer, large amounts of dirty data come with it, and data quality problems arise. Accurately and comprehensively detecting the erroneous data contained in a dataset has long been a pain point in data science. Although many traditional methods, such as constraint-based and statistics-based detection, are widely used across industries, they usually require rich prior knowledge as well as expensive labor and time; constrained by this, they often struggle to detect errors accurately and comprehensively. In recent years, many new error detection methods have achieved better results by applying deep learning techniques such as temporal inference and text parsing, but they are generally limited to specific domains or specific error types and lack generality in the face of complex real-world situations. Against this background, and combining the advantages of traditional methods and deep learning, a multi-view model for comprehensive detection of multiple error types, CEDM, is proposed. First, from the schema perspective, existing constraints are combined with multi-dimensional statistical analysis at the attribute, cell, and tuple levels to build basic detection rules. Then, word embeddings are used to capture data semantics, and attribute correlation, cell association, and tuple similarity are analyzed from the semantic perspective; based on these semantic relations, the basic rules are updated and extended along multiple dimensions. Finally, multiple views are combined...
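A toy Python sketch of the two views being combined is given below: a constraint-style rule checked at the cell level, plus a semantic outlier check over embeddings of a column's values. The character-frequency "embedding", the threshold, and the sample rows are stand-ins for the word embeddings and rule-update logic of CEDM, which the abstract does not specify in detail.

import re
from collections import Counter

import numpy as np

rows = [{"zip": "10001", "city": "New York"},
        {"zip": "1O001", "city": "New York City"},
        {"zip": "10002", "city": "Banana"}]

# Schema view: a constraint-style rule at the cell level.
def violates_zip_format(cell):
    return re.fullmatch(r"\d{5}", cell) is None

# Semantic view: flag a cell whose value is unusually dissimilar to the other values
# of the same attribute (cosine similarity over toy character-count vectors).
def embed(text):
    counts = Counter(text.lower())
    vec = np.array([counts.get(chr(c), 0) for c in range(ord("a"), ord("z") + 1)], float)
    return vec / (np.linalg.norm(vec) + 1e-12)

def semantic_outlier(values, idx, threshold=0.3):
    target = embed(values[idx])
    others = [embed(v) for i, v in enumerate(values) if i != idx]
    return max(float(target @ o) for o in others) < threshold

cities = [r["city"] for r in rows]
for i, r in enumerate(rows):
    flags = []
    if violates_zip_format(r["zip"]):
        flags.append("rule violation: zip format")
    if semantic_outlier(cities, i):
        flags.append("semantic outlier: city")
    print(i, flags)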

6.
This paper presents a new bio-inspired algorithm (FClust) that dynamically creates and visualizes groups of data. The algorithm uses the concept of a flock of agents that move together in a complex manner following simple local rules. Each agent represents one data item. The agents move together in a 2D environment with the aim of creating homogeneous groups of data. These groups are visualized in real time and help the domain expert understand the underlying structure of the data set, such as a realistic number of classes, clusters of similar data, or isolated data items. We also present several extensions of this algorithm which reduce its computational cost and make use of a 3D display. The algorithm is then tested on artificial and real-world data, and a heuristic algorithm is used to evaluate the relevance of the obtained partitioning.
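The local rules can be sketched in a few lines of Python: each agent carries one data item, is attracted to and aligned with nearby agents whose data is similar, and is repelled by nearby agents whose data is dissimilar, so similar items gather into visible groups. The parameter values and the similarity threshold below are illustrative, not the paper's.

import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(5, 1, (20, 4))])  # two hidden classes
pos = rng.uniform(0, 100, (len(data), 2))      # agent positions in the 2D display
vel = rng.uniform(-1, 1, (len(data), 2))

def step(pos, vel, radius=20.0, sim_thresh=3.0, speed=1.5):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d2d = np.linalg.norm(pos - pos[i], axis=1)
        neigh = (d2d < radius) & (d2d > 0)
        if not neigh.any():
            continue
        similar = neigh & (np.linalg.norm(data - data[i], axis=1) < sim_thresh)
        steer = np.zeros(2)
        if similar.any():
            steer += 0.05 * (pos[similar].mean(axis=0) - pos[i])     # cohesion with similar data
            steer += 0.50 * vel[similar].mean(axis=0)                # alignment with similar data
        dissimilar = neigh & ~similar
        if dissimilar.any():
            steer -= 0.05 * (pos[dissimilar].mean(axis=0) - pos[i])  # separation from dissimilar data
        new_vel[i] = 0.8 * vel[i] + steer
        speed_i = np.linalg.norm(new_vel[i])
        if speed_i > speed:
            new_vel[i] *= speed / speed_i
    return (pos + new_vel) % 100.0, new_vel        # toroidal 2D environment

for _ in range(200):
    pos, vel = step(pos, vel)
print(pos[:3])   # after enough steps, agents of each class tend to share a region of the display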

7.
Reverse modeling based on source code analysis
Reverse modeling analyzes source code to extract object information, structural information, process-flow information, and so on, and generates design-model descriptions such as inter-object relationship descriptions, structural descriptions, and system flow descriptions. The source code analysis performed during reverse modeling resembles the front-end processing of a compiler, differing only in the complexity of the processing and in the target results produced, so compiler techniques can be applied to process the source code. Reverse modeling can make up for missing or incomplete model design documents in software design, help code readers better understand a program, and support software testing and optimization. An implementation of reverse modeling for C/C++ source code is described.
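The extraction step can be illustrated with a deliberately simple Python pass that pulls class names and inheritance relations out of C++ text with regular expressions; the paper points at full compiler front-end techniques, so this toy fragment only hints at the kind of object and relationship information a reverse-modeling tool recovers.

import re

source = """
class Shape { public: virtual double area() const = 0; };
class Circle : public Shape { double r; public: double area() const override; };
class Square : public Shape { double s; public: double area() const override; };
"""

# Match "class Name" optionally followed by ": public|protected|private Base".
class_re = re.compile(r"class\s+(\w+)\s*(?::\s*(?:public|protected|private)\s+(\w+))?")

classes, generalizations = [], []
for name, base in class_re.findall(source):
    classes.append(name)
    if base:
        generalizations.append((name, base))

print("classes:", classes)                   # ['Shape', 'Circle', 'Square']
print("generalizations:", generalizations)   # [('Circle', 'Shape'), ('Square', 'Shape')]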

8.
Human error assessment and reduction technique (HEART) is one of the most commonly used human error assessment approaches; it computes human error probability (HEP) to prioritize errors related to human actions. HEART is a powerful tool that considers error-producing conditions (EPCs), which increase the HEP of generalized task versions known as generic task types (GTTs). HEART can provide a solution covering the prevention of human-related errors (HREs) and the reduction of the HREs' impacts through additional controls. However, it has many shortcomings for real-life error assessments. In this context, this study aims to improve the effective usage of HEART through an advanced version of the decision-making trial and evaluation laboratory (AV-DEMATEL). The reason for performing AV-DEMATEL is to capture the complex effect relations between main tasks (MTs), subtasks (STs), and EPCs in a process. For this aim, an integrated effect-relation matrix is proposed for DEMATEL, and the importance weights of MTs, STs, and EPCs are computed from this matrix. In addition, not only HREs but also machine-related errors (MREs) are taken into account in the error assessment for the process. The proposed approach also provides the flexibility to categorize STs into different GTTs. Finally, a new term, “process error probability”, covering both HRE and MRE probabilities, is introduced to compute the error probability of the process in an integrated manner. To illustrate the proposed approach, an example of a steam boiler daily control process is given.
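The baseline HEART calculation that the proposed approach builds on is commonly written as a nominal GTT error probability scaled by each EPC according to its maximum multiplier and an assessed proportion of affect (APOA). A minimal Python sketch follows; the nominal value and multipliers are placeholders rather than the official HEART tables, and the AV-DEMATEL weighting itself is not reproduced.

def heart_hep(gtt_nominal, epcs):
    """Compute a HEART-style HEP.

    epcs: iterable of (max_multiplier, assessed_proportion_of_affect in [0, 1]).
    """
    hep = gtt_nominal
    for multiplier, apoa in epcs:
        hep *= (multiplier - 1.0) * apoa + 1.0   # standard HEART scaling per EPC
    return min(hep, 1.0)                         # a probability cannot exceed 1

# Example: one subtask affected by two EPCs (all values are hypothetical placeholders).
hep = heart_hep(gtt_nominal=0.003,
                epcs=[(17.0, 0.4),   # EPC 1: multiplier 17, 40 % assessed effect
                      (4.0, 0.2)])   # EPC 2: multiplier 4, 20 % assessed effect
print(f"HEP = {hep:.4f}")            # 0.003 * 7.4 * 1.6 = 0.0355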

9.
Arbitrarily high-precision output tracking, regardless of measurement errors, is one of the most desirable control objectives found in industrial applications. The main purpose of this paper is to supply the iterative learning control (ILC) designer with guidelines for selecting the corresponding learning gain in order to achieve this control objective. For example, if certain conditions are met, then it is necessary for the learning gain to converge to zero over the learning iterations. In particular, this paper presents necessary and sufficient conditions for boundedness of trajectories and uniform tracking in the presence of measurement noise and a class of random reinitialization errors for a simple ILC algorithm. The system under consideration is a class of discrete-time affine nonlinear systems with arbitrary relative degree and an arbitrary number of system inputs and outputs. The state function does not need to satisfy a Lipschitz condition. This work also provides a recursive algorithm that generates the appropriate learning gain functions that meet the arbitrarily high-precision output tracking objective. The resulting tracking output error is shown to converge to zero at a rate inversely proportional to the square root of the number of learning iterations in the presence of measurement noise and a class of reinitialization errors. Two illustrative numerical examples are presented.
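The qualitative behaviour described above can be illustrated with a toy P-type ILC loop in Python whose learning gain decays like 1/sqrt(k): despite measurement noise, the true tracking error keeps shrinking across iterations. The scalar plant, gain schedule, and noise level are illustrative assumptions and do not reproduce the paper's recursive gain construction.

import numpy as np

rng = np.random.default_rng(0)
N, iters, noise_std = 50, 200, 0.02
a, b = 0.3, 1.0                                      # toy scalar plant x[t+1] = a x[t] + b u[t]
y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, N))     # output reference over one trial

def run_trial(u):
    """One repetition of the task: returns the noise-free output trajectory."""
    x, y = 0.0, np.zeros(N)
    for t in range(N):
        y[t] = a * x + b * u[t]                      # output with direct feedthrough
        x = a * x + b * u[t]
    return y

u = np.zeros(N)
for k in range(1, iters + 1):
    y_meas = run_trial(u) + noise_std * rng.standard_normal(N)  # noisy measurement
    gamma = 0.5 / np.sqrt(k)                                    # learning gain decaying to zero
    u = u + gamma * (y_ref - y_meas)                            # P-type ILC update
    if k in (1, 10, 50, 200):
        rms = np.linalg.norm(y_ref - run_trial(u)) / np.sqrt(N)
        print(f"iteration {k:3d}: true RMS tracking error = {rms:.4f}")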

10.
A theory of learning from different domains
Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier’s target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.
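Written in the notation standard for this line of work, and omitting the finite-sample terms, the two bounds discussed above take the form

% Notation follows the usual domain-adaptation convention; empirical/finite-sample terms are omitted.
\[
  \varepsilon_T(h) \;\le\; \varepsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,
  \qquad
  \lambda = \min_{h' \in \mathcal{H}} \bigl[\varepsilon_S(h') + \varepsilon_T(h')\bigr],
\]
for the first question, while for the second the model minimizes the convex combination
\[
  \hat{\varepsilon}_{\alpha}(h) = \alpha\, \hat{\varepsilon}_T(h) + (1 - \alpha)\, \hat{\varepsilon}_S(h), \qquad \alpha \in [0, 1],
\]
whose optimal $\alpha$ depends on the divergence, the sample sizes of both domains, and the complexity of the hypothesis class $\mathcal{H}$.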
