Similar documents
20 similar documents found (search time: 15 ms)
1.
2.
When Horn clause theories are combined with integrity constraints to produce potentially refutable theories, Seki and Takeuchi have shown how crucial literals can be used to discriminate two mutually incompatible theories. A literal is crucial with respect to two theories if only one of the two theories supports the derivation of that literal. In other words, actually determining the truth value of the crucial literal will refute one of the two incompatible theories. This paper presents an integration of the idea of crucial literal with Theorist, a logic-based system for hypothetical reasoning. Theorist is a goal-directed nonmonotonic reasoning system that classifies logical formulas as possible hypotheses, facts, and observations. As Theorist uses full clausal logic, it does not require Seki and Takeuchi's notion of integrity constraint to define refutable theories. In attempting to deduce observation sentences, Theorist identifies instances of possible hypotheses as nomological explanations: consistent sets of hypothesis instances required to deduce observations. As multiple and mutually incompatible explanations are possible, the notion of crucial literal provides the basis for proposing experiments that distinguish competing explanations. We attempt to make three contributions. First, we adapt Seki and Takeuchi's method for Theorist. To do so, we incrementally use crucial literals as experiments, whose results are used to reduce the total number of explanations generated for a given set of observations. Next, we specify an extension which incrementally constructs a table of all possible crucial literals for any pair of theories. This extension is more efficient and provides the user with greater opportunity to conduct experiments to eliminate falsifiable theories. A prototype is implemented in CProlog, and several examples of diagnosis are considered to show its empirical efficiency. 
Finally, we point out that assumption-based truth maintenance systems (ATMS), as used in the multiple-fault diagnosis system of de Kleer and Williams, are interesting special cases of this more general method of distinguishing explanatory theories.
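As a toy illustration of the crucial-literal definition above, each theory can be represented simply by the set of ground literals it supports (a simplification for illustration only; Theorist works with full clausal logic, and the fault names below are invented):

```python
def crucial_literals(supports_t1, supports_t2):
    """Literals supported by exactly one of the two theories: determining the
    truth value of any of these refutes one of the two incompatible theories."""
    return supports_t1 ^ supports_t2  # symmetric difference of the two sets

# Two incompatible explanations of a dead lamp, by the literals they support.
theory_a = {"lamp_off", "no_current", "fuse_blown"}
theory_b = {"lamp_off", "no_current", "switch_open"}
print(crucial_literals(theory_a, theory_b))  # fuse_blown and switch_open are crucial
```

Determining either crucial literal experimentally (e.g. inspecting the fuse) eliminates one of the two explanations.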

3.
This paper presents a new directional discriminant analysis method for multivariate ordered data, covering its principles, model-building workflow, application workflow, and a worked example in sedimentary chemistry. The method separates classification modelling from discriminant assignment and combines the numerical solution with domain knowledge. It builds a model from the multivariate ordered data using multi-group or stepwise discriminant analysis; in application, knowledge of the application domain gives each sample a preliminary orientation, and the relevant local part of the model is then selected for discriminant assignment, thereby achieving ordered discrimination. The method resolves the class-inversion problem caused by the periodicity of multivariate time-series data. In an application to time-series chemical data from sedimentary rocks of the Tarim Basin, it solved downhole stratum prediction and classification problems in petroleum drilling.

4.
The quality of experiments designed using standard optimisation-based techniques can be adversely affected by poor starting values of the parameters. Thus, there is a need for design methods that are insensitive ("robust") to these starting values. Here, two novel, robust criteria are presented for computing optimal dynamic inputs for experiments aimed at improving parameter precision, based on previously proposed methods for providing design robustness: the expected value criterion and the max–min criterion. In this paper, both criteria are extended by use of the information matrix derived for dynamic systems. The experiment design problem is cast as an optimal control problem, with experiment decision variables such as the sampling times of response variables, time-varying and time-invariant external controls, the experiment duration, and initial conditions. Comparisons are made of experiment designs obtained using the conventional approach and the newly proposed criteria. A typical semi-continuous bioreactor application is presented.
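The abstract names the expected value criterion without giving a formula. The sketch below uses an assumed scalar model y = exp(-θt), whose Fisher information at sampling time t is (t·exp(-θt))², to show the general idea: average the design criterion over a sample of plausible parameter values instead of evaluating it at one nominal guess.

```python
import math

def expected_value_design(candidates, info, prior_samples):
    """Expected-value robust criterion: choose the design point that maximizes
    the information measure averaged over plausible parameter values, rather
    than at a single (possibly poor) nominal parameter guess."""
    def score(d):
        return sum(info(d, th) for th in prior_samples) / len(prior_samples)
    return max(candidates, key=score)

# Toy model y = exp(-theta * t): Fisher information at sampling time t.
info = lambda t, th: (t * math.exp(-th * t)) ** 2
candidates = [0.1 * i for i in range(1, 51)]  # candidate sampling times
prior = [0.5, 1.0, 2.0]                       # plausible theta values, not one guess
t_robust = expected_value_design(candidates, info, prior)
print(t_robust)
```

A nominal design at θ = 1 would pick t = 1; averaging over the prior sample shifts the chosen sampling time to hedge against the other plausible θ values.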

5.
For the solution of the problem of L-optimal experiment design, use is made of a skeleton algorithm [1]. This algorithm is modified for the solution of the multiparameter linear programming problem to which the initial problem reduces. The algorithm avoids nearly degenerate iterations and requires no matrix inversions.

6.
A computer program for a microcomputer (HP 86) is presented to discriminate between different models and to design new experiments for model discrimination. A non-linear fitting algorithm fits a set of experimental data with different models suggested by the user. The parameters characterizing each model are estimated by minimizing the sum of squared residuals; different criteria are used to test the choice between two or more models at different levels of probability, and the smallest number of additional experiments required for discrimination is computed. If discrimination is not achieved, a direct search method is used to find the local maxima of the divergence (or information for discrimination) over a user-chosen domain R′ of independent variables x. The x value corresponding to the absolute maximum of the divergence is the best choice for running a new experiment for discrimination.

7.
In many remote-sensing projects, one is usually interested in a small number of land-cover classes present in a study area and not in all the land-cover classes that make up the landscape. Previous studies in supervised classification of satellite images have tackled the specific class mapping problem by isolating the classes of interest, combining all other classes into one large class (usually called "others"), and developing a binary classifier to discriminate the class of interest from the others. Here, this is called the focused approach. Its strength is that it decomposes the original multi-class supervised classification problem into a binary classification problem, focusing the process on the discrimination of the class of interest. Previous studies have shown that this method discriminates the classes of interest more accurately than the standard multi-class supervised approach. However, it may be susceptible to data imbalance problems present in the training data set, since the classes of interest are often a small part of the training set. As a result, the classification may be biased towards the largest classes and thus be sub-optimal for the discrimination of the classes of interest. This study presents a way to minimize the effects of data imbalance problems in specific class mapping using cost-sensitive learning. In this approach, errors committed in the minority class are treated as costlier than errors committed in the majority class. Cost-sensitive approaches are typically implemented by weighting training data points according to their importance to the analysis. By changing the weight of individual data points, it is possible to shift weight from the larger classes to the smaller ones, balancing the data set.
To illustrate the use of the cost-sensitive approach to map specific classes of interest, a series of experiments with a weighted support vector machine classifier and Landsat Thematic Mapper data were conducted to discriminate two types of mangrove forest (high mangrove and low mangrove) in the Saloum estuary, Senegal, a United Nations Educational, Scientific and Cultural Organisation World Heritage site. Results suggest an increase in overall classification accuracy with the cost-sensitive method (97.3%) over the standard multi-class (94.3%) and the focused approach (91.0%). In particular, the cost-sensitive method yielded higher sensitivity and specificity on the discrimination of the classes of interest than the standard multi-class and focused approaches.
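The exact weighting scheme is not given in the abstract; one standard choice for weighted SVMs is to weight each class inversely to its frequency, which can be sketched as:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency so that, summed over
    samples, every class contributes equally to the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# Imbalanced training set: the class of interest ("mangrove") is the minority.
y = ["other"] * 90 + ["mangrove"] * 10
w = balanced_class_weights(y)
print(w)  # the minority class gets a weight ~9x larger than the majority
```

Passing such per-class weights to a weighted SVM raises the cost of minority-class errors, shifting the decision boundary back towards the majority class.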

8.
This paper deals with the H∞ filtering problem for a class of discrete-time nonlinear systems with or without real time-varying parameter uncertainty and unknown initial state. For the case when there is no parametric uncertainty in the system, we are concerned with designing a nonlinear H∞ filter such that the induced l2 norm of the mapping from the noise signal to the estimation error is within a specified bound. It is shown that this problem can be solved via one Riccati equation. We also consider the design of nonlinear filters which guarantee a prescribed H∞ performance in the presence of parametric uncertainties. In this situation, a solution is obtained in terms of two Riccati equations.

9.
Body-based motion gestures have been gaining popularity in designing interactive systems. However, theories and design guidelines on the use of body movement in design have not been fully evaluated. This article investigates human ability to perform discrete target-selection tasks using stretching actions, through two controlled experiments. The experimental results indicate that: (1) the number of discrete levels of depth that users can easily discriminate with arm stretching is up to 16 with full visual feedback, but drops to 4 without the feedback; (2) dwelling, a gesture in which the hand is kept motionless and the cursor stays within a target area for a certain amount of time, may be the best gesture for a confirmation command; (3) full visual feedback can improve user performance; and (4) arm stretching can be modeled using Fitts' law. We also discuss the design potential of Stretch Widgets based on these results.
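Result (4) states that arm stretching follows Fitts' law; the coefficients below are purely illustrative (the abstract reports no fitted values). The Shannon formulation predicts movement time from the index of difficulty:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Shannon formulation of Fitts' law: movement time grows linearly with
    the index of difficulty ID = log2(D/W + 1). a and b are empirically
    fitted regression coefficients; the defaults here are illustrative only."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A stretch of 32 cm towards a 4 cm-deep target: ID = log2(9) ≈ 3.17 bits.
print(fitts_mt(32, 4))
```

Fitting a and b to measured stretching times, as the paper's model implies, lets designers predict selection time for any depth-level layout before building it.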

10.
Feature extraction and prediction for QAR data based on CNN-LSTM
To address the difficulty that traditional data-driven fault diagnosis methods have in extracting effective features from QAR (Quick Access Recorder) data, a dual-channel fusion model, CNN-LSTM, is proposed, combining a convolutional neural network (CNN) and a long short-term memory network (LSTM). The CNN and LSTM serve as two channels fused through an attention mechanism, so that the model can express both the spatial and temporal features of the data; the effectiveness of the fused model's feature extraction is verified by time-series prediction. Experimental results show that, compared with a standalone CNN or LSTM, the dual-channel fusion model extracts features more effectively, reducing single-step and multi-step prediction error by 35.3% on average. This provides a new line of research for fault diagnosis based on QAR data.
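The abstract specifies attention-based fusion of the CNN and LSTM channels without giving its form; one common formulation, sketched in plain Python with invented feature vectors, scores each channel with a learned vector and softmax-mixes the two channels:

```python
import math

def attention_fuse(h_cnn, h_lstm, w):
    """Score each channel's feature vector with a (learned) vector w, softmax
    the two scores into mixing coefficients, and blend the channels."""
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in (h_cnn, h_lstm)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    alpha = [e / sum(exps) for e in exps]             # attention weights, sum to 1
    fused = [alpha[0] * a + alpha[1] * b for a, b in zip(h_cnn, h_lstm)]
    return fused, alpha

h_cnn = [0.2, -1.0, 0.5]   # "spatial" features from the CNN channel (invented)
h_lstm = [1.1, 0.3, -0.4]  # "temporal" features from the LSTM channel (invented)
fused, alpha = attention_fuse(h_cnn, h_lstm, [0.5, 0.5, 0.5])
print(alpha, fused)
```

In a trained model, w (and the channel encoders) would be learned jointly, letting the network decide per sample how much spatial versus temporal information to use.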

11.
Artificial Intelligence, 2006, 170(4-5): 472-506
Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in artificial intelligence are often based on an entropy criterion, the so-called information increment. A limitation of these methods is that they are not well-adapted to discriminating models of dynamical systems under conditions of limited measurability. Moreover, there are no generic procedures for computing the information increment of an experiment when the models are qualitative or semi-quantitative. This has motivated the development of a method for the selection of experiments to discriminate among semi-quantitative models of dynamical systems. The method has been implemented on top of existing implementations of the qualitative and semi-quantitative simulation techniques QSIM, Q2, and Q3. The applicability of the method to real-world problems is illustrated by means of an example in population biology: the discrimination of four competing models of the growth of phytoplankton in a bioreactor. The models have traditionally been considered equivalent for all practical purposes. Using our model discrimination approach and experimental data we show, however, that two of them are superior for describing phytoplankton growth under a wide range of experimental conditions.

12.
For some years, much effort has been put into improving the human–computer interface in CAD systems. After a short introduction to input problems in design processes, some of this work is reviewed, along with a fairly new method, hand-sketching. In the third part of this paper a system called CASUS is explained in detail from the user's point of view. The fourth part presents the authors' conviction that psychological theories and methods are necessary in order to create still better interfaces. In the last section, the presented system, with all its interface features, is compared against the human-factors considerations discussed before. No attention is paid to the fact that the quality of work is very often changed by introducing CAD systems. Although the designer's situation can often be improved much more by redesigning his entire work, this paper lays claim only to designing a tool for the designer's hand.

13.
The concept of particle swarms, although initially introduced for simulating human social behaviors, has become very popular as an efficient means for intelligent search and optimization. Particle swarm optimization (PSO), as it is now called, does not require any gradient information about the function to be optimized, uses only primitive mathematical operators, and is conceptually very simple. This paper investigates a novel approach to the design of two-dimensional zero-phase infinite impulse response (IIR) digital filters using the PSO algorithm. The design task is reformulated as a constrained minimization problem and is solved by a modified PSO algorithm. Numerical results are presented. The paper also demonstrates the superiority of the proposed design method by comparing it with two recently published filter design methods and two other state-of-the-art optimization techniques.
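The paper's modified, constrained PSO is not spelled out in the abstract; the sketch below is the canonical inertia-weight PSO, minimizing a toy sphere function, to show the gradient-free velocity and position updates the method builds on:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: inertia weight w, cognitive factor c1, social factor c2.
    Gradient-free; only evaluations of f are used."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]            # each particle's best position
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

best, value = pso(lambda p: sum(t * t for t in p))  # sphere function, optimum 0
print(value)
```

The filter-design problem in the paper replaces the sphere function with a stability-constrained error measure over the filter coefficients; the update rule itself is unchanged.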

14.
Zhang Xian, Ben Kerong, Zeng Jie. Journal of Software (软件学报), 2021, 32(7): 2219-2241
Software defect prediction is an active topic in software quality assurance: it helps developers find potential defects and use resources more effectively. How to design more discriminative metrics for prediction systems, while balancing performance and interpretability, has long been a focus of research. To address this challenge, a defect prediction method based on code-naturalness features, CNDePor, is proposed. The method measures code in both forward and backward directions and weights samples using quality information, improving...

15.
For the flexible job-shop scheduling problem with fuzzy processing times, an effective multi-objective evolutionary algorithm is proposed with three optimization objectives: minimizing the fuzzy makespan, the fuzzy total machine workload, and the fuzzy critical-machine workload. The algorithm generates the initial population with a method that mixes different machine-assignment and operation-ordering strategies, and decodes chromosomes with a gap-insertion method. A new possibility-degree-based dominance relation between individuals and a decision-space-based crowding operator are defined and applied in fast non-dominated sorting. A local search strategy based on moving fuzzy critical operations is then applied to the elite individuals of the population. Experiments study the influence of key parameters on the algorithm's performance, and the algorithm is compared with three other optimization algorithms. The results show that it solves the multi-objective fuzzy flexible job-shop scheduling problem more effectively than the alternatives.
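How the fuzzy processing times are represented is not stated in the abstract; a common convention in fuzzy job-shop scheduling uses triangular fuzzy numbers (TFNs), with fuzzy completion times built from TFN addition and a ranking-based max. A sketch under that assumption:

```python
def tfn_add(a, b):
    """Sum of two triangular fuzzy numbers (t1, t2, t3), componentwise."""
    return tuple(x + y for x, y in zip(a, b))

def tfn_rank(a):
    """Scalar ranking value for comparison; (t1 + 2*t2 + t3)/4 is a common choice."""
    return (a[0] + 2 * a[1] + a[2]) / 4

def tfn_max(a, b):
    """Approximate max of two TFNs: keep the one with the larger rank."""
    return a if tfn_rank(a) >= tfn_rank(b) else b

# Fuzzy completion time of an operation: the later of machine-ready and
# job-ready times, plus the operation's fuzzy processing time (values invented).
machine_ready, job_ready = (2, 3, 5), (1, 4, 6)
processing = (3, 4, 5)
completion = tfn_add(tfn_max(machine_ready, job_ready), processing)
print(completion)  # (4, 8, 11)
```

The same ranking function can serve as the possibility-degree-style comparison needed when sorting individuals by fuzzy objective values.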

16.
An RBF network design method using center clustering and PSO
Based on center clustering and particle swarm optimization (PSO), a design algorithm for radial basis function (RBF) networks is proposed. The algorithm clusters the input sample data with a center-clustering method to adaptively determine the initial parameters of the RBF network's hidden layer; it then optimizes the hidden-layer parameters with a classical PSO algorithm whose global-best update is modified, further refining the network structure parameters; the output-layer weights are updated online by recursive least squares with a forgetting factor. The method was used to build a model predicting drum strength from sinter composition in an ironmaking process and was validated with plant data. Experimental results show fast convergence and high prediction accuracy, so the method is suitable for modelling complex nonlinear systems.

17.
Although the responses of dopamine neurons in the primate midbrain are well characterized as carrying a temporal difference (TD) error signal for reward prediction, existing theories do not offer a credible account of how the brain keeps track of past sensory events that may be relevant to predicting future reward. Empirically, these shortcomings of previous theories are particularly evident in their account of experiments in which animals were exposed to variation in the timing of events. The original theories mispredicted the results of such experiments due to their use of a representational device called a tapped delay line. Here we propose that a richer understanding of history representation and a better account of these experiments can be given by considering TD algorithms for a formal setting that incorporates two features not originally considered in theories of the dopaminergic response: partial observability (a distinction between the animal's sensory experience and the true underlying state of the world) and semi-Markov dynamics (an explicit account of variation in the intervals between events). The new theory situates the dopaminergic system in a richer functional and anatomical context, since it assumes (in accord with recent computational theories of cortex) that problems of partial observability and stimulus history are solved in sensory cortex using statistical modeling and inference and that the TD system predicts reward using the results of this inference rather than raw sensory data. It also accounts for a range of experimental data, including the experiments involving programmed temporal variability and other previously unmodeled dopaminergic response phenomena, which we suggest are related to subjective noise in animals' interval timing. Finally, it offers new experimental predictions and a rich theoretical framework for designing future experiments.
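The TD error underlying these models has a simple tabular form. A minimal TD(0) sketch (deliberately ignoring the paper's partial-observability and semi-Markov extensions) shows the prediction error shrinking as a reward becomes predicted:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta = r + gamma*V(s') - V(s) is the prediction error
    that the dopaminergic response is proposed to encode."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

V = {}
# An initially unexpected reward at state "cue" yields a large positive error...
d1 = td0_update(V, "cue", 1.0, "end")
# ...and a smaller one once the value estimate has partly adapted.
d2 = td0_update(V, "cue", 1.0, "end")
print(d1, d2)  # 1.0, then 0.9
```

Repeating the update drives the error toward zero, mirroring the experimentally observed fading of the dopaminergic response to fully predicted rewards.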

18.
Using the instructional computer simulation “Hunger in the Sahel”, two experiments were conducted concerning the moderating effect of domain knowledge on the correlation of intelligence and problem solving. Experiment 1 with N=200 students implemented a between-subjects design, Experiment 2 with N=28 young adults a within-subjects design with 10 repeated measures on problem solving. The results correspond to the Elshout–Raaheim hypothesis: With low domain knowledge, the correlation is low; with increasing knowledge, the correlation increases; with further increasing knowledge, the correlation decreases; finally, when the problem has become a simple task, the correlation is again low. The results are of practical and theoretical relevance for designing simulation-based learning environments and simulation-based tests for measuring intelligence and problem-solving ability.

19.
Conklin, Darrell; Witten, Ian H. Machine Learning, 1994, 16(3): 203-225
A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model- and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.

20.
A new method for optimizing complex engineering designs is presented that is based on the Learnable Evolution Model (LEM), a recently developed form of non-Darwinian evolutionary computation. Unlike conventional Darwinian-type methods that execute an unguided evolutionary process, the proposed method, called LEMd, guides the evolutionary design process using a combination of two methods, one involving computational intelligence and the other involving encoded expert knowledge. Specifically, LEMd integrates two modes of operation, Learning Mode and Probing Mode. Learning Mode applies a machine learning program to create new designs through hypothesis generation and instantiation, whereas Probing Mode creates them by applying expert-suggested design modification operators tailored to the specific design problem. The LEMd method has been used to implement two initial systems, ISHED1 and ISCOD1, specialized for the optimization of evaporators and condensers in cooling systems, respectively. The designs produced by these systems matched or exceeded in performance the best designs developed by human experts. These promising results and the generality of the presented method suggest that LEMd offers a powerful new tool for optimizing complex engineering systems. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 1217–1248, 2006.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)