Similar Literature
10 similar documents found (search time: 140 ms)
1.
In the ongoing discussion about combining rules and ontologies on the Semantic Web a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to define a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper, we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL, we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.
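The answer set semantics that the rule side of such hybrid knowledge bases relies on can be illustrated with a tiny brute-force sketch. This is a hypothetical helper, restricted to propositional normal programs rather than the open, disjunctive case the paper treats:

```python
from itertools import chain, combinations

# Rules are triples (head, positive_body, negative_body) over propositional atoms.
# A rule  p :- q, not r  becomes ("p", frozenset({"q"}), frozenset({"r"})).

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: delete rules whose negative body meets the
    candidate set, then drop the negative literals from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & candidate)]

def least_model(positive_program):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program):
    """Brute force: a candidate set is an answer set iff it equals the least
    model of its reduct (exponential in the number of atoms; a sketch only)."""
    atoms = sorted({h for h, _, _ in program}
                   | {a for _, p, n in program for a in p | n})
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(c) for c in subsets if least_model(reduct(program, set(c))) == set(c)]

# p :- not q.    q :- not p.    -- the classic program with two answer sets
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
```

Running `answer_sets(prog)` yields the two answer sets {p} and {q}, showing the nonmonotonic behavior that a purely classical semantics cannot express.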

2.
Aerosol jet printing (AJP) technology has recently gained considerable attention in the electronics manufacturing industry due to its ability to fabricate parts with fine resolution and high flexibility. However, morphology control has been identified as the main limitation of the AJP process, as it drastically affects the electrical performance of printed components. Even though previous research has made significant efforts in process modeling to improve the controllability of the printed line morphology, modeling remains inefficient under modified operating conditions because of the repeated experiments it requires. In this paper, a knowledge transfer framework is proposed for efficient modeling of the AJP process under varied operating conditions. The proposed framework consists of three critical steps. First, a sufficient source-domain dataset at a given operating condition is collected to develop a source model based on Gaussian process regression. Then, representative experimental points are selected from the source domain to construct a target dataset under different operating conditions. Finally, classical knowledge transfer approaches are adopted to extract the built-in knowledge from the source model, so that a new process model can be developed efficiently from the transferred knowledge and the representative target-domain dataset. The validity of the proposed framework for rapid process modeling of AJP is investigated through a case study, and the limitations of the classical knowledge transfer approaches adopted in AJP are analyzed systematically. The proposed framework is built on the principles of knowledge discovery, which distinguishes it from traditional process modeling approaches in AJP. The modeling process is therefore more systematic and cost-efficient, which should help improve the controllability of the line morphology. Additionally, owing to its data-driven character, the proposed framework can be applied to process modeling in other additive manufacturing technologies.
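The three steps above can be sketched in a few lines. The following is a minimal, hypothetical illustration in plain NumPy: a Gaussian process source model is transferred by plugging it in as the prior mean of the target-domain model, one classical transfer approach among those the paper could adopt (the function names and the toy "operating conditions" are invented for the example):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D input vectors."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_predict(x_tr, y_tr, x_te, noise=1e-4, ls=1.0, prior_mean=None):
    """GP regression posterior mean; `prior_mean` lets a transferred source
    model act as the prior for the target-domain model."""
    m_tr = prior_mean(x_tr) if prior_mean else np.zeros_like(y_tr)
    m_te = prior_mean(x_te) if prior_mean else np.zeros(len(x_te))
    K = rbf(x_tr, x_tr, ls) + noise * np.eye(len(x_tr))
    return m_te + rbf(x_te, x_tr, ls) @ np.linalg.solve(K, y_tr - m_tr)

# Source condition: dense data.  Target condition: same trend shifted by an
# offset, observed at only three representative points.
x_src = np.linspace(0.0, 5.0, 40); y_src = np.sin(x_src)
x_tgt = np.array([0.5, 2.5, 4.5]);  y_tgt = np.sin(x_tgt) + 0.3

source_model = lambda x: gp_predict(x_src, y_src, x)    # step 1: source model
x_q = np.linspace(0.5, 4.5, 50)
pred_scratch  = gp_predict(x_tgt, y_tgt, x_q)                        # no transfer
pred_transfer = gp_predict(x_tgt, y_tgt, x_q, prior_mean=source_model)
```

With only three target points, the transferred model tracks the shifted curve far better than a model trained from scratch, which is the cost saving the framework is after.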

3.
Inductive learning is a method for automated knowledge acquisition. It converts a set of training data into a knowledge structure. In the process of knowledge induction, statistical techniques can play a major role in improving performance. In this paper, we investigate the competition and integration between traditional statistical methods and inductive learning methods. First, the competition between these two approaches is examined. Then, a general framework for integrating them is presented. This framework suggests three possible integrations: (1) statistical methods as preprocessors for inductive learning, (2) inductive learning methods as preprocessors for statistical classification, and (3) the combination of the two methods to develop new algorithms. Finally, empirical evidence concerning these three integrations is discussed. The general conclusion is that algorithms integrating statistical and inductive learning concepts are likely to make the greatest improvement in performance.
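Integration (1), statistical methods as preprocessors for inductive learning, can be illustrated with a small sketch: a correlation filter (the statistical stage) feeding a one-level decision tree (a minimal inductive learner). Helper names are invented for the example:

```python
import numpy as np

def correlation_filter(X, y, k):
    """Statistical preprocessing: keep the k features most correlated
    (in absolute value) with the class label."""
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(np.argsort(corr)[-k:])

def fit_stump(X, y):
    """A minimal inductive learner: a one-level decision tree. Returns the
    best (accuracy, feature, threshold) over all single-feature splits."""
    best = (0.0, 0, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            acc = max((pred == y).mean(), ((1 - pred) == y).mean())
            if acc > best[0]:
                best = (acc, j, t)
    return best

# One perfectly informative feature hidden among three noise features.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y, rng.normal(size=200), rng.normal(size=200),
                     rng.normal(size=200)]).astype(float)
kept = correlation_filter(X, y, k=1)        # statistical stage
acc, feat, thr = fit_stump(X[:, kept], y)   # induction on the reduced data
```

The filter discards the noise features before induction, so the learner searches a much smaller hypothesis space, which is exactly the performance argument made for this integration.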

4.
In this paper, we consider instance selection as an important focusing task in the data preparation phase of knowledge discovery and data mining. Focusing generally covers all issues related to data reduction. First of all, we define a broader perspective on focusing tasks, choose instance selection as one particular focusing task, and outline the specification of concrete evaluation criteria to measure the success of instance selection approaches. Thereafter, we present a unifying framework that covers existing approaches towards solutions for instance selection as instantiations. We describe specific examples of instantiations of this framework and discuss their strengths and weaknesses. Then, we outline an enhanced framework for instance selection, called generic sampling, and summarize example evaluation results for several different instantiations of its implementation. Finally, we conclude with open issues and research challenges for instance selection as well as focusing in general.
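As a rough illustration of how different instance selection approaches become instantiations of one sampling interface, consider this hedged sketch (function names are invented; the paper's generic sampling framework is richer than this):

```python
import random

def random_sampling(instances, n, rng):
    """Simplest instantiation: uniform sampling without replacement."""
    return rng.sample(instances, n)

def stratified_sampling(instances, n, rng, label=lambda inst: inst[-1]):
    """Another instantiation: preserve class proportions by drawing a quota
    from each stratum (class) separately."""
    strata = {}
    for inst in instances:
        strata.setdefault(label(inst), []).append(inst)
    sample = []
    for members in strata.values():
        quota = round(n * len(members) / len(instances))
        sample.extend(rng.sample(members, min(quota, len(members))))
    return sample

# 80 instances of class 0 and 20 of class 1, reduced to 10 instances:
data = [(i, 0) for i in range(80)] + [(i, 1) for i in range(20)]
reduced = stratified_sampling(data, 10, random.Random(0))
```

Both functions share the signature (instances, n, rng), so an evaluation criterion, e.g. accuracy of a classifier trained on the reduced set, can compare instantiations uniformly, as the framework intends.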

5.
This paper presents a review, in the form of a unified framework, of estimation problems in Digital Signal Processing (DSP) tackled with Support Vector Machines (SVMs). The paper formalizes our developments in the area of DSP with SVM principles. The use of SVMs in DSP is already mature and has gained popularity in recent years due to advantages over other methods: SVMs are flexible non-linear methods that are intrinsically regularized and work well in small-sample, high-dimensional problems. SVMs can be designed to take different noise sources into account in the formulation and to fuse heterogeneous information sources. Nevertheless, the use of SVMs in estimation problems has traditionally been limited to their use as a black-box model. Noting such limitations in the literature, we take advantage of several properties of Mercer's kernels and of functional analysis to develop a family of SVM methods for estimation in DSP. Three types of signal model equation are analyzed. First, when a specific time-signal structure is assumed to model the underlying system that generated the data, the linear signal model (the so-called Primal Signal Model formulation) is stated and analyzed. Non-linear versions of the signal structure can then be developed by following two different approaches. On the one hand, the signal model equation is written in a Reproducing Kernel Hilbert Space (RKHS) using the well-known RKHS Signal Model formulation, and Mercer's kernels are readily used in SVM non-linear algorithms. On the other hand, in the alternative and less common Dual Signal Model formulation, a signal expansion is made using an auxiliary signal model equation given by a non-linear regression of each time instant in the observed time series. These building blocks can be used to generate different novel SVM-based methods for signal estimation problems, and we deal with several of the most important ones in DSP.
We illustrate the usefulness of this methodology by defining SVM algorithms for linear and non-linear system identification, spectral analysis, non-uniform interpolation, sparse deconvolution, and array processing. The performance of the developed SVM methods is compared to standard approaches in all these settings. The experimental results illustrate the generality, simplicity, and capabilities of the proposed SVM framework for DSP.
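As a flavor of one of these settings, non-uniform interpolation under an RKHS signal model can be sketched as follows. Note that plain Tikhonov-regularized least squares stands in here for the ε-insensitive SVR loss of the actual framework, so this is only an approximation of the paper's method, with invented parameter values:

```python
import numpy as np

def rkhs_interpolate(t_obs, y_obs, t_query, sigma=0.5, gamma=1e-3):
    """RKHS Signal Model sketch: expand the signal as y(t) = sum_i a_i K(t, t_i)
    over the (non-uniform) observation instants and solve for the coefficients
    by regularized least squares."""
    K = np.exp(-0.5 * ((t_obs[:, None] - t_obs[None, :]) / sigma) ** 2)
    a = np.linalg.solve(K + gamma * np.eye(len(t_obs)), y_obs)
    Kq = np.exp(-0.5 * ((t_query[:, None] - t_obs[None, :]) / sigma) ** 2)
    return Kq @ a

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 2 * np.pi, 30))       # non-uniform sampling
y_obs = np.sin(t_obs)
t_query = np.linspace(t_obs.min(), t_obs.max(), 100)   # uniform reconstruction grid
y_hat = rkhs_interpolate(t_obs, y_obs, t_query)
```

The kernel expansion reconstructs the sinusoid on a uniform grid from irregular samples; swapping the quadratic loss for the ε-insensitive one is what turns this sketch into an SVM-style estimator.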

6.
Object Recognition Using Selected Gabor Wavelets and SVM Classification
SHEN Linlin, JI Zhen. Acta Automatica Sinica (《自动化学报》), 2009, 35(4): 350-355
This paper proposes a general framework for object recognition based on Gabor wavelets and support vector machines. In this framework, feature extraction is performed by convolving selected Gabor wavelets with the object at its optimal positions, and classification is carried out by a support vector machine. Compared with traditional recognition systems based on Gabor features, the method achieves classification that is both accurate and fast. The framework is successfully applied to two practical object recognition problems: object/non-object classification and face recognition. Experimental results demonstrate the superiority of the proposed method over other approaches.
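The position-selected Gabor feature extraction described above can be sketched as follows. This is a hypothetical minimal version, a bank of four orientations evaluated at chosen positions only, without the wavelet selection procedure or the SVM stage:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0):
    """Real part of a 2-D Gabor wavelet: a Gaussian envelope times a cosine
    carrier with orientation `theta` and wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(image, positions, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Evaluate each wavelet only at the selected positions (one inner product
    per position, not a full convolution), mirroring position-selected
    extraction: accurate features at a fraction of the cost."""
    feats = []
    for r, c in positions:
        for th in thetas:
            k = gabor_kernel(theta=th)
            h = k.shape[0] // 2
            feats.append(float((image[r - h:r + h + 1, c - h:c + h + 1] * k).sum()))
    return np.array(feats)

# A 64x64 image of vertical stripes with wavelength 6, centered at column 32:
cols = np.arange(64)
image = np.tile(np.cos(2 * np.pi * (cols - 32) / 6.0), (64, 1))
f = gabor_features(image, positions=[(32, 32)])
```

The stripe image excites the 0-degree wavelet far more than the 90-degree one, which is the orientation selectivity the classifier then exploits.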

7.
Domain adaptation learning (DAL) methods have shown promising results by utilizing labeled samples from the source (or auxiliary) domain(s) to learn a robust classifier for a target domain that has few or even no labeled samples. However, several key issues remain to be addressed in state-of-the-art DAL methods, such as sufficient and effective learning of a distribution discrepancy metric, effective kernel space learning, and transfer learning from multiple source domains. Aiming at these issues, we propose in this paper a unified kernel learning framework for domain adaptation learning, together with an effective extension based on the multiple kernel learning (MKL) schema. Both are regularized by a new minimum distribution distance metric criterion that minimizes both the distribution mean discrepancy and the distribution scatter discrepancy between the source and target domains, and many existing kernel methods (such as the support vector machine (SVM), ν-SVM, and least-squares SVM) can be readily incorporated. Our framework, referred to as kernel learning for domain adaptation learning (KLDAL), simultaneously learns an optimal kernel space and a robust classifier by minimizing both the structural risk functional and the distribution discrepancy between the domains. Moreover, we extend KLDAL to a multiple kernel learning framework referred to as MKLDAL. Under the KLDAL and MKLDAL frameworks, we propose three effective formulations: KLDAL-SVM and MKLDAL-SVM for the SVM, their variants μ-KLDAL-SVM and μ-MKLDAL-SVM for the ν-SVM, and KLDAL-LSSVM and MKLDAL-LSSVM for the least-squares SVM. Comprehensive experiments on real-world data sets verify the superior or comparable effectiveness of the proposed frameworks.
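The distribution mean discrepancy part of the criterion amounts to a squared distance between kernel mean embeddings of the two domains (an MMD-style quantity), which is easy to sketch in isolation; the scatter discrepancy term and the joint kernel/classifier optimization of KLDAL are omitted here:

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    """Gram matrix of the Gaussian kernel between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mean_discrepancy2(src, tgt, sigma=1.0):
    """Squared distance between the kernel mean embeddings of the source and
    target samples: ||mu_s - mu_t||^2 in the RKHS, expanded into three
    Gram-matrix averages."""
    return (rbf_gram(src, src, sigma).mean()
            - 2.0 * rbf_gram(src, tgt, sigma).mean()
            + rbf_gram(tgt, tgt, sigma).mean())

rng = np.random.default_rng(0)
src      = rng.normal(0.0, 1.0, size=(100, 2))
tgt_near = rng.normal(0.0, 1.0, size=(100, 2))   # same distribution
tgt_far  = rng.normal(3.0, 1.0, size=(100, 2))   # shifted distribution
```

Minimizing this quantity over the kernel (or kernel combination, in the MKL extension) is what pulls the two domains together in the learned feature space.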

8.
Most inferential approaches to Information Retrieval (IR) have been investigated within the probabilistic framework. Although these approaches allow one to cope with the underlying uncertainty of inference in IR, the strict formalism of probability theory often confines our use of knowledge to statistical knowledge alone (e.g. connections between terms based on their co-occurrences); human-defined knowledge (e.g. manual thesauri) can only be incorporated with difficulty. In this paper, based on a general idea proposed by van Rijsbergen, we first develop an inferential approach within a fuzzy modal logic framework. Differing from previous approaches, the logical component is emphasized and treated as the pillar of our approach. In addition, the flexibility of a fuzzy modal logic framework offers the possibility of incorporating human-defined knowledge into the inference process. After defining the model, we describe a method to incorporate a human-defined thesaurus into inference by taking user relevance feedback into consideration. Experiments on the CACM corpus using WordNet, a general thesaurus of English, indicate a significant improvement in the system's performance.
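A toy sketch of fuzzy thesaurus-based query expansion may help fix ideas. This uses only max-min composition over relation degrees, with invented helper names and none of the modal machinery of the actual model:

```python
def fuzzy_expand(query_terms, thesaurus):
    """Max-min composition of the query with a fuzzy thesaurus: each related
    term receives the strongest relation degree from any query term."""
    expanded = {t: 1.0 for t in query_terms}
    for q in query_terms:
        for related, degree in thesaurus.get(q, {}).items():
            expanded[related] = max(expanded.get(related, 0.0), degree)
    return expanded

def relevance(doc_terms, expanded_query):
    """Degree to which the document supports the query: the best fuzzy match
    between the document's terms and the expanded query."""
    return max((w for t, w in expanded_query.items() if t in doc_terms), default=0.0)

# A one-entry fuzzy thesaurus relating "car" to two near-synonyms:
thesaurus = {"car": {"automobile": 0.8, "vehicle": 0.6}}
q = fuzzy_expand(["car"], thesaurus)
```

A document mentioning only "automobile" now matches the query "car" to degree 0.8 instead of 0, which is precisely the kind of human-defined connection a purely co-occurrence-based model would miss.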

9.
10.
The epistemic notions of knowledge and belief have most commonly been modeled by means of possible worlds semantics. In such approaches an agent knows (or believes) all logical consequences of its beliefs. Consequently, several approaches have been proposed to model systems of explicit belief, more suited to modeling finite agents or computers. In this paper a general framework is developed for the specification of logics of explicit belief. A generalization of possible worlds, called situations, is adopted. However the notion of an accessibility relation is not employed; instead a sentence is believed if the explicit proposition expressed by the sentence appears among a set of propositions associated with an agent at a situation. Since explicit propositions may be taken as corresponding to "belief contexts" or "frames of mind," the framework also provides a setting for investigating such approaches to belief. The approach provides a uniform and flexible basis from which various issues of explicit belief may be addressed and from which systems may be contrasted and compared. A family of logics is developed using this framework, which extends previous approaches and addresses issues raised by these earlier approaches. The more interesting of these logics are tractable, in that determining if a belief follows from a set of beliefs, given certain assumptions, can be accomplished in polynomial time.
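The central idea, that belief is membership of an expressed proposition in a set associated with the agent at a situation rather than closure under logical consequence, can be sketched in a few lines (a hypothetical, purely illustrative representation of belief contexts as sets of atomic propositions):

```python
def believes(belief_contexts, proposition):
    """Explicit belief: a proposition is believed at a situation iff it occurs
    in one of the agent's belief contexts ('frames of mind'). There is no
    closure under consequence, so the check is a cheap set lookup rather than
    a theorem-proving task."""
    return any(proposition in context for context in belief_contexts)

# Two frames of mind the agent holds at the current situation:
contexts = [{"rain", "wet-streets"}, {"sunny", "warm"}]
```

Note that `believes(contexts, "rain-or-cold")` is False even though "rain or cold" follows classically from "rain"; avoiding that closure is what keeps the belief check polynomial.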
