Similar Documents
 A total of 20 similar documents were found (search time: 578 ms).
1.
许格妮  李永明  管雪冲 《软件学报》2015,26(5):1037-1047
Extension and transfer operations are introduced on the collection of all soft sets over a given initial universe U and parameter set E, and their properties are studied. On this basis, the concept of quotient soft sets is introduced, and join and focusing operations are defined on the collection of all quotient soft sets, which is shown to form an unlabeled information algebra; moreover, if the parameter set E is finite, this information algebra is also an unlabeled compact information algebra. Finally, a decision algorithm and a worked example are given that use the unlabeled information algebra model to solve uncertainty problems over soft sets, and the algorithm is compared with the uni-int decision algorithm proposed by Cagman et al.

2.
We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.

3.
k-consistency operations in constraint satisfaction problems (CSPs) render constraints more explicit by solving size-k subproblems and projecting the information thus obtained down to low-order constraints. We generalise this notion of k-consistency to valued constraint satisfaction problems (VCSPs) and show that it can be established in polynomial time when penalties lie in a discrete valuation structure. A generic definition of consistency is given which can be tailored to particular applications. As an example, a version of high-order consistency (face consistency) is presented which can be established in low-order polynomial time given certain restrictions on the valuation structure and the form of the constraint graph.

4.
Abstract: The success of automatic classification is intricately linked with effective feature selection. Previous studies on the use of genetic programming (GP) to solve classification problems have highlighted its benefits, principally its inherent feature selection (a process that is often performed independently of a learning method). In this paper, the problem of classification is recast as a feature generation problem, where GP is used to evolve programs that allow non-linear combination of features to create superFeatures, from which classification tasks can be achieved fairly easily. In order to generate superFeatures robustly, the binary string fitness characterization along with the comparative partner selection strategy is introduced with the aim of promoting optimal convergence. The techniques introduced are applied first to two illustrative problems and then to the real-world problem of audio source classification, with competitive results.

5.
The problem of finding all possible metrics in N-valued logic is considered. The cases N = 1, 2, …, 7 are examined, and a number of theorems valid for arbitrary N are formulated. The metrics found are intended for use in solving recognition problems with ordinal features by means of metric classification algorithms.

6.
To address problems that conventional methods commonly encounter when rendering many models, such as excessive memory consumption (sometimes exceeding system memory) and low display frame rates, a new implementation of progressive meshes is proposed. The method is independent of the simplification algorithm: by decomposing the basic operations of a progressive mesh and eliminating unnecessary data structures and operations, it achieves clear improvements in both space and time complexity. A simple multi-model display framework is also proposed for practical engineering applications; experiments show that this framework, built on the new progressive mesh implementation, is of good practical value in multi-model display scenarios.
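A minimal, hypothetical Python sketch of the progressive-mesh mechanism this entry builds on: a coarse base mesh plus a stream of precomputed vertex-split records that are replayed to refine the mesh and reversed to coarsen it. The record layout and all names (VertexSplit, ProgressiveMesh, refine, coarsen) are illustrative assumptions, not the paper's data structures; a simplifier that produces the base mesh and the split records offline is not shown.

from dataclasses import dataclass, field

@dataclass
class VertexSplit:
    parent: int          # index of the vertex that gets split
    new_position: tuple  # position of the newly created vertex
    moved_faces: list    # indices of faces whose corner 'parent' becomes the new vertex
    new_faces: list      # faces (vertex index triples) added by this split

@dataclass
class ProgressiveMesh:
    vertices: list       # list of (x, y, z) positions
    faces: list          # list of [i, j, k] vertex index triples
    splits: list         # precomputed refinement records, coarse -> fine
    applied: list = field(default_factory=list)

    def refine(self):
        """Apply the next vertex split, adding one vertex and its faces."""
        if not self.splits:
            return False
        s = self.splits.pop(0)
        new_index = len(self.vertices)
        self.vertices.append(s.new_position)
        for f in s.moved_faces:    # reattach the recorded faces to the new vertex
            self.faces[f] = [new_index if v == s.parent else v for v in self.faces[f]]
        self.faces.extend(list(tri) for tri in s.new_faces)
        self.applied.append((s, new_index, len(s.new_faces)))
        return True

    def coarsen(self):
        """Undo the most recent split (an edge collapse)."""
        if not self.applied:
            return False
        s, new_index, n_added = self.applied.pop()
        del self.faces[len(self.faces) - n_added:]   # drop the faces the split added
        for f in s.moved_faces:                      # reattach faces to the parent vertex
            self.faces[f] = [s.parent if v == new_index else v for v in self.faces[f]]
        self.vertices.pop()                          # remove the split-off vertex
        self.splits.insert(0, s)
        return True

Replaying only as many split records as the viewer needs is what keeps memory use and frame time under control in a multi-model scene.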

7.
For primary school students, mathematical word problems are often more difficult to solve than straightforward number problems. Word problems require reading and analysis skills, and in order to explain their situational contexts, the proper mathematical knowledge and number operations have to be selected. To improve students' ability to solve word problems, the problem-solving process can be supported either by combined procedural and content-specific guidance or by procedural support only. This paper evaluates the effect of these two types of hints, procedural-only and content-procedural, provided by a computer programme presented in two versions. Students of grade 6 were randomly assigned to the two versions, each offering five lesson units of eight word problems. The results indicate that on average the students in the procedural-content hints group (n = 54) finished about as many problems in the programme as their counterparts in the procedural-only condition (n = 51). However, the participants in the first group solved more problems correctly and improved their problem-solving skills more, as indicated by their scores on the problem-solving post-test. In addition to the analysis of these findings, the paper discusses the study's limitations and its possible implications for future research.

8.
Operations on basic data structures such as queues, priority queues, stacks, and counters can dominate the execution time of a parallel program due to both their frequency and their coordination and contention overheads. There are considerable performance payoffs in developing highly optimized, asynchronous, distributed, cache-conscious, parallel implementations of such data structures. Such implementations may employ a variety of tricks to reduce latencies and avoid serial bottlenecks, as long as the semantics of the data structure are preserved. The complexity of the implementation and the difficulty in reasoning about asynchronous systems increases concerns regarding possible bugs in the implementation. In this paper we consider postmortem, black-box procedures for testing whether a parallel data structure behaved correctly. We present the first systematic study of algorithms and hardness results for such testing procedures, focusing on queues, priority queues, stacks, and counters, under various important scenarios. Our results demonstrate the importance of selecting test data such that distinct values are inserted into the data structure (as appropriate). In such cases we present an O(n) time algorithm for testing linearizable queues, an O(n log n) time algorithm for testing linearizable priority queues, and an O(np^2) time algorithm for testing sequentially consistent queues, where n is the number of data structure operations and p is the number of processors. In contrast, we show that testing such data structures for executions with arbitrary input values is NP-complete. Our results also help clarify the thresholds between scenarios that admit polynomial time solutions and those that are NP-complete. Our algorithms are the first nontrivial algorithms for these problems.
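The paper's O(n) and O(n log n) tests are not reproduced here; as a much simpler hedged illustration of what "behaved correctly" means, the sketch below replays a proposed linearization of a history against sequential FIFO semantics, assuming each enqueued value is distinct, as the abstract recommends for test data. A full linearizability test must additionally find an order that is consistent with the real-time intervals of the operations, which is exactly what the paper's algorithms do efficiently.

from collections import deque

def check_fifo_linearization(ordered_ops):
    """ordered_ops: list of ('enq', value) or ('deq', value) in the proposed
    linearization order. Returns True iff replaying them against a sequential
    FIFO queue reproduces every dequeue result. Distinct values are assumed."""
    q = deque()
    for op, value in ordered_ops:
        if op == 'enq':
            q.append(value)
        elif op == 'deq':
            if not q or q.popleft() != value:
                return False
        else:
            raise ValueError(f"unknown operation: {op}")
    return True

# Example: a legal interleaving of two producers and one consumer.
history = [('enq', 1), ('enq', 2), ('deq', 1), ('enq', 3), ('deq', 2), ('deq', 3)]
assert check_fifo_linearization(history)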

9.
Multicategory Proximal Support Vector Machine Classifiers (Cited by: 5; self-citations: 0; citations by others: 5)
Given a dataset, each element of which labeled by one of k labels, we construct by a very fast algorithm, a k-category proximal support vector machine (PSVM) classifier. Proximal support vector machines and related approaches (Fung & Mangasarian, 2001; Suykens & Vandewalle, 1999) can be interpreted as ridge regression applied to classification problems (Evgeniou, Pontil, & Poggio, 2000). Extensive computational results have shown the effectiveness of PSVM for two-class classification problems where the separating plane is constructed in time that can be as little as two orders of magnitude shorter than that of conventional support vector machines. When PSVM is applied to problems with more than two classes, the well known one-from-the-rest approach is a natural choice in order to take advantage of its fast performance. However, there is a drawback associated with this one-from-the-rest approach. The resulting two-class problems are often very unbalanced, leading in some cases to poor performance. We propose balancing the k classes and a novel Newton refinement modification to PSVM in order to deal with this problem. Computational results indicate that these two modifications preserve the speed of PSVM while often leading to significant test set improvement over a plain PSVM one-from-the-rest application. The modified approach is considerably faster than other one-from-the-rest methods that use conventional SVM formulations, while still giving comparable test set correctness. (Editor: Shai Ben-David)
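The two-class proximal SVM reduces to a single regularized linear system. The sketch below follows the linear PSVM formulation of Fung & Mangasarian as commonly stated, minimizing nu/2 ||D(Aw - e*gamma) - e||^2 + 1/2 (||w||^2 + gamma^2); the exact scaling is an assumption to verify against the paper, and the one-from-the-rest extension, class balancing, and Newton refinement are omitted.

import numpy as np

def psvm_train(A, d, nu=1.0):
    """Two-class proximal SVM (sketch). A: (m, n) data matrix, d: labels in {-1, +1},
    nu: trade-off parameter. Returns (w, gamma) so that sign(x @ w - gamma) classifies x."""
    m, n = A.shape
    e = np.ones(m)
    E = np.hstack([A, -e[:, None]])          # E = [A  -e]
    rhs = E.T @ (d * e)                      # E' D e, with D = diag(d)
    z = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, rhs)
    return z[:n], z[n]

# Toy usage: two separable point clouds.
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
d = np.hstack([-np.ones(50), np.ones(50)])
w, gamma = psvm_train(A, d)
print("training accuracy:", (np.sign(A @ w - gamma) == d).mean())

A one-from-the-rest multicategory classifier would train k such systems, setting d to +1 for the target class and -1 for all others.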

10.

In many classification problems, it is necessary to consider the specific location of an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is implicitly preserving the topology of the original space, because considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly in adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a study case, the neural network has been instantiated to represent and classify features, such as trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and to obtain high-performance trajectory classification, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.


11.
A DNA Sequence Classification Method Based on Hidden Markov Models (Cited by: 1; self-citations: 0; citations by others: 1)
DNA sequence classification is a fundamental task in bioinformatics, whose goal is to predict the category of a DNA sequence according to structural or functional similarity. For effective classification, mapping sequences into a feature vector space while preserving as much of the order relationship between bases as possible is difficult. To overcome problems of existing methods, such as classification accuracy being degraded by missing bases in DNA sequences, a new feature representation for DNA sequences is proposed. The new method first trains a hidden Markov model (HMM) for each sequence and then projects the DNA sequence into a vector space formed by the eigenvectors of the HMM's state transition probability matrix. Based on this new representation, a K-NN classifier is constructed to classify DNA sequences. Experimental results show that the new feature representation preserves the relationships between different bases in a DNA sequence relatively completely and fully reflects the structural information of the sequence, thereby effectively improving classification accuracy.
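The pipeline can be illustrated with a simplified sketch: here a plain first-order Markov chain over the four bases (estimated from bigram counts) stands in for the per-sequence HMM the paper trains, and the eigenvectors of its transition matrix are used as a fixed-length feature vector for a K-NN classifier. All names and the toy data are illustrative; this is not the paper's implementation.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def transition_eigen_features(seq):
    """Estimate a 4x4 base transition matrix from one sequence and return the
    magnitudes of its eigenvectors, flattened, as a fixed-length feature vector.
    (Simplification: an observable Markov chain stands in for the per-sequence HMM.)"""
    counts = np.ones((4, 4))                          # Laplace smoothing
    symbols = [BASES[b] for b in seq if b in BASES]   # skip ambiguous bases
    for a, b in zip(symbols, symbols[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)                     # eigenvectors may be complex
    order = np.argsort(-np.abs(vals))                 # sort so features align across sequences
    return np.abs(vecs[:, order]).ravel()             # 16-dimensional real feature vector

# Toy usage with made-up sequences and labels.
train_seqs = ["ACGTACGTACGG", "ACGGGGGTACGG", "TTTTACACACGT", "TTATACACACGA"]
train_y = [0, 0, 1, 1]
X = np.array([transition_eigen_features(s) for s in train_seqs])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_y)
print(clf.predict([transition_eigen_features("ACGTACGTTCGG")]))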

12.
In this paper we consider constraint satisfaction problems where the set of constraint relations is fixed. Feder and Vardi (1998) identified three families of constraint satisfaction problems containing all known polynomially solvable problems. We introduce a new class of problems called para-primal problems, incomparable with the families identified by Feder and Vardi (1998), and we prove that any constraint problem in this class is decidable in polynomial time. As an application of this result we prove a complete classification for the complexity of constraint satisfaction problems under the assumption that the basis contains all the permutation relations. In the proofs, we make an intensive use of algebraic results from clone theory about the structure of para-primal and homogeneous algebras. AMS subject classification 08A70

13.
Abstract

To develop and validate a classification of non-technical skills (NTS) in military aviation, a study was conducted using data from real operations of F-16 aircraft formations. Phase 1 developed an NTS classification based on a literature review (e.g. NOTECHS) and a workshop with pilots. The Non-TEChnical-MILitary-Skills (NOTEMILS) scheme was tested in Phase 2 in a series of Principal Component Analyses with data from After-Action-Review sessions (i.e. 900 records from a wide range of operations). The NTS were found to give a good prediction of Mission Essential Competencies (R² > 0.80) above the effect of experience. Phase 3 undertook a reliability analysis in which three raters assessed the NOTEMILS scheme with good results (i.e. all r_wg > 0.80). To examine the consistency of classifications, a further test indicated that at least two out of three raters were in agreement in over 70% of the assessed flight segments.

Practitioner Summary: A classification scheme of Non-Technical Skills (NTS) was developed and tested for reliability in military aviation operations. The NTS scheme is a valuable tool for assessing individual and team skills of F-16 pilots in combat. It is noteworthy that the tool had a good capability of predicting Mission Essential Competencies.

14.
A general 2D-hp-adaptive Finite Element (FE) implementation in Fortran 90 is described. The implementation is based on an abstract data structure which makes it possible to incorporate the full hp-adaptivity of triangular and quadrilateral finite elements. The h-refinement strategies are based on h2-refinement of quadrilaterals and h4-refinement of triangles. For p-refinement we allow the approximation order to vary within any element. The mesh refinement algorithms are restricted to 1-irregular meshes. Anisotropic and geometric refinement of quadrilateral meshes is made possible by additionally allowing double constrained nodes in rectangles. The capabilities of this hp-adaptive FE package are demonstrated on various test problems. Received: 18 December 1997 / Accepted: 17 April 1998

15.
Teleological variations of non-deterministic processes are defined. The immediate past of a system defines the state from which the ordinary (non-teleological) dynamical law governing the system derives different possible present states. For every possible present state, again a number of possible states for the next time step can be defined, and so on. After k time steps, a selection criterion is applied. The present state leading to the selected state after k time steps is taken to be the effective present state. Hence, the present state of a system is defined by its past in the sense that the past determines the possible states that are to be considered, and by its future in the sense that the selection of a possible future state determines the effective present state. A system that obeys this type of teleological dynamics may have significantly better performance than its non-teleological counterpart. The basic reason is that evolutions that are less optimal for the present time step, but which lead to a higher optimality after k time steps, may be preferred. This abstract concept of teleology is implemented for two concrete systems. First, it is applied to a general method for function approximation and classification problems. The method at issue treats all problems handled by conventional connectionism, and is also suited for information with inner structure. Second, it is applied to a dynamics in which forms of maximal homogeneity have to be produced. The relevance of the latter dynamics for generative art is illustrated. The teleology is 'deep' in the sense that it is situated at the cellular level, in contradistinction to the teleology that is usually encountered in cognitive contexts, which refers to macroscopic processes such as making plans. It is conjectured that deep-level teleology is useful for machines, even though the question of whether natural systems use this teleology is left open.
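A minimal sketch of the k-step selection mechanism described above, assuming a generic non-deterministic step function and a scalar selection criterion; the dynamics and criterion in the toy example are invented for illustration and are not from the paper.

def teleological_step(state, step, score, k):
    """Choose the effective present state: among all possible next states, pick the one
    whose best continuation after k further steps maximizes the selection criterion."""
    def best_future(s, depth):
        if depth == 0:
            return score(s)
        return max(best_future(nxt, depth - 1) for nxt in step(s))
    candidates = step(state)   # possible present states derived from the past
    return max(candidates, key=lambda s: best_future(s, k - 1))

# Toy example: greedy (k=1) picks the state closest to the target now, while k=3
# prefers a move that is worse now but reaches the target three steps ahead.
step = lambda s: [s + 1, s * 3]        # non-deterministic dynamics
score = lambda s: -abs(s - 27)         # selection criterion: be near 27
print(teleological_step(2, step, score, k=1))   # greedy: 6
print(teleological_step(2, step, score, k=3))   # lookahead: 3, since 3 -> 9 -> 27 is reachable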

16.
As an extension of multi-class classification, machine learning algorithms have been proposed that are able to deal with situations in which the class labels are defined in a non-crisp way. Objects exhibit in that sense a degree of membership to several classes. In a similar setting, models are developed here for classification problems where an order relation is specified on the classes (i.e., non-crisp ordinal regression problems). As for traditional (crisp) ordinal regression problems, it is argued that the order relation on the classes should be reflected by the model structure as well as the performance measure used to evaluate the model. These arguments lead to a natural extension of the well-known proportional odds model for non-crisp ordinal regression problems, in which the underlying latent variable is not necessarily restricted to the class of linear models (by using kernel methods).
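For reference, the standard crisp proportional odds model that this work extends can be written, with ordered thresholds and a linear predictor, as

P(Y \le k \mid \mathbf{x}) = \frac{1}{1 + \exp\!\big(-(\theta_k - \mathbf{w}^{\top}\mathbf{x})\big)}, \qquad \theta_1 \le \theta_2 \le \dots \le \theta_{r-1},

where r is the number of ordered classes. As the abstract notes, the proposed extension replaces the linear term with a kernel-based latent function and allows graded (non-crisp) class memberships.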

17.
Top Scoring Pair (TSP) and its ensemble counterpart, k-Top Scoring Pair (k-TSP), were recently introduced as competitive options for solving classification problems of microarray data. However, the support vector machine (SVM) with which these approaches were compared is not equipped with a feature (variable) selection mechanism, while TSP itself is a kind of variable selection algorithm. Moreover, an ensemble of SVMs should also be considered as a possible competitor to k-TSP. In this work, we conducted a fair comparison between TSP and SVM-recursive feature elimination (SVM-RFE) as the feature selection method for SVM. We also compared k-TSP with two ensemble methods using SVM as their base classifier. Results on ten public-domain microarray datasets indicated that TSP-family classifiers serve as good feature selection schemes which may be combined effectively with other classification methods.
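A hedged sketch of the core Top Scoring Pair statistic: for each gene pair (i, j), the score is the absolute difference between the two classes in the frequency of the event expression_i < expression_j, and the highest-scoring pair votes on the class of a new sample. Function names and the toy data are illustrative, not from the compared implementations.

import numpy as np

def top_scoring_pair(X, y):
    """X: (n_samples, n_genes) expression matrix, y: binary labels (0/1).
    Returns the gene pair (i, j) maximizing |P(X_i < X_j | y=0) - P(X_i < X_j | y=1)|."""
    less = X[:, :, None] < X[:, None, :]    # less[s, i, j] == (expression_i < expression_j)
    p0 = less[y == 0].mean(axis=0)
    p1 = less[y == 1].mean(axis=0)
    score = np.abs(p0 - p1)
    i, j = np.unravel_index(np.argmax(score), score.shape)
    return (i, j), score[i, j], p0[i, j], p1[i, j]

def tsp_predict(x, pair, p0, p1):
    """Classify one sample by whether x_i < x_j looks more like class 0 or class 1."""
    i, j = pair
    observed = x[i] < x[j]
    return 0 if (p0 > p1) == observed else 1

# Toy usage with a small random matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5)); y = np.array([0] * 10 + [1] * 10)
X[y == 1, 0] += 3.0                         # make gene 0 high in class 1
pair, s, p0, p1 = top_scoring_pair(X, y)
print("best pair:", pair, "score:", round(float(s), 2))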

18.
Ma  Jiajun  Zhou  Shuisheng 《Applied Intelligence》2022,52(1):622-635

Least squares regression has been widely used in pattern classification due to its compact form and efficient solution. However, two main issues limit its performance on multiclass classification problems. The first is that employing hard discrete labels as the regression targets is inappropriate for multiclass classification. The second is that it focuses only on exactly fitting the instances to the target matrix while ignoring the within-class similarity of the instances, which leads to overfitting. To address these issues, we propose a discriminative least squares regression for multiclass classification based on within-class scatter minimization (WCSDLSR). Specifically, an ε-dragging technique is first introduced to relax the hard discrete labels into slack soft labels, which enlarges the between-class margins of the soft labels as much as possible. The within-class scatter of the soft labels is then constructed as a regularization term to make transformed instances of the same class closer to each other. These factors ensure that WCSDLSR can learn a more compact and discriminative transformation for classification, thus avoiding overfitting. Furthermore, the proposed WCSDLSR obtains a closed-form solution in each iteration at low computational cost. Experimental results on benchmark datasets demonstrate that the proposed WCSDLSR achieves better classification performance at lower computational cost.
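As a hedged illustration of the ε-dragging idea alone (the within-class scatter regularizer that distinguishes WCSDLSR is omitted), the sketch below alternates between a ridge-regression solve and the closed-form update of a nonnegative dragging matrix, in the spirit of discriminative least squares regression; names and parameters are illustrative, not the paper's.

import numpy as np

def epsilon_dragging_lsr(X, y, lam=0.1, n_iter=20):
    """Least squares regression with epsilon-dragging relaxed targets (sketch).
    X: (n, d) features, y: integer class labels 0..c-1. Returns (W, b)."""
    n, d = X.shape
    c = int(y.max()) + 1
    Y = -np.ones((n, c)); Y[np.arange(n), y] = 1      # +/-1 one-vs-rest label matrix
    B = np.sign(Y)                                    # dragging directions
    M = np.zeros((n, c))                              # nonnegative dragging amounts
    Xb = np.hstack([X, np.ones((n, 1))])              # absorb the bias into the weights
    for _ in range(n_iter):
        T = Y + B * M                                 # relaxed soft targets
        W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(d + 1), Xb.T @ T)
        M = np.maximum(B * (Xb @ W - Y), 0)           # optimal dragging given current W
    return W[:-1], W[-1]

# Toy usage: three Gaussian blobs.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.5, (30, 2)) for m in ((0, 0), (3, 0), (0, 3))])
y = np.repeat([0, 1, 2], 30)
W, b = epsilon_dragging_lsr(X, y)
print("training accuracy:", (np.argmax(X @ W + b, axis=1) == y).mean())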


19.
Due to increasing interest in representation of temporal knowledge, automation of temporal reasoning, and analysis of distributed systems, literally dozens of temporal models have been proposed and explored during the last decade. Interval-based temporal models are especially appealing when reasoning about events with temporal extent but pose special problems when deducing possible relationships among events. The paper delves deeply into the structure of the set of atomic relations in a class of temporal interval models assumed to satisfy density and homogeneity properties. An order structure is imposed on the atomic relations of a given model allowing the characterization of the compositions of atomic relations (or even lattice intervals) as lattice intervals. By allowing the utilization of lattice intervals rather than individual relations, this apparently abstract result explicitly leads to a concrete approach which speeds up constraint propagation algorithms.

20.
This paper investigates how to maintain an efficient dynamic ordered set of bit strings, which is an important problem in the fields of information search and information processing. Generally, a dynamic ordered set is required to support five essential operations: search, insertion, deletion, max-value retrieval and next-larger-value retrieval. Building on previous research, we present an advanced data structure named rich binary tree (RBT), which satisfies both the binary-search-tree property and the digital-search-tree property. In addition, every key K keeps the most significant difference bit (MSDB) between itself and the next larger value among K's ancestors, as well as that between itself and the next smaller one among its ancestors. With the new data structure, we can maintain a dynamic ordered set in O(L) time per operation, where L is the key length. Since computers represent objects in binary, our method has great potential for application. In fact, RBT can be viewed as a general-purpose data structure for problems concerning order, such as searching, sorting and maintaining a priority queue. For example, when RBT is applied to sorting, we obtain an algorithm that is linear in the number of keys and whose performance is far better than quick-sort; moreover, unlike quick-sort, RBT supports constant-time dynamic insertion/deletion. Supported by the National Natural Science Foundation of China (Grant No. 60873111) and the National Basic Research Program of China (Grant No. 2004CB719400).
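A small hedged illustration of the most significant difference bit (MSDB) that each RBT key stores: the highest bit position at which two keys differ, which is what lets the structure combine binary-search-tree ordering with digital-search-tree navigation. The helper below is only an assumption about how an MSDB might be computed, not the paper's code.

def msdb(a, b):
    """Most significant difference bit between two nonnegative integers:
    the index (0 = least significant) of the highest bit where a and b differ,
    or -1 if a == b."""
    x = a ^ b
    return x.bit_length() - 1

assert msdb(0b1011, 0b1001) == 1   # keys first differ at bit 1
assert msdb(0b1011, 0b0011) == 3   # keys first differ at bit 3
assert msdb(5, 5) == -1            # identical keys have no difference bit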
