Found 20 similar documents. Search time: 0 ms.
1.
We consider a high-multiplicity generalization of the classical stable matching problem known as the stable allocation problem, introduced by Baïou and Balinski in 2002. By leveraging new structural properties and sophisticated data structures, we show how to solve this problem in O(m log n) time on a bipartite instance with n vertices and m edges, improving the best known running time of O(mn). Building on this algorithm, we provide an algorithm for the non-bipartite stable allocation problem running in O(m log n) time with high probability. Finally, we give a polynomial-time algorithm for solving the "optimal" variant of the bipartite stable allocation problem, as well as a 2-approximation algorithm for the NP-hard "optimal" variant of the non-bipartite stable allocation problem.
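For context, the classical one-to-one stable matching problem that stable allocation generalizes is solved by Gale-Shapley deferred acceptance; a minimal sketch on toy data (this is not the paper's O(m log n) allocation algorithm):

```python
def gale_shapley(men_prefs, women_prefs):
    """Classical deferred acceptance: men propose in preference order,
    each woman tentatively keeps the best proposer seen so far."""
    free = list(men_prefs)                    # men not yet matched
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged = {}                              # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # displaced partner becomes free
            engaged[w] = m
        else:
            free.append(m)                    # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["x", "y"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # {'b': 'x', 'a': 'y'} - no blocking pair exists
```

The stable allocation problem replaces each vertex's single partner with a capacitated quantity of "allocation" spread over edges, which is why the naive proposal dynamic is too slow and the paper's data structures are needed.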
2.
Computational Economics - Option is a well-known financial derivative that attracts attention from investors and scholars due to its flexible investment strategies. In this paper, we sought to...
3.
Fuzzy-UCS: A Michigan-Style Learning Fuzzy-Classifier System for Supervised Learning　　Cited by: 1 (self-citations: 0; by others: 1)
Orriols-Puig, A.; Casillas, J.; Bernado-Mansilla, E. IEEE Transactions on Evolutionary Computation, 2009, 13(2): 260-283
This paper presents Fuzzy-UCS, a Michigan-style learning fuzzy-classifier system specifically designed for supervised learning tasks. Fuzzy-UCS is inspired by UCS, an on-line accuracy-based learning classifier system. Fuzzy-UCS introduces a linguistic representation of the rules with the aim of evolving more readable rule sets, while maintaining similar performance and generalization capabilities to those presented by UCS. The behavior of Fuzzy-UCS is analyzed in detail from several perspectives. The granularity of the linguistic fuzzy representation to define complex decision boundaries is illustrated graphically, and the test performance obtained with different inference schemes is studied. Fuzzy-UCS is also compared with a large set of other fuzzy and nonfuzzy learners, demonstrating the competitiveness of its on-line architecture in terms of performance and interpretability. Finally, the paper shows the advantages obtained when Fuzzy-UCS is applied to learn fuzzy models from large volumes of data.
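To illustrate the kind of linguistic fuzzy rules such a system evolves, here is a minimal sketch of rule matching with triangular membership functions and winner-rule inference; the rule base, variable name, and term parameters are hypothetical illustrations, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function for a linguistic term (a, b, c);
    assumes a <= b <= c and x >= a when a == b (illustrative only)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule base: each rule maps linguistic terms per variable to a class.
rules = [
    ({"x": (0, 0, 5)}, "low"),     # IF x is Small  THEN class = low
    ({"x": (0, 5, 10)}, "mid"),    # IF x is Medium THEN class = mid
    ({"x": (5, 10, 10)}, "high"),  # IF x is Large  THEN class = high
]

def classify(sample):
    """Winner-rule inference: predict the class of the best-matching rule,
    where a rule's matching degree is the min membership over its terms."""
    best = max(rules, key=lambda r: min(tri(sample[v], *t) for v, t in r[0].items()))
    return best[1]

print(classify({"x": 2.0}))  # low
print(classify({"x": 9.0}))  # high
```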
4.
5.
The recent Polytope ARTMAP (PTAM) suggests that irregular polytopes are more flexible than predefined category geometries for approximating the borders among the desired output predictions. However, its category expansion and adjustment steps use no statistical information, which makes PTAM not robust to noise and category overlap. In order to push the learning problem towards Structural Risk Minimization (SRM), this paper proposes Hierarchical Polytope ARTMAP (HPTAM), which uses a hierarchical structure with different levels, determined by the complexity of the regions incorporating the input pattern. In addition, overlapping of simplexes from the same desired prediction is allowed in order to reduce category proliferation. Although HPTAM remains sensitive to noisy outliers, the main experimental results show that HPTAM can achieve a balance between representation error and approximation error, which improves the overall generalization capabilities.
6.
Baltag, Alexandru; Li, Dazhu; Pedersen, Mina Young. Journal of Logic, Language and Information, 2022, 31(2): 213-234
Journal of Logic, Language and Information - Formal learning theory formalizes the process of inferring a general result from examples, as in the case of inferring grammars from sentences when...
7.
Aguilar-Ruiz, J.S.; Giraldez, R.; Riquelme, J.C. IEEE Transactions on Evolutionary Computation, 2007, 11(4): 466-479
Some of the most influential factors in the quality of the solutions found by an evolutionary algorithm (EA) are a correct coding of the search space and an appropriate evaluation function for the potential solutions. EAs are often used to learn decision rules from datasets, which are encoded as individuals in the genetic population. In this paper, we address the coding of the search space for obtaining those decision rules, i.e., the representation of the individuals of the genetic population, as well as the design of specific genetic operators. Our approach, called "natural coding," uses one gene per feature in the dataset (continuous or discrete). The examples from the datasets are also encoded into the search space where the genetic population evolves, and therefore the evaluation process is improved substantially. Genetic operators for the natural coding are formally defined as algebraic expressions. Experiments with several datasets from the University of California at Irvine (UCI) machine learning repository show that as the genetic operators are better guided through the search space, the number of rules decreases considerably while accuracy is maintained, similar to that of hybrid coding, which joins the well-known binary and real representations to encode discrete and continuous attributes, respectively. The computational cost associated with the natural coding is also reduced with regard to the hybrid representation. Our algorithm, HIDER*, has been statistically tested against C4.5 and C4.5 Rules, and performed well. The knowledge models obtained are simpler, with very few decision rules, and therefore easier to understand, which is an advantage in many domains. The experiments with high-dimensional datasets showed the same good behavior, maintaining the quality of the knowledge model with respect to prediction accuracy.
8.
9.
毛光喜 (Mao Guangxi). Computer Engineering and Applications, 2005, 41(20): 186-188, 218
A representative application of genetic algorithms is discovering the input-output mapping of a known, typically complex system. Among neural-network clustering methods, one of the best known is "competitive learning", which uses a hierarchy of units that compete in a "winner-take-all" fashion for the object the system is currently processing. Conventional neural-network clustering methods are hard to apply to large databases because of their long processing times and the complexity of the data. This paper therefore uses a genetic algorithm to discover the input-output mapping of a known, typically complex system, preprocesses the input patterns with modulated wavelet bases, and, building on the functional-link neural network, designs a competitive learning system based on genetic algorithms and wavelet-transform functional-link neural networks, fully exploiting the strengths of all three techniques. The resulting system features remarkably fast learning, a general-purpose architecture, and strong adaptability; used as a tool for cluster analysis, it can reduce the complexity of data clustering and shorten system processing time.
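The "winner-take-all" competitive learning that this entry builds on can be sketched as follows; the toy data, deterministic initialization, and learning rate are illustrative assumptions, not the paper's wavelet/GA system:

```python
def competitive_learning(data, k=2, lr=0.5, epochs=20):
    """Plain winner-take-all competitive learning: for each input pattern,
    only the closest unit wins and is moved toward that pattern."""
    units = [list(data[i]) for i in range(k)]  # init units on first k patterns
    for _ in range(epochs):
        for x in data:
            # squared Euclidean distance decides the winner
            win = min(range(k),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            # only the winning unit learns (winner-take-all)
            units[win] = [u + lr * (v - u) for u, v in zip(units[win], x)]
    return units

data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(competitive_learning(data))  # one unit settles near each of the two clusters
```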
10.
A Knowledge-Intensive Genetic Algorithm for Supervised Learning　　Cited by: 7 (self-citations: 0; by others: 7)
11.
In information retrieval and machine learning, most learning-to-rank methods assume that the objects within a query are independent and identically distributed. Although this assumption simplifies the ranking problem, it ignores the hidden correlations among the target objects. We propose new methods for two problems, fully supervised ranking and transductive ranking, that fully exploit the relationships among objects. For fully supervised ranking, object correlations are mapped to an RBF kernel and added to the optimization objective as a constraint term, so that the more similar two objects are, the closer their scores become during optimization (the global-consistency idea). For transductive ranking, object correlations are used to map each query to a graph structure, and a new graph-based query similarity measure is designed, so that the more similar a training query is to the test query, the more influence its objects have on the prediction. Experimental results show that incorporating object correlations improves the performance of both the fully supervised and the transductive ranking algorithms.
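The RBF-kernel global-consistency idea described in this entry can be sketched as a penalty that grows when similar objects receive different scores; the feature vectors and bandwidth below are illustrative:

```python
import math

def rbf_kernel(xs, sigma=1.0):
    """Pairwise RBF similarities w_ij = exp(-||xi - xj||^2 / (2 sigma^2))."""
    n = len(xs)
    return [[math.exp(-sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                      / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

def consistency_penalty(scores, w):
    """Global-consistency term sum_ij w_ij (f_i - f_j)^2 / 2:
    similar objects (large w_ij) should receive similar scores."""
    n = len(scores)
    return sum(w[i][j] * (scores[i] - scores[j]) ** 2
               for i in range(n) for j in range(n)) / 2

xs = [(0.0,), (0.1,), (3.0,)]   # objects 0 and 1 are nearly identical
w = rbf_kernel(xs)
# splitting the scores of the two near-identical objects costs more
print(consistency_penalty([1.0, 0.0, 0.0], w) >
      consistency_penalty([1.0, 1.0, 0.0], w))  # True
```

In the paper's setting this term is added to the ranking objective as a regularizer, so gradient steps pull the scores of highly similar objects together.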
12.
13.
We consider several online scheduling problems that arise when customers request make-to-order products from a company. At the time of the order, the company must quote a due date to the customer. To satisfy the customer, the company must produce the good by the due date. The company must have an online algorithm with two components: The first component sets the due dates, and the second component schedules the resulting jobs with the goal of meeting the due dates.
14.
Cheng-Jian Lin; Yong-Cheng Liu; Chi-Yung Lee. Journal of Intelligent and Robotic Systems, 2008, 52(2): 285-312
This study presents a wavelet-based neuro-fuzzy network (WNFN). The proposed WNFN model combines the traditional Takagi–Sugeno–Kang (TSK) fuzzy model and wavelet neural networks (WNN). This study adopts non-orthogonal and compactly supported functions as wavelet neural network bases. A novel supervised evolutionary learning scheme, called WNFN-S, is proposed to tune the adjustable parameters of the WNFN model. The proposed WNFN-S learning scheme is based on dynamic symbiotic evolution (DSE). The proposed DSE uses the sequential-search-based dynamic evolutionary (SSDE) method. In some real-world applications, exact training data may be expensive or even impossible to obtain. To solve this problem, a reinforcement evolutionary learning scheme, called WNFN-R, is proposed. Computer simulations have been conducted to illustrate the performance and applicability of the proposed WNFN-S and WNFN-R learning algorithms.
15.
Telikepalli Kavitha. Algorithmica, 2012, 63(1-2): 224-245
Let G=(V,E) be a weighted undirected graph with non-negative edge weights. We consider the problem of efficiently computing approximate distances between all pairs of vertices in G. While many efficient algorithms are known for this problem in unweighted graphs, not many results are known for it in weighted graphs. Zwick (J. Assoc. Comput. Mach. 49:289-317, 2002) showed that for any fixed ε>0, stretch 1+ε distances (a path in G between u, v ∈ V is said to be of stretch t if its length is at most t times the distance between u and v in G) between all pairs of vertices in a weighted directed graph on n vertices can be computed in Õ(n^ω) time, where ω<2.376 is the exponent of matrix multiplication and n is the number of vertices. It is known that finding distances of stretch less than 2 between all pairs of vertices in G is at least as hard as Boolean matrix multiplication of two n×n matrices. Here we show that all-pairs stretch 2+ε distances for any fixed ε>0 in G can be computed in expected time O(n^{9/4}). This algorithm uses a fast rectangular matrix multiplication subroutine. We also present a combinatorial algorithm (that is, one that does not use fast matrix multiplication) with expected running time O(n^{9/4}) for computing all-pairs stretch 5/2 distances in G. This combinatorial algorithm serves as a key step in our all-pairs stretch 2+ε distances algorithm.
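The notion of stretch used in this entry can be made concrete by checking an estimated distance matrix against the exact distances on a toy graph (a verification sketch, not the paper's O(n^{9/4}) algorithm):

```python
def floyd_warshall(n, edges):
    """Exact all-pairs shortest distances on a small weighted undirected graph."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def max_stretch(est, exact):
    """Largest ratio est/exact over all pairs: the stretch of the estimate."""
    n = len(exact)
    return max(est[i][j] / exact[i][j] for i in range(n) for j in range(n) if i != j)

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 3.0)]
exact = floyd_warshall(3, edges)              # true d(0,2) = 2 via vertex 1
est = [[0, 1.0, 3.0], [1.0, 0, 1.0], [3.0, 1.0, 0]]  # naively uses the 0-2 edge
print(max_stretch(est, exact))  # 1.5
```

A stretch-(2+ε) algorithm guarantees that this maximum ratio never exceeds 2+ε, while running much faster than computing `exact` itself.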
16.
Shuicheng Yan; Jianzhuang Liu; Xiaoou Tang; Thomas S. Huang. IEEE Transactions on Information Forensics and Security, 2007, 2(1): 69-76
Supervised subspace learning techniques have been extensively studied in the biometrics literature; however, little work is dedicated to: 1) how to automatically determine the subspace dimension in the context of supervised learning, and 2) how to explicitly guarantee the classification performance on a training set. In this paper, following our earlier work on a unified subspace learning framework, we present a general framework, called parameter-free graph embedding (PFGE), that solves the above two problems by posing a general supervised subspace learning task as a semidefinite programming problem. The positive semidefinite feature Gram matrix, namely the product of the transformation matrix and its transpose, is derived by optimizing a trace-difference form of an objective function extended from our earlier work, with constraints that guarantee class homogeneity within the neighborhood of each datum. Then, the subspace dimension and the feature weights are simultaneously obtained via the singular value decomposition of the feature Gram matrix. In addition, to alleviate the computational complexity, a Kronecker product approximation of the feature Gram matrix is proposed by taking advantage of the essential matrix form of image pixels. Experiments on simulated and real-world data demonstrate the capability of the new PFGE framework in estimating the subspace dimension for supervised learning, as well as its superior classification performance over traditional subspace learning algorithms.
17.
18.
Pre-pruning and Post-pruning are two standard techniques for handling noise in decision tree learning. Pre-pruning deals with noise during learning, while post-pruning addresses this problem after an overfitting theory has been learned. We first review several adaptations of pre- and post-pruning techniques for separate-and-conquer rule learning algorithms and discuss some fundamental problems. The primary goal of this paper is to show how to solve these problems with two new algorithms that combine and integrate pre- and post-pruning.
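A minimal sketch of the separate-and-conquer loop with a crude pre-pruning threshold, on hypothetical set-valued examples (illustrative only, not the specific algorithms from the paper):

```python
def learn_rules(examples, min_coverage=2):
    """Separate-and-conquer sketch: greedily pick the single-feature test
    that covers the most positives and no negatives, remove ("separate")
    the covered examples, and repeat ("conquer") on the remainder.
    min_coverage is a crude pre-pruning threshold against noise."""
    pos = [e for e, y in examples if y]
    neg = [e for e, y in examples if not y]
    features = {f for e, _ in examples for f in e}
    rules = []
    while pos:
        # candidate tests that cover no negatives (pure rules only)
        cands = [(f, sum(f in e for e in pos)) for f in features
                 if not any(f in e for e in neg)]
        cands = [(f, c) for f, c in cands if c >= min_coverage]
        if not cands:
            break  # pre-pruning: stop rather than fit the noisy remainder
        f, _ = max(cands, key=lambda fc: fc[1])
        rules.append(f)
        pos = [e for e in pos if f not in e]  # separate: drop covered positives
    return rules

data = [({"a", "b"}, True), ({"a"}, True), ({"b"}, False), ({"c"}, True)]
print(learn_rules(data))  # ['a']; the lone 'c' positive falls below min_coverage
```

Post-pruning would instead learn rules for every remaining positive and then delete the unreliable ones afterwards; the paper's point is that the two strategies can be combined.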
19.
Boosting Algorithms for Parallel and Distributed Learning　　Cited by: 1 (self-citations: 0; by others: 1)
The growing amount of available information and its distributed and heterogeneous nature has a major impact on the field of data mining. In this paper, we propose a framework for parallel and distributed boosting algorithms intended for efficiently integrating specialized classifiers learned over very large, distributed and possibly heterogeneous databases that cannot fit into main computer memory. Boosting is a popular technique for constructing highly accurate classifier ensembles, where the classifiers are trained serially, with the weights on the training instances adaptively set according to the performance of previous classifiers. Our parallel boosting algorithm is designed for tightly coupled shared-memory systems with a small number of processors, with the objective of achieving maximal prediction accuracy in fewer iterations than boosting on a single processor. After all processors learn classifiers in parallel at each boosting round, they are combined according to the confidence of their predictions. Our distributed boosting algorithm is proposed primarily for learning from several disjoint data sites when the data cannot be merged together, although it can also be used for parallel learning where a massive data set is partitioned into several disjoint subsets for a more efficient analysis. At each boosting round, the proposed method combines classifiers from all sites and creates a classifier ensemble on each site. The final classifier is constructed as an ensemble of all classifier ensembles built on disjoint data sets. The new proposed methods applied to several data sets have shown that parallel boosting can achieve the same or even better prediction accuracy considerably faster than standard sequential boosting. Results from the experiments also indicate that distributed boosting has comparable or slightly improved classification accuracy over standard boosting, while requiring much less memory and computational time since it uses smaller data sets.
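The sequential boosting step that these parallel and distributed variants build on can be sketched with the standard AdaBoost weight update (a generic sketch, not the paper's confidence-based combination rule):

```python
import math

def adaboost_round(weights, correct):
    """One boosting round: compute the learner's weight alpha from its
    weighted error, then up-weight the misclassified instances and
    renormalize (the classical AdaBoost rule)."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - err) / err)   # assumes 0 < err < 1
    new = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    z = sum(new)
    return alpha, [w / z for w in new]

w = [0.25] * 4
alpha, w = adaboost_round(w, [True, True, True, False])  # one of four misclassified
# the single wrong instance now carries half the total weight
print(round(alpha, 3), [round(x, 3) for x in w])
```

In the distributed setting of the paper, the key change is that each site runs such rounds on its own data and the resulting weights/classifiers are exchanged between sites rather than the raw instances.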
20.
Average Reward Reinforcement Learning: Foundations, Algorithms, and Empirical Results　　Cited by: 12 (self-citations: 0; by others: 12)
This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.
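The R-learning method studied in this entry subtracts the average reward ρ from an otherwise Q-learning-like temporal-difference step; a minimal one-step sketch (as a simplification, ρ is updated unconditionally here, whereas the usual formulation updates it only on greedy actions):

```python
def r_learning_update(R, rho, s, a, r, s_next, alpha=0.1, beta=0.05):
    """One R-learning step: undiscounted TD update with the average
    reward rho subtracted from the immediate reward."""
    best_next = max(R[s_next].values())
    R[s][a] += alpha * (r - rho + best_next - R[s][a])
    # average-reward estimate tracks r + max R(s') - max R(s)
    rho += beta * (r + best_next - max(R[s].values()) - rho)
    return R, rho

# two states, two actions, all relative values initialized to zero
R = {0: {"stay": 0.0, "go": 0.0}, 1: {"stay": 0.0, "go": 0.0}}
R, rho = r_learning_update(R, 0.0, 0, "go", 1.0, 1)
print(round(R[0]["go"], 3), round(rho, 3))  # 0.1 0.045
```

Because no discount factor appears, the relative values R(s,a) stay bounded on cyclical tasks, which is exactly the property the paper contrasts with discounted Q-learning.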