Similar Literature
20 similar documents found.
1.
Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional ‘correction term’ in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro–macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

2.
Effective data mining solutions have long been anticipated in customer relationship management (CRM) to accurately predict customer behaviour, but industry research and case studies show sub-optimal CRM classification models caused by the inferior data quality inherent in CRM data sets. In this paper, one type of CRM data with a distinctive distribution pattern, termed Reduced Dimensionality, is discussed. A new classification framework, Partial Focus Feature Reduction, is proposed to handle CRM data sets with Reduced Dimensionality. It uses a collection of efficient data mining techniques, centred on a specially tailored modality grouping method, to significantly improve data quality and feature relevancy during preprocessing, ultimately achieving excellent classification performance with the right combination of classification algorithms.

3.
Traditional customer relationship management (CRM) studies mainly focus on CRM at a specific point in time. Such static CRM, and the derived knowledge of customer behavior, can help marketers redirect marketing resources for profit gain at that point in time. As time goes on, however, the static knowledge becomes obsolete. Therefore, CRM for an online retailer should be applied dynamically over time. Though the concept of buying-behavior-based CRM was advanced several decades ago, very little application of dynamic CRM has been reported to date.

In this paper, we propose a dynamic CRM model utilizing data mining and a monitoring agent system to extract longitudinal knowledge from the customer data and to analyze customer behavior patterns over time for the retailer. Furthermore, we show that longitudinal CRM could be usefully applied to solve several managerial problems, which any retailer may face.


4.
Combined SVM-Based Feature Selection and Classification   (Cited by 1; self-citations 0, others 1)
Feature selection is an important combinatorial optimisation problem in the context of supervised pattern classification. This paper presents four novel continuous feature selection approaches directly minimising the classifier performance. In particular, we include linear and nonlinear Support Vector Machine classifiers. The key ideas of our approaches are additional regularisation and embedded nonlinear feature selection. To solve our optimisation problems, we apply difference of convex functions programming which is a general framework for non-convex continuous optimisation. Experiments with artificial data and with various real-world problems including organ classification in computed tomography scans demonstrate that our methods accomplish the desired feature selection and classification performance simultaneously.

5.
The rapid increase of available DNA, protein, and other biological sequences has made the problem of discovering meaningful patterns from sequences an important task for Bioinformatics research. Among all types of patterns defined in the literature, the most challenging one is to find repeating patterns with gap constraints. In this article, we identify a new research problem for mining approximate repeating patterns (ARPs) with gap constraints, where the appearance of a pattern is subject to an approximate match, which is very common in biological sequences. To solve the problem, we propose an ArpGap (ARP mining with Gap constraints) algorithm with three major components for ARP mining: (1) a data‐driven pattern generation approach to avoid generating unnecessary candidates for validation; (2) a back‐tracking pattern search process to discover approximate occurrences of a pattern under user specified gap constraints; and (3) an Apriori‐like deterministic pruning approach to progressively prune patterns and cease the search process if necessary. Experimental results on synthetic and real‐world protein sequences show that ArpGap is efficient in terms of memory consumption and computational cost. The results further suggest that the proposed method is practical for discovering approximate patterns in protein sequences, where the sequence length is usually several hundred to a thousand and the pattern length is relatively short.
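The back-tracking occurrence search of component (2) can be illustrated with a short sketch. This is not the authors' ArpGap implementation; the function name and parameters are hypothetical, and it only checks whether one pattern has an approximate occurrence under a gap constraint:

```python
def occurs(seq, pattern, min_gap, max_gap, max_mismatch):
    """Backtracking check: does `pattern` occur in `seq` with the gap between
    consecutive matched positions in [min_gap, max_gap] and at most
    `max_mismatch` symbol mismatches (approximate match)?"""
    def extend(pos, idx, mismatches):
        if idx == len(pattern) - 1:          # whole pattern placed
            return True
        lo = pos + 1 + min_gap
        hi = min(pos + 1 + max_gap, len(seq) - 1)
        for nxt in range(lo, hi + 1):        # try every gap-feasible position
            m = mismatches + (seq[nxt] != pattern[idx + 1])
            if m <= max_mismatch and extend(nxt, idx + 1, m):
                return True
        return False                          # backtrack
    for start in range(len(seq)):
        m0 = int(seq[start] != pattern[0])
        if m0 <= max_mismatch and extend(start, 0, m0):
            return True
    return False
```

In the full algorithm this search would be embedded in the candidate-generation loop, with the Apriori-like pruning ceasing extension once no approximate occurrence survives.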

6.
With the increasing adoption of role-based access control (RBAC) in business security, role mining technology has been widely applied to aid the migration of non-RBAC systems to RBAC. However, because it is hard to handle a variety of constraint conflicts at the same time, none of the existing role mining algorithms can simultaneously satisfy the various constraints that typically describe an organization's security and business requirements. To extend the ability of role mining technology, this paper proposes a novel role mining approach using answer set programming (ASP) that complies with constraints and meets various optimization objectives, named constrained role miner (CRM). The key idea is that ASP is a declarative problem-solving approach: whether discovering RBAC configurations or resolving conflicts between constraints, ASP programs need not specify how answers are computed. Finally, we demonstrate the effectiveness and efficiency of our approach through experimental results.

7.
A constrained model predictive controller based on particle swarm optimisation   (Cited by 2; self-citations 1, others 1)
This paper studies the optimisation algorithms used in model predictive control (MPC) to solve constrained optimisation problems. To address the shortcomings of the traditional quadratic programming (QP) method, a particle swarm optimisation algorithm with chaotic initialisation (CPSO) is introduced and applied to MPC to solve control problems with both input and state constraints. Finally, an optimal control problem for a practical constrained linear discrete-time system is solved with both QP and PSO; a comparison of the simulation results demonstrates the advantages of the PSO-based MPC algorithm.
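As a rough illustration of the idea (not the paper's controller), the sketch below implements a particle swarm optimiser with chaotic logistic-map initialisation and box constraints handled by clipping; in an MPC setting, `f` would be the finite-horizon cost and the box bounds would encode the input/state constraints. All names and parameter values here are illustrative assumptions:

```python
import numpy as np

def cpso_minimize(f, lb, ub, n_particles=30, iters=200, seed=1):
    """PSO with chaotic (logistic-map) initialisation; box constraints
    are enforced by clipping. A generic sketch, not the paper's exact CPSO."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    # Chaotic initialisation: iterate the logistic map, then scale to [lb, ub].
    z = rng.uniform(0.1, 0.9, (n_particles, dim))
    for _ in range(50):
        z = 4.0 * z * (1.0 - z)
    x = lb + z * (ub - lb)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)            # keep particles feasible
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For a genuine MPC application, `f` would be evaluated by simulating the plant over the prediction horizon, and state constraints would typically enter as penalty terms rather than simple clipping.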

8.
In the last few years, encouraging results using radiative transfer model inversion techniques were obtained for land biophysical variable retrieval. However, the inversion of radiative transfer models is a severely ill-posed problem that may lead to significant uncertainties in the biophysical variable estimates. Improving the performance of the inversion process requires more information to be exploited, including better radiative transfer models, proper prior information on the distribution of canopy and atmosphere variables, knowledge of uncertainties in satellite measurements, and possible spatial and temporal constraints. In this study we focus on the use of coupled atmosphere-surface radiative transfer models (SMAC + SAIL + PROSPECT) to estimate some key biophysical variables from top-of-atmosphere canopy reflectance data. The inversion is performed over an ensemble of pixels belonging to a spatial window where aerosol properties are assumed to be constant, and over a temporal window of a few days where the vegetation state is assumed not to vary. The ensemble inversion scheme accounting for the spatial and temporal constraints is described. Top-of-atmosphere reflectance observations are simulated for 13 bands within the visible and near-infrared domains. The coupled model is inverted with a variational method implementation dedicated to solving very large inverse problems, based on the adjoint model and a Quasi-Newton optimisation technique with BFGS update. The multitemporal-patch inversion approach exploiting the spatial and temporal constraints is compared to the classical instantaneous-local inversion applied to a single pixel and date. The ‘ensemble’ approach shows significant performance improvements when retrieving aerosol optical thickness τ550 and some canopy characteristics (LAI, LAI × Cab and ALA).
Conclusions are drawn on the interest of such approaches, and perspectives are given, with due attention to their applicability within operational algorithms.

9.
The multi-mode resource-constrained project scheduling problem (MMRCPSP) with the objective of maximising the net present value of cash flows is an NP-hard problem with complex nonlinear characteristics, and the traditional particle swarm optimisation (PSO) algorithm has limitations in solving this class of discrete problems. Starting from the optimisation principle of PSO and combining it with the genetic algorithm, crossover and mutation operators are introduced into PSO, yielding a fast, easy-to-implement hybrid PSO algorithm for cash-flow optimisation in the MMRCPSP and broadening the application of PSO in discrete optimisation. Simulation results verify the effectiveness and efficiency of the algorithm.

10.
We present a branch-price-and-cut method to solve a maritime pickup and delivery problem with time windows and split loads. The fleet of ships is heterogeneous and fixed for the planning horizon, and no common depot exists. The cargoes picked up and delivered consist of both mandatory and optional cargoes, and each cargo may be split among several ships. The objective is to design a route for each ship that will maximize the total profit from transporting all the mandatory and a subset of the optional cargoes. To solve this problem we introduce a new path-flow formulation, which we solve by branch-price-and-cut. The subproblem is a new variant of the elementary shortest path problem with resource constraints, where a multi-dimensional knapsack problem is solved to compute optimal cargo quantities. Further, we present new valid inequalities for this problem, and adaptations of existing inequalities successfully used to solve related problems in the literature. Finally, the computational results show that for certain types of instances, our solution method outperforms existing methods proposed in the literature.

11.
Improving constrained pattern mining with first-fail-based heuristics   (Cited by 1; self-citations 0, others 1)
In this paper, we present a general framework to mine patterns with antimonotone constraints. This framework uses a technique that structures the pattern space in a way that facilitates the integration of constraints within the mining process. Furthermore, we introduce a powerful strategy that uses background information on the data to speed up the mining process. We illustrate our approach on a popular structured data mining problem, the frequent subgraph mining problem, and show, through experiments on synthetic and real-life data, that this general approach has advantages over state-of-the-art pattern mining algorithms.
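The paper works on subgraphs, but the core use of an antimonotone constraint, namely that once a pattern violates it, every super-pattern can be pruned, is easiest to see on itemsets. A minimal depth-first sketch (names are illustrative, not from the paper):

```python
def mine(transactions, items, min_support):
    """Depth-first itemset mining: support is antimonotone, so a branch is
    pruned as soon as the current itemset falls below min_support."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)
    results = []
    def dfs(current, remaining):
        for i, item in enumerate(remaining):
            candidate = current | {item}
            if support(candidate) >= min_support:    # antimonotone check
                results.append(frozenset(candidate))
                dfs(candidate, remaining[i + 1:])    # extend only survivors
            # else: prune -- no superset of `candidate` can pass the constraint
    dfs(frozenset(), items)
    return results
```

A first-fail-style heuristic in this setting would order `remaining` so that the items most likely to fail the constraint are tried first, cutting branches earlier.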

12.
Since the complexity and scale of systems are continuously increasing, there is growing interest in developing distributed algorithms capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under mild assumptions on the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature and present some engineering applications.
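A toy sketch of the flavour of such methods (not the paper's algorithm): agents with quadratic costs exchange gradient information with their graph neighbours and shift resource pairwise, so the total allocation is conserved at every step. For simplicity the sketch assumes the lower bounds are inactive at the optimum rather than enforcing them as the paper does; all names and parameters are illustrative:

```python
import numpy as np

def distributed_allocate(a, lower, total, edges, step=0.05, iters=2000):
    """Minimise sum_i a_i * x_i**2 subject to sum_i x_i = total and
    x_i >= lower_i, using only neighbour-to-neighbour gradient exchanges
    over `edges`. Antisymmetric pairwise updates keep sum(x) constant."""
    n = len(a)
    x = np.array(lower, dtype=float)
    x += (total - x.sum()) / n            # feasible start: spread the surplus
    for _ in range(iters):
        g = 2.0 * np.asarray(a) * x       # local marginal costs
        for i, j in edges:                # each edge shifts resource i <-> j
            d = step * (g[j] - g[i])      # flow toward the cheaper agent
            x[i] += d
            x[j] -= d
    return x
```

At a fixed point all connected agents have equal marginal costs, which is the optimality condition for the equality-constrained problem.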

13.
A simulated annealing algorithm for dynamic layout problem   (Cited by 1; self-citations 0, others 1)
The increased volatility of today's manufacturing world has demanded new approaches to modelling and solving many of its well-known problems, such as the facility layout problem. Over a decade ago, Rosenblatt published a key paper on modelling and solving the dynamic version of the facility layout problem (DLP). Since then, various researchers have proposed new and improved models and algorithms for the problem, and Balakrishnan and Cheng have recently published a comprehensive review of the literature on the subject. The DLP is a complex combinatorial optimisation problem, and the efficiency of simulated annealing (SA) in solving such problems is well known; however, a review of the available literature shows that SA has not yet been applied to the DLP. In this paper, an SA-based procedure for the DLP is developed and results for test problems are reported.

Scope and purpose

One of the characteristics of today's manufacturing environments is volatility. Under a volatile (dynamic) manufacturing environment, demand is not stable, and to operate efficiently facilities must adapt to changing demand conditions. This requires solving the dynamic layout problem (DLP), a complex combinatorial optimisation problem for which optimal solutions can be found only for small instances. This paper applies simulated annealing (SA), a well-established stochastic neighbourhood search technique with the potential to solve complex combinatorial optimisation problems, to the DLP. The paper presents in detail how to apply SA to the DLP, together with an extensive computational study showing that SA is quite effective in solving dynamic layout problems.
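The SA skeleton referred to above can be sketched generically; the cooling schedule, acceptance rule, and parameter values here are standard textbook choices, not the paper's exact procedure:

```python
import math
import random

def simulated_annealing(cost, initial, neighbour,
                        t0=10.0, cooling=0.995, iters=5000, seed=0):
    """Generic SA: accept worse moves with probability exp(-delta / T),
    geometric cooling. Returns the best solution found."""
    rnd = random.Random(seed)
    current = best = initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(iters):
        cand = neighbour(current, rnd)
        delta = cost(cand) - c_cur
        if delta <= 0 or rnd.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cur + delta   # accept the move
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling                               # geometric cooling
    return best, c_best
```

For a DLP, `initial` would be a tuple of per-period department permutations, `neighbour` would swap two departments within a randomly chosen period, and `cost` would sum per-period material-handling costs plus rearrangement costs between consecutive periods.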

14.
Web-based data mining techniques   (Cited by 7; self-citations 1, others 7)
Web data mining is a popular research topic combining data mining and the WWW. This paper surveys Web-based data mining techniques and introduces the currently most common taxonomy of Web mining: Web content mining, Web structure mining, and Web usage mining. Based on recent research, several hot topics are summarised. Because Web data are semi-structured, Web data mining is more complex than traditional database-oriented data mining. Finally, XML is introduced, a new technology whose emergence offers an opportunity to address the difficulties of Web data mining. Research on Web data mining is highly challenging and, at the same time, holds great development potential.

15.
Research on mining data cube gradients   (Cited by 2; self-citations 0, others 2)
1 Introduction. With the dramatic improvement in people's ability to generate, collect, and store digital data, the world faces an explosive growth of raw data of all kinds. Advances in database technology have enabled the efficient storage of massive amounts of data, and thousands of large databases are widely used in business, government, and scientific research. The accumulation of large data resources provides a basis for discovering useful information in historical data, and people expect databases to offer intelligent, or at least semi-automated, data analysis capabilities. Hence data warehousing, on-line analytical processing (OLAP), and data mining have emerged.

16.
An association mining algorithm based on "novelty degree"   (Cited by 2; self-citations 2, others 0)
The goal of association mining is to discover useful, novel, and important association rules from large amounts of data. Traditional association mining algorithms produce many trivial rules that are obvious to the user, drowning out the genuinely novel and useful ones, while improved algorithms targeting novelty often require complex and laborious expression of prior knowledge. In this paper, using a simple classification tree, we introduce the concept of "novelty degree" and improve the Apriori algorithm, obtaining a novelty-degree-based association mining algorithm that fully accounts for novelty during mining while avoiding overly complex representation of prior knowledge.
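A minimal sketch of the novelty-filtering idea: frequent item pairs are mined Apriori-style, and a rule is treated as trivial (novelty 0) when its items share a category in a simple classification tree, and novel (novelty 1) otherwise. The taxonomy, names, and threshold are illustrative assumptions, not the paper's exact definition of novelty degree:

```python
from itertools import combinations

def novel_rules(transactions, category, min_support):
    """Mine frequent item pairs, then keep only 'novel' pairs: those whose
    items come from different categories in the classification tree."""
    items = {i for t in transactions for i in t}
    def support(s):
        return sum(1 for t in transactions if s <= t)
    frequent = [set(p) for p in combinations(sorted(items), 2)
                if support(set(p)) >= min_support]
    # novelty degree: 0 if both items share a category (trivial), else 1
    return [p for p in frequent if len({category[i] for i in p}) > 1]
```

A pair such as bread/butter, which falls entirely inside one "food" category, is suppressed as obvious, while a cross-category pair such as beer/diapers survives.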

17.
Rough-set-based attribute reduction, combined with sampling, is used to reduce the attributes of the massive KDD99 data set and lower the correlation among attributes. Applying rough set reduction, which has a solid mathematical foundation, to the massive network-sniffing data produces an intrusion detection attribute set with good independence and improves learning-time efficiency. A decision tree, which shares the same theoretical basis, is used to confirm the effectiveness of the reduction and the detection-time efficiency, and to generate detection rules.
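Dependency-based rough set reduction can be sketched in a few lines: the positive region counts rows whose equivalence class under a given attribute subset has a single decision value, and attributes are added greedily until the reduced set separates decisions as well as the full set. A generic illustration, not the paper's KDD99 pipeline:

```python
from collections import defaultdict

def dependency(rows, attrs, dec):
    """Fraction of rows in the positive region: a row counts if its
    equivalence class under `attrs` has a single decision value."""
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in attrs)].append(r[dec])
    pos = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return pos / len(rows)

def greedy_reduct(rows, attrs, dec):
    """Add the attribute that raises dependency most, until the reduced
    set separates the decision as well as the full attribute set."""
    full = dependency(rows, attrs, dec)
    red = []
    while dependency(rows, red, dec) < full:
        best = max((a for a in attrs if a not in red),
                   key=lambda a: dependency(rows, red + [a], dec))
        red.append(best)
    return red
```

On real-scale data such as KDD99, this greedy search would be run on samples, as the abstract describes, since computing partitions over the full record set repeatedly is expensive.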

18.
Computer, 2001, 34(12): 48-55
Decision support systems form the core of business IT infrastructures because they let companies translate business information into tangible and lucrative results. Collecting, maintaining, and analyzing large amounts of data, however, involves expensive technical challenges that require organizational commitment. Many commercial tools are available for each of the three major data warehousing tasks: populating the data warehouse from independent operational databases, storing and managing the data, and analyzing the data to make intelligent business decisions. Data cleaning relates to heterogeneous data integration, a problem studied for many years, and more work must be done to develop domain-independent tools that solve the data cleaning problems associated with data warehouse development. Most data mining research has focused on developing algorithms for building more accurate models or building models faster. However, data preparation and mining model deployment present several engaging problems that relate specifically to achieving better synergy between database systems and data mining technology.

19.
We address the one-to-one multi-commodity pickup and delivery traveling salesman problem (m-PDTSP) which is a generalization of the TSP and arises in several transportation and logistics applications. The objective is to find a minimum-cost directed Hamiltonian path which starts and ends at given depot nodes and such that the demand of each given commodity is transported from the associated source to its destination and the vehicle capacity is never exceeded. In contrast, the many-to-many one-commodity pickup and delivery traveling salesman problem (1-PDTSP) just considers a single commodity and each node can be a source or target for units of this commodity. We show that the m-PDTSP is equivalent to the 1-PDTSP with additional precedence constraints defined by the source–destination pairs for each commodity and explore several models based on this equivalence. In particular, we consider layered graph models for the capacity constraints and introduce new valid inequalities for the precedence relations. Especially for tightly capacitated instances with a large number of commodities our branch-and-cut algorithms outperform the existing approaches. For the uncapacitated m-PDTSP (which is known as the sequential ordering problem) we are able to solve to optimality several open instances from the TSPLIB and SOPLIB.

20.
Data envelopment analysis (DEA) is a non-parametric method for evaluating the relative efficiency of decision-making units (DMUs) on the basis of multiple inputs and outputs. Conventional DEA models assume that inputs and outputs are measured by exact values on a ratio scale. However, the observed values of the input and output data in real-world problems are often vague or random. Indeed, decision makers (DMs) may encounter a hybrid uncertain environment where fuzziness and randomness coexist in a problem. Several researchers have proposed various fuzzy methods for dealing with the ambiguous and random data in DEA. In this paper, we propose three fuzzy DEA models with respect to probability-possibility, probability-necessity and probability-credibility constraints. In addition to addressing the possibility, necessity and credibility constraints in the DEA model we also consider the probability constraints. A case study for the base realignment and closure (BRAC) decision process at the U.S. Department of Defense (DoD) is presented to illustrate the features and the applicability of the proposed models.
