Similar Literature
20 similar articles found.
1.
We propose a surrogate model-assisted algorithm that uses a directed fuzzy graph to extract a user's cognition of evaluated individuals, in order to alleviate user fatigue in interactive genetic algorithms with fuzzy and stochastic individual fitness. We first present an approach to construct a directed fuzzy graph of an evolutionary population according to the individuals' dominance relations, cut-set levels and interval dominance probabilities, and then calculate an individual's crisp fitness from the out-degree and in-degree of the fuzzy graph. Training data are obtained using the fuzzy entropy of the evolutionary system, which guarantees the credibility of the samples used to train the surrogate model. We adopt a support vector regression machine as the surrogate model and train it on the sampled individuals and their crisp fitness. The surrogate model is then optimized with a traditional genetic algorithm for several generations, and some good individuals are submitted to the user in subsequent evolutions so as to guide and accelerate the evolution. Finally, we quantitatively analyze the performance of the presented algorithm in alleviating user fatigue and in increasing the chances of finding satisfactory individuals, and apply it to a fashion evolutionary design system to demonstrate its efficiency.
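As a rough illustration of the degree-based scoring idea, the sketch below builds a directed dominance graph from pairwise dominance probabilities and scores each individual by out-degree minus in-degree. The cut level alpha, the probability input and the normalization are illustrative assumptions, not the paper's exact formulas.

```python
# Hedged sketch: crisp fitness from a directed dominance graph.
# dom_prob[i][j] is an assumed probability that individual i dominates j;
# the cut level alpha and the (out-degree - in-degree) score are illustrative.

def crisp_fitness(dom_prob, alpha=0.6):
    n = len(dom_prob)
    out_deg = [0] * n
    in_deg = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and dom_prob[i][j] >= alpha:   # keep only confident dominance edges
                out_deg[i] += 1
                in_deg[j] += 1
    # Normalize to [0, 1] so the values can train a surrogate regressor.
    return [(out_deg[i] - in_deg[i] + (n - 1)) / (2 * (n - 1)) for i in range(n)]

if __name__ == "__main__":
    probs = [[0.0, 0.8, 0.7],
             [0.2, 0.0, 0.9],
             [0.3, 0.1, 0.0]]
    print(crisp_fitness(probs))   # individual 0 scores highest, individual 2 lowest
```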

2.
Recent work in dynamic graph algorithms has led to efficient algorithms for dynamic undirected graph problems such as connectivity. However, no efficient deterministic algorithms are known for the dynamic versions of fundamental directed graph problems like strong connectivity and transitive closure, as well as some undirected graph problems such as maximum matchings and cuts. We provide some explanation for this lack of success by presenting quadratic lower bounds on the certificate complexity of the seemingly difficult problems, in contrast to the known linear certificate complexity for the problems which have efficient dynamic algorithms. In many applications of dynamic (di)graph problems, a certain form of lookahead is available. Specifically, we consider the problems of assembly planning in robotics and the maintenance of relations in databases. These give rise to dynamic strong connectivity and dynamic transitive closure problems, respectively. We explain why it is reasonable, and indeed natural and desirable, to assume that lookahead is available in these two applications. Exploiting lookahead to circumvent their inherent complexity, we obtain efficient dynamic algorithms for strong connectivity and transitive closure. Received August 11, 1995; revised August 23, 1996.

3.
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
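For concreteness, here is a minimal iterated conditional modes (ICM) sketch on a 1-D chain MRF with a Potts smoothness term; the unary costs and chain structure are illustrative toy values, not the benchmark energies used in the paper.

```python
# Hedged sketch: ICM on a 1-D chain MRF with Potts smoothness.
# data[i][l] is the unary cost of giving pixel i label l; lam weights the
# Potts penalty between neighboring pixels. Purely illustrative values.

def icm(data, lam=1.0, sweeps=10):
    n, labels = len(data), len(data[0])
    x = [min(range(labels), key=lambda l: data[i][l]) for i in range(n)]  # greedy init
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            def local_cost(l):
                cost = data[i][l]
                if i > 0:
                    cost += lam * (l != x[i - 1])
                if i < n - 1:
                    cost += lam * (l != x[i + 1])
                return cost
            best = min(range(labels), key=local_cost)
            if best != x[i]:
                x[i], changed = best, True
        if not changed:          # local minimum of the energy reached
            break
    return x

if __name__ == "__main__":
    unary = [[0, 3], [2, 1], [0, 3], [3, 0], [3, 0]]
    print(icm(unary, lam=1.5))   # prints [0, 0, 0, 1, 1]: the isolated label is smoothed away
```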

4.
Iterated greedy algorithms belong to the class of stochastic local search methods. They are based on the simple and effective principle of generating a sequence of solutions by iterating over a constructive greedy heuristic using destruction and construction phases. This paper first presents an efficient randomized iterated greedy approach for the minimum weight dominating set problem, where, given a vertex-weighted graph, the goal is to identify a subset of the graph's vertices with minimum total weight such that each vertex of the graph is either in the subset or has a neighbor in the subset. Our proposed approach works on a population of solutions rather than on a single one. Moreover, it is based on a fast randomized construction procedure making use of two different greedy heuristics. Second, we present a hybrid algorithmic model in which the proposed iterated greedy algorithm is combined with the mathematical programming solver CPLEX. In particular, we improve the best solution provided by the iterated greedy algorithm with the solution polishing feature of CPLEX. The simulation results obtained on a widely used set of benchmark instances show that our proposed algorithms outperform current state-of-the-art approaches.
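A minimal sketch of the greedy construction step for the minimum weight dominating set follows. The cost-effectiveness ratio used here is one common greedy rule chosen for illustration; the paper's construction uses two different randomized heuristics.

```python
# Hedged sketch: greedy construction for a minimum weight dominating set.
# graph maps each vertex to its neighbor set; weight maps vertex -> positive weight.

def greedy_dominating_set(graph, weight):
    uncovered = set(graph)                     # vertices not yet dominated
    solution = set()
    while uncovered:
        def ratio(v):
            newly = len(({v} | graph[v]) & uncovered)
            return weight[v] / newly if newly else float("inf")
        best = min(graph, key=ratio)           # cheapest weight per newly covered vertex
        solution.add(best)
        uncovered -= {best} | graph[best]
    return solution

if __name__ == "__main__":
    g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
    w = {1: 5, 2: 1, 3: 2, 4: 4, 5: 1}
    print(greedy_dominating_set(g, w))   # {2, 5}: together they dominate every vertex
```

An iterated greedy method would then repeatedly destroy part of such a solution (remove some vertices) and rebuild it with the same construction rule, keeping the better result.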

5.
Implicit flows have a significant impact on the accuracy of taint analysis. To address this, an implicit-flow detection method based on the static single assignment (SSA) form of a program is proposed. Control dependences are detected by building the dominator tree of the control flow graph, program join points are found by computing dominance frontiers, and a virtual value function is introduced to obtain the multiple assignments of a variable at a join point, so that divergences in variable values can be identified and taint attributes marked. Comparison with manual audit results shows that this method can diagnose the under-tainting and over-tainting problems of two taint analysis tools, effectively reducing the false positive and false negative rates of implicit-flow analysis.
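The abstract relies on dominator trees and dominance frontiers to locate join points; a minimal sketch of computing dominators and dominance frontiers on a small control flow graph follows (the textbook iterative formulation, used only to illustrate how join points are found, not the tool described in the paper).

```python
# Hedged sketch: dominators and dominance frontiers of a control flow graph.
# cfg maps a node to its successor list; 'entry' is the start node.

def dominators(cfg, entry):
    nodes = set(cfg)
    preds = {n: [p for p in cfg if n in cfg[p]] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:                              # iterate to the data-flow fixpoint
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n])) if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom, preds

def dominance_frontier(cfg, entry):
    dom, preds = dominators(cfg, entry)
    df = {n: set() for n in cfg}
    for n in cfg:
        for p in preds[n]:
            # n is in the frontier of every dominator of p that does not strictly dominate n
            for d in dom[p]:
                if d not in dom[n] - {n}:
                    df[d].add(n)
    return df

if __name__ == "__main__":
    cfg = {"entry": ["a", "b"], "a": ["join"], "b": ["join"], "join": []}
    print(dominance_frontier(cfg, "entry")["a"])   # {'join'}: the merge point after the branch
```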

6.
Efficient Spare Allocation for Reconfigurable Arrays
Yield degradation from physical failures in large memories and processor arrays is of significant concern to semiconductor manufacturers. One method of increasing the yield for iterated arrays of memory cells or processing elements is to incorporate spare rows and columns in the die or wafer. These spare rows and columns can then be programmed into the array. The authors discuss the use of CAD approaches to reconfigure such arrays. Optimal reconfiguration is shown to be NP-complete. The authors present two algorithms for spare allocation that are based on graph-theoretic analysis. The first uses a branch-and-bound approach with early screening based on bipartite graph matching. The second is an efficient polynomial-time approximation algorithm. In contrast to existing greedy and exhaustive search algorithms, these algorithms provide highly efficient and flexible reconfiguration analysis.
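A minimal sketch of the underlying fault model: each fault sits at a (row, column) position, and allocating a spare row or column repairs every fault on that line. The most-faults-first greedy rule below is a simplified illustration, not the branch-and-bound or approximation algorithms of the paper.

```python
# Hedged sketch: greedy spare allocation for a faulty memory array.
# faults is a list of (row, col) fault positions; spare_rows / spare_cols are
# the numbers of available spare lines. Greedily repair the line covering the
# most remaining faults; this is only an illustrative heuristic and may fail
# on instances that an exact algorithm could repair.

def greedy_repair(faults, spare_rows, spare_cols):
    remaining = set(faults)
    used_rows, used_cols = [], []
    while remaining:
        row_counts, col_counts = {}, {}
        for r, c in remaining:
            row_counts[r] = row_counts.get(r, 0) + 1
            col_counts[c] = col_counts.get(c, 0) + 1
        best_row = max(row_counts, key=row_counts.get)
        best_col = max(col_counts, key=col_counts.get)
        prefer_row = (row_counts[best_row] >= col_counts[best_col]
                      or len(used_cols) >= spare_cols)
        if prefer_row and len(used_rows) < spare_rows:
            used_rows.append(best_row)
            remaining = {(r, c) for r, c in remaining if r != best_row}
        elif len(used_cols) < spare_cols:
            used_cols.append(best_col)
            remaining = {(r, c) for r, c in remaining if c != best_col}
        else:
            return None               # spares exhausted: array not repairable this way
    return used_rows, used_cols

if __name__ == "__main__":
    faults = [(0, 1), (0, 3), (2, 3), (4, 4)]
    # one spare row repairs row 0; spare columns repair columns 3 and 4
    print(greedy_repair(faults, spare_rows=1, spare_cols=2))
```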

7.
The computational complexity of a variety of problems from algorithmic game theory is investigated. These are variations on the question of whether a strategy in a normal form game survives iterated elimination of dominated strategies. The difficulty of the computational task depends on the notion of dominance involved, on the number of distinct payoffs, and on whether the game is constant-sum. Most of the open cases are fully classified, and the remaining cases are shown to be equivalent to certain questions regarding elimination orders on graphs. The classifications may serve as the basis for a discussion of the extent to which iterated dominance could be useful for restricting rationality for computationally bounded agents.
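As a concrete reference point for the elimination procedures studied here, the sketch below performs iterated elimination of strictly dominated pure strategies in a two-player normal form game (pure-strategy strict dominance only; the paper also considers other dominance notions).

```python
# Hedged sketch: iterated elimination of strictly dominated pure strategies.
# payoff_a[i][j] / payoff_b[i][j] are the row / column player's payoffs when
# the row player picks strategy i and the column player picks strategy j.

def strictly_dominated(payoffs, live_own, live_other, row_player):
    """Return one live strategy of this player that is strictly dominated, if any."""
    for s in live_own:
        for t in live_own:
            if t == s:
                continue
            if row_player:
                better = all(payoffs[t][j] > payoffs[s][j] for j in live_other)
            else:
                better = all(payoffs[i][t] > payoffs[i][s] for i in live_other)
            if better:
                return s
    return None

def iterated_elimination(payoff_a, payoff_b):
    rows = set(range(len(payoff_a)))
    cols = set(range(len(payoff_a[0])))
    while True:
        r = strictly_dominated(payoff_a, rows, cols, row_player=True)
        if r is not None:
            rows.discard(r)
            continue
        c = strictly_dominated(payoff_b, cols, rows, row_player=False)
        if c is not None:
            cols.discard(c)
            continue
        return rows, cols

if __name__ == "__main__":
    # Prisoner's dilemma: strategy 1 (defect) strictly dominates strategy 0 for both players.
    a = [[3, 0], [5, 1]]
    b = [[3, 5], [0, 1]]
    print(iterated_elimination(a, b))   # ({1}, {1})
```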

8.
In this paper, we aim to narrow the gap between theory and practice in production scheduling by introducing the notion of "rejection" and treating the corresponding scheduling problem with a multi-objective optimization approach. We study a bi-objective single machine scheduling problem with rejection. At the beginning of the scheduling horizon, the scheduler must decide, given the resource constraints, which jobs to reject, with respect to two objective functions: minimization of the total weighted completion time of the accepted jobs and of the total rejection penalty of the rejected jobs. We develop different algorithms to find the best estimation of the Pareto-optimal front for this problem. In order to improve the quality of the solutions on the one hand, and to facilitate the selection of the best solution by the final decision maker on the other, we integrate various dominance criteria into the proposed algorithms. Finally, we compare the performance of these methods on a large set of instances and highlight the strengths and weaknesses of each one.
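To make the two objectives concrete, the sketch below evaluates one accept/reject decision and filters a list of candidate solutions down to the non-dominated (Pareto) ones. Sequencing the accepted jobs by WSPT (weight over processing time), which minimizes total weighted completion time on a single machine, is an assumption of this sketch, not necessarily the paper's procedure.

```python
# Hedged sketch: evaluating one accept/reject decision for single-machine
# scheduling with rejection, plus a Pareto filter over candidate solutions.
# jobs maps a job id to (processing_time, weight, rejection_penalty).

def evaluate(jobs, rejected):
    accepted = [j for j in jobs if j not in rejected]
    accepted.sort(key=lambda j: jobs[j][0] / jobs[j][1])     # WSPT order
    t, twct = 0, 0
    for j in accepted:
        p, w, _ = jobs[j]
        t += p
        twct += w * t                                        # weighted completion time
    penalty = sum(jobs[j][2] for j in rejected)
    return twct, penalty

def pareto_front(points):
    """Keep the points not dominated by any other point (both objectives minimized)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

if __name__ == "__main__":
    jobs = {1: (4, 2, 6), 2: (1, 3, 5), 3: (3, 1, 2)}
    candidates = [evaluate(jobs, rejected) for rejected in (set(), {3}, {1, 3})]
    print(pareto_front(candidates))   # all three candidates are mutually non-dominated
```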

9.
Graphs are widely used for modeling complicated data such as social networks, bibliographic networks and knowledge bases. The growing sizes of graph databases motivate the crucial need for developing powerful and scalable graph-based query engines. We propose a SPARQL-like language, G-SPARQL, for querying attributed graphs. The language enables the expression of different types of graph queries that are of great interest for databases modeled as large graphs, such as pattern matching, reachability and shortest path queries. Each query can combine both structural predicates and value-based predicates (on the attributes of the graph nodes/edges). We describe an algebraic compilation mechanism for our proposed query language which is extended from the relational algebra and based on the basic construct for building SPARQL queries, the triple pattern. We describe an efficient hybrid memory/disk representation of large attributed graphs where only the topology of the graph is maintained in memory while the data of the graph are stored in a relational database. The execution engine of our proposed query language pushes parts of the query plan inside the relational database (using SQL) while the execution of other parts of the query plan is processed using memory-based algorithms, as necessary. Experimental results on real and synthetic datasets demonstrate the efficiency and the scalability of our approach and show that it outperforms native graph databases by several factors.

10.
Querying graph data is a fundamental problem that has attracted increasing interest, especially for massive graph databases, which are a promising alternative to relational databases for big data modeling. In this paper, we study the subgraph isomorphism search problem, which consists in enumerating the embeddings of a query graph in a data graph. The best-known solutions to this NP-complete problem are backtracking-based and incur a high computational cost on massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them, yielding a substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-world datasets. The experimental results show that our approach is efficient and scalable.
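For reference, here is a minimal plain backtracking search for subgraph isomorphism, i.e. the baseline that compression-based approaches aim to accelerate; no modular decomposition or compression is performed in this sketch.

```python
# Hedged sketch: plain backtracking subgraph isomorphism (only the query's edges
# must be preserved; induced embeddings are not required). A compression-based
# approach would instead search a much smaller, compressed data graph.

def subgraph_embeddings(query, data):
    """query and data map each vertex to its set of neighbors (undirected)."""
    q_nodes = list(query)
    results = []

    def extend(mapping):
        if len(mapping) == len(q_nodes):
            results.append(dict(mapping))
            return
        u = q_nodes[len(mapping)]                     # next query vertex to map
        for v in data:
            if v in mapping.values():
                continue
            # every already-mapped neighbor of u must be adjacent to v in the data graph
            if all(mapping[w] in data[v] for w in query[u] if w in mapping):
                mapping[u] = v
                extend(mapping)
                del mapping[u]

    extend({})
    return results

if __name__ == "__main__":
    query = {"x": {"y"}, "y": {"x", "z"}, "z": {"y"}}          # a path on three vertices
    data = {1: {2}, 2: {1, 3, 4}, 3: {2}, 4: {2}}              # a star centered at vertex 2
    print(len(subgraph_embeddings(query, data)))               # 6 embeddings of the path
```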

11.
Several methods have been developed in the past for the efficient solution of sparse systems of linear equations with Gaussian elimination. In [5] the generalized nested dissection method is introduced; it finds an ordering of the rows and columns of the coefficient matrix that produces small fill-in. The method of [6] exploits the hierarchical structure imposed on the coefficient matrix by nested dissection to circumvent the need for general sparse matrix algorithms: the matrix is decomposed into submatrices, these are solved, and the solution is reassembled. We take the approach of [6] one step further: in many engineering applications, such as finite element problems and circuit analysis problems, many of the submatrices are identical. We exploit this fact to drastically reduce the space requirements for solving systems of linear equations that are defined succinctly using hierarchical definitions. Furthermore, if only a few components of the solution vector are asked for, we can by the same token achieve drastic savings in computing time. We obtain our results by extending a method for hierarchical graph processing presented in [4] to matrix computation. Our approach generalizes special solutions given in [9, 10].

12.
We address efficient processing of SPARQL queries over RDF datasets. The proposed techniques, incorporated into the gStore system, handle, in a uniform and scalable manner, SPARQL queries with wildcards and aggregate operators over dynamic RDF datasets. Our approach is graph based. We store RDF data as a large graph and also represent a SPARQL query as a query graph. Thus, the query answering problem is converted into a subgraph matching problem. To achieve efficient and scalable query processing, we develop an index, together with effective pruning rules and efficient search algorithms. We propose techniques that use this infrastructure to answer aggregation queries. We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solutions.

13.
Communication networks form the backbone of our society. Topology control algorithms optimize the topology of such communication networks. Due to the importance of communication networks, a topology control algorithm should guarantee certain required consistency properties (e.g., connectivity of the topology), while achieving desired optimization properties (e.g., a bounded number of neighbors). Real-world topologies are dynamic (e.g., because nodes join, leave, or move within the network), which requires topology control algorithms to operate in an incremental way, i.e., based on the recently introduced modifications of a topology. Visual programming and specification languages are a proven means for specifying the structure as well as consistency and optimization properties of topologies. In this paper, we present a novel methodology, based on a visual graph transformation and graph constraint language, for developing incremental topology control algorithms that are guaranteed to fulfill a set of specified consistency and optimization constraints. More specifically, we model the possible modifications of a topology control algorithm and the environment using graph transformation rules, and we describe consistency and optimization properties using graph constraints. On this basis, we apply and extend a well-known constructive approach to derive refined graph transformation rules that preserve these graph constraints. We apply our methodology to re-engineer an established topology control algorithm, kTC, and evaluate it in a network simulation study to show the practical applicability of our approach.

14.
Yujun Zheng, Jinyun Xue. Computing, 2010, 88(1-2): 31-54.
The paper presents a novel approach to formal algorithm design for a typical class of discrete optimization problems. Using a concise set of program calculation rules, our approach reduces a problem into subproblems of lower complexity based on function decompositions and constructs the problem reduction graph that describes the recurrence relations between the problem and its subproblems, from which a provably correct algorithm can be mechanically derived. Our approach covers a large variety of algorithms and bridges the gap between conventional methods for designing efficient algorithms (including dynamic programming and greedy methods) and effective methods for coping with intractability (including approximation and parameterization).
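One way to read such a problem reduction graph is as a memoized recurrence. The sketch below uses the 0/1 knapsack recurrence, chosen purely as a stand-in example (it is not taken from the paper), to show the reduce-and-recombine pattern: each state is reduced to at most two subproblems and their values are recombined.

```python
# Hedged sketch: reading a problem reduction graph as a memoized recurrence,
# illustrated on the 0/1 knapsack recurrence. A state (i, cap) is reduced to
# at most two subproblems, and their optimal values are recombined with max().
from functools import lru_cache

def knapsack(values, weights, capacity):
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(values):                 # no items left: a base node of the reduction graph
            return 0
        skip = best(i + 1, cap)              # subproblem 1: item i not taken
        if weights[i] <= cap:                # subproblem 2: item i taken, if it fits
            return max(skip, values[i] + best(i + 1, cap - weights[i]))
        return skip
    return best(0, capacity)

if __name__ == "__main__":
    print(knapsack((60, 100, 120), (10, 20, 30), 50))   # 220: take the last two items
```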

15.
Functional graphs are a convenient representation that we have introduced to model automated production systems. They are useful for the monitoring and supervision of manufacturing processes and of other industrial processes, such as chemical processes. An approach based on relational theory and graph theory is presented in this paper. This approach allows the structural properties of a functional graph to be characterized formally and maps the graph into a set of relations capturing all the complete paths existing in the initial graph. Two kinds of functional graphs are analyzed and algorithms exploiting their structures are presented. We introduce the concept of diagnosability as a system property that reflects the possibility of observing the behavior of a system with respect to faults. Diagnosability is defined and analyzed by means of computable states and mathematical relations. Propositions explaining causality relations between the functions of a functional graph are given.

16.
Spatial data mining algorithms heavily depend on the efficient processing of neighborhood relations, since the neighbors of many objects have to be investigated in a single run of a typical algorithm. Therefore, providing general concepts for neighborhood relations as well as an efficient implementation of these concepts will allow a tight integration of spatial data mining algorithms with a spatial database management system. This will speed up both the development and the execution of spatial data mining algorithms. In this paper, we define neighborhood graphs and paths and a small set of database primitives for their manipulation. We show that typical spatial data mining algorithms are well supported by the proposed basic operations. For finding significant spatial patterns, only certain classes of paths "leading away" from a starting object are relevant. We discuss filters that admit only such neighborhood paths, which significantly reduces the search space for spatial data mining algorithms. Furthermore, we introduce neighborhood indices to speed up the processing of our database primitives. We implemented the database primitives on top of a commercial spatial database management system. The effectiveness and efficiency of the proposed approach were evaluated by using an analytical cost model and an extensive experimental study on a geographic database.
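A minimal sketch of the path-extension primitive with a "leading away" filter follows. It assumes the neighborhood relation is Euclidean distance at most eps between 2-D points and that "leading away" means the distance to the start object strictly increases; both are illustrative stand-ins for the database primitives described in the paper.

```python
# Hedged sketch: neighborhood paths that "lead away" from their start object.
# Objects are 2-D points; two objects are neighbors if their Euclidean distance
# is at most eps, and a path is extended only by objects farther from the start
# than the current endpoint. Both choices are illustrative.
from math import dist

def neighbors(objects, o, eps):
    return [p for p in objects if p != o and dist(p, o) <= eps]

def extend_paths(objects, paths, eps):
    """Extend each path by one neighbor, keeping only paths that lead away."""
    out = []
    for path in paths:
        last, start = path[-1], path[0]
        for nxt in neighbors(objects, last, eps):
            if nxt not in path and dist(nxt, start) > dist(last, start):
                out.append(path + [nxt])
    return out

if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (2, 0), (1, 1)]
    paths = [[(0, 0)]]
    for _ in range(2):
        paths = extend_paths(pts, paths, eps=1.5)
    print(paths)   # length-3 paths that move steadily away from (0, 0)
```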

17.
李艳, 郭娜娜, 吴婷婷, 湛燕. 计算机科学 (Computer Science), 2018, 45(10): 229-234.
Within the framework of the dominance-based rough set approach (DRSA), this paper studies attribute reduction for inconsistent target information systems. Methods based on the dominance matrix are among the most commonly used reduction methods, but not all elements of the matrix are useful. A condensed dominance matrix keeps only the minimal attribute sets that are relevant for computing reducts, which significantly reduces the amount of computation during reduction. Furthermore, a condensed Boolean matrix, expressed in Boolean-algebraic form, effectively compensates for the low efficiency of generating the dominance matrix. This paper extends the condensed-Boolean-matrix attribute reduction method from equivalence relations to dominance relations, proposes the concept of a condensed Boolean matrix for the dominance matrix, and establishes a corresponding efficient reduction method, obtaining a marked improvement in efficiency. Finally, experiments on nine UCI data sets verify the effectiveness of the proposed method.
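As a small illustration of the kind of dominance information such matrices encode, the sketch below builds, for every object pair (x, y), the set of criteria on which x is at least as good as y, and derives the plain dominance relation from it. The "larger is better" orientation and this particular matrix definition are assumptions of the sketch, not the paper's condensed construction.

```python
# Hedged sketch: the dominance matrix behind dominance-based attribute reduction.
# table[x] is the vector of criterion values of object x (larger is better, an
# assumption). matrix[x][y] is the set of criterion indices on which x is at
# least as good as y; x dominates y when that set contains every criterion.

def dominance_matrix(table):
    n, m = len(table), len(table[0])
    matrix = [[{a for a in range(m) if table[x][a] >= table[y][a]}
               for y in range(n)] for x in range(n)]
    dominates = [[len(matrix[x][y]) == m and x != y for y in range(n)] for x in range(n)]
    return matrix, dominates

if __name__ == "__main__":
    # three objects evaluated on two criteria
    table = [[3, 2], [1, 1], [2, 3]]
    matrix, dominates = dominance_matrix(table)
    print(matrix[0][1])        # {0, 1}: object 0 is at least as good as object 1 everywhere
    print(dominates[0][1])     # True
    print(dominates[0][2])     # False: objects 0 and 2 are incomparable
```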

18.
This paper considers the job-shop problem with release dates and due dates, with the objective of minimizing the total weighted tardiness. A genetic algorithm is combined with an iterated local search that uses a longest path approach on a disjunctive graph model. A design of experiments approach is employed to calibrate the parameters and operators of the algorithm. Previous studies on genetic algorithms for the job-shop problem point out that these algorithms are highly dependent on the way the chromosomes are decoded. In this paper, we show that the efficiency of genetic algorithms no longer depends on the schedule builder when an iterated local search is used. Computational experiments carried out on instances from the literature show the efficiency of the proposed algorithm.
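The local search relies on longest paths in the disjunctive graph; below is a minimal sketch of that building block, computing longest path lengths by topological order in an already-oriented (acyclic) disjunctive graph. The toy graph and its arc weights, which stand in for operation processing times, are assumptions for illustration only.

```python
# Hedged sketch: longest path lengths in an oriented disjunctive graph (a DAG).
# graph maps a node to a list of (successor, weight) arcs; weights play the role
# of processing times. Such head lengths drive critical-path reasoning.
from collections import deque

def longest_paths(graph, source):
    indeg = {n: 0 for n in graph}
    for n in graph:
        for m, _ in graph[n]:
            indeg[m] += 1
    dist = {n: float("-inf") for n in graph}
    dist[source] = 0
    queue = deque(n for n in graph if indeg[n] == 0)
    while queue:                                  # process nodes in topological order
        n = queue.popleft()
        for m, w in graph[n]:
            if dist[n] != float("-inf"):
                dist[m] = max(dist[m], dist[n] + w)
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return dist

if __name__ == "__main__":
    g = {"start": [("o1", 0), ("o3", 0)], "o1": [("o2", 3)],
         "o2": [("end", 2)], "o3": [("o2", 4), ("end", 5)], "end": []}
    print(longest_paths(g, "start")["end"])   # 6: start -> o3 -> o2 -> end
```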

19.
We propose an approach that allows a user (e.g., an analyst) to explore a layout produced by any graph drawing algorithm, in order to reduce the visual complexity and clarify its presentation. Our approach is based on stratifying the drawing into layers with desired properties; to this aim, heuristics are presented. The produced layers can be explored and combined by the user to gradually acquire details. We present a user study to test the effectiveness of our approach. Furthermore, we performed an experimental analysis of popular force-directed graph drawing algorithms, in order to determine which algorithm produces the smallest number of layers and whether there is any correlation between the number of crossings and the number of layers of a graph layout. The proposed approach is useful for exploring graph layouts, as confirmed by the presented user study. Furthermore, interesting considerations arise from the experimental evaluation; in particular, our results suggest that the number of layers of a graph layout may be a reliable measure of its visual complexity. The algorithms presented in this paper can be effectively applied to graph layouts with a few hundred edges and vertices. For larger drawings that contain many crossings, the time complexity of our algorithms grows quadratically in the number of edges, and more efficient techniques need to be devised. The proposed approach takes as input a layout produced by any graph drawing algorithm, so it can be applied in a variety of application domains. Several research directions can be explored to extend our framework and to devise new visualization paradigms that effectively present stratified drawings.

20.
In the last decade, skyline queries have gained much attention and have proved valuable for multi-criteria decisions. Based on the concept of Pareto dominance, they return the non-dominated points, called the skyline points. In practice, the skyline may contain only a small number of points, which can be insufficient for the user's needs. In this paper, we discuss two fuzzy-set-based approaches to enriching a small skyline with particular points that could serve the decision makers' needs. The basic idea consists in identifying the most interesting points among the non-skyline ones. On the one hand, we introduce a novel fuzzy dominance relationship which makes dominance between the points of interest more demanding, so that more points are considered incomparable and thus become elements of the new, relaxed skyline. On the other hand, we leverage an appropriate fuzzy closeness relation to retrieve non-skyline points that are fuzzily close to some skyline points. Furthermore, we develop efficient algorithms to compute the relaxed variants of the skyline. Extensive experiments are conducted to demonstrate the effectiveness of our approaches and analyze the performance of the proposed algorithms. A comparative study of the presented approaches is made as well.
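A minimal sketch of the underlying Pareto skyline plus one simple relaxation in the spirit of the closeness idea: a non-skyline point is added back when it lies within a tolerance of some skyline point on every dimension. The crisp additive tolerance is only an illustrative stand-in for the fuzzy relations proposed in the paper.

```python
# Hedged sketch: Pareto skyline (smaller is better on every dimension) and a
# simple closeness-based relaxation standing in for the fuzzy closeness relation.

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

def relaxed_skyline(points, tol):
    sky = skyline(points)
    close = [p for p in points if p not in sky and
             any(all(abs(a - b) <= tol for a, b in zip(p, s)) for s in sky)]
    return sky + close

if __name__ == "__main__":
    pts = [(1, 5), (2, 2), (5, 1), (3, 2), (6, 6)]
    print(skyline(pts))             # [(1, 5), (2, 2), (5, 1)]
    print(relaxed_skyline(pts, 1))  # also admits (3, 2), which is close to (2, 2)
```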
