Similar Documents
20 similar documents retrieved.
1.
Graph partitioning is a fundamental task in distributed graph computing: it divides a large-scale graph and assigns the parts to different machines in a cluster. Partitioning quality strongly affects the performance of distributed graph computation; the goals are to balance the load across partitions and to minimize the edge cut. Real-world graph data today typically grows dynamically, which calls for a partitioning method that can handle dynamic incremental graphs and preserve partition quality as the graph grows. Although several dynamic graph partitioning algorithms have been proposed, none of them both processes dynamic changes in real time and produces high-quality partitions. This paper proposes ED-IDGP, an incremental dynamic graph partitioning algorithm based on vertex-group reassignment, to partition large-scale dynamic incremental graphs. ED-IDGP uses a dynamic processor that handles four types of unit update in real time and, after each unit update, further improves partition quality by running a local optimizer near the partitions affected by the change. The local optimizer searches for vertex groups with a strategy based on an improved label propagation algorithm, identifies the most beneficial vertex group with a proposed vertex-group movement gain formula, and moves that group into the target partition. The performance and efficiency of ED-IDGP are evaluated on real datasets from multiple perspectives and with multiple metrics.
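A minimal sketch of the kind of vertex-group move gain such a local optimizer can use; the actual gain formula is defined in the paper, and the balance term and names below are illustrative assumptions:

```python
# Hypothetical move gain for a vertex group; alpha weighs the balance term.
def move_gain(graph, group, part_of, part_sizes, target, alpha=1.0):
    """graph: {v: set of neighbours}; group: vertices in one source partition;
    part_of: {v: partition id}; part_sizes: {partition id: vertex count}."""
    src = part_of[next(iter(group))]
    becomes_internal = becomes_cut = 0
    for v in group:
        for u in graph[v]:
            if u in group:
                continue                      # edge stays inside the group
            if part_of[u] == target:
                becomes_internal += 1         # cut edge removed by the move
            elif part_of[u] == src:
                becomes_cut += 1              # internal edge newly cut
    # penalize moves that worsen the size imbalance between the two partitions
    penalty = alpha * max(0, part_sizes[target] + len(group) - part_sizes[src])
    return becomes_internal - becomes_cut - penalty
```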

2.
Contextual Part Analogies in 3D Objects
In this paper we address the problem of finding analogies between parts of 3D objects. By partitioning an object into meaningful parts and finding analogous parts in other objects, not necessarily of the same type, many analysis and modeling tasks can be enhanced. For instance, partial-match queries can be formulated, annotations of parts can be reused across objects, and modeling-by-parts applications can be supported. We define a similarity measure between two parts based not only on their local signatures and geometry, but also on their context within the shape to which they belong. In our approach, all objects are hierarchically segmented (e.g. using the shape diameter function), and each part is given a local signature. To find corresponding parts in other objects, however, we use context-enhanced part-in-whole matching. Our matching function is based on bipartite graph matching and is computed using a flow algorithm which takes into account both local geometrical features and the partitioning hierarchy. We present results on finding part analogies among numerous objects from shape repositories, and demonstrate sub-part queries using an implementation of a simple search-and-retrieval application. We also demonstrate a simple annotation tool that carries textual tags of object parts from one model to many others using analogies, laying a basis for semantic, text-based search.
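The paper computes the matching with a flow algorithm over a bipartite graph; as a hedged illustration, the sketch below poses part correspondence as an assignment problem whose cost mixes local-signature distance with a hierarchy-depth term (the weighting and the depth feature are assumptions) and solves it with the Hungarian method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_parts(sig_a, sig_b, depth_a, depth_b, w_ctx=0.3):
    """sig_*: (n_parts, d) arrays of local signatures;
    depth_*: (n_parts,) hierarchy depths acting as a crude context feature."""
    sig_cost = np.linalg.norm(sig_a[:, None, :] - sig_b[None, :, :], axis=2)
    ctx_cost = np.abs(depth_a[:, None] - depth_b[None, :])
    cost = (1.0 - w_ctx) * sig_cost + w_ctx * ctx_cost
    rows, cols = linear_sum_assignment(cost)  # minimum-cost part pairing
    return list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())
```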

3.
This paper presents an external parallelization of Constraint Programming (CP) search trees that mixes static and dynamic partitioning. The principle of the parallelization is to partition the CP search tree into a set of sub-trees, then assign each sub-tree to one computing core, which performs a local search using a sequential CP solver. In this context, static partitioning decomposes the CP variable domains in order to split the CP search tree into a set of disjoint sub-trees that are assigned to the cores. This strategy performs well without adding extra cost to the parallel search, but it suffers from load imbalance between computing cores. Dynamic partitioning, on the other hand, is based on preserving the search state in order to generate, dynamically or on demand, the sub-trees that are assigned to the cores. This strategy offers good load balancing between the different computing cores, but incurs extra computation to initialise the search whenever a sub-tree is migrated from one core to another. In this paper, we propose a new partitioning strategy that mixes static and dynamic partitioning and enjoys the benefits of each. This mixed partitioning is designed to run on both shared- and distributed-memory architectures. Its performance is illustrated by solving CP problems modelled in the FlatZinc format and solved using the Google OR-Tools solver on top of the parallel Bobpp framework.
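A minimal sketch of the static side of this scheme, assuming a fixed variable order and a chosen prefix length k; in practice the decomposition happens inside the CP solver:

```python
from itertools import product

def static_partition(domains, k, n_cores):
    """domains: list of value lists, one per variable. Each fixed assignment
    of the first k variables roots one disjoint sub-tree of the search tree."""
    roots = list(product(*domains[:k]))
    return [roots[i::n_cores] for i in range(n_cores)]  # round-robin deal

# Three 0/1 variables, prefix length 2, two cores:
print(static_partition([[0, 1], [0, 1], [0, 1]], k=2, n_cores=2))
```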

4.
Many current recognition systems use variations on constrained tree search to locate objects in cluttered environments. If the system is simply finding instances of an object known to be in the scene, then previous formal analysis has shown that the expected amount of search is quadratic in the number of model and data features when all the data is known to come from a single object, but is exponential when spurious data is included. If one can group the data into subsets likely to have come from a single object, then terminating the search once a “good enough” interpretation is found reduces the expected search to cubic. Without successful grouping, terminated search is still exponential. These results apply to finding instances of a known object in the data. What happens when the object is not present? In this article, we turn to the problem of selecting models from a library, and examine the combinatorial cost of determining that an incorrectly chosen candidate object is not present in the data. We show that the expected search is again exponential, implying that naive approaches to library indexing are likely to carry an expensive overhead, since an exponential amount of work is needed to weed out each incorrect model. The analytic results are shown to be in agreement with empirical data for cluttered object recognition.

5.
This article deals with the group interview problem, in which each group contains several alternatives and the groups are presented and evaluated sequentially over time. We derive the optimal selection strategy for the group interview problem with a general utility function. Among the various types of utility function, we focus on the best choice problem, in which our utility is one if we successfully select the best choice and zero otherwise. We derive a simple selection rule called the optimal partitioning strategy, in which the decision-maker divides the groups into two disjoint sets and, after evaluating the choices in the first set, selects the first relatively best choice to appear in the second set. Because the selected choice is not necessarily the absolutely best choice, we also consider the probability distribution of the actual rank of the choice selected under the partitioning strategy.

Scope and purpose

In many managerial decision situations such as buying a car, selling a house, or searching for a job, several alternatives are presented sequentially and an accept-or-reject decision is made immediately after evaluating each alternative. The classical secretary problem and its extensions have been successfully applied to such a sequential search and selection problem. This article deals with a more generalized version of the secretary problem, called the group interview problem, in which several groups of alternatives are presented and evaluated sequentially over time. Based on a stochastic dynamic programming approach, we propose the optimal selection strategy for the group interview problem with various types of the decision-maker's utility function. There are many potential applications of the group interview problem, including consumer search and purchase process, job search problem, sequential assignment of batch jobs, and so on.
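For the best-choice variant, a small Monte Carlo sketch can illustrate the partitioning strategy: observe the groups in the first set, then accept the first relatively best choice in the second set. The cutoff below is an illustrative parameter, not the optimal partition derived in the article:

```python
import random

def win_probability(group_sizes, cutoff, trials=100_000):
    """cutoff: number of leading groups used only for observation."""
    n = sum(group_sizes)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)    # 0 is the overall best
        seen, best_seen, accepted = 0, n, None
        for g, size in enumerate(group_sizes):
            group = ranks[seen:seen + size]
            seen += size
            if g < cutoff:                    # first set: observe only
                best_seen = min(best_seen, *group)
            elif accepted is None:            # second set: stop at the first
                cand = min(group)             # relatively best candidate
                if cand < best_seen:
                    accepted = cand
                best_seen = min(best_seen, *group)
        wins += (accepted == 0)
    return wins / trials

print(win_probability([3] * 10, cutoff=4))
```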

6.
Horizontal partitioning is a logical database design technique which facilitates efficient execution of queries by reducing the number of irrelevant objects accessed. Given a set of most frequently executed queries on a class, horizontal partitioning generates horizontal class fragments (each of which is a subset of the object instances of the class) that meet the queries' requirements. There are two types of horizontal class partitioning, namely, primary and derived. Primary horizontal partitioning of a class is performed using predicates of queries accessing the class. Derived horizontal partitioning of a class is the partitioning of a class based on the horizontal partitioning of another class. We present algorithms for both primary and derived horizontal partitioning, discuss some issues that arise in derived horizontal partitioning, and present solutions to them. There are two important aspects of supporting database operations on a partitioned database, namely, fragment localization for queries and object migration for updates. Fragment localization deals with identifying the horizontal fragments that contribute to the result of a query, and object migration deals with migrating objects from one class fragment to another due to updates. We provide novel solutions to these two problems, and finally we show the utility of horizontal partitioning for query processing.
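A minimal sketch of primary horizontal partitioning: fragmenting a class extent by the truth assignments (minterms) of simple query predicates. The predicates and data below are illustrative, not the paper's algorithm:

```python
def primary_fragments(objects, predicates):
    """predicates: boolean functions over an object. Each truth assignment
    (minterm) of the predicates defines one (possibly empty) fragment."""
    fragments = {}
    for obj in objects:
        key = tuple(p(obj) for p in predicates)
        fragments.setdefault(key, []).append(obj)
    return fragments

cars = [{"price": 8000}, {"price": 22000}, {"price": 41000}]
print(primary_fragments(cars, [lambda o: o["price"] < 10000,
                               lambda o: o["price"] < 30000]))
```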

7.
Mesh partitioning and skeletonisation are fundamental for many computer graphics and animation techniques. Because of the close link between an object's skeleton and its boundary, these two problems are in many cases complementary: any partitioning of the object can assist in the creation of a skeleton, and any segmentation of the skeleton can infer a partitioning of the object. In this paper, we consider these two problems on a wide variety of meshes, and strive to construct partitionings and skeletons that remain consistent across a family of objects, not just a single one. Such families can consist of either a single object in multiple poses and resolutions, or multiple objects which share a general common shape. To achieve consistency, we base our algorithms on a volume-based shape function called the shape-diameter function (SDF), which remains largely oblivious to pose changes of the same object and maintains similar values in analogous parts of different objects. The SDF is a scalar function defined on the mesh surface; however, it expresses a measure of the diameter of the object's volume in the neighborhood of each point on the surface. Using the SDF we are able to process and manipulate families of objects which contain similarities using a simple and consistent algorithm: consistently partitioning and creating skeletons across multiple meshes.
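As a rough illustration of SDF-driven partitioning, the sketch below clusters precomputed per-face SDF values with 1-D k-means; the paper's consistent segmentation across object families is considerably richer:

```python
import numpy as np

def sdf_partition(sdf_values, k=3, iters=50):
    """sdf_values: 1-D numpy array of per-face SDF scalars; returns labels."""
    centers = np.quantile(sdf_values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(sdf_values[:, None] - centers[None, :]), axis=1)
        for j in range(k):                    # update non-empty clusters only
            if np.any(labels == j):
                centers[j] = sdf_values[labels == j].mean()
    return labels
```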

8.
Research and Simulation of a Fast Object Detection Algorithm Based on the Snake Model
姚小虹  赵亦工 《计算机仿真》2004,21(11):181-184
This paper proposes a fast object detection algorithm based on the Snake model that can search the entire image domain effectively. To make the algorithm applicable in real-time image processing systems, it improves on the simple discrete Snake model. Using image block partitioning and a suitable object discrimination criterion, the algorithm converges to the object within a finite number of iterations, and automatically merges contours of the same object that span adjacent image blocks. Simulation results show that the algorithm detects objects effectively even when the contrast between object and background is very low.
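A hedged sketch of one greedy iteration of a simple discrete Snake of the kind the algorithm improves on: each contour point moves to the neighbour minimizing a smoothness energy minus an edge-strength term (the energies, weights, and in-bounds assumption are illustrative):

```python
import numpy as np

def snake_step(points, edge_map, alpha=0.5):
    """points: (n, 2) integer contour coordinates (assumed to stay in bounds);
    edge_map: 2-D edge-strength image; alpha weighs the smoothness term."""
    new_pts = points.copy()
    n = len(points)
    for i in range(n):
        best, best_e = points[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):             # examine the 8-neighbourhood
                p = points[i] + np.array([dy, dx])
                smooth = np.sum((p - points[(i - 1) % n]) ** 2) + \
                         np.sum((p - points[(i + 1) % n]) ** 2)
                e = alpha * smooth - edge_map[tuple(p)]
                if e < best_e:
                    best, best_e = p, e
        new_pts[i] = best
    return new_pts
```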

9.
Research on an Accurate Collision Detection Method for Deformable Objects
This paper proposes a particle-based accurate collision detection algorithm for deformable objects. First, the LBG vector quantization technique partitions an object's surface into several small regions, and one point in each region is chosen as a detection particle. When one object approaches another, the algorithm finds the closest particle pair between the two objects. To obtain the exact collision position, it then computes the shortest distance between the triangular faces associated with the closest vertices. If this distance falls below a given threshold, the two objects are considered to have collided at the nearest points of those faces. Simulation experiments verify that the algorithm effectively handles collision detection between deformable objects in virtual-force interactive simulation.
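A minimal sketch of the particle-based broad phase described above: find the closest particle pair between the two surfaces and compare its distance to a threshold; the paper then refines this with triangle-to-triangle distances (the threshold and array layout are assumptions):

```python
import numpy as np

def closest_particle_pair(pa, pb):
    """pa, pb: (n, 3) arrays of detection-particle positions on each surface."""
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return i, j, d[i, j]

def maybe_colliding(pa, pb, threshold=0.01):
    i, j, dist = closest_particle_pair(pa, pb)
    return dist < threshold                   # refine with triangle tests here
```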

10.

In this paper we present a novel moment-based skeleton detection method for representing human objects in RGB-D videos with animated 3D skeletons. An object often consists of several parts, each of which can be concisely represented with a skeleton. However, it remains a challenge to detect the skeletons of individual objects in an image, since doing so requires an effective part detector and a part-merging algorithm to group parts into objects. In this paper, we present a novel fully unsupervised learning framework to detect the skeletons of human objects in an RGB-D video. The skeleton modeling algorithm uses a pipeline architecture consisting of a series of cascaded operations: symmetry patch detection, linear-time search of symmetry patch pairs, part and symmetry detection, symmetry graph partitioning, and object segmentation. The properties of geometric moment-based functions for embedding symmetry features into the centers of symmetry patches are also investigated in detail. Compared with state-of-the-art deep learning approaches to skeleton detection, the proposed approach does not require tedious human labeling work on training images to locate the skeleton pixels and their associated scale information. Although our algorithm can detect parts and objects simultaneously, a pre-learned convolutional neural network (CNN) can be used to locate the human object in each frame of the input RGB-D video in order to support real-time applications; this greatly reduces the complexity of detecting the skeleton structure of individual human objects with the proposed method. Using the segmented human object skeleton model, a video surveillance application is constructed to verify the effectiveness of the approach. Experimental results show that the proposed method gives good performance in terms of detection and recognition on publicly available datasets.
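As an illustration of the geometric-moment features mentioned above, the sketch below computes a patch centroid and its second-order central moments; the paper's moment-based symmetry embedding is more elaborate:

```python
import numpy as np

def central_moments(patch):
    """patch: 2-D intensity array. Returns the centroid and the second-order
    central moments mu20, mu02, mu11 of the patch."""
    ys, xs = np.mgrid[:patch.shape[0], :patch.shape[1]]
    m00 = patch.sum()
    cx = (xs * patch).sum() / m00
    cy = (ys * patch).sum() / m00
    mu = {(p, q): (((xs - cx) ** p) * ((ys - cy) ** q) * patch).sum()
          for p, q in [(2, 0), (0, 2), (1, 1)]}
    return (cx, cy), mu
```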


11.
This paper presents an Efficient Distributed Genetic Algorithm for classification Rule extraction in data mining (EDGAR), which introduces a new method of data distribution across computer networks. This is done by spatially partitioning the population into several semi-isolated nodes, each evolving in parallel and possibly exploring different regions of the search space. The presented algorithm shows some advantages when compared with other distributed algorithms proposed in the literature: results show a significant speedup in learning without compromising accuracy.
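A minimal sketch of the island model implied by this spatial partitioning: semi-isolated sub-populations evolve independently and occasionally exchange their best rules in a ring (the migration policy, the evolve step, and the fitness attribute are assumptions):

```python
def islands_evolve(population, n_islands, evolve, migrate_every, generations):
    """evolve: one GA generation on a sub-population; individuals are assumed
    to carry a .fitness attribute. Best rules migrate around a ring."""
    islands = [population[i::n_islands] for i in range(n_islands)]
    for gen in range(1, generations + 1):
        islands = [evolve(isle) for isle in islands]
        if gen % migrate_every == 0:
            best = [max(isle, key=lambda r: r.fitness) for isle in islands]
            for i, isle in enumerate(islands):
                isle.append(best[(i - 1) % n_islands])   # ring migration
    return islands
```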

12.
Traditional nearest-neighbor (NN) search is based on two basic indexing approaches: object-based indexing and solution-based indexing. The former is constructed based on the locations of data objects, using some distance heuristics on object locations. The latter is built on a precomputed solution space; NN queries can thus be reduced to and processed as simple point queries in this solution space. Both approaches exhibit some disadvantages, especially when employed for wireless data broadcast in mobile computing environments. In this paper, we introduce a new index method, called the grid-partition index, to support NN search in both the on-demand access and periodic broadcast modes of mobile computing. The grid-partition index is constructed based on the Voronoi diagram, i.e., the solution space of NN queries. However, it has two distinctive characteristics. First, it divides the solution space into grid cells such that a query point can be efficiently mapped into a grid cell around which the nearest object is located. This significantly reduces the search space. Second, the grid-partition index stores the objects that are potential NNs of any query falling within the cell. The storage of objects, instead of the Voronoi cells, makes the grid-partition index a hybrid of the solution-based and object-based approaches. As a result, it achieves a much more compact representation than the pure solution-based approach and avoids the backtracked traversals required in the typical object-based approach, thus realizing the advantages of both approaches. We develop an incremental construction algorithm to address the issue of object updates. In addition, we present a cost model to approximate the search cost of different grid partitioning schemes. The performance of the grid-partition index and existing indexes is evaluated using both synthetic and real data. The results show that, overall, the grid-partition index significantly outperforms object-based indexes and solution-based indexes. Furthermore, we extend the grid-partition index to support continuous-nearest-neighbor search. Both algorithms and experimental results are presented.
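A minimal sketch of the lookup side of such an index, assuming the per-cell candidate lists (the objects whose Voronoi cells intersect each grid cell) have already been built:

```python
import math

def grid_nn(query, cell_size, candidates_of):
    """candidates_of: {(cx, cy): [object points]} precomputed from the
    Voronoi diagram; returns the nearest object to the 2-D query point."""
    cell = (int(query[0] // cell_size), int(query[1] // cell_size))
    return min(candidates_of[cell], key=lambda obj: math.dist(query, obj))
```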

13.
Very large scale integration (VLSI) circuit partitioning is an important problem in the design automation of VLSI chips and multichip systems; it is an NP-hard combinatorial optimization problem. In this paper, an effective hybrid multi-objective partitioning algorithm based on discrete particle swarm optimization (DPSO) with a local search strategy, called MDPSO-LS, is presented to solve VLSI two-way partitioning with simultaneous minimization of cutsize and circuit delay. Inspired by genetic algorithms, uniform crossover and random two-point exchange operators are designed to avoid generating infeasible solutions. Furthermore, the phenotype sharing function of the objective space is applied to circuit partitioning to obtain a better approximation of the true Pareto front, and the theory of Markov chains is used to prove global convergence. To improve local exploration, the Fiduccia-Mattheyses (FM) strategy is applied to further improve the cutsize of each particle, and a local search strategy for improving the circuit delay objective is also designed. Experiments on ISCAS89 benchmark circuits show that the proposed algorithm is efficient.
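A hedged sketch of a random two-point exchange for two-way partitioning: swapping one cell from each side keeps the bipartition balanced, so no infeasible particle is produced (the 0/1 encoding is an assumption):

```python
import random

def two_point_exchange(assign):
    """assign: list of 0/1 side labels, one per cell of the circuit."""
    zeros = [i for i, s in enumerate(assign) if s == 0]
    ones = [i for i, s in enumerate(assign) if s == 1]
    i, j = random.choice(zeros), random.choice(ones)
    child = assign[:]
    child[i], child[j] = 1, 0                 # one cell crosses each way
    return child
```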

14.
As an important problem in image understanding, salient object detection is essential for image classification, object recognition, and image retrieval. In this paper, we propose a new approach to detect salient objects in an image using content-sensitive hypergraph representation and partitioning. Firstly, a polygonal potential Region-Of-Interest (p-ROI) is extracted by analyzing the edge distribution in the image. Secondly, the image is represented by a content-sensitive hypergraph. Instead of using fixed features and parameters for all images, we propose a new content-sensitive method for feature selection and hypergraph construction, in which the most discriminant color channel, the one that maximizes the difference between the p-ROI and the background, is selected for each image, and the number of neighbors in hyperedges is adjusted automatically according to the image content. Finally, incremental hypergraph partitioning is used to generate candidate regions for the final salient object detection: all candidate regions are evaluated against the p-ROI, and the best-matching one is selected as the final salient object. Our approach has been extensively evaluated on a large benchmark image database. Experimental results show that our approach not only achieves considerable improvement in terms of commonly adopted performance measures in salient object detection, but also provides more precise object boundaries, which is desirable for further image processing and understanding.
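A minimal sketch of the content-sensitive channel selection step, assuming a binary p-ROI mask and taking the absolute difference of channel means as a stand-in for the paper's criterion:

```python
import numpy as np

def best_channel(image, roi_mask):
    """image: (H, W, C) array; roi_mask: boolean (H, W) mask of the p-ROI."""
    scores = [abs(image[..., c][roi_mask].mean() -
                  image[..., c][~roi_mask].mean())
              for c in range(image.shape[-1])]
    return int(np.argmax(scores))             # most discriminant channel
```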

15.
A necessary condition for providing quality of service in distributed virtual environments (DVEs) is to provide a system response below a maximum threshold to the client computers. In this sense, latency-aware partitioning methods try to provide response times below the threshold to as many client computers as possible. These partitioning methods should find an assignment of clients to servers that optimizes system throughput, system latency, and partitioning efficiency. In this paper, we present a new algorithm, based on a greedy randomized adaptive search procedure (GRASP) with memory, for finding the best possible solutions to this problem. We consider several different alternatives in designing both the constructive phase and the local search phase of this multistart metaheuristic for combinatorial problems. Additionally, we enhance this basic approach with intensification strategies that improve the efficiency of the basic search method. Performance evaluation results show that the new algorithm improves on the performance provided by other metaheuristics when applied to the latency-aware partitioning problem in DVE systems.
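A hedged sketch of the greedy randomized construction phase of a GRASP for this problem, using server load as a stand-in for the latency-aware greedy criterion and a restricted candidate list (RCL) of illustrative size:

```python
import random

def construct(clients, servers, rcl_size=3):
    """Assign each client to one of the rcl_size least-loaded servers,
    chosen at random from that restricted candidate list (RCL)."""
    load = {s: 0 for s in servers}
    assign = {}
    for c in clients:
        rcl = sorted(servers, key=load.get)[:rcl_size]
        s = random.choice(rcl)
        assign[c] = s
        load[s] += 1
    return assign
```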

16.
In this article, we present a framework called state-set branching that combines symbolic search based on reduced ordered Binary Decision Diagrams (BDDs) with best-first search, such as A* and greedy best-first search. The framework relies on an extension of these algorithms from expanding a single state in each iteration to expanding a set of states. We prove that it is generally sound and optimal for two A* implementations and show how a new BDD technique called branching partitioning can be used to efficiently expand sets of states. The framework is general. It applies to any heuristic function, evaluation function, and transition cost function defined over a finite domain. Moreover, branching partitioning applies to both disjunctive and conjunctive transition relation partitioning. An extensive experimental evaluation of the two A* implementations proves state-set branching to be a powerful framework. The algorithms outperform the ordinary A* algorithm in almost all domains. In addition, they can improve the complexity of A* exponentially and often dominate both A* and blind BDD-based search by several orders of magnitude. Moreover, they have substantially better performance than BDDA*, the currently most efficient BDD-based implementation of A*.
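As a toy stand-in for the state-set idea (frozensets replace BDDs, all transitions are assumed to cost one, and the heuristic is assumed consistent), the sketch below expands a whole set of states per iteration and buckets successors by heuristic value; it is not the paper's algorithm:

```python
import heapq
import itertools
from collections import defaultdict

def state_set_search(start, is_goal, succ, h, cost=1):
    """Pops and expands a whole set of states per iteration; successors are
    split into sets sharing one heuristic value before re-insertion."""
    tie = itertools.count()                   # tie-breaker for the heap
    frontier = [(h(start), 0, next(tie), frozenset([start]))]
    closed = set()
    while frontier:
        f, g, _, states = heapq.heappop(frontier)
        if any(is_goal(s) for s in states):
            return g
        fresh = states - closed
        closed |= fresh
        buckets = defaultdict(set)
        for s in fresh:
            for t in succ(s):
                if t not in closed:
                    buckets[h(t)].add(t)
        for hv, ss in buckets.items():
            heapq.heappush(frontier, (g + cost + hv, g + cost, next(tie),
                                      frozenset(ss)))
    return None
```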

17.
Probabilistic Models of Appearance for 3-D Object Recognition
We describe how to model the appearance of a 3-D object using multiple views, learn such a model from training images, and use the model for object recognition. The model uses probability distributions to describe the range of possible variation in the object's appearance. These distributions are organized on two levels. Large variations are handled by partitioning training images into clusters corresponding to distinctly different views of the object. Within each cluster, smaller variations are represented by distributions characterizing uncertainty in the presence, position, and measurements of various discrete features of appearance. Many types of features are used, ranging in abstraction from edge segments to perceptual groupings and regions. A matching procedure uses the feature uncertainty information to guide the search for a match between model and image. Hypothesized feature pairings are used to estimate a viewpoint transformation taking account of feature uncertainty. These methods have been implemented in an object recognition system, OLIVER. Experiments show that OLIVER is capable of learning to recognize complex objects in cluttered images, while acquiring models that represent those objects using relatively few views.

18.
Stochastic local search (SLS) algorithms have been increasingly applied to approximate solutions of the weighted maximum satisfiability problem (MAXSAT), a model for solutions of major problems in AI and combinatorial optimization. While MAXSAT instances generally have strong intrinsic dependencies between their variables, most SLS algorithms start the search from a random initial solution in which the value of each variable is generated independently from the same uniform distribution. In this paper, we propose a new SLS algorithm for MAXSAT based on an unconventional distribution known in quantum physics as the Bose-Einstein distribution. It provides a stochastic initialization scheme for Extremal Optimization (EO), an efficient and very simple heuristic inspired by the co-evolution of natural species, originally introduced for finding high-quality solutions to hard optimization problems such as colouring and partitioning. We examine the effectiveness of the resulting algorithm through computational experiments on a large set of test instances and compare it with some of the most powerful existing algorithms. The results show that this approach is well suited to this class of problems.
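A hedged sketch of a single tau-EO step on weighted MAXSAT: rank variables by the clause weight they currently leave unsatisfied and flip one drawn from a power-law over ranks. The Bose-Einstein initialization of the paper is not shown, and the fitness definition here is an assumption:

```python
import random

def eo_step(clauses, weights, assign, tau=1.4):
    """clauses: lists of signed ints (DIMACS style); assign: {var: bool}.
    tau must exceed 1 for the power-law rank selection below."""
    unsat_w = {v: 0.0 for v in assign}
    for cl, w in zip(clauses, weights):
        if not any((lit > 0) == assign[abs(lit)] for lit in cl):
            for lit in cl:                    # blame every variable in the
                unsat_w[abs(lit)] += w        # unsatisfied clause
    ranked = sorted(assign, key=lambda v: -unsat_w[v])   # worst first
    k = min(int(random.paretovariate(tau - 1)), len(ranked))
    assign[ranked[k - 1]] = not assign[ranked[k - 1]]    # flip rank-k variable
    return assign
```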

19.
Using a complex training image (TI) with the single normal equation simulation (SNESIM) algorithm results in poor simulated realizations, since such an image contains trends and location-specific patterns. By pooling all the TI patterns in a single search tree and not recording the relative locations of those patterns, some critical features of these complex TIs are lost. The search tree partitioning approach subdivides a large TI into imbricated, homogeneous, smaller images, called partition classes. Each of these partition classes has a corresponding search tree that can be utilized by the SNESIM algorithm. The partition classes are obtained by processing the TIs with pattern-sensitive spatial filters and clustering the resulting filter scores into partition classes. All patterns within a partition class are recorded by a search tree; there is one tree per partition class. At each pixel along the simulation path, the partition class is retrieved first and used to select the appropriate search tree; that search tree contains the patterns relevant to that partition class. In practice, the partitioning approach adds flexibility in choosing a TI: TIs that were easier to obtain but traditionally too complex for simulation can now be considered as input to SNESIM. In many cases, it also significantly increases the simulation speed by searching a vector of smaller trees instead of a single large one. A plugin for the SGeMS software is provided.
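A minimal sketch of deriving partition classes: compute per-pixel filter scores and cluster them with k-means. The gradient filters below are illustrative stand-ins for the paper's pattern-sensitive filters:

```python
import numpy as np

def partition_classes(ti, k=4, iters=20, seed=0):
    """ti: 2-D training image; returns one partition-class label per pixel."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(ti.astype(float))
    scores = np.stack([ti.astype(float), gx, gy], axis=-1).reshape(-1, 3)
    centers = scores[rng.choice(len(scores), k, replace=False)]
    for _ in range(iters):
        labels = ((scores[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([scores[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels.reshape(ti.shape)
```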

20.
In this paper, we study the partitioning of constraints in temporal planning problems formulated as mixed-integer nonlinear programming (MINLP) problems. Constraint partitioning is attractive because it leads to much easier subproblems, each of which is a significant relaxation of the original problem. Moreover, each subproblem is very similar to the original problem and can be solved by any existing solver with little or no modification. Constraint partitioning, however, introduces global constraints that may be violated when subproblems are evaluated independently. To reduce the overhead of resolving such global constraints, we develop new conditions and algorithms for limiting the search space to be backtracked in each subproblem. Using a penalty formulation of a MINLP in which the constraint functions are transformed into non-negative functions, we present a necessary and sufficient extended saddle-point condition (ESPC) for constrained local minimization. When the penalties are larger than some thresholds, our theory shows a one-to-one correspondence between a constrained local minimum of the MINLP and an extended saddle point of the penalty function. Hence, one way to find a constrained local minimum is to gradually increase the penalties of violated constraints and to look for a local minimum of the penalty function using any existing algorithm, until a solution to the constrained model is found. Next, we extend the ESPC to constraint-partitioned MINLPs and propose a partition-and-resolve strategy for resolving violated global constraints across subproblems. Using the discrete-space ASPEN and the mixed-space MIPS planners to solve the subproblems, we show significant improvements on some planning benchmarks, both in the quality of the plans generated and in the execution times to find them.
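A hedged reconstruction of the penalty form the abstract describes, in symbols consistent with the text (the exact formulation and thresholds are the paper's):

```latex
% Penalty function for a MINLP  min f(x)  s.t.  h(x) = 0,  g(x) <= 0,
% with constraint functions made non-negative and weighted by penalty
% vectors \alpha and \beta:
\[
  \ell(x,\alpha,\beta) \;=\; f(x) \;+\; \alpha^{\mathsf{T}}\,\lvert h(x)\rvert
    \;+\; \beta^{\mathsf{T}}\,\max\bigl(0,\,g(x)\bigr).
\]
% ESPC (informal): there exist finite thresholds \alpha^{*}, \beta^{*} such
% that for all \alpha > \alpha^{*} and \beta > \beta^{*}, a point x^{*} is a
% constrained local minimum of the MINLP iff it is a local minimum of
% \ell(\,\cdot\,,\alpha,\beta) with respect to x.
```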
