Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
We present some theoretical results related to the problem of actively searching a 3D scene to determine the positions of one or more pre-specified objects. We investigate the effects that input noise, occlusion, and the VC-dimensions of the related representation classes have in terms of localizing all objects present in the search region, under finite computational resources and a search cost constraint. We present a number of bounds relating the noise-rate of low level feature detection to the VC-dimension of an object representable by an architecture satisfying the given computational constraints. We prove that under certain conditions, the corresponding classes of object localization and recognition problems are efficiently learnable in the presence of noise and under a purposive learning strategy, as there exists a polynomial upper bound on the minimum number of examples necessary to correctly localize the targets under the given models of uncertainty. We also use these arguments to show that passive approaches to the same problem do not necessarily guarantee that the problem is efficiently learnable. Under this formulation, we prove the existence of a number of emergent relations between the object detection noise-rate, the scene representation length, the object class complexity, and the representation class complexity, which demonstrate that selective attention is not only necessary due to computational complexity constraints, but it is also necessary as a noise-suppression mechanism and as a mechanism for efficient object class learning. These results concretely demonstrate the advantages of active, purposive and attentive approaches for solving complex vision problems.

2.
We present an efficient search method for job-shop scheduling problems. The peripheral conditions for an automated system that generates optimal or near-optimal production schedules are ripe, given the increasingly widespread adoption of enterprise information systems and of plant-floor tracking systems based on bar-code or wireless technologies; one of the remaining obstacles, however, is that scheduling problems arising from many production environments, including job-shops, are extremely difficult to solve. Motivated by the recent success of local search methods on the job-shop scheduling problem, we propose a new diversification technique based on an innovative way of relaxing and subsequently reimposing the capacity constraints on some critical operations. We integrate this technique into a fast tabu search algorithm and demonstrate its effectiveness through extensive computational experiments on benchmark problems; upper bounds for 11 well-known test problems are thereby improved. We hope the work presented here moves a step closer to the ultimate vision of an automated production-scheduling system. In future research, we will consider other diversification techniques that are not restricted to critical operations.
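The job-shop neighborhood and the capacity-relaxation step are specific to the paper and are not reproduced here; as a generic illustration of the tabu-search skeleton it builds on, here is a minimal sketch on a toy bit-string objective (all names and parameters are ours):

```python
import random

def tabu_search(f, n_bits, iters=200, tenure=5, seed=0):
    """Minimal tabu search: flip-one-bit neighborhood, recency-based
    tabu list, and aspiration when a tabu move beats the best-so-far."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tabu = {}  # bit index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            val = f(y)
            if tabu.get(i, -1) < it or val > best_val:  # aspiration
                candidates.append((val, i, y))
        if not candidates:
            continue
        val, i, y = max(candidates)  # best admissible neighbor
        x = y
        tabu[i] = it + tenure
        if val > best_val:
            best, best_val = y[:], val
    return best, best_val

# Toy objective: maximize the number of ones (optimum = n_bits).
best, val = tabu_search(lambda b: sum(b), n_bits=12)
```

The diversification idea of the paper would plug in here as an occasional perturbation of `x` guided by relaxed constraints, rather than the plain neighborhood move.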

3.
One of the major limitations of relational learning is due to the complexity of verifying hypotheses on examples. In this paper we investigate this task in light of recently published results, which show that many hard problems exhibit a narrow phase transition with respect to some order parameter, coupled with a large increase in computational complexity. First we show that matching a class of artificially generated Horn clauses on ground instances presents a typical phase transition in solvability with respect to both the number of literals in the clause and the number of constants occurring in the instance to match. Then, we demonstrate that phase transitions also appear in real-world learning problems, and that learners tend to generate inductive hypotheses lying exactly on the phase transition. On the other hand, extensive experimentation revealed that not every matching problem inside the phase transition region is intractable. Unfortunately, however, identifying those that are feasible cannot be done solely on the basis of the order parameters. To face this problem, we propose a method, based on a Monte Carlo algorithm, to estimate on-line the likelihood that the current matching problem will exceed a given amount of computational resources. The impact of the above findings on relational learning is discussed.
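The paper's Monte Carlo estimator is not specified in the abstract; Knuth's classic random-probe estimator of backtrack-tree size conveys the underlying idea of predicting search cost on-line before committing resources (a sketch under that assumption, with an invented toy tree):

```python
import random

def knuth_estimate(children, root, probes=2000, seed=0):
    """Knuth's Monte Carlo estimator of a search-tree size: follow a
    random root-to-leaf path; the running product of branching factors
    gives an unbiased estimate of the number of tree nodes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(probes):
        node, estimate, factor = root, 1.0, 1.0
        while True:
            kids = children(node)
            if not kids:
                break
            factor *= len(kids)     # estimated nodes at this level
            estimate += factor
            node = rng.choice(kids)
        total += estimate
    return total / probes

# Example: a complete binary tree of depth 3 has 2^4 - 1 = 15 nodes;
# nodes are labeled by their depth.
size = knuth_estimate(lambda d: [d + 1, d + 1] if d < 3 else [], 0)
```

For irregular trees the variance can be large, which is why such estimators are used as a likelihood signal (as in the paper) rather than an exact cost.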

4.
We illustrate the use of phase transition behavior in the study of heuristics. Using an "annealed" theory, we define a parameter that measures the "constrainedness" of an ensemble of number partitioning problems. We identify a phase transition at a critical value of constrainedness. We then show that constrainedness can be used to analyze and compare algorithms and heuristics for number partitioning in a precise and quantitative manner. For example, we demonstrate that on uniform random problems both the Karmarkar–Karp and greedy heuristics minimize the constrainedness, but that the decisions made by the Karmarkar–Karp heuristic are superior at reducing constrainedness. This supports the better performance observed experimentally for the Karmarkar–Karp heuristic. Our results refute a conjecture of Fu that phase transition behavior does not occur in number partitioning. Additionally, they demonstrate that phase transition behavior is useful for more than just simple benchmarking. It can, for instance, be used to analyze heuristics, and to compare the quality of heuristic solutions.
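For reference, a minimal sketch of the two heuristics being compared, in their usual final-difference form (this illustrates the algorithms only, not the paper's constrainedness analysis):

```python
import heapq

def greedy_partition(nums):
    """Greedy heuristic: assign each number, largest first, to the
    currently lighter of the two subsets; return the final difference."""
    a = b = 0
    for x in sorted(nums, reverse=True):
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)

def karmarkar_karp(nums):
    """Karmarkar–Karp (largest differencing) heuristic: repeatedly
    replace the two largest numbers by their difference, i.e. commit
    them to opposite subsets; the last remaining value is the final
    partition difference."""
    heap = [-x for x in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        x = -heapq.heappop(heap)
        y = -heapq.heappop(heap)
        heapq.heappush(heap, -(x - y))
    return -heap[0] if heap else 0
```

On `[8, 7, 6, 5, 4]` (optimal difference 0, e.g. 8+7 vs 6+5+4) greedy ends at difference 4 while Karmarkar–Karp reaches 2, a small instance of the superiority the paper quantifies.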

5.
Heuristic Solutions for Locating Health Resources
Of the three metaheuristic strategies tested that can help determine healthcare facility locations, scatter search performed the best and fastest. Hypoglycemia, or low blood sugar, can cause many health problems, including blurred vision, mental confusion, and speech impairment. If someone becomes severely hypoglycemic and doesn't recover quickly, he or she might lose consciousness and go into a diabetic coma. If that happens, the person must get treated within 15 to 20 minutes; otherwise, he or she almost certainly will suffer devastating physical damage - possibly neuropathy, or blindness - or might even die. Because the risk of permanent neurological deficit increases as the coma is prolonged, it's important for people with diabetes to live no further than 20 minutes (known as the critical time) from their closest health center. Facility location problems such as this involve determining where to install resources and how to assign potential customers to those resources. Most studies on location problems are framed under deterministic conditions. Our proposed solution is more realistic. We adapted, implemented, and compared three metaheuristic strategies - scatter search, tabu search, and variable neighborhood search - to find the best locations in Spain's Burgos province to place health resources for treating people in diabetic comas. To check the efficiency of SS, TS, and VNS, we used instances of the well-known OR-Library as well as real data from the Burgos area in northern Spain. Using metaheuristics is a good option when the problem's complexity prevents us from using a commercially available solver to solve it exactly. This is especially true here, because we're considering hundreds of locations.
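The scatter-search, tabu-search, and VNS implementations are not given in the article; a simple greedy covering baseline illustrates the underlying covering-location formulation (every demand point must be reachable within the critical time). The travel-time data below is invented for illustration:

```python
def greedy_cover(travel_min, critical=20):
    """Greedy covering location: travel_min[i][j] is the travel time
    from candidate site i to demand point j; repeatedly open the site
    that covers the most still-uncovered demand points within the
    critical time."""
    n_sites, n_demands = len(travel_min), len(travel_min[0])
    covers = [{j for j in range(n_demands) if travel_min[i][j] <= critical}
              for i in range(n_sites)]
    uncovered, opened = set(range(n_demands)), []
    while uncovered:
        i = max(range(n_sites), key=lambda s: len(covers[s] & uncovered))
        if not covers[i] & uncovered:
            break  # some demand point cannot be covered at all
        opened.append(i)
        uncovered -= covers[i]
    return opened

times = [[10, 25, 30, 15],   # candidate site 0
         [22, 12, 18, 40],   # candidate site 1
         [35, 19, 11, 21]]   # candidate site 2
sites = greedy_cover(times)
```

The metaheuristics in the article improve on exactly this kind of baseline when the instance has hundreds of candidate locations.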

6.
We study the connection between the order of phase transitions in combinatorial problems and the complexity of decision algorithms for such problems. We rigorously show that, for a class of random constraint satisfaction problems, a limited connection between the two phenomena indeed exists. Specifically, we extend the definition of the spine order parameter of Bollobás et al. [10] to random constraint satisfaction problems, rigorously showing that for such problems a discontinuity of the spine is associated with a 2^Ω(n) resolution complexity (and thus a 2^Ω(n) complexity of DPLL algorithms) on random instances. The two phenomena have a common underlying cause: the emergence of “large” (linear size) minimally unsatisfiable subformulas of a random formula at the satisfiability phase transition. We present several further results that add weight to the intuition that random constraint satisfaction problems with a sharp threshold and a continuous spine are “qualitatively similar to random 2-SAT”. Finally, we argue that it is the spine rather than the backbone parameter whose continuity has implications for the decision complexity of combinatorial problems, and we provide experimental evidence that the two parameters can behave in a different manner. AMS subject classification: 68Q25, 82B27

7.
Intermediate-level vision is central to form perception, and we outline an approach to intermediate-level segmentation based on complexity analysis. We focus on the problem of edge detection, and how edge elements might be grouped together. This is typical because, once the local structure is established, the transition to global structure must be effected and context is critical. To illustrate, consider an edge element inferred from an unknown image. Is this local edge part of a long curve, or part of a texture? If the former, which is the next element along the curve? If the latter, is the texture like a hair pattern, in which nearby elements are oriented similarly, or like a spaghetti pattern, in which they are not? Are there other natural possibilities? Such questions raise issues of dimensionality, since curves are 1-D and textures are 2-D, and also of complexity. Working toward a measure of representational complexity for vision, in this first of a pair of papers we develop a foundation based on geometric measure theory. The main result concerns the distribution of tangents in space and in orientation, which serves as a formal basis for the concrete measure of representational complexity developed in the companion paper.

8.
In this paper, we consider the role of the crossover operator in genetic algorithms. Specifically, we study optimisation problems that exhibit many local optima and consider how crossover affects the rate at which the population breaks the symmetry of the problem. As an example of such a problem, we consider the subset sum problem. In doing so, we demonstrate a previously unobserved phenomenon, whereby the genetic algorithm with crossover exhibits a critical mutation rate, at which its performance sharply diverges from that of the genetic algorithm without crossover. At this critical mutation rate, the genetic algorithm with crossover exhibits a rapid increase in population diversity. We calculate the details of this phenomenon on a simple instance of the subset sum problem and show that it is a classic phase transition between ordered and disordered populations. Finally, we show that this critical mutation rate corresponds to the transition between the genetic algorithm accelerating or preventing symmetry breaking and that the critical mutation rate represents an optimum in terms of the balance of exploration and exploitation within the algorithm.
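A minimal sketch of a genetic algorithm for subset sum with a crossover switch, in the spirit of the study (the critical-mutation-rate analysis itself is not reproduced; all parameter values and the instance are ours):

```python
import random

def ga_subset_sum(nums, target, pop_size=40, gens=120,
                  p_mut=0.02, crossover=True, seed=1):
    """Minimal generational GA for subset sum: bitstring genomes,
    binary tournament selection, optional one-point crossover,
    per-bit flip mutation.  Error = |sum of chosen numbers - target|."""
    rng = random.Random(seed)
    n = len(nums)
    err = lambda g: abs(sum(x for x, b in zip(nums, g) if b) - target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if err(a) <= err(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if crossover:
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ (rng.random() < p_mut) for b in child]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=err)
    return best, err(best)

nums = [34, 7, 19, 5, 12, 28, 3, 41, 15, 9]
best, e = ga_subset_sum(nums, target=60)
```

Sweeping `p_mut` with `crossover=True` versus `crossover=False` and plotting final error and population diversity is the kind of experiment that exposes the critical mutation rate the paper analyzes.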

9.
Here, this author attempts to tie the concept of active perception to attentive processing in general and to the complexity level analysis of visual search described previously; the aspects of active vision as they have been currently described form a subset of the full spectrum of attentional capabilities. Our approach is motivated by the search requirements of vision tasks and thus we cast the problem as one of search preceding the application of methods for shape-from-X, optical flow, etc., and recognition in general. This perspective permits a dimension of analysis not found in current formulations of the active perception problem, that of computational complexity. This article describes where the active perception paradigm does and does not provide computational benefits along this dimension. A formalization of the search component of active perception is presented in order to accomplish this. The link to attentional mechanisms is through the control of data acquisition and processing by the active process. It should be noted that the analysis performed here applies to the general hypothesize-and-test search strategy, to time-varying scenes as well as to the general problem of integration of successive fixations. Finally, an argument is presented as to why this framework is an extension of the behaviorist approaches to active vision.

10.
11.
12.
The Quantum Adiabatic Algorithm has been proposed as a general purpose algorithm for solving hard optimization problems on a quantum computer. Early work on very small sizes indicated that the running time (complexity) only increased as a (quite small) power of the problem size N. We report results of Quantum Monte Carlo simulations, using parallel tempering, with which we determine the minimum energy gap (and hence get information about the complexity) for much bigger sizes than was possible before. The aim is to see if there is a “crossover” to exponential complexity at large N. We present data for the typical (median) complexity as a function of N, which indicate a crossover to a first order transition at large sizes. This implies that the complexity is exponential at large N, at least for the problem studied.

13.
Proof procedures based on model elimination or the connection tableau calculus have become more and more successful. But these procedures still suffer from long proof lengths as well as from a rather high degree of redundant search effort in comparison with resolution-style search procedures. In order to overcome these problems we investigate the use of clausal lemmas. We develop a method to augment a given set of clauses by a lemma set in a preprocessing phase and discuss the ability of this technique to reduce proof lengths and depths and to provide an appropriate reordering of the search space. We deal with the basic connection tableau calculus as well as with several calculus refinements and extensions. In order to control the use of lemmas we develop techniques for selecting relevant lemmas based on partial evaluation techniques. Experiments with the prover Setheo performed in several domains demonstrate the high potential of our approach.

14.
A Hybrid-Behavior Ant Colony Algorithm with Pheromone Updated from Multiple Elite Solutions
Ant colony optimization has been applied quite successfully in optimization, particularly to combinatorial problems, but it suffers from premature convergence and long search times. To address these shortcomings, an improved algorithm is proposed. On one hand, a random selection strategy is blended into the typical state-transition rule, ensuring that the algorithm always retains some exploration capability; on the other hand, a pool of elite solutions is maintained during the search, and the pheromone is updated by alternately using the best solution in the pool and other near-optimal solutions, so as to balance intensification and diversification. The paper discusses how to choose the relevant parameters, analyzes the computational complexity and convergence of the proposed algorithm, and reports simulation experiments on standard traveling salesman problems; the results show that the solutions obtained are of higher quality than those of other existing algorithms.
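A sketch in Python of the hybrid-behavior scheme this abstract describes: a purely random move is mixed into the state-transition rule, and pheromone is reinforced by alternating over a small pool of the best tours found so far. Parameter values and the toy instance are ours:

```python
import random

def aco_tsp(dist, ants=10, iters=60, q0=0.9, p_rand=0.05,
            rho=0.1, pool_size=3, seed=0):
    """Hybrid-behavior ACO for the TSP (sketch).  Transition rule:
    with prob. p_rand a purely random move (exploration); otherwise
    the usual greedy (prob. q0) or pheromone-proportional choice."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    pool = []  # (length, tour) pairs, best first

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def build():
        start = rng.randrange(n)
        tour, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            i, cand = tour[-1], list(unvisited)
            if rng.random() < p_rand:          # random exploration move
                j = rng.choice(cand)
            else:
                weights = [tau[i][j] / (dist[i][j] + 1e-9) for j in cand]
                if rng.random() < q0:          # greedy exploitation
                    j = cand[max(range(len(cand)), key=weights.__getitem__)]
                else:                          # proportional choice
                    j = rng.choices(cand, weights=weights)[0]
            tour.append(j)
            unvisited.remove(j)
        return tour

    for it in range(iters):
        for _ in range(ants):
            t = build()
            pool.append((tour_len(t), t))
        pool = sorted(pool, key=lambda p: p[0])[:pool_size]
        length, elite = pool[it % len(pool)]   # alternate over the pool
        for i in range(n):                     # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for i in range(n):                     # reinforce the chosen tour
            a, b = elite[i], elite[(i + 1) % n]
            tau[a][b] += 1.0 / length
            tau[b][a] += 1.0 / length
    return pool[0]

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
length, tour = aco_tsp(dist)
```

Updating from `pool[it % len(pool)]` rather than always the global best is the diversification device the abstract credits with avoiding premature convergence.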

15.
It is well-known that heuristic search in ILP is prone to plateau phenomena. An explanation can be given after the work of Giordana and Saitta: the ILP covering test is NP-complete and therefore exhibits a sharp phase transition in its coverage probability. As the heuristic value of a hypothesis depends on the number of covered examples, the regions “yes” and “no” represent plateaus that need to be crossed during search without an informative heuristic value. Several subsequent works have extensively studied this finding by running several learning algorithms on a large set of artificially generated problems and argued that the occurrence of this phase transition dooms every learning algorithm to fail to identify the target concept. We note however that only generate-and-test learning algorithms have been applied and that this conclusion has to be qualified in the case of data-driven learning algorithms. Mostly building on the pioneering work of Winston on near-miss examples, we show that, on the same set of problems, a top-down data-driven strategy can cross any plateau if near-misses are supplied in the training set, whereas they do not change the plateau profile and do not guide a generate-and-test strategy. We conclude that the location of the target concept with respect to the phase transition alone is not a reliable indication of the learning problem difficulty as previously thought.

16.
This paper presents an experimental investigation of the following questions: how does the average-case complexity of random 3-SAT, understood as a function of the order (number of variables) for fixed density (ratio of number of clauses to order) instances, depend on the density? Is there a phase transition in which the complexity shifts from polynomial to exponential in the order? Is the transition dependent or independent of the solver? Our experiment design uses three complete SAT solvers embodying different algorithms: GRASP, CPLEX, and CUDD. We observe new phase transitions for all three solvers, where the median running time shifts from polynomial in the order to exponential. The location of the phase transition appears to be solver-dependent. GRASP shifts from polynomial to exponential complexity near the density of 3.8, CPLEX shifts near density 3, while CUDD exhibits this transition between densities of 0.1 and 0.5. This experimental result underscores the dependence between the solver and the complexity phase transition, and challenges the widely held belief that random 3-SAT exhibits a phase transition in computational complexity very close to the crossover point.
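A sketch of the fixed-density random 3-SAT instance model the experiment is built on, with a brute-force satisfiability check (GRASP, CPLEX, and CUDD are not used here; this illustrates only the instance generation, not the paper's timing study):

```python
import itertools
import random

def random_3sat(n_vars, density, rng):
    """Fixed-density random 3-SAT: m = round(density * n) clauses,
    each over 3 distinct variables with random polarities."""
    m = round(density * n_vars)
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(clauses, n_vars):
    """Exhaustive check -- exponential in n, fine for tiny instances."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(7)
# Below the satisfiability threshold instances are mostly SAT,
# well above it mostly UNSAT (the crossover is near density 4.27).
low = [satisfiable(random_3sat(10, 2.0, rng), 10) for _ in range(10)]
high = [satisfiable(random_3sat(10, 6.0, rng), 10) for _ in range(10)]
```

The paper's point is that where the solver's median *running time* turns exponential need not coincide with this satisfiability crossover.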

17.
Multi-agent planning (MAP) approaches are typically oriented toward solving loosely coupled problems, and are ineffective at dealing with more complex, strongly related problems. In most cases, agents work under complete information, building complete knowledge bases. The present article introduces a general-purpose MAP framework designed to tackle problems of any coupling level under incomplete information. Agents in our MAP model are partially unaware of the information managed by the rest of the agents and share only the critical information that affects other agents, thus maintaining a distributed vision of the task. Agents solve MAP tasks through the adoption of an iterative refinement planning procedure that uses single-agent planning technology. In particular, agents devise refinements through the partial-order planning paradigm, a flexible framework for building refinement plans that leave unsolved details to be gradually completed by means of new refinements. Our proposal is supported by the implementation of a fully operative MAP system, and we report various experiments running our system over different types of MAP problems, from the most strongly related to the most loosely coupled.

18.
Graph search represents a cornerstone in computer science and is employed when the best algorithmic solution to a problem consists in performing an analysis of a search space representing computational possibilities. Typically, in such problems it is crucial to determine the sequence of transitions performed that led to certain states. In this work we discuss how to adapt generic quantum search procedures, namely quantum random walks and Grover’s algorithm, in order to obtain computational paths. We then compare these approaches in the context of tree graphs. In addition we demonstrate that in a best-case scenario both approaches differ, performance-wise, by a constant factor speedup of two, whilst still providing a quadratic speedup relatively to their classical equivalents. We discuss the different scenarios that are better suited for each approach.
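A classical statevector simulation of plain Grover search illustrates the quadratic-speedup primitive the paper adapts (the computational-path construction itself is not reproduced):

```python
import math

def grover_probability(n_qubits, target):
    """Simulate Grover's algorithm on N = 2**n_qubits amplitudes:
    the oracle flips the target's sign, the diffusion step inverts
    every amplitude about the mean; after ~ (pi/4) * sqrt(N)
    iterations the target holds almost all probability mass."""
    N = 2 ** n_qubits
    amp = [1.0 / math.sqrt(N)] * N          # uniform superposition
    iters = int(math.floor(math.pi / 4 * math.sqrt(N)))
    for _ in range(iters):
        amp[target] = -amp[target]          # oracle: phase flip
        mean = sum(amp) / N                 # diffusion operator
        amp = [2 * mean - a for a in amp]
    return amp[target] ** 2                 # P(measuring the target)

p = grover_probability(4, target=5)
```

With 4 qubits (N = 16), 3 iterations suffice: the success probability is sin²(7·arcsin(1/4)) ≈ 0.96, versus 1/16 for a single classical guess.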

19.
In this paper we propose a simple but efficient modification of the well-known Nelder–Mead (NM) simplex search method for unconstrained optimization. Instead of moving all n simplex vertices at once in the direction of the best vertex, our “shrink” step moves them in the same direction but one by one until an improvement is obtained. In addition, for solving non-convex problems, we simply restart the so-modified NM (MNM) method by constructing an initial simplex around the solution obtained in the previous phase. We repeat restarts until there is no improvement in the objective function value. Thus, our restarted modified NM (RMNM) is a descent and deterministic method and may be seen as an extended local search for continuous optimization. In order to improve computational complexity and efficiency, we use the heap data structure for storing and updating simplex vertices. Extensive empirical analysis shows that our modified method on average outperforms the original version as well as several other recent successful modifications, and that in solving global optimization problems it is comparable with state-of-the-art heuristics.
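For reference, a compact implementation of the plain Nelder–Mead method that the paper modifies (the one-by-one shrink, the restarts, and the heap bookkeeping of RMNM are omitted; the coefficients are the standard reflection/expansion/contraction/shrink values):

```python
def nelder_mead(f, x0, step=0.5, iters=200,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Plain Nelder-Mead simplex search minimizing f from x0."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)]
        for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):                       # try to expand
            expd = [c + gamma * (r - c) for c, r in zip(centroid, refl)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):              # accept reflection
            simplex[-1] = refl
        else:                                       # contract
            contr = [c + rho * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # classic shrink: move all vertices toward the best
                simplex = [best] + [
                    [b + sigma * (p - b) for b, p in zip(best, pt)]
                    for pt in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

x = nelder_mead(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

The paper's modification replaces the final `shrink` branch: vertices are moved toward the best vertex one at a time, stopping as soon as one such move improves the objective.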

20.

In order to reduce the cost and power consumption of radio frequency (RF) chains in a millimeter-wave (mmWave) multiple-input multiple-output (MIMO) system, hybrid analog/digital beamforming (HBF) can be utilized to reduce the number of RF chains. The HBF consists of an analog beamforming (ABF) stage and a digital beamforming (DBF) stage. The ABF is always realized by using phase shifters, and the DBF is done in a low-dimensional digital domain. However, phase shifters have several drawbacks, such as high power consumption and inconsistency of insertion loss. In this paper, we propose an energy-efficient HBF structure to handle these problems, which utilizes the Butler phase-shifting matrix in the ABF stage. With the Butler-matrix-based ABF, several fixed beam directions can be obtained, and the best beam directions of different Butler matrices can be chosen by exhaustive search. To reduce the high complexity of exhaustive search, we further provide a low-complexity HBF algorithm. Simulations under the conditions of perfect channel state information (CSI) and estimated CSI verify the effectiveness of our proposed Butler-matrix-based HBF structure and related algorithms.
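An n x n Butler matrix realizes (up to port ordering) the fixed beams of an n-point DFT; a sketch of the exhaustive beam-selection stage under that assumption follows (the paper's low-complexity algorithm and the DBF stage are omitted, and the channel is an invented line-of-sight vector):

```python
import cmath
import math

def dft_beams(n):
    """Columns of the n-point DFT matrix: the fixed beam set produced
    by an n x n Butler matrix (up to port ordering)."""
    return [[cmath.exp(-2j * math.pi * k * m / n) / math.sqrt(n)
             for m in range(n)] for k in range(n)]

def best_beam(h, beams):
    """Exhaustive analog beam selection: pick the fixed beam with the
    largest array gain |w^H h|^2 for channel vector h."""
    gain = lambda w: abs(sum(wi.conjugate() * hi
                             for wi, hi in zip(w, h))) ** 2
    k = max(range(len(beams)), key=lambda i: gain(beams[i]))
    return k, gain(beams[k])

n = 8
# Toy line-of-sight channel steered exactly at the direction of beam 3.
h = [cmath.exp(-2j * math.pi * 3 * m / n) for m in range(n)]
k, g = best_beam(h, dft_beams(n))
```

Because the DFT beams are orthogonal, a channel aligned with one beam yields the full array gain n on that beam and zero on the others, which is why a coarse exhaustive sweep over fixed beams works at all.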



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号