Similar Documents (20 results)
1.
Full coverage and exploration of an environment is essential in robot rescue operations where victim identification is required. Three methods of target selection for full exploration and coverage of an unknown space, oriented toward Urban Search and Rescue (USAR) applications, have been developed: Selection of the closest topological node, Selection of the minimum-cost topological node, and Selection of the minimum-cost sub-graph. All methods employ a topological graph extracted from the Generalized Voronoi Diagram (GVD) in order to select the next best target during exploration. The first method uses a distance metric to determine the next best target, whereas the Selection of the minimum-cost topological node method assigns four different weights to the graph's nodes based on certain environmental attributes. The Selection of the minimum-cost sub-graph uses a similar technique, but examines sets of graph nodes instead of single nodes. In addition, a modification of the A* algorithm for biased path creation toward uncovered areas, aiming at faster spatial coverage, is introduced. The proposed methods' performance is verified by experiments conducted in two heterogeneous simulated environments, and the results are compared with two common exploration methods.
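The biased path-creation idea can be illustrated with a minimal grid sketch: step costs are discounted for cells that have not yet been covered, so the planner prefers routes through unexplored space. The grid representation, 4-connected neighbor model, and bias constant below are illustrative assumptions, not the authors' formulation.

```python
import heapq

def biased_astar(free_cells, explored, start, goal, bias=0.5):
    """A* over a 4-connected grid; steps into unexplored cells are discounted."""
    def h(a, b):                                    # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(h(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        _, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        if g > best_g.get(cell, float("inf")):      # stale queue entry
            continue
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt not in free_cells:
                continue
            # Unexplored cells are cheaper to traverse, so paths are biased
            # toward uncovered space; the search is non-optimal by design.
            step = 1.0 - bias if nxt not in explored else 1.0
            ng = g + step
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt, goal), ng, nxt, path + [nxt]))
    return None

free = {(r, c) for r in range(4) for c in range(4)}
explored = {(0, c) for c in range(4)}               # top row already covered
print(biased_astar(free, explored, (0, 0), (0, 3)))  # one start-to-goal path
```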

2.
We examine the orthographic-n-point problem (OnP), which extends the perspective-n-point problem to telecentric cameras. Given a set of 3D points and their corresponding 2D points under orthographic projection, the OnP problem is the determination of the pose of the 3D point cloud with respect to the telecentric camera. We show that the OnP problem is equivalent to the unbalanced orthogonal Procrustes problem for non-coplanar 3D points and to the sub-Stiefel Procrustes problem for coplanar 3D points. To solve the OnP problem, we apply existing algorithms for the respective Procrustes problems and also propose novel algorithms. Furthermore, we evaluate the robustness and speed of the algorithms and conclude which are preferable in real applications. Finally, we evaluate which algorithm is most suitable as a minimal solver in a RANSAC scheme.
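For orientation, the classical balanced orthogonal Procrustes problem that these formulations generalize has a closed-form SVD solution (the Kabsch construction). The sketch below shows that building block only; it is not the paper's unbalanced or sub-Stiefel solver.

```python
# Balanced orthogonal Procrustes: find the rotation R minimizing ||R A - B||_F
# for corresponding, centered 3D point sets A and B, via a single SVD.
import numpy as np

def orthogonal_procrustes(A, B):
    """A, B: 3 x n arrays of corresponding (centered) points; returns R."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    # Force a proper rotation (det = +1) rather than a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# Toy usage: recover a known rotation from noiseless correspondences.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 20))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
B = R_true @ A
print(np.allclose(orthogonal_procrustes(A, B), R_true))  # True
```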

3.
The classical NP-hard problem 1|r_j|L_max of scheduling theory is considered. An algorithm is presented that determines an optimal schedule for processing n jobs whose parameters satisfy a system of linear constraints, thereby expanding the polynomially solvable area of the problem 1|r_j|L_max. An algorithm is also described that constructs a Pareto-optimal set of schedules with respect to the criteria L_max and C_max in O(n^3 log n) operations.
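To make the two criteria concrete, the sketch below evaluates L_max and C_max for one fixed processing order of jobs with release dates; the job data are made up, and the optimization and Pareto-set construction themselves are not reproduced here.

```python
# Evaluate a fixed processing order for 1|r_j|L_max.
# Jobs are (release r_j, processing time p_j, due date d_j); the machine
# processes them in the given order, waiting for a release date if necessary.
def evaluate_schedule(jobs_in_order):
    t = 0.0
    L_max = float("-inf")
    for r, p, d in jobs_in_order:
        t = max(t, r) + p          # start no earlier than the release date
        L_max = max(L_max, t - d)  # lateness of this job
    return L_max, t                # (L_max, C_max = completion of last job)

# Example: three jobs scheduled in index order.
print(evaluate_schedule([(0, 3, 4), (2, 2, 6), (5, 1, 7)]))  # (-1.0, 6.0)
```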

4.
In this article the problem of finding the maximum multiple flow in a network of arbitrary natural multiplicity k is studied. There are arcs of three types: ordinary arcs, multiple arcs, and multi-arcs. Each multiple arc and multi-arc is a union of k linked arcs, which are coordinated with each other. The network construction rules are described, and the definitions of a divisible network and associated notions are stated. The important property of a divisible network is that it can be partitioned into k parts, which agree on the linked arcs of each multiple arc and multi-arc; each part is an ordinary transportation network. The main results of the article concern the following subclasses of the problem of finding the maximum multiple flow in a divisible network. (1) Divisible networks with multi-arc constraints: assuming that only one vertex is the ending vertex for a multi-arc in s network parts, the problem can be solved in polynomial time. (2) Divisible networks with weak multi-arc constraints: assuming that only one vertex is the ending vertex for a multi-arc in s network parts (1 ≤ s < k − 1) and the other parts have at least two such vertices, the multiplicity of the maximum multiple flow problem can be decreased to k − s. (3) Divisible networks of parallel structure: assume that the divisible network component consisting of all multiple arcs can be partitioned into subcomponents, each containing exactly one vertex that begins a multi-arc, and that the intersection of each pair of subcomponents is only the network source x0. If k = 2, the maximum flow problem can be solved in polynomial time; if k ≥ 3, the problem is NP-complete. Algorithms for each polynomial subclass are suggested, and a multiplicity-decreasing algorithm for divisible networks with weak multi-arc constraints is formulated.

5.
Sweeping Points     
Given a set of points in the plane and a sweep-line as a tool, what is the best way to move the points to a target point using a sequence of sweeps? In a sweep, the sweep-line is placed at a start position somewhere in the plane, moved orthogonally and continuously to another parallel end position, and then lifted from the plane. The cost of a sequence of sweeps is the total length of the sweeps; another parameter of interest is the number of sweeps. Four variants are discussed, depending on whether the target is a hole or a pile, and whether the target is specified or freely selected by the algorithm. Here we present a ratio 4/π ≈ 1.27 approximation algorithm in the length measure, which performs at most four sweeps. We also prove that, for the two constrained variants, there are sets of n points for which any sequence of minimum cost requires 3n/2 − O(1) sweeps.

6.
In this paper we present a polynomial-time approximation scheme for the most points covering problem. In the most points covering problem, n points in R^2, a radius r > 0, and an integer m > 0 are given, and the goal is to cover the maximum number of points with m disks of radius r. The dual of the most points covering problem is the partial covering problem, in which n points in R^2 are given and we try to cover at least p·n of these n points with the minimum number of disks. Both problems are NP-hard. To solve the most points covering problem, we use the solution of the partial covering problem to obtain an upper bound and then generate a valid solution for the most points covering problem by a careful modification of the partial covering solution. We first present an improved approximation algorithm for the partial covering problem which has a better running time than the previous algorithm for this problem. Using this algorithm, we attain a \((1 - \frac{2\varepsilon}{1+\varepsilon})\)-approximation algorithm for the most points covering problem. The running time of our algorithm is \(O((1+\varepsilon)mn + \varepsilon^{-1} n^{4\sqrt{2}\varepsilon^{-1}+2})\), which is polynomial with respect to both m and n, whereas the previously known algorithm for this problem runs in \(O(n \log n + n\varepsilon^{-6m+6} \log(\frac{1}{\varepsilon}))\), which is exponential in m.
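As a concrete, much weaker baseline for the same covering objective, the sketch below greedily places m disks of radius r, each centered on the input point whose disk covers the most still-uncovered points. It only illustrates the problem; it is not the approximation scheme above, and restricting centers to input points is a simplifying assumption.

```python
# Greedy baseline for "cover as many points as possible with m disks of radius r".
def greedy_disk_cover(points, m, r):
    uncovered = set(points)
    centers = []
    for _ in range(m):
        if not uncovered:
            break
        best_c, best_cov = None, set()
        for c in points:                       # candidate centers at input points
            cov = {p for p in uncovered
                   if (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= r * r}
            if len(cov) > len(best_cov):
                best_c, best_cov = c, cov
        centers.append(best_c)
        uncovered -= best_cov
    return centers, len(points) - len(uncovered)

pts = [(0, 0), (1, 0), (5, 5), (5, 6), (10, 0)]
print(greedy_disk_cover(pts, 2, 1.5))          # ([(0, 0), (5, 5)], 4)
```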

7.
Tracking frequent items (also called heavy hitters) is one of the most fundamental queries over real-time data, with wide applications such as logistics monitoring and association-rule-based analysis. Recently, with the growing popularity of the Internet of Things (IoT) and pervasive computing, a large amount of real-time data is collected from multiple sources in a distributed environment. Unfortunately, the data collected from each source is often uncertain due to various factors: imprecise readings, data integration from multiple sources (or versions), transmission errors, etc. In addition, owing to network delay and the limited economic budget for large-scale data communication over a distributed network, an essential problem is to track the global frequent items over all distributed uncertain data sites with minimum communication cost. In this paper, we focus on the problem of tracking distributed probabilistic frequent items (TDPF). Specifically, given k distributed sites S = {S_1, …, S_k}, each associated with an uncertain database \(\mathcal{D}_i\) of size n_i, a centralized server (coordinator) H, a minimum support ratio r, and a probabilistic threshold t, we are required to find, with minimum communication cost, the set of items X satisfying Pr(sup(X) ≥ r × N) > t, where sup(X) is a random variable describing the support of X and \(N = \sum_{i=1}^{k} n_i\). To reduce the communication cost, we propose a local-threshold-based deterministic algorithm and a sketch-based sampling approximate algorithm, respectively. The effectiveness and efficiency of the proposed algorithms are verified with extensive experiments on both real and synthetic uncertain datasets.
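The probabilistic-frequency condition itself is easy to evaluate centrally. Assuming each occurrence of an item is an independent tuple with a known existence probability (an assumption made here only for illustration), sup(X) follows a Poisson-binomial distribution and Pr(sup(X) ≥ r·N) can be computed by a small dynamic program, as sketched below; the distributed, communication-efficient protocols are the paper's contribution and are not shown.

```python
import math

def is_probabilistic_frequent(occurrence_probs, N, r, t):
    """occurrence_probs: existence probability of each tuple containing the item."""
    dist = [1.0]                              # dist[s] = Pr(support == s) so far
    for p in occurrence_probs:
        new = [0.0] * (len(dist) + 1)
        for s, prob in enumerate(dist):
            new[s] += prob * (1 - p)          # tuple absent
            new[s + 1] += prob * p            # tuple present
        dist = new
    threshold = math.ceil(r * N)
    return sum(dist[threshold:]) > t          # Pr(sup(X) >= r*N) > t ?

# Example: 4 possible occurrences of one item in a database of N = 10 tuples.
print(is_probabilistic_frequent([0.9, 0.8, 0.7, 0.5], N=10, r=0.2, t=0.5))  # True
```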

8.
Given a road network G = (V, E), where V (E) denotes the set of vertices (edges) in G, a set of points of interest P, and a query point q residing in G, the reverse furthest neighbors (Rfn_R) query in road networks fetches a set of points p ∈ P that take q as their furthest neighbor compared with all points in P ∪ {q}. This is the monochromatic Rfn_R (Mrfn_R) query. Another interesting version is the bichromatic reverse furthest neighbor (Brfn_R) query: given two sets of points P and Q, and a query point q ∈ Q, a Brfn_R query fetches a set of points p ∈ P that take q as their furthest neighbor compared with all points in Q. This paper presents efficient algorithms for both Mrfn_R and Brfn_R queries, which utilize landmarks and partitioning-based techniques. Experiments on real datasets confirm the efficiency and scalability of the proposed algorithms.
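To pin down the query semantics, the sketch below answers a monochromatic reverse furthest neighbor query by brute force in the Euclidean plane; road-network distances and the landmark and partitioning techniques of the paper are deliberately omitted.

```python
# Brute-force monochromatic reverse furthest neighbor query (Euclidean plane).
def mrfn(P, q):
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    result = []
    for p in P:
        candidates = [x for x in P if x != p] + [q]
        # p is an answer iff q is at least as far from p as every other candidate.
        if all(d2(p, q) >= d2(p, c) for c in candidates):
            result.append(p)
    return result

P = [(0, 0), (1, 0), (4, 0)]
print(mrfn(P, (10, 0)))   # [(0, 0), (1, 0), (4, 0)] -- q is furthest from every p
```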

9.
Target tracking is one of the important applications of wireless sensor networks (WSNs). Most existing approaches assume that the nodes are dense enough and ignore coverage holes, which are very common in WSNs because of the random deployment of sensor nodes, blocking obstacles, etc. Besides, predicting the target's location at the next time instance is unwise because of the many random factors involved. In this paper, we propose a novel target tracking approach without any prediction, called k-nearest neighbors tracking (k-NNT), to tackle the problems of energy efficiency, continuity, and coverage holes. In k-NNT, only the k nearest neighbors keep active and track the target when more than k nodes can sense the target; when only k′ nodes (k′ < k) can sense the target, those k′ neighbors do the tracking. A sophisticated rotation mechanism is designed to improve the continuity of the tracking process. In the worst case, none of the nodes can sense the target, i.e., the target enters a coverage hole, and then k-NNT recovers by the Round Up (RU) mode: the nodes on the perimeter of the coverage hole keep active for a time threshold t and sense the surrounding environment to find the target in time. Once a node finds the target, the RU mode ends and the irrelevant nodes turn into inactive mode. A series of simulations shows that k-NNT outperforms several existing approaches in terms of tracking accuracy, continuity, and energy efficiency.

10.
Choosing the best location for starting a business or expanding an existing enterprise is an important issue. A number of location selection problems have been discussed in the literature; they often apply the Reverse Nearest Neighbor as the criterion for finding suitable locations. In this paper, we apply the Average Distance as the criterion and propose the so-called k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets (a set of customers, a set of existing facilities, and a set of potential locations), the k-MSL selection problem outputs k locations from the potential location set such that the average distance between a customer and his nearest facility is minimized. We formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm which can quickly find an approximate result for users; two exact algorithms are then proposed to find the optimal result, and several pruning rules are applied to increase computational efficiency. We evaluate the algorithms' performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.
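A natural greedy heuristic for this objective repeatedly opens the candidate location that most reduces the average customer-to-nearest-facility distance. The sketch below, using straight-line distance and made-up data, illustrates that idea only; the paper's exact algorithms and pruning rules are not reproduced.

```python
import math

def greedy_k_msl(customers, facilities, candidates, k):
    """Greedily pick k candidate locations minimizing average customer distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    chosen = []
    for _ in range(k):
        def avg_dist(extra):
            opened = facilities + chosen + [extra]
            return sum(min(dist(c, f) for f in opened) for c in customers) / len(customers)
        best = min(candidates, key=avg_dist)   # largest reduction in average distance
        chosen.append(best)
        candidates = [c for c in candidates if c != best]
    return chosen

customers = [(0, 0), (10, 0), (10, 10)]
facilities = [(0, 1)]
candidates = [(9, 0), (10, 9), (5, 5)]
print(greedy_k_msl(customers, facilities, candidates, k=2))  # [(10, 9), (9, 0)]
```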

11.
For q-ary Hamming spaces we address the problem of the minimum number of points such that any point of the space is uniquely determined by its (Hamming) distances to them. It is conjectured that, for fixed q and growing dimension n of the Hamming space, this number asymptotically behaves as 2n/log_q n. We prove this conjecture for q = 3 and q = 4; for q = 2 its validity has been known for half a century.

12.
Travel planning and location recommendation have become increasingly important in recent years. In this light, we propose and study a novel aggregate location recommendation query (ALRQ) that discovers aggregate locations for multiple travelers and plans the corresponding travel routes in dynamic transportation networks. Assuming the scenario that multiple travelers target the same destination, given a set of traveler locations Q, a set of potential aggregate locations O, and a departure time t, the ALRQ finds an aggregate location o ∈ O with the minimum global travel time \(\sum_{q \in Q} T(q, o, t)\), where T(q, o, t) is the travel time between o and q with departure time t. The ALRQ problem is challenging for two reasons: (1) how to model dynamic transportation networks practically, and (2) how to compute the ALRQ efficiently. We take two types of dynamic transportation networks into account, and we define a pair of upper and lower bounds to prune the search space effectively. Moreover, a heuristic scheduling strategy is adopted to schedule multiple query sources. Finally, we conducted extensive experiments on real and synthetic spatial data to verify the performance of the developed algorithms.
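Ignoring time dependence, the ALRQ objective reduces to "run one shortest-path search per traveler and pick the candidate with the smallest summed travel time." The static-network sketch below (hypothetical graph, symmetric travel times assumed) shows that baseline; the paper's time-dependent models, pruning bounds, and scheduling strategy are not modeled here.

```python
import heapq

def dijkstra(adj, source):
    """Standard Dijkstra over an adjacency dict {u: [(v, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def best_aggregate_location(adj, travelers, candidates):
    # One Dijkstra per traveler; assumes travel times are symmetric and static.
    dists = [dijkstra(adj, q) for q in travelers]
    return min(candidates,
               key=lambda o: sum(d.get(o, float("inf")) for d in dists))

adj = {
    "a": [("b", 2.0), ("c", 5.0)],
    "b": [("a", 2.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
print(best_aggregate_location(adj, travelers=["a", "b"], candidates=["b", "c"]))  # b
```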

13.
Continuous visible nearest neighbor query processing in spatial databases
In this paper, we identify and solve a new type of spatial query, called continuous visible nearest neighbor (CVNN) search. Given a data set P, an obstacle set O, and a query line segment q in a two-dimensional space, a CVNN query returns a set of \(\langle p, R \rangle\) tuples such that \(p \in P\) is the nearest neighbor to every point r along the interval \(R \subseteq q\) and p is visible to r. Note that p may be NULL, meaning that all points in P are invisible to all points in R due to the obstruction of some obstacles in O. In contrast to the existing continuous nearest neighbor query, CVNN retrieval considers the impact of obstacles on the visibility between objects, which is ignored by most spatial queries. We formulate the problem, analyze its unique characteristics, and develop efficient algorithms for exact CVNN query processing. Our methods (1) utilize conventional data-partitioning indices (e.g., R-trees) on both P and O, (2) tackle the CVNN search by performing a single query for the entire query line segment, and (3) access only the data points and obstacles relevant to the final query result by employing a suite of effective pruning heuristics. In addition, several interesting variations of CVNN queries are introduced and can be supported by our techniques, which further demonstrates the flexibility of the proposed algorithms. A comprehensive experimental evaluation using both real and synthetic data sets verifies the effectiveness of our pruning heuristics and the performance of our algorithms.

14.
In the Fixed Cost k-Flow problem, we are given a graph G = (V, E) with edge capacities {u_e : e ∈ E} and edge costs {c_e : e ∈ E}, a source-sink pair s, t ∈ V, and an integer k. The goal is to find a minimum-cost subgraph H of G such that the minimum capacity of an s-t cut in H is at least k. By an approximation-preserving reduction from the Group Steiner Tree problem to Fixed Cost k-Flow, we obtain the first polylogarithmic lower bound for the problem; this also implies the first non-constant lower bounds for the Capacitated Steiner Network and Capacitated Multicommodity Flow problems. We then consider two special cases of Fixed Cost k-Flow. In the Bipartite Fixed-Cost k-Flow problem, we are given a bipartite graph G = (A ∪ B, E) and an integer k > 0. The goal is to find a node subset S ⊆ A ∪ B of minimum size |S| such that G has k pairwise edge-disjoint paths between S ∩ A and S ∩ B. We give an \(O(\sqrt{k \log k})\) approximation for this problem. We also show that we can compute a solution of optimum size with Ω(k/polylog(n)) paths, where n = |A| + |B|. In the Generalized-P2P problem we are given an undirected graph G = (V, E) with edge costs and integer charges {b_v : v ∈ V}. The goal is to find a minimum-cost spanning subgraph H of G such that every connected component of H has non-negative charge. This problem originated in a practical project for shift design [11]; besides that, it generalizes many problems such as Steiner Forest, k-Steiner Tree, and Point to Point Connection. We give a logarithmic approximation algorithm for this problem. Finally, we consider a related problem called Connected Rent or Buy Multicommodity Flow and give a \(\log^{3+\epsilon} n\) approximation for it using Group Steiner Tree techniques.

15.
An optimal probabilistic-planning algorithm solves a problem, usually modeled by a Markov decision process, by finding an optimal policy. In this paper, we study the k best policies problem: finding the k best policies of a discrete Markov decision process. The k best policies, k > 1, cannot be found directly using dynamic programming. Naïvely, finding the k-th best policy can be Turing-reduced to the optimal planning problem, but the number of problems queried in the naïve algorithm is exponential in k. We show empirically that solving the k best policies problem via this reduction requires unreasonable amounts of time even when k = 3. We then provide two new algorithms. The first is a complete algorithm, based on our theoretical contribution that the k-th best policy differs from the i-th best policy, for some i < k, on exactly one state. The second is an approximate algorithm that skips many less useful policies. We show that both algorithms have good scalability. We also show that the approximate algorithm runs much faster and finds interesting, high-quality policies.
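As context for the k = 1 case that such algorithms start from, the sketch below runs value iteration on a tiny, made-up MDP and extracts the greedy optimal policy; enumerating the 2nd through k-th best policies is the paper's contribution and is not reproduced here.

```python
def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-8):
    """P[s][a] = list of (next_state, prob); R[s][a] = immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the (near-)optimal value function.
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a]))
              for s in states}
    return V, policy

states, actions = ["s0", "s1"], ["stay", "move"]
P = {"s0": {"stay": [("s0", 1.0)], "move": [("s1", 1.0)]},
     "s1": {"stay": [("s1", 1.0)], "move": [("s0", 1.0)]}}
R = {"s0": {"stay": 0.0, "move": 1.0}, "s1": {"stay": 2.0, "move": 0.0}}
V, policy = value_iteration(states, actions, P, R)
print(policy)   # {'s0': 'move', 's1': 'stay'}
```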

16.
Barrier coverage in wireless sensor networks is used in many applications, such as intrusion detection and border surveillance, to monitor the network border and prevent intruders from penetrating the network. In these applications, it is critical to find the optimal number of sensor nodes to prolong the network lifetime, and increasing the network lifetime is one of the important challenges in such networks. Various algorithms have been proposed to extend the network lifetime while guaranteeing barrier coverage requirements. In this paper, we use the imperialist competitive algorithm (ICA) to select the sensor nodes that perform barrier coverage monitoring; we call this approach ICABC. The main objective of this work is to improve the network lifetime in a deployed network. To investigate the performance of ICABC, several simulations were conducted, and the results show that ICABC significantly improves performance over other state-of-the-art methods.

17.
An outer-connected dominating set in a graph G = (V, E) is a set of vertices D ⊆ V such that every vertex v ∉ D is adjacent to some vertex in D and the subgraph induced by V ∖ D is connected. The outer-connected dominating set problem is to find an outer-connected dominating set with the minimum number of vertices, denoted by \(\tilde{\gamma}_c(G)\). In this paper, we determine \(\tilde{\gamma}_c(S(n,k))\), \(\tilde{\gamma}_c(S^{+}(n,k))\), \(\tilde{\gamma}_c(S^{++}(n,k))\), and \(\tilde{\gamma}_c(S_n)\), where S(n, k), S^+(n, k), S^++(n, k), and S_n are Sierpiński-like graphs.

18.
Intermittent Fault Diagnosability of Interconnection Networks
An interconnection network's diagnosability is an important metric for measuring its self-diagnostic capability. Permanent faults and intermittent faults are two different fault models that exist in an interconnection network. In this paper, we focus on the diagnosability of interconnection networks in an intermittent-fault situation. First, we study a class of interconnection networks called crisp three-cycle networks, in which the cn_in-number (the number of common vertices each pair of vertices share) is no more than one. Necessary and sufficient conditions are derived for the diagnosability of crisp three-cycle networks under the PMC (Preparata, Metze, and Chien) model, and a simple check shows that many well-known interconnection networks are crisp three-cycle networks. Second, we prove that an interconnection network S is a t_i-fault diagnosable system without repair if and only if its minimum in-degree is greater than t_i under the BGM (Barsi, Grandoni, and Maestrini) model. Finally, we extend the necessary and sufficient conditions for determining whether an interconnection network S is t_i-fault diagnosable without repair under the MM (Maeng and Malek) model from the permanent-fault situation to the intermittent-fault situation.

19.
Let f be an integer-valued function on a finite set V. We call an undirected graph G(V, E) a neighborhood structure for f. The problem of finding a local minimum for f can be phrased as: for a fixed neighborhood structure G(V, E), find a vertex x ∈ V such that f(x) is not bigger than any value that f takes on some neighbor of x. The complexity of the algorithm is measured by the number of questions of the form "what is the value of f on x?" We show that the deterministic, randomized, and quantum query complexities of the problem are polynomially related. This generalizes earlier results of Aldous (Ann. Probab. 11(2):403–413, 1983) and Aaronson (SIAM J. Comput. 35(4):804–824, 2006) and solves the main open problem in Aaronson (SIAM J. Comput. 35(4):804–824, 2006).
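For reference, the naïve deterministic strategy in this query model simply walks to a strictly better neighbor until none exists, counting queries to f along the way, as in the sketch below (illustrative graph and function); the result above concerns how much better randomized and quantum query strategies can do.

```python
def local_minimum(neighbors, f, start):
    """Steepest-descent local search; returns a local minimum and the query count."""
    queries = 0

    def value(v, cache={}):            # cache so each vertex is queried at most once
        nonlocal queries
        if v not in cache:
            cache[v] = f(v)
            queries += 1
        return cache[v]

    v = start
    while True:
        better = [u for u in neighbors[v] if value(u) < value(v)]
        if not better:
            return v, queries          # no neighbor has a smaller value
        v = min(better, key=value)     # move to the best neighbor

# Example: a path 0 - 1 - 2 - 3 - 4 with f(x) = (x - 3)**2.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(local_minimum(nbrs, lambda x: (x - 3) ** 2, start=0))  # (3, 5)
```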

20.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. That is why finding a real-time algorithm which optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's Push-Relabel (PRF); (2) Boykov and Kolmogorov's augmenting-paths algorithm (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer vision problems that require solving the minimum s-t cut problem.
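For readers unfamiliar with the underlying problem, the sketch below is a textbook Edmonds-Karp max-flow that reads the minimum s-t cut off the residual graph on a tiny, made-up network; it is the baseline formulation that HPF, Push-Relabel, and BK all accelerate, not an implementation of any of them.

```python
from collections import deque

def min_st_cut(capacity, s, t):
    """capacity: dict {(u, v): c}. Returns (max_flow_value, source_side_node_set)."""
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    residual = {(u, v): 0.0 for u, v in capacity} | {(v, u): 0.0 for u, v in capacity}
    for e, c in capacity.items():
        residual[e] += c

    def bfs():                         # shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0.0) > 1e-12:
                    parent[v] = u
                    q.append(v)
        return parent

    flow = 0.0
    while True:
        parent = bfs()
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:   # walk back from t to s
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:              # push the bottleneck flow along the path
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
    return flow, set(bfs())            # nodes still reachable from s = source side

cap = {("s", "a"): 4, ("s", "b"): 3, ("a", "t"): 2, ("b", "t"): 2, ("a", "b"): 1}
print(min_st_cut(cap, "s", "t"))       # max-flow value 4.0; source side {'s', 'a', 'b'}
```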
