Similar Documents
20 similar documents found (search time: 31 ms)
1.
Choosing the best location for starting a business or expanding an existing enterprise is an important issue. A number of location selection problems have been discussed in the literature; they often apply the Reverse Nearest Neighbor as the criterion for finding suitable locations. In this paper, we instead apply the Average Distance as the criterion and propose the so-called k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets (a set of customers, a set of existing facilities, and a set of potential locations), the k-MSL selection problem outputs k locations from the potential location set such that the average distance between a customer and his nearest facility is minimized. We formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm which can quickly find an approximate result for users. Two exact algorithms are then proposed to find the optimal result, and several pruning rules are applied to increase computational efficiency. We evaluate the algorithms' performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.
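The abstract does not give the greedy algorithm's details; the following is a minimal sketch of one natural greedy strategy for k-MSL (function and variable names are illustrative, not from the paper): repeatedly add the candidate location that most reduces the average customer-to-nearest-facility distance.

```python
import math

def avg_nearest_distance(customers, facilities):
    """Average distance from each customer to its nearest facility."""
    total = 0.0
    for cx, cy in customers:
        total += min(math.hypot(cx - fx, cy - fy) for fx, fy in facilities)
    return total / len(customers)

def greedy_k_msl(customers, existing, candidates, k):
    """Greedily pick k candidate locations that minimize the average
    customer-to-nearest-facility distance (illustrative sketch only)."""
    chosen = []
    remaining = list(candidates)
    best_cost = avg_nearest_distance(customers, existing)
    for _ in range(min(k, len(remaining))):
        best, best_cost = None, float("inf")
        for loc in remaining:
            cost = avg_nearest_distance(customers, existing + chosen + [loc])
            if cost < best_cost:
                best, best_cost = loc, cost
        chosen.append(best)
        remaining.remove(best)
    return chosen, best_cost

# Toy usage
customers = [(0, 0), (4, 0), (0, 4), (5, 5)]
existing = [(10, 10)]
candidates = [(1, 1), (4, 4), (0, 3), (5, 0)]
print(greedy_k_msl(customers, existing, candidates, k=2))
```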

2.
A deterministic parallel LL parsing algorithm is presented. The algorithm is based on a transformation from a parsing problem to parallel reduction. First, a nondeterministic version of a parallel LL parser is introduced. Then it is transformed into the deterministic version, the LLP parser. The deterministic LLP(q,k) parser uses two kinds of information to select the next operation: a lookahead string of length up to k symbols and a lookback string of length up to q symbols. Deterministic parsing is available for LLP grammars, a subclass of LL grammars. Since the presented deterministic and nondeterministic parallel parsers are both based on parallel reduction, they are suitable for most parallel architectures.

3.
This paper proposes a strengthening of the author's core-accessibility theorem for balanced TU-cooperative games. The obtained strengthening relaxes the influence of the nontransitivity of classical domination α_v on the quality of the sequential improvement of dominated imputations in a game v. More specifically, we establish the k-accessibility of the core C(v) of any balanced TU-cooperative game v for all natural numbers k: for each dominated imputation x, there exists a converging sequence of imputations x^0, x^1, ..., such that x^0 = x, lim x^r ∈ C(v), and x^{r-m} is dominated by any successive imputation x^r with m ∈ [1, k] and r ≥ m. To show that the TU-property is essential for the k-accessibility of the core, we give an example of an NTU-cooperative game G with a "black hole": a nonempty closed subset B ⊂ G(N) of dominated imputations that contains all the α_G-monotonic sequential improvement trajectories originating at any point x ∈ B.

4.
The theory of finite pseudo-random binary sequences was built by C. Mauduit and A. Sárközy and later extended to sequences of k symbols (or k-ary sequences). Certain constructions of pseudo-random sequences of k symbols have been presented over finite fields in the literature. In this paper, two families of sequences of k symbols are constructed by using the integers modulo pq for distinct odd primes p and q. Upper bounds on the well-distribution measure and the correlation measure of the families of sequences are presented in terms of certain character sums over the residue class ring modulo pq. Lower bounds on the linear complexity profile are also estimated.

5.
Finding the k nearest neighbor objects in spatial databases is a fundamental problem in many geospatial systems, and direction is one of the key features of a spatial object. Moreover, the recent tremendous growth of sensor technologies in mobile devices produces an enormous amount of spatio-directional (i.e., spatially and directionally encoded) objects such as photos. Therefore, efficient and proper utilization of the direction feature is a new challenge. Inspired by this issue and the traditional k nearest neighbor search problem, we devise a new type of query, called the direction-constrained k nearest neighbor (DCkNN) query. The DCkNN query finds k nearest neighbors from the location of the query such that the direction of each neighbor is within a certain range of the direction of the query. We develop a new index structure called MULTI to efficiently answer the DCkNN query, with two novel index access algorithms based on a cost analysis. Furthermore, our problem and solution can be generalized to deal with spatio-circulant dimensional objects (such as a direction, or circulant periods of time such as an hour, a day, and a week). Experimental results show that our proposed index structure and access algorithms outperform two algorithms adapted from existing kNN algorithms.
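The MULTI index itself is not described in the abstract; as a baseline, the DCkNN query semantics can be stated as a brute-force filter-then-rank pass. The sketch below is illustrative only (the angular-range parameterization and all names are assumptions):

```python
import math

def angular_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def dcknn(objects, q_loc, q_dir, max_angle, k):
    """Brute-force direction-constrained kNN: keep objects whose direction is
    within max_angle degrees of the query direction, then return the k
    spatially closest ones. objects: list of ((x, y), direction_degrees)."""
    qx, qy = q_loc
    candidates = [
        (math.hypot(x - qx, y - qy), (x, y), d)
        for (x, y), d in objects
        if angular_diff(d, q_dir) <= max_angle
    ]
    return sorted(candidates)[:k]

# Toy usage: photos with positions and shooting directions
photos = [((1, 1), 90), ((2, 0), 85), ((0, 3), 270), ((5, 5), 95)]
print(dcknn(photos, q_loc=(0, 0), q_dir=90, max_angle=15, k=2))
```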

6.
We introduce m-near-resolvable block designs. We establish a correspondence between such block designs and a subclass of (optimal equidistant) q-ary constant-weight codes meeting the Johnson bound. We present constructions of m-near-resolvable block designs, in particular based on Steiner systems and super-simple t-designs.

7.
Every rectilinear Steiner tree problem admits an optimal tree T* which is composed of tree stars. Moreover, the currently fastest algorithms for the rectilinear Steiner tree problem proceed by composing an optimum tree T* from tree star components in the cheapest way. The efficiency of such algorithms depends heavily on the number of tree stars (candidate components). Fößmeier and Kaufmann (Algorithmica 26, 68–99, 2000) showed that any problem instance with k terminals has a number of tree stars between 1.32^k and 1.38^k (modulo polynomial factors) in the worst case. We determine the exact bound O*(ρ^k) where ρ ≈ 1.357 and mention some consequences of this result.

8.
The problem of kNN (k Nearest Neighbor) queries has received considerable attention in the database and information retrieval communities. Given a dataset D and a kNN query q, the k nearest neighbor algorithm finds the k data points closest to q. The applications of kNN queries are broad, not only in spatio-temporal databases but also in many other areas; for example, they can be used in multimedia databases, data mining, scientific databases, and video retrieval. Past studies of kNN query processing did not consider the case in which the server receives multiple kNN queries at one time: their algorithms process queries independently, so the server keeps re-accessing the database to obtain data that have already been acquired. This wastes I/O and degrades the performance of the whole system. In this paper, we focus on this problem and propose an algorithm named COrrelated kNN query Evaluation (COKE). The main idea of COKE is an "information sharing" strategy whereby the server reuses the query results of previously executed queries to efficiently process subsequent queries. We conduct a comprehensive set of experiments to analyze the performance of COKE and compare it with the Best-First Search (BFS) algorithm. Empirical studies indicate that COKE outperforms BFS, achieving lower I/O costs and less running time.

9.
Recently, some authors considered the origin of a type-IV singular bounce in modified gravity and obtained the explicit form of F(R) which can produce this type of cosmology. In this paper, we show that during the contracting branch of type-IV bouncing cosmology, the sign of gravity changes and antigravity emerges. In our model, M0 branes come together and form a universe, an anti-universe, and a wormhole which connects them. As time passes, this wormhole dissolves into the universes, F(R) gravity emerges, and the universe expands. When the brane universes come close to each other, the squared energy of their system becomes negative and some tachyonic states are produced. To remove these states, the universes are assumed to be compact; the sign of compacted gravity changes, and anti-F(R) gravity arises, which causes the universes to move away from each other. In this theory, a type-IV singularity occurs at t = t_s, the time of tachyon production between the expansion and contraction branches.

10.
Maximal clique enumeration is a fundamental problem in graph theory and has been extensively studied. However, maximal clique enumeration is time-consuming in large graphs and typically returns an enormous number of cliques with large overlaps. Motivated by this, in this paper we study the diversified top-k clique search problem, which is to find k cliques that cover the largest number of nodes in the graph. Diversified top-k clique search can be widely used in many applications, including community search, motif discovery, and anomaly detection in large graphs. A naive solution is to keep all maximal cliques in memory and then find k of them that cover the most nodes using the approximate greedy max k-cover algorithm. However, such a solution is impractical when the graph is large. In this paper, instead of keeping all maximal cliques in memory, we devise an algorithm that maintains k candidates during maximal clique enumeration. Our algorithm has a limited memory footprint and achieves a guaranteed approximation ratio. We also introduce a novel lightweight PNP-Index, based on which we design an optimal maximal clique maintenance algorithm. We further explore three optimization strategies to avoid enumerating all maximal cliques and thus largely reduce the computational cost. In addition, for massive input graphs that cannot fit in main memory, we develop an I/O-efficient algorithm. We conduct extensive performance studies on real and synthetic graphs; one of the real graphs contains 1.02 billion edges. The results demonstrate the high efficiency and effectiveness of our approach.
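For reference, the naive baseline mentioned above, greedy max k-cover over an already-enumerated set of maximal cliques, can be sketched as follows. This is a minimal illustration of the classic greedy cover step, not the paper's memory-bounded algorithm; it assumes the cliques are given as sets of node ids.

```python
def greedy_max_k_cover(cliques, k):
    """Pick k cliques greedily, each time taking the clique that covers the
    most not-yet-covered nodes (classic (1 - 1/e)-approximate max k-cover)."""
    covered, chosen = set(), []
    candidates = [set(c) for c in cliques]
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda c: len(c - covered))
        if not best - covered:          # nothing new can be covered
            break
        chosen.append(best)
        covered |= best
        candidates.remove(best)
    return chosen, covered

# Toy usage with maximal cliques given as node-id sets
cliques = [{1, 2, 3}, {3, 4, 5, 6}, {6, 7}, {2, 3, 4}]
print(greedy_max_k_cover(cliques, k=2))
```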

11.
Nowadays, location-based services (LBS) facilitate people's daily life by answering LBS queries. However, privacy issues, including location privacy and query privacy, arise at the same time. Existing works for protecting query privacy either rely on trusted servers or fail to provide a sufficient privacy guarantee. This paper combines the concepts of differential privacy and k-anonymity to propose the notion of differentially private k-anonymity (DPkA) for query privacy in LBS. We identify the sufficient and necessary condition for the availability of 0-DPkA and present how to achieve it. For cases where 0-DPkA is not achievable, we propose an algorithm that achieves ε-DPkA with minimized ε. Extensive simulations based on real-life datasets and synthetic data distributions validate the proposed mechanisms.

12.
We study the physical behavior of the transition of a 5D perfect fluid universe from an early decelerating phase to the current accelerating phase in the framework of f(R, T) gravity in the presence of domain walls. The fifth dimension is not observed because it is compact. To determine the solution of the field equations, we use the concept of a time-dependent deceleration parameter, which yields the scale factor a(t) = sinh^{1/n}(αt), where n and α are positive constants. For 0 < n ≤ 1, this generates a class of accelerating models, while for n > 1 the universe undergoes a phase transition from an early decelerating phase to the present accelerating phase, consistent with recent observations. Some physical and geometric properties of the models are also discussed.
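As a quick check of the claimed transition, the deceleration parameter for this scale factor follows from the standard definitions H = ȧ/a and q = -aä/ȧ² (this derivation is a routine computation, not quoted from the paper):

```latex
% Deceleration parameter for the scale factor a(t) = sinh^{1/n}(\alpha t)
\[
a(t) = \sinh^{1/n}(\alpha t), \qquad
H \equiv \frac{\dot a}{a} = \frac{\alpha}{n}\,\coth(\alpha t), \qquad
\dot H = -\frac{\alpha^{2}}{n}\,\operatorname{csch}^{2}(\alpha t),
\]
\[
q \equiv -\frac{a\,\ddot a}{\dot a^{2}} = -1 - \frac{\dot H}{H^{2}}
  = -1 + n\,\operatorname{sech}^{2}(\alpha t).
\]
% Thus q -> n - 1 as t -> 0 and q -> -1 as t -> infinity: for n > 1 the model
% decelerates at early times and accelerates later, while for 0 < n <= 1 it
% accelerates throughout, matching the abstract's claim.
```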

13.
The key issue in top-k retrieval, finding a set of k documents (from a large document collection) that can best answer a user's query, is to strike the optimal balance between relevance and diversity. In this paper, we study the top-k retrieval problem in the framework of facility location analysis and prove the submodularity of the objective function, which provides a theoretical approximation guarantee of factor 1 - 1/e for the (best-first) greedy search algorithm. Furthermore, we propose a two-stage hybrid search strategy which first obtains a high-quality initial set of top-k documents via greedy search and then refines that result set iteratively via local search. Experiments on two large TREC benchmark datasets show that our two-stage hybrid search strategy outperforms existing approaches in both effectiveness and efficiency.
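The abstract does not spell out the objective function; the sketch below uses a simple weighted-coverage objective (which is submodular) as a stand-in and shows the two-stage pattern of greedy selection followed by local-search swaps. The objective and all names are illustrative assumptions, not the paper's formulation.

```python
def coverage(doc_set, doc_aspects, weights):
    """Submodular stand-in objective: total weight of query aspects covered."""
    covered = set().union(*(doc_aspects[d] for d in doc_set)) if doc_set else set()
    return sum(weights[a] for a in covered)

def greedy_then_local_search(doc_aspects, weights, k):
    docs = list(doc_aspects)
    # Stage 1: best-first greedy (1 - 1/e guarantee for submodular objectives)
    selected = set()
    for _ in range(min(k, len(docs))):
        best = max((d for d in docs if d not in selected),
                   key=lambda d: coverage(selected | {d}, doc_aspects, weights))
        selected.add(best)
    # Stage 2: local search; swap a selected doc for an outside doc while it helps
    improved = True
    while improved:
        improved = False
        for d_out in list(selected):
            for d_in in docs:
                if d_in in selected:
                    continue
                candidate = (selected - {d_out}) | {d_in}
                if coverage(candidate, doc_aspects, weights) > coverage(selected, doc_aspects, weights):
                    selected, improved = candidate, True
                    break
            if improved:
                break
    return selected

# Toy usage: documents mapped to the query aspects they cover
doc_aspects = {"d1": {"a", "b"}, "d2": {"b"}, "d3": {"c", "d"}, "d4": {"a", "c"}}
weights = {"a": 3, "b": 1, "c": 2, "d": 2}
print(greedy_then_local_search(doc_aspects, weights, k=2))
```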

14.
Target tracking is one of the important applications of wireless sensor networks (WSNs). Most existing approaches assume that the nodes are dense enough and ignore coverage holes, which are very common in WSNs because of the random deployment of sensor nodes, blocking by obstacles, etc. Moreover, predicting the target's location at the next time instant is unreliable because of many random factors. In this paper, we propose a novel target tracking approach without any prediction, called k-nearest neighbors tracking (k-NNT), to tackle the problems of energy efficiency, continuity, and coverage holes. In k-NNT, only the k nearest neighbors stay active and track the target when more than k nodes can sense it; when only k′ nodes (k′ < k) can sense the target, those k′ nearest neighbors do the tracking. A sophisticated rotation mechanism is designed to improve the continuity of the tracking process. In the worst case, no node can sense the target, i.e., the target enters a coverage hole, and k-NNT then recovers via the Round Up (RU) mode. The nodes on the perimeter of the coverage hole keep active for a time threshold t and sense the surrounding environment to find the target in time. Once a node finds the target, the RU mode ends and the irrelevant nodes return to inactive mode. A series of simulations shows that k-NNT outperforms several existing approaches in terms of tracking accuracy, continuity, and energy efficiency.
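A minimal sketch of the node-selection step described above: pick the k nearest nodes among those that can currently sense the target, or all of them if fewer than k are in range. The rotation mechanism and RU-mode recovery are omitted, and all names are illustrative.

```python
import math

def select_active_nodes(nodes, target, sensing_range, k):
    """Return ids of the nodes that should stay active for this time step:
    the k nearest nodes that can sense the target, or all sensing nodes if
    fewer than k are in range (an empty list signals a coverage hole)."""
    tx, ty = target
    in_range = sorted(
        (math.hypot(x - tx, y - ty), node_id)
        for node_id, (x, y) in nodes.items()
        if math.hypot(x - tx, y - ty) <= sensing_range
    )
    return [node_id for _, node_id in in_range[:k]]

# Toy usage
nodes = {"s1": (0, 0), "s2": (3, 1), "s3": (8, 8), "s4": (1, 2)}
print(select_active_nodes(nodes, target=(1, 1), sensing_range=4.0, k=2))
```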

15.
We present methods to construct transitive partitions of the set E^n of all binary vectors of length n into codes. In particular, we show that for all n = 2^k - 1, k ≥ 3, there exist transitive partitions of E^n into perfect transitive codes of length n.

16.
In 2012, Lee et al. proposed an interpolation technique with neighboring pixels (INP) as the basis for concealing secret information in predicted pixels. Their method can effectively predict the pixel between two neighboring pixels. However, differing lengths of secret messages caused great distortion when a large secret message was concealed in the predicted value. Therefore, the proposed scheme applies the center folding strategy to fold the secret message and reduce image distortion. Furthermore, the proposed scheme references the variance of the neighboring pixels to determine the length of the secret message, thereby controlling image quality. The parameter pair (k, F_1) is used to categorize the variance and determine the size of the secret message hidden in each category: k is the total number of thresholds, computed from the characteristics of each image to balance hiding payload and image quality, and F_1 is the length of the secret message for the smoothest area. The experimental results show that the embedding capacity of the proposed method is 1.5 bpp higher than that of existing methods. For the same hiding payload, the image quality of the proposed method is 1.6 dB higher than that of existing methods.
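The abstract only names the center folding strategy. The sketch below illustrates the general idea under an assumed toy scheme: a k-bit secret value d in [0, 2^k) is folded to d - 2^(k-1) so that its magnitude, and hence the pixel modification, is roughly halved; the receiver undoes the fold using the same predicted pixel. This is an illustration of the folding idea only, not the paper's exact embedding procedure.

```python
def fold(d, k):
    """Center-fold a k-bit secret value into the range [-2^(k-1), 2^(k-1))."""
    return d - (1 << (k - 1))

def embed(predicted_pixel, d, k):
    """Embed the folded secret by shifting the predicted pixel value
    (pixel-range clipping is ignored in this toy sketch)."""
    return predicted_pixel + fold(d, k)

def extract(stego_pixel, predicted_pixel, k):
    """Recover the secret: undo the fold using the shared prediction."""
    return (stego_pixel - predicted_pixel) + (1 << (k - 1))

# Toy usage: k = 3 bits, predicted pixel 120, secret value 6
k, predicted, secret = 3, 120, 6
stego = embed(predicted, secret, k)          # 120 + (6 - 4) = 122
print(stego, extract(stego, predicted, k))   # 122 6
```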

17.
Although k-anonymity is a good way of publishing microdata for research purposes, it cannot resist several common attacks, such as attribute disclosure and the similarity attack. To resist these attacks, many refinements of k-anonymity have been proposed, with t-closeness being one of the strictest privacy models. While most existing t-closeness models address the case in which the original data have only a single sensitive attribute, data with multiple sensitive attributes are more common in practice. In this paper, we cover this gap with two proposed algorithms for multiple sensitive attributes that make the published data satisfy t-closeness. Based on the observation that the values of the sensitive attributes in any equivalence class must be spread as widely as possible over the entire data to satisfy t-closeness, both algorithms partition records into groups in terms of sensitive attributes, one using a clustering method and the other using principal component analysis. Then, according to the similarity of quasi-identifier attributes, records are selected from different groups to construct an equivalence class, which reduces the loss of information as much as possible during anonymization. Our proposed algorithms are evaluated using a real dataset. The results show that the first proposed algorithm is slower on average than the second but preserves more of the original information. In addition, compared with related approaches, both proposed algorithms achieve stronger privacy protection and lose less information.
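The grouping-then-mixing idea can be sketched as follows, using k-means on numeric sensitive-attribute vectors and round-robin assembly of equivalence classes. This is a simplified illustration: it ignores quasi-identifier similarity and the t-closeness check itself, and the function names and library choice are mine, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_equivalence_classes(sensitive_values, n_groups, class_size):
    """Cluster records by their (numeric) sensitive-attribute vectors, then
    assemble equivalence classes by drawing records round-robin from
    different clusters so each class mixes diverse sensitive values."""
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(
        np.asarray(sensitive_values, dtype=float))
    groups = [list(np.flatnonzero(labels == g)) for g in range(n_groups)]
    classes, current = [], []
    while any(groups):
        for g in groups:
            if g:
                current.append(g.pop())
            if len(current) == class_size:
                classes.append(current)
                current = []
    if current:
        classes.append(current)   # leftover records form a final, smaller class
    return classes

# Toy usage: two numeric sensitive attributes per record; output is lists of record indices
records = [[1, 200], [2, 210], [50, 30], [52, 25], [90, 400], [88, 390]]
print(build_equivalence_classes(records, n_groups=3, class_size=2))
```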

18.
The question of the contemporary relevance of Heidegger’s reflections on technology to today’s advanced technology is here explored with reference to the notion of “entanglement”, towards a review of Heidegger’s understanding of technology and media, including the entertainment industry and modern digital life. Heidegger’s reflections on Gelassenheit have been connected with the aesthetics of the tea ceremony, disputing the material aesthetics of porcelain versus plastic. Here, by approaching the art of wabi-sabi as the art of Verfallenheit, I argue that Gelassenheit may be understood in these terms.

19.
For the interval system of equations defined by [x] = [A][x] + [b], we derive necessary and sufficient criteria for the existence of solutions [x]. Furthermore, we give necessary and sufficient criteria for the convergence of powers of [A]. In contrast to earlier results, we treat complex interval arithmetic.
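For intuition, the fixed-point iteration behind such systems can be sketched with a toy real-interval arithmetic (the paper treats the complex case; the interval representation, names, and the fixed iteration count below are illustrative assumptions):

```python
from functools import reduce
from itertools import product

def iv_add(a, b):
    """Interval addition: [a1, a2] + [b1, b2]."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Interval multiplication: min/max over the endpoint products."""
    ps = [x * y for x, y in product(a, b)]
    return (min(ps), max(ps))

def iterate(A, b, steps=50):
    """Iterate [x]^(r+1) = [A][x]^(r) + [b] starting from [x]^(0) = [b].
    When the powers of [A] converge, the iterates settle on an interval
    vector satisfying [x] = [A][x] + [b]."""
    n = len(b)
    x = list(b)
    for _ in range(steps):
        x = [iv_add(reduce(iv_add, (iv_mul(A[i][j], x[j]) for j in range(n))),
                    b[i])
             for i in range(n)]
    return x

# Toy usage: a 2x2 interval matrix with small entries, so the iteration converges
A = [[(0.1, 0.2), (0.0, 0.1)],
     [(0.0, 0.1), (0.2, 0.3)]]
b = [(1.0, 1.0), (-1.0, 1.0)]
print(iterate(A, b))
```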

20.
This paper suggests ways to facilitate creativity and innovation in software development. The paper applies four perspectives (Product, Project, Process, and People) to identify an outlook for software innovation. It then describes a new facility, the Software Innovation Research Lab (SIRL), and a new method concept for software innovation, Essence, based on views, modes, and team roles. Finally, the paper reports on an early experiment using SIRL and Essence and identifies further research.
