Similar Articles (20 found)
1.
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables $\mathcal{M}$, with the same constraint C defined by a finite-state automaton $\mathcal{A}$ on each row of $\mathcal{M}$ and a global cardinality constraint $\mathit{gcc}$ on each column of $\mathcal{M}$. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the $\mathit{gcc}$ constraints from the automaton $\mathcal{A}$. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We also provide a domain consistency filtering algorithm for the conjunction of lexicographic ordering constraints between adjacent rows of $\mathcal{M}$ and (possibly different) automaton constraints on the rows. We evaluate the impact of our methods in terms of runtime and search effort on a large set of nurse rostering problem instances.

2.
Luc Jaulin, Computing 94(2–4): 297–311 (2012)
In this paper, we consider the resolution of constraint satisfaction problems in the case where the variables of the problem are subsets of ${\mathbb{R}^{n}}$. In order to use a constraint propagation approach, we introduce set intervals (named i-sets), which are sets of subsets of ${\mathbb{R}^{n}}$ with a lower bound and an upper bound with respect to inclusion. We then propose basic operations on i-sets, which make it possible to build contractors that are used by the propagation to solve problems involving sets as unknown variables. To illustrate the principle and the efficiency of the approach, a test case is provided.
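A minimal sketch of the i-set idea, assuming subsets of ${\mathbb{R}^{n}}$ have been discretized into a finite set of grid cells so that both bounds can be held as frozensets; the class and method names are illustrative, not Jaulin's actual API:

```python
# i-set sketch over a finite universe of cells (assumed discretization of R^n).
class ISet:
    """An i-set [lb, ub]: all sets X with lb <= X <= ub in the inclusion order."""
    def __init__(self, lb, ub):
        lb, ub = frozenset(lb), frozenset(ub)
        assert lb <= ub, "empty i-set: lower bound must be included in upper bound"
        self.lb, self.ub = lb, ub

    def __and__(self, other):   # smallest i-set enclosing {X & Y : X in self, Y in other}
        return ISet(self.lb & other.lb, self.ub & other.ub)

    def __or__(self, other):    # smallest i-set enclosing {X | Y : X in self, Y in other}
        return ISet(self.lb | other.lb, self.ub | other.ub)

    def contract_subset(self, other):
        """Contractor for the constraint X <= Y (X in self, Y in other)."""
        x = ISet(self.lb, self.ub & other.ub)   # X cannot use cells outside ub(Y)
        y = ISet(other.lb | self.lb, other.ub)  # Y must contain every sure cell of X
        return x, y

# Example on a 1-D grid of cells 0..9:
X = ISet({2, 3}, {1, 2, 3, 4, 5})
Y = ISet({3}, {2, 3, 4})
Xc, Yc = X.contract_subset(Y)
print(sorted(Xc.lb), sorted(Xc.ub))  # [2, 3] [2, 3, 4]
print(sorted(Yc.lb), sorted(Yc.ub))  # [2, 3] [2, 3, 4]
```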

3.
The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of n variables and of n − q + 1 AtMost constraints of capacity u, one on each subsequence of size q. This constraint is useful in car-sequencing and crew-rostering problems. In van Hoeve et al. (Constraints 14(2):273–292, 2009), two algorithms designed for the AmongSeq constraint were adapted to this constraint with O(2^q n) and O(n^3) worst-case time complexity, respectively. In Maher et al. (2008), another algorithm similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of O(n^2) was proposed. In this paper, we introduce an algorithm for achieving arc consistency on the AtMostSeqCard constraint with an O(n) (hence optimal) worst-case time complexity. Next, we show that this algorithm can be easily modified to achieve arc consistency on some extensions of this constraint. In particular, the conjunction of a set of m AtMostSeqCard constraints sharing the same scope can be filtered in O(nm). We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.
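For concreteness, a small checker stating the constraint's definition directly; the paper's contribution is the O(n) filtering algorithm, which this sketch does not implement:

```python
# Checker sketch for AtMostSeqCard(x, u, q, d): sum(x) == d, and every
# window of q consecutive variables contains at most u ones.
def at_most_seq_card(x, u, q, d):
    if sum(x) != d:
        return False
    return all(sum(x[i:i + q]) <= u for i in range(len(x) - q + 1))

# Car-sequencing flavour: 10 slots, exactly 4 cars with the option,
# at most 2 cars with the option in any window of 5.
print(at_most_seq_card([1, 0, 1, 0, 0, 1, 0, 0, 1, 0], u=2, q=5, d=4))  # True
print(at_most_seq_card([1, 1, 1, 0, 0, 0, 0, 0, 1, 0], u=2, q=5, d=4))  # False
```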

4.
Rare-category detection helps discover new rare classes in an unlabeled data set by selecting their candidate data examples for labeling. Most existing approaches to rare-category detection require prior information about the data set, without which they are not applicable. Prior-free algorithms address this problem without prior information about the data set; the price, however, is a high time complexity, no lower than $O(dN^2)$, where $N$ is the number of data examples in the data set and $d$ is its dimension. In this paper, we propose CLOVER, a prior-free algorithm built on a novel rare-category criterion called local variation degree (LVD), which exploits the characteristics of rare classes to distinguish rare-class data examples from other types of data examples; the data examples with maximum LVD values are then selected for labeling. A remarkable improvement is that CLOVER's time complexity is $O(dN^{2-1/d})$ for $d > 1$ or $O(N\log N)$ for $d = 1$. Extensive experimental results on real data sets demonstrate the effectiveness and efficiency of our method in terms of discovering new rare classes and lower time complexity.

5.
The UML Profile for Modeling and Analysis of Real-Time and Embedded (RTE) systems (MARTE) has recently been adopted by the OMG. Its Time Model extends the informal and simplistic Simple Time package proposed by UML2 and offers a broad range of capabilities required to model RTE systems, including discrete/dense and chronometric/logical time. The MARTE specification introduces a Time Structure inspired by several time models from concurrency theory and proposes a new Clock Constraint Specification Language (CCSL) to specify, within the context of UML, logical and chronometric time constraints. A semantic model in CCSL is attached to a UML model to give it a timed causality semantics. In that sense, CCSL is comparable to the Ptolemy environment, in which directors give semantics to models according to predefined models of computation and communication. This paper focuses on one historical model of computation of Ptolemy, Synchronous Data Flow (SDF), and shows how to build SDF graphs by combining UML models and CCSL.

6.
Recently, a large amount of work has been devoted to the study of spectral clustering, a simple yet powerful method for finding structure in a data set using spectral properties of an associated pairwise similarity matrix. Most existing spectral clustering algorithms estimate a single cluster number, or estimate non-unique cluster numbers based on the eigengap criterion. However, a data set does not always have a unique number of clusters, and the eigengap criterion lacks theoretical justification. In this paper, we propose methods for non-unique cluster number determination based on stability in spectral clustering (NCNDBS). We first apply the multiway normalized cut spectral clustering algorithm to the data set for a candidate cluster number $k$. Then the ratio of the multiway normalized cut criterion of the obtained clusters to the sum of the leading eigenvalues (in descending order) of the stochastic transition matrix is used as a standard to decide whether $k$ is a reasonable cluster number. Finally, by varying the scaling parameter in the Gaussian function, we judge whether a reasonable cluster number $k$ is also a stable one. Through these three stages, we can determine non-unique cluster numbers of a data set. The lumpability theorem of Meilă and Xu provides a theoretical basis for our methods. Illustrative experiments show that NCNDBS successfully estimates non-unique cluster numbers of a data set.
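As a point of reference, a sketch of plain multiway spectral clustering for one candidate $k$, using the random-walk (stochastic transition matrix) view mentioned above; the Gaussian scaling parameter sigma is the knob the stability stage varies. This is illustrative scaffolding, not NCNDBS itself:

```python
# Spectral clustering sketch: Gaussian affinity -> stochastic transition
# matrix -> leading eigenvectors -> k-means.
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, k, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(1, keepdims=True)                       # stochastic transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)                       # leading eigenvalues first
    U = evecs[:, order[:k]].real                          # top-k eigenvectors
    _, labels = kmeans2(U, k, minit='++', seed=1)
    return labels, evals.real[order[:k]]                  # labels + leading eigenvalues

# Two well-separated blobs -> k = 2 should come out stable across sigma.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(4, .3, (30, 2))])
labels, lead = spectral_clustering(X, k=2, sigma=1.0)
print(labels[:5], labels[-5:], lead)   # one label per blob; eigenvalues near 1
```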

7.
In this paper we consider the integration of SMT solvers with filtering algorithms for the finite-domain alldifferent constraint. Such integration makes SMT solvers suitable for solving constraint satisfaction problems that involve the alldifferent constraint. First, we present a novel algorithm for explaining inconsistencies and propagations in the alldifferent constraint. We compare it to Katsirelos' algorithm and to the flow-based algorithms that are commonly used for that purpose. Then we describe our DPLL(T)-compliant SMT theory solver for constraint satisfaction problems that include alldifferent constraints. We also provide an experimental evaluation of our approach.
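As background, the satisfiability core behind flow-based alldifferent filtering is a bipartite matching test (Hall's theorem): the constraint is consistent iff a variable-to-value matching covers every variable. The following pure-Python sketch is illustrative only and is not the paper's explanation algorithm:

```python
# Feasibility check for alldifferent via augmenting-path bipartite matching.
def alldifferent_satisfiable(domains):
    match = {}                      # value -> variable currently matched to it

    def augment(var, seen):
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            if val not in match or augment(match[val], seen):
                match[val] = var
                return True
        return False

    return all(augment(v, set()) for v in range(len(domains)))

print(alldifferent_satisfiable([{1, 2}, {1, 2}, {1, 3}]))  # True
print(alldifferent_satisfiable([{1, 2}, {1, 2}, {1, 2}]))  # False (Hall violation)
```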

8.
We present a data structure for maintaining the geodesic hull of a set of points (sites) in the presence of pairwise noncrossing line segments (barriers) that subdivide a bounding box into simply connected faces. For m barriers and n sites, our data structure has O((m+n)log n) size. It supports a mixed sequence of O(m) barrier insertions and O(n) site deletions in $O((m+n)\operatorname{polylog}(mn))$ total time, and answers analogues of standard convex hull queries in $O(\operatorname{polylog}(mn))$ time. Our data structure supports a generalization of the sweep line technique, in which the sweep wavefront is a simple closed polygonal curve, and it sweeps a set of n points in the plane by simple moves. We reduce the total time of supporting m online moves of a polygonal wavefront sweep algorithm from the naïve $O(m\sqrt{n}\operatorname{polylog} n)$ to $O((m+n)\operatorname{polylog}(mn))$.

9.
10.
In this paper we propose mathematical optimizations to select the optimal regularization parameter for ridge regression using cross-validation. The resulting algorithm is suited for large datasets, and its computational cost does not depend on the size of the training set. We extend this algorithm to forward and backward feature selection, in which the optimal regularization parameter is selected for each candidate feature set. These feature selection algorithms yield solutions with a sparse weight matrix using a quadratic cost on the norm of the weights. A naive approach to optimizing the ridge regression parameter has a computational complexity of order $O(RKN^{2}M)$, with $R$ the number of applied regularization parameters, $K$ the number of folds in the validation set, $N$ the number of input features, and $M$ the number of data samples in the training set. Our implementation has a computational complexity of order $O(KN^3)$. This cost is smaller than that of regression without regularization, $O(N^2M)$, for large datasets, and is independent of the number of applied regularization parameters and the size of the training set. Combined with a feature selection algorithm, the algorithm has complexity $O(RKNN_s^3)$ and $O(RKN^3N_r)$ for forward and backward feature selection, respectively, with $N_s$ the number of selected features and $N_r$ the number of removed features. This is a factor $M$ faster than the $O(RKNN_s^3M)$ and $O(RKN^3N_rM)$ of the naive implementation, with $N \ll M$ for large datasets. To show the performance and the reduction in computational cost, we apply this technique to train recurrent neural networks using the reservoir computing approach, windowed ridge regression, least-squares support vector machines (LS-SVMs) in primal space using the fixed-size LS-SVM approximation, and extreme learning machines.
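The flavor of the speed-up shows in a simplified single-split sketch: the Gram matrix and its eigendecomposition are computed once, after which each candidate regularization parameter costs only an O(N) solve in the eigenbasis. The function below is a hypothetical simplification of the paper's K-fold scheme, not its implementation:

```python
# Ridge parameter sweep with M-independent per-parameter cost.
import numpy as np

def ridge_sweep(Xtr, ytr, Xval, yval, lambdas):
    A = Xtr.T @ Xtr                     # N x N Gram matrix, once: O(N^2 M)
    b = Xtr.T @ ytr                     # once: O(N M)
    evals, V = np.linalg.eigh(A)        # once: O(N^3)
    bt = V.T @ b
    XvV = Xval @ V                      # once, for fast validation error
    errs = []
    for lam in lambdas:                 # per parameter: O(N) solve, no M term
        wt = bt / (evals + lam)         # (A + lam*I)^-1 b in the eigenbasis
        errs.append(np.mean((XvV @ wt - yval) ** 2))
    return lambdas[int(np.argmin(errs))], errs

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=500)
best, _ = ridge_sweep(X[:400], y[:400], X[400:], y[400:], np.logspace(-4, 2, 25))
print(best)                             # a small lambda wins on this easy problem
```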

11.
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing $\ell_{p,q}$ mixed norms (specifically $p\in\{2,\infty\}$ and $q=1$), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular $L_1$ tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation (denoted the $L_{11}$ tracker) with $p=q=1$. Under the MTT framework, some tasks (particle representations) are often more closely related and more likely to share common relevant covariates than others. We therefore extend the MTT framework to take into account pairwise structural correlations between particles (e.g., spatial smoothness of representation) and denote the novel framework S-MTT. The regularized sparse representation problem in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates, making S-MTT and MTT computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.
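The closed-form APG updates rest on proximal operators of the mixed norms; for $p=2$, $q=1$ this is row-wise group soft-thresholding, which is what enforces joint sparsity across particles. A minimal sketch of that building block (illustrative, not the authors' code):

```python
# Proximal operator of the l_{2,1} mixed norm: group soft-thresholding.
import numpy as np

def prox_l21(W, t):
    """argmin_Z 0.5*||Z - W||_F^2 + t * sum_i ||Z[i, :]||_2"""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],     # row norm 5 -> shrunk by (1 - 1/5)
              [0.1, 0.1]])    # row norm ~0.14 < t -> zeroed (joint sparsity)
print(prox_l21(W, t=1.0))     # [[2.4, 3.2], [0.0, 0.0]]
```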

12.
We introduce a knowledge representation language ${\cal AC(C)}$ extending the syntax and semantics of ASP and CR-Prolog, give some examples of its use, and present an algorithm, $\mathcal{AC}\!solver$, for computing answer sets of ${\cal AC(C)}$ programs. The algorithm does not require full grounding of a program and combines "classical" ASP solving methods with constraint logic programming techniques and CR-Prolog based abduction. The ${\cal AC(C)}$-based approach often makes it possible to solve problems that are impossible to solve with more traditional ASP solving techniques. We believe that further investigation of the language, and the development of more efficient and reliable solvers for its programs, can help substantially expand the domain of applicability of the answer set programming paradigm.

13.
We present new filtering algorithms for the Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Checking for the Disjunctive and Cumulative constraints. Finally, we show how the Not-first/Not-last rule can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, from which we derive a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
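One classical way to get near-constant-time behavior from union-find in scheduling is to union each filled time point with its successor, so that a find returns the next free slot. The sketch below illustrates that idea; it is not the paper's exact time line structure:

```python
# Union-find "next free time point" sketch with path compression.
class TimeLine:
    def __init__(self, horizon):
        self.next_free = list(range(horizon + 2))  # next_free[t] >= t is free

    def find(self, t):                             # earliest free slot >= t
        root = t
        while self.next_free[root] != root:
            root = self.next_free[root]
        while self.next_free[t] != root:           # path compression
            self.next_free[t], t = root, self.next_free[t]
        return root

    def schedule(self, release):
        """Occupy the earliest free unit slot at or after `release`."""
        t = self.find(release)
        self.next_free[t] = t + 1                  # slot t is now filled
        return t

tl = TimeLine(horizon=10)
print([tl.schedule(r) for r in [0, 0, 0, 2, 2]])   # [0, 1, 2, 3, 4]
```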

14.
The Voronoi diagram is an important technique for answering nearest-neighbor queries in spatial databases. We study how the Voronoi diagram can be used for uncertain spatial data, which are inherent in scientific and business applications. Specifically, we propose the Uncertain-Voronoi diagram (UV-diagram), which divides the data space into disjoint "UV-partitions". Each UV-partition $P$ is associated with a set $S$ of objects, such that any point $q$ located in $P$ has the objects of $S$ as its nearest neighbor with nonzero probabilities. The UV-diagram enables queries that return objects with nonzero chances of being the nearest neighbor (NN) of a given point $q$. It supports "continuous nearest-neighbor search", which refreshes the set of NN objects of $q$ as the position of $q$ changes. It also allows the analysis of nearest-neighbor information, for example, to find the number of objects that are the nearest neighbors of any point in a given area. A UV-diagram requires exponential construction and storage costs. To tackle these problems, we devise an alternative representation of the UV-diagram using a set of UV-cells. The UV-cell of an object $o$ is the extent $e$ within which $o$ can be the nearest neighbor of any point $q \in e$. We study how to speed up the derivation of UV-cells by considering nearby objects. We also use the UV-cells to design the UV-index, which supports different queries and can be constructed in polynomial time. We have performed extensive experiments on both real and synthetic data to validate the efficiency of our approaches.

15.
We consider the following list scheduling problem. We are given a set \(S\) of jobs which are to be scheduled sequentially on a single processor. Each job has an associated processing time which is required for its processing. Given a particular permutation of the jobs in \(S\), the jobs are processed in that order with each job started as soon as possible, subject only to the following constraint: for a fixed integer \(B \ge 2\), no unit time interval \([x, x+1)\) is allowed to intersect more than \(B\) jobs for any real \(x\). It is not surprising that this problem is NP-hard when the value \(B\) is variable (which is typical of many scheduling problems). There are several real-world situations for which this restriction is natural. For example, suppose that in addition to our jobs being executed sequentially on a single main processor, each job also requires the use of one of \(B\) identical subprocessors during its execution. Each time a job is completed, the subprocessor it was using requires one unit of time to reset itself. In this way, it is never possible for more than \(B\) jobs to be worked on during any unit interval. In this paper we carry out a classical worst-case analysis for this situation. In particular, we show that any permutation of the jobs can be processed within a factor of \(2-1/(B-1)\) of the optimum (plus an additional small constant) when \(B \ge 3\), and that this factor is best possible. For the case \(B=2\), the situation is rather different, and the corresponding factor we establish is \(4/3\) (plus an additional small constant), which is also best possible. It is fairly rare that best possible bounds can be obtained for the competitive ratios of list scheduling problems of this general type.
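A greedy simulation of this process, sketched in the subprocessor view described above (jobs run back-to-back on the main processor; each also holds one of B subprocessors for its run plus one unit of reset time):

```python
# List scheduling sketch: each job starts when both the main processor
# and some subprocessor are free; a subprocessor resets for 1 unit after use.
import heapq

def list_schedule(proc_times, B):
    sub_free = [0.0] * B          # earliest time each subprocessor is free
    heapq.heapify(sub_free)
    t, starts = 0.0, []
    for p in proc_times:
        s = max(t, heapq.heappop(sub_free))   # wait for main + a subprocessor
        starts.append(s)
        t = s + p                             # main processor busy until s + p
        heapq.heappush(sub_free, s + p + 1)   # subprocessor resets for 1 unit
    return starts, t                          # start times, makespan

starts, makespan = list_schedule([0.2, 0.2, 0.2, 0.2], B=2)
print(starts, makespan)   # the third job must wait for a subprocessor reset
```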

16.
One very fertile domain of applied Artificial Intelligence is constraint solving technologies, especially constraint networks, which concern problems that can be represented using discrete variables together with constraints on the allowed instantiation values of those variables. Every solution to a constraint network must satisfy every constraint. When no solution exists, the user might want to know the actual reasons for the absence of a global solution. In this respect, extracting MUCs (Minimal Unsatisfiable Cores) from an unsatisfiable constraint network is a useful process when the causes of unsatisfiability must be understood so that the network can be re-engineered and relaxed to become satisfiable. Despite bad worst-case computational complexity results, various MUC-finding approaches that appear tractable for many real-life instances have been proposed. Many of them are based on the successive identification of so-called transition constraints. In this respect, we show how local search can be used to extract additional transition constraints at each main iteration step. In the general constraint network setting, the approach is shown to outperform a technique based on a form of model rotation imported from SAT technology that also exhibits additional transition constraints. Our extensive computational experiments show that this enhancement also boosts the performance of state-of-the-art DC(WCORE)-like MUC extractors.
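For orientation, the deletion-based extraction loop that transition-constraint methods refine can be sketched as follows: drop one constraint at a time; if the rest stays unsatisfiable the constraint is unnecessary, otherwise it is a transition constraint and belongs to the MUC. Here `is_satisfiable` is a hypothetical solver callback, not an interface from the paper:

```python
# Deletion-based MUC extraction sketch over an UNSAT constraint network.
def extract_muc(constraints, is_satisfiable):
    assert not is_satisfiable(constraints)
    muc = list(constraints)
    for c in list(constraints):
        rest = [d for d in muc if d is not c]
        if is_satisfiable(rest):
            continue          # c is a transition constraint: keep it
        muc = rest            # still UNSAT without c: discard it
    return muc

# Toy network over x, y in {0, 1}: the first two constraints clash.
cons = [lambda a: a["x"] == a["y"], lambda a: a["x"] != a["y"], lambda a: a["x"] <= 1]
sat = lambda cs: any(all(c({"x": x, "y": y}) for c in cs)
                     for x in (0, 1) for y in (0, 1))
print(len(extract_muc(cons, sat)))   # 2: the two clashing constraints
```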

17.
A non-Hermitian quantum optimization algorithm is created and used to find the ground state of an antiferromagnetic Ising chain. We demonstrate analytically and numerically (for up to $N=1,024$ spins) that our approach leads to a significant reduction in the annealing time that is proportional to $\ln N$, which is much less than the time (proportional to $N^2$) required for the quantum annealing based on the corresponding Hermitian algorithm. We propose to use this approach to achieve similar speed-up for NP-complete problems by using classical computers in combination with quantum algorithms.

18.
19.
We investigate data-parallel techniques for belief propagation in acyclic factor graphs on multi-core systems. Belief propagation is a key inference algorithm for factor graphs, a probabilistic graphical model that has found applications in many domains. In this paper, we explore data parallelism for the basic operations on potential tables in belief propagation. Data-parallel techniques for these table operations are developed for shared-memory platforms. We then propose a complete belief propagation algorithm using these table operations to perform exact inference in factor graphs. The proposed algorithms are implemented on state-of-the-art multi-socket multi-core systems with additional NUMA-aware optimizations. Our proposed algorithms exhibit good scalability on a representative set of factor graphs. On a four-socket Intel Westmere-EX system with 40 cores, we achieve a 39.5$\times$ speedup for the table operations and a 39$\times$ speedup for the complete algorithm using factor graphs with large potential tables.
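The two table operations in question, product and marginalization, can be written compactly with NumPy's einsum sublist form (variables as integer ids); the paper's contribution is executing these elementwise loops data-parallel on multi-core NUMA systems, which this sequential sketch does not show:

```python
# Potential-table operations for belief propagation, sequential sketch.
import numpy as np

def table_product(fa, vars_a, fb, vars_b):
    out = sorted(set(vars_a) | set(vars_b))
    return np.einsum(fa, vars_a, fb, vars_b, out), out

def table_marginal(f, vars_f, keep):
    return np.einsum(f, vars_f, keep), keep     # sums out the other variables

# Two binary factors sharing variable 1:
fa = np.array([[0.9, 0.1], [0.2, 0.8]])        # table over vars [0, 1]
fb = np.array([[0.5, 0.5], [0.3, 0.7]])        # table over vars [1, 2]
prod, pv = table_product(fa, [0, 1], fb, [1, 2])
marg, mv = table_marginal(prod, pv, [0])       # sum out vars 1 and 2
print(pv, prod.shape, marg)                     # [0, 1, 2] (2, 2, 2) [1. 1.]
```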

20.
Replication is a standard technique for fault tolerance in distributed systems modeled as deterministic finite state machines (DFSMs or machines). To correct \(f\) crash or \(\lfloor f/2 \rfloor\) Byzantine faults among \(n\) different machines, replication requires \(nf\) backup machines. We present a solution called fusion that requires just \(f\) backup machines. First, we build a framework for fault tolerance in DFSMs based on the notion of Hamming distances. We introduce the concept of an \((f, m)\)-fusion, which is a set of \(m\) backup machines that can correct \(f\) crash faults or \(\lfloor f/2 \rfloor\) Byzantine faults among a given set of machines. Second, we present an algorithm to generate an \((f, f)\)-fusion for a given set of machines. We ensure that our backups are efficient in terms of the size of their state and event sets. Third, we use locality-sensitive hashing for the detection and correction of faults, which incurs almost the same overhead as replication. We detect Byzantine faults with time complexity \(O(nf)\) on average, while we correct crash and Byzantine faults with time complexity \(O(n\rho f)\) with high probability, where \(\rho\) is the average state reduction achieved by fusion. Finally, our evaluation of fusion on the widely used MCNC'91 benchmarks for DFSMs shows that the average state-space savings of fusion over replication is 38% (range 0–99%). To demonstrate the practical use of fusion, we describe its potential application to two areas: sensor networks and the MapReduce framework. In the case of sensor networks, a fusion-based solution can lead to significantly fewer sensor nodes than a replication-based solution. For the MapReduce framework, fusion can reduce the number of map tasks compared to replication. Hence, fusion results in considerable savings in state space and other resources, such as the power needed to run the backups.

