Similar Literature
20 similar documents retrieved.
1.
Yongge Tian, Calcolo, 2010, 47(4):193–209
This paper considers decompositions of solutions of the linear matrix equation AXB = C into sums of solutions of two other linear matrix equations, A_1X_1B_1 = C_1 and A_2X_2B_2 = C_2. Applications are also given to additive decompositions of generalized inverses, as well as to decompositions of solutions of matrix equations into sums of solutions of smaller equations.
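As a quick numerical illustration of the equation AXB = C (not of the paper's decomposition results), the following numpy sketch builds a consistent right-hand side and recovers a particular solution via Moore-Penrose pseudoinverses; all matrix sizes are arbitrary choices.

```python
import numpy as np

# Minimal sketch (not from the paper): a particular solution of A X B = C
# via Moore-Penrose pseudoinverses, valid when the equation is consistent,
# i.e. when A A^+ C B^+ B = C.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X_true = rng.standard_normal((3, 5))
C = A @ X_true @ B                       # consistent right-hand side by construction

A_pinv, B_pinv = np.linalg.pinv(A), np.linalg.pinv(B)
X = A_pinv @ C @ B_pinv                  # particular solution
assert np.allclose(A @ X @ B, C)         # consistency check
```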

2.
We propose a non-iterative solution to the PnP problem—the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences—whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and properly handles both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as a weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with a negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real data.
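A minimal sketch of the first step described above—writing the 3D points as weighted sums of four virtual control points—assuming, as one common choice that is not necessarily the authors', that the control points are the centroid plus the principal directions of the data; the helper names are illustrative.

```python
import numpy as np

def control_points(pts):
    """Four virtual control points: centroid plus principal directions (one common choice)."""
    c = pts.mean(axis=0)
    X = pts - c
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.vstack([c, c + (s[:, None] / np.sqrt(len(pts))) * Vt])   # shape (4, 3)

def barycentric_weights(pts, ctrl):
    """Weights alpha with pts = alpha @ ctrl and alpha.sum(axis=1) == 1."""
    M = np.hstack([ctrl, np.ones((4, 1))]).T            # 4x4 system per point
    rhs = np.hstack([pts, np.ones((len(pts), 1))]).T
    return np.linalg.solve(M, rhs).T

pts = np.random.default_rng(1).standard_normal((10, 3))
ctrl = control_points(pts)
alpha = barycentric_weights(pts, ctrl)
assert np.allclose(alpha @ ctrl, pts)
```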

3.
Consider an n-vertex planar graph G. The depth of an embedding Γ of G is the maximum distance of its internal faces from the external one. Several researchers have pointed out that the quality of a planar embedding can be measured in terms of its depth. We present an O(n^4)-time algorithm for computing an embedding of G with minimum depth. This bound improves on the best previous bound by an O(n log n) factor. As a side effect, our algorithm improves the bounds of several algorithms that require the computation of a minimum-depth embedding.
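A sketch of the depth measure itself (not the paper's O(n^4) minimization algorithm): given the faces of an embedding as edge sets and the external face, the depth is the eccentricity of the external face in the face-adjacency graph; the toy face structure below is hypothetical.

```python
from collections import deque

def embedding_depth(faces, external):
    """Depth of a given planar embedding: max BFS distance of an internal face
    from the external face, where two faces are adjacent if they share an edge.
    `faces` maps a face id to its set of (frozenset) edges."""
    dist = {external: 0}
    queue = deque([external])
    while queue:
        f = queue.popleft()
        for g, edges in faces.items():
            if g not in dist and faces[f] & edges:
                dist[g] = dist[f] + 1
                queue.append(g)
    return max(dist.values())

# Toy example (hypothetical face structure): a triangle nested inside a square.
e = lambda u, v: frozenset((u, v))
faces = {
    "outer": {e(1, 2), e(2, 3), e(3, 4), e(4, 1)},
    "ring":  {e(1, 2), e(2, 3), e(3, 4), e(4, 1), e(5, 6), e(6, 7), e(7, 5)},
    "inner": {e(5, 6), e(6, 7), e(7, 5)},
}
print(embedding_depth(faces, "outer"))   # 2: the inner triangle is two faces deep
```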

4.
In this paper we present a robust polynomial classifier based on L_1-norm minimization. We do so by reformulating the classifier training process as a linear programming problem. Due to the inherent insensitivity of the L_1-norm to influential observations, class models obtained via L_1-norm minimization are much more robust than their counterparts obtained by classical least-squares (L_2-norm) minimization. For validation purposes, we apply this method to two recognition problems: character recognition and sign language recognition. Both are examined under different signal-to-noise ratio (SNR) values of the test data. Results show that L_1-norm minimization provides superior recognition rates over L_2-norm minimization when the training data contains influential observations, especially if the test dataset is noisy.
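A sketch of the reformulation the abstract refers to, in generic form: an L_1 (least-absolute-deviations) fit posed as a linear program with slack variables and solved with scipy.optimize.linprog. This is not the authors' classifier code; the quadratic feature map and the data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, y):
    """Least-absolute-deviations fit: minimize ||A w - y||_1 via an LP with
    slack variables t >= |A w - y| (the reformulation the abstract refers to)."""
    n, p = A.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])            # minimize sum of slacks
    A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])      # A w - t <= y,  -A w - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Degree-2 polynomial features with one gross outlier in y.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
A = np.vander(x, 3)                       # columns [x^2, x, 1]
y = 2 * x**2 - x + 0.5 + 0.01 * rng.standard_normal(40)
y[0] += 10.0                              # influential observation
print(l1_fit(A, y))                       # close to [2, -1, 0.5] despite the outlier
```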

5.
For the interval system of equations defined by [x] = [A][x] + [b] with ρ(|[A]|) ≤ 1, we derive a necessary and sufficient criterion for the existence and uniqueness of solutions [x]. Generalizing former results, we allow the absolute value |[A]| of [A] to be reducible.
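For intuition only, a point-matrix analogue of the fixed-point setting (no interval arithmetic is used here, and this is the classical sufficient condition rather than the paper's criterion): when the spectral radius of |A| is below one, the iteration x ← Ax + b converges to the unique solution of x = Ax + b.

```python
import numpy as np

# Point-matrix analogue (not the paper's interval criterion): if the spectral
# radius of |A| is below 1, the iteration x <- A x + b contracts to the unique
# fixed point of x = A x + b; the paper studies the interval version [x]=[A][x]+[b].
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A /= 2 * np.abs(A).sum(axis=1).max()      # forces every row sum of |A| below 1/2
b = rng.standard_normal(4)

rho = max(abs(np.linalg.eigvals(np.abs(A))))
print("spectral radius of |A|:", rho)     # < 1 here by construction

x = np.zeros(4)
for _ in range(200):
    x = A @ x + b
assert np.allclose(x, np.linalg.solve(np.eye(4) - A, b))
```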

6.
Let F = C_1 ∧ ⋯ ∧ C_m be a Boolean formula in conjunctive normal form over a set V of n propositional variables, such that each clause C_i contains at most three literals over V. Solving the exact 3-satisfiability problem (X3SAT) for F means deciding whether there is a truth assignment setting exactly one literal in each clause of F to true (1). As is well known, X3SAT is NP-complete [6]. By exploiting a perfect matching reduction we prove that X3SAT is deterministically decidable in time O(2^(0.18674n)). Thereby we improve a result in [2,3] stating that X3SAT is decidable in time O(2^(0.2072n)), and a bound of O(2^(0.200002n)) for the corresponding enumeration problem #X3SAT stated in a preprint [1]. Thereafter, by a more involved deterministic case analysis, we are able to show that X3SAT is decidable in time O(2^(0.16254n)).
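For reference, a brute-force X3SAT decision procedure—an O(2^n) enumeration that has nothing to do with the paper's refined case analysis, included only to pin down the "exactly one true literal per clause" condition; the clause encoding is an assumption of this sketch.

```python
from itertools import product

def x3sat_brute_force(n_vars, clauses):
    """Exhaustive X3SAT check (O(2^n), purely for illustration -- not the paper's
    O(2^(0.16254 n)) algorithm).  A clause is a tuple of literals, where literal
    +i / -i means variable i set true / false; exactly one literal per clause
    must be satisfied."""
    for assign in product([False, True], repeat=n_vars):
        val = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        if all(sum(val(l) for l in clause) == 1 for clause in clauses):
            return assign
    return None

# (x1 v x2 v x3) and (~x1 v x2 v x4), each with exactly one true literal.
print(x3sat_brute_force(4, [(1, 2, 3), (-1, 2, 4)]))
```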

7.
Every rectilinear Steiner tree problem admits an optimal tree T* which is composed of tree stars. Moreover, the currently fastest algorithms for the rectilinear Steiner tree problem proceed by composing an optimum tree T* from tree star components in the cheapest way. The efficiency of such algorithms depends heavily on the number of tree stars (candidate components). Fößmeier and Kaufmann (Algorithmica 26, 68–99, 2000) showed that any problem instance with k terminals has, in the worst case, a number of tree stars between 1.32^k and 1.38^k (modulo polynomial factors). We determine the exact bound O*(ρ^k), where ρ ≈ 1.357, and mention some consequences of this result.

8.
In most auction systems the values of the bids are known to the auctioneer. This allows him to manipulate the outcome of the auction. Hence, one might be interested in hiding these values. Some cryptographically secure protocols for electronic auctions have been presented in the last decade. Our work extends these protocols in several ways. On the basis of garbled (i.e., encrypted) circuits, we present protocols for sealed-bid auctions that fulfill the following requirements: (1) the protocols are information-theoretically t-private for honest-but-curious parties; (2) the number of bits that can be learned by malicious adversaries is bounded by the output length of the auction; (3) the computational requirements for the participating parties are very low: only random bit choices and bitwise computation of the XOR function are necessary. Note that one can distinguish between the protocol that generates a garbled circuit for an auction and the protocol that evaluates the auction; in this paper we address both problems. We present a t-private protocol for the construction of a garbled circuit that reaches the lower bound of 2t + 1 parties, and a more randomness-efficient protocol for (t + 1)^2 parties. Finally, we address the problem of bid changes in an auction.
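A sketch of the "only random bits and XOR" primitive underlying such protocols: additive (XOR) secret sharing of a single bid bit among n parties. This is a generic building block, not the garbled-circuit auction protocol of the paper.

```python
import secrets
from functools import reduce
from operator import xor

def share_bit(bit, n_parties):
    """XOR-share a single bit among n parties: n-1 random bits plus one
    correction share; any n-1 shares together reveal nothing about the bit."""
    shares = [secrets.randbits(1) for _ in range(n_parties - 1)]
    shares.append(reduce(xor, shares, bit))
    return shares

def reconstruct(shares):
    """Recover the secret bit by XOR-ing all shares."""
    return reduce(xor, shares)

shares = share_bit(1, 5)
assert reconstruct(shares) == 1
```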

9.
This paper describes the development steps and core ideas used by the USP Farmers herding team, which participated in the 2010 edition of the Multi-Agent Programming Contest (MAPC 2010). This is the third year in which the competitors must design a team of herding agents whose global goal is to lead a maximum number of cows to their own corral. As this is a very complex task that requires coordination of the team, we developed the individual agents using the Jason interpreter (Bordini et al. 2007) for AgentSpeak(L) (Rao 1996). The coordination strategy was defined using the MOISE+ organizational model (Hübner et al. 2002, 2007). We also used the notion of artifact (Ricci et al. 2007) to develop global services available to all the agents. Moreover, it became clear that for this contest some purely procedural processing should be developed at a lower abstraction level (Hübner et al. 2008); therefore some calculations and pre-defined global decisions were implemented as Java classes.

10.
In the connected dominating set problem we are given an n-node undirected graph, and we are asked to find a minimum-cardinality connected subset S of nodes such that each node not in S is adjacent to some node in S. This problem is also equivalent to finding a spanning tree with a maximum number of leaves. Despite its relevance in applications, the best known exact algorithm for the problem is the trivial Ω(2^n) algorithm that enumerates all subsets of nodes. This is not the case for the general (unconnected) version of the problem, for which much faster algorithms are available. Such a difference is not surprising, since connectivity is a global property, and non-local problems are typically much harder to solve exactly. In this paper we break the 2^n barrier by presenting a simple O(1.9407^n) algorithm for the connected dominating set problem. The algorithm makes use of new domination rules, and its analysis is based on the Measure and Conquer technique. An extended abstract of this paper appeared in the proceedings of FSTTCS’06. Fedor V. Fomin was additionally supported by the Research Council of Norway.
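For contrast with the O(1.9407^n) result, here is the trivial enumeration baseline the abstract mentions: a brute-force minimum connected dominating set search over all vertex subsets, usable only on tiny graphs; the adjacency-dictionary input format is an assumption of this sketch.

```python
from itertools import combinations

def min_connected_dominating_set(adj):
    """Trivial Omega(2^n) search over vertex subsets (the baseline the abstract
    mentions, not its O(1.9407^n) algorithm).  `adj` maps vertices to neighbour sets."""
    vertices = list(adj)

    def connected(S):
        S = set(S)
        start = next(iter(S))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in adj[v] & S - seen:
                seen.add(w)
                stack.append(w)
        return seen == S

    def dominating(S):
        return all(v in S or adj[v] & S for v in vertices)

    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            if dominating(set(S)) and connected(S):
                return set(S)

# Path 1-2-3-4-5: the optimum is the set of interior vertices.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(min_connected_dominating_set(adj))   # {2, 3, 4}
```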

11.
Given a graph with a source and a sink node, the NP-hard maximum k-splittable s,t-flow (MkSF) problem is to find a flow of maximum value from s to t with a flow decomposition using at most k paths. The multicommodity variant of this problem is a natural generalization of disjoint-paths and unsplittable-flow problems. Constructing a k-splittable flow requires two interdependent decisions: one has to decide on k paths (routing) and on the flow values for the paths (packing). We give efficient algorithms for computing exact and approximate solutions by decoupling the two decisions into a first packing step and a second routing step. Usually the routing is considered before the packing. Our main contributions are as follows: (i) We show that for constant k a polynomial number of packing alternatives containing at least one packing used by an optimal MkSF solution can be constructed in polynomial time. If k is part of the input, we obtain a slightly weaker result: in this case we can guarantee that, for any fixed ε>0, the computed set of alternatives contains a packing used by a (1−ε)-approximate solution. The latter result is based on the observation that (1−ε)-approximate flows only require constantly many different flow values; we believe that this observation is of interest in its own right. (ii) Based on (i), we prove that, for constant k, the MkSF problem can be solved in polynomial time on graphs of bounded treewidth. If k is part of the input, this problem is still NP-hard, and we present a polynomial-time approximation scheme for it.

12.
Pursuing our work in Tone (Asymptot. Anal. 51:231–245, 2007) and Tone and Wirosoetisno (SIAM J. Numer. Anal. 44:29–40, 2006), we consider in this article the two-dimensional magnetohydrodynamics equations. We discretize these equations in time using the implicit Euler scheme and, with the aid of the classical and uniform discrete Gronwall lemma, we prove that the scheme is H^2-uniformly stable in time.
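A generic illustration of the time discretization named above—one implicit (backward) Euler step solves (I − Δt·L)u^{n+1} = u^n—applied here to a simple linear heat-type system rather than the MHD equations; the operator and step size are placeholders.

```python
import numpy as np

# Generic implicit (backward) Euler step for a linear evolution equation
# du/dt = L u: solve (I - dt L) u^{n+1} = u^n at every step.  This only
# illustrates the time scheme named in the abstract, not the MHD system itself.
n, dt = 50, 0.01
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)     # 1D Laplacian stencil
u = np.sin(np.linspace(0, np.pi, n))
M = np.eye(n) - dt * L
for _ in range(100):
    u = np.linalg.solve(M, u)
print(np.linalg.norm(u))   # the norm decays: backward Euler is unconditionally stable here
```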

13.
Given a Laman graph G, i.e. a minimally rigid graph in ℝ^2, we provide a Θ(n^2) algorithm to augment G to a redundantly rigid graph by adding a minimum number of edges. Moreover, we prove that this augmentation problem is NP-hard for an arbitrary rigid graph G in ℝ^2.
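A brute-force check of the Laman counts (|E| = 2n − 3, and every k ≥ 2 vertices span at most 2k − 3 edges), exponential and meant only to make the definition concrete; it is unrelated to the Θ(n^2) augmentation algorithm of the paper.

```python
from itertools import combinations

def is_laman(n, edges):
    """Brute-force Laman counts: |E| = 2n - 3 and every subset of k >= 2 vertices
    spans at most 2k - 3 edges.  Exponential -- only for tiny examples."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for S in combinations(range(n), k):
            S = set(S)
            spanned = sum(1 for u, v in edges if u in S and v in S)
            if spanned > 2 * k - 3:
                return False
    return True

# A triangle plus a vertex joined to two of its corners is minimally rigid in R^2.
print(is_laman(4, [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]))   # True
```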

14.
Xiao-Shan Chen, Wen Li, Calcolo, 2008, 45(2):99–109
In this paper, the variations of both the subunitary polar factor and the Hermitian positive semidefinite polar factor in the polar decomposition are studied. New perturbation bounds for both polar factors are given without the restriction that A and its perturbed matrix Ã have the same rank. These bounds improve recent results.
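A minimal sketch of the two polar factors being perturbed, computed from the SVD (scipy.linalg.polar provides the same factorization); the paper's perturbation bounds themselves are not reproduced here.

```python
import numpy as np

def polar(A):
    """Polar decomposition A = Q H via the SVD: Q = U V*, H = V diag(s) V*.
    Q has orthonormal columns (subunitary) and H is Hermitian positive semidefinite."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    Q = U @ Vh
    H = Vh.conj().T @ np.diag(s) @ Vh
    return Q, H

A = np.random.default_rng(3).standard_normal((5, 3))
Q, H = polar(A)
assert np.allclose(Q @ H, A)
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.all(np.linalg.eigvalsh(H) >= -1e-12)
```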

15.
In this paper, we propose a variational soft segmentation framework inspired by the level set formulation of the multiphase Chan-Vese model. We use soft membership functions valued in [0,1] to replace the Heaviside functions of level sets (or characteristic functions), obtaining a representation of regions by soft membership functions that automatically satisfies the sum-to-one constraint. We give general formulas for arbitrary N-phase segmentation, in contrast to the Chan-Vese level set method, in which only 2^m-phase segmentation is studied. To ensure smoothness of the membership functions, both total variation (TV) regularization and H^1 regularization are used as two choices for the definition of the regularization term. TV regularization has a geometric meaning, requiring the segmentation curve length to be as short as possible, while H^1 regularization has no explicit geometric meaning but is easier to implement, involves fewer parameters, and has higher tolerance to noise. Fast numerical schemes are designed for both regularization methods. By changing the distance function, the proposed segmentation framework can easily be extended to the segmentation of other types of images. Numerical results on cartoon images, piecewise smooth images and texture images demonstrate that our methods are effective in multiphase image segmentation.
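A toy sketch of the sum-to-one soft membership representation described above, using only a data term driven by distance to fixed class means (no TV or H^1 regularization, no mean updates); the softmax-style weighting and the parameter beta are assumptions of this sketch.

```python
import numpy as np

def soft_memberships(image, means, beta=10.0):
    """Toy data term only (no TV or H^1 regularization): the membership of each
    pixel in class k is proportional to exp(-beta * (I - c_k)^2), normalised so
    that memberships lie in [0, 1] and sum to one at every pixel."""
    d = -beta * (image[..., None] - np.asarray(means)) ** 2     # shape (H, W, K)
    d -= d.max(axis=-1, keepdims=True)                          # numerical stability
    u = np.exp(d)
    return u / u.sum(axis=-1, keepdims=True)

image = np.clip(np.random.default_rng(4).normal(0.5, 0.3, (32, 32)), 0, 1)
u = soft_memberships(image, means=[0.2, 0.5, 0.8])              # 3-phase example
assert np.allclose(u.sum(axis=-1), 1.0)
```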

16.
In this paper, we investigate the discretization of an elliptic boundary value problem in 3D by means of the hp-version of the finite element method using a mesh of tetrahedrons. We present several bases built from integrated Jacobi polynomials for which the element stiffness matrix is sparse; here p denotes the polynomial degree. The proof of the sparsity requires the assistance of computer algebra software. Several numerical experiments show the efficiency of the proposed bases for higher polynomial degrees p.

17.
The (n+1)-dimensional Einstein-Gauss-Bonnet (EGB) model is considered. For diagonal cosmological metrics, the equations of motion are written as a set of Lagrange equations with an effective Lagrangian containing two “minisuperspace” metrics on ℝ^n: a 2-metric of pseudo-Euclidean signature and a Finslerian 4-metric proportional to the n-dimensional Berwald-Moor 4-metric. For the case of the “pure” Gauss-Bonnet model, two exact solutions are presented, with power-law and exponential dependences of the scale factors on the synchronous time variable. (The power-law solution was considered earlier by N. Deruelle, A. Toporensky, P. Tretyakov, and S. Pavluchenko.) In the case of EGB cosmology, it is shown that for any nontrivial solution with an exponential dependence of the scale factors, a_i(τ) = A_i exp(v_i τ), there are no more than three different numbers among v_1, …, v_n.

18.
We introduce a preconditioner based on a hierarchical low-rank compression scheme for Schur complements. The construction is inspired by standard nested dissection and relies on the assumption that the Schur complements can be approximated, to high precision, by hierarchically semi-separable (HSS) matrices. We build the preconditioner as an approximate LDM^t factorization of a given matrix A; no knowledge of A in assembled form is required by the construction. The LDM^t factorization is amenable to fast inversion, and the action of the inverse can be computed fast as well. We investigate the behavior of the preconditioner in the context of DG finite element approximations of elliptic and hyperbolic problems, with respect to both the mesh size and the order of approximation.
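A sketch of the basic compression step such schemes rely on—truncating the SVD of a well-separated off-diagonal block to a tolerance—rather than the HSS/nested-dissection preconditioner itself; the kernel and tolerance below are illustrative.

```python
import numpy as np

def compress(block, tol=1e-8):
    """Rank-revealing compression of an (off-diagonal) block: keep only the
    singular values above a relative tolerance, returning U, V with block ~ U @ V."""
    U, s, Vh = np.linalg.svd(block, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vh[:r, :]

# A smooth, well-separated interaction (1/(x - y) kernel) is numerically low rank.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
block = 1.0 / (x[:, None] - y[None, :])
U, V = compress(block, tol=1e-10)
print(U.shape[1], np.linalg.norm(block - U @ V) / np.linalg.norm(block))
```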

19.
Higman showed that if A is any language then SUBSEQ(A) is regular. His proof was nonconstructive. We show that the result cannot be made constructive. In particular, we show that if f takes as input an index e of a total Turing machine M_e and outputs a DFA for SUBSEQ(L(M_e)), then ∅″ ≤_T f (that is, f is Σ_2-hard). We also study the complexity of going from A to SUBSEQ(A) for several representations of A and SUBSEQ(A).

20.
We study two related network design problems with two cost functions. In the buy-at-bulk k-Steiner tree problem we are given a graph G(V,E) with a set of terminals T ⊆ V including a particular vertex s called the root, and an integer k ≤ |T|. There are two cost functions on the edges of G, a buy cost b: E → ℝ+ and a distance cost r: E → ℝ+. The goal is to find a subtree H of G rooted at s with at least k terminals so that the cost ∑_{e∈H} b(e) + ∑_t dist(t,s) is minimized, where the second sum ranges over the terminals of H other than s, and dist(t,s) is the distance from t to s in H with respect to the r-cost. We present an O(log^4 n)-approximation algorithm for the buy-at-bulk k-Steiner tree problem. The second, closely related, contribution is a bicriteria approximation algorithm for shallow-light k-Steiner trees. In the shallow-light k-Steiner tree problem we are given a graph G with edge costs b(e) and distance costs r(e), and an integer k. Our goal is to find a minimum-cost (under the b-cost) k-Steiner tree such that the diameter under the r-cost is at most some given bound D. We develop an (O(log n), O(log^3 n))-approximation algorithm for a relaxed version of the shallow-light k-Steiner tree problem in which the solution is only required to contain a constant fraction of the k terminals. Using this we obtain an (O(log^2 n), O(log^4 n))-approximation algorithm for the shallow-light k-Steiner tree problem and an O(log^4 n)-approximation algorithm for the buy-at-bulk k-Steiner tree problem. Our results were recently used to give the first polylogarithmic approximation algorithm for the non-uniform multicommodity buy-at-bulk problem (Chekuri, C., et al. in Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), pp. 677–686, 2006). A preliminary version of this paper appeared in the Proceedings of the 9th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX) 2006, LNCS 4110, pp. 153–163, 2006. M.T. Hajiaghayi was supported in part by IPM under grant number CS1383-2-02. M.R. Salavatipour was supported by NSERC grant No. G121210990 and a faculty start-up grant from the University of Alberta.
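To make the two-cost objective concrete, here is a small helper (purely illustrative, not part of the paper's algorithm) that evaluates a candidate tree H: the total buy cost of its edges plus the r-distance of each chosen terminal to the root along H.

```python
from collections import defaultdict

def buy_at_bulk_cost(tree_edges, root, terminals):
    """Objective of a candidate tree H for the buy-at-bulk k-Steiner problem:
    sum of buy costs b(e) over the edges of H, plus the r-distance from every
    chosen terminal to the root along H.  `tree_edges` is a list of (u, v, b, r)."""
    adj = defaultdict(list)
    for u, v, b, r in tree_edges:
        adj[u].append((v, r))
        adj[v].append((u, r))
    # r-distances from the root along the tree (DFS; paths in a tree are unique).
    dist, stack = {root: 0.0}, [root]
    while stack:
        u = stack.pop()
        for v, r in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + r
                stack.append(v)
    return sum(b for _, _, b, _ in tree_edges) + sum(dist[t] for t in terminals)

edges = [("s", "a", 3.0, 1.0), ("a", "t1", 1.0, 2.0), ("a", "t2", 2.0, 1.0)]
print(buy_at_bulk_cost(edges, "s", ["t1", "t2"]))   # 6.0 buy cost + (3.0 + 2.0) distance
```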
