Similar Documents
20 similar documents were retrieved.
1.
This paper is concerned with a dynamic traffic network performance model, known as dynamic network loading (DNL), that is frequently employed in the modeling and computation of analytical dynamic user equilibrium (DUE). As a key component of continuous-time DUE models, DNL aims at describing and predicting the spatial-temporal evolution of traffic flows on a network that is consistent with established route and departure time choices of travelers, by introducing appropriate dynamics to flow propagation, flow conservation, and travel delays. The DNL procedure gives rise to the path delay operator, which associates a vector of path flows (path departure rates) with the corresponding path travel costs. In this paper, we establish strong continuity of the path delay operator for networks whose arc flows are described by the link delay model (Friesz et al., Oper Res 41(1):80–91, 1993; Carey, Networks and Spatial Economics 1(3):349–375, 2001). Unlike the result established in Zhu and Marcotte (Transp Sci 34(4):402–414, 2000), our continuity proof is constructed without assuming a priori uniform boundedness of the path flows. This more general continuity result has important implications for the existence of simultaneous route-and-departure-time DUE without a priori boundedness of path flows, and for any numerical algorithm whose convergence is to be rigorously analyzed.
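For orientation, the link delay model referenced above is commonly stated in terms of arc occupancy. The sketch below is an illustrative summary in that spirit, not a verbatim restatement of the paper's formulation; the symbols \(x_a\) (occupancy), \(u_a, v_a\) (arc inflow and outflow) and \(D_a\) (delay function) are notation chosen here.

```latex
% Link delay model in one common affine form (after Friesz et al. 1993):
% arc occupancy evolves with inflow minus outflow, and traffic entering
% arc a at time t exits D_a(x_a(t)) time units later.
\begin{aligned}
\frac{d x_a(t)}{dt} &= u_a(t) - v_a(t) && \text{(arc flow dynamics)}\\
\tau_a(t) &= t + D_a\bigl(x_a(t)\bigr) && \text{(exit time of traffic entering at } t\text{)}\\
D_a(x) &= \alpha_a + \beta_a x, \qquad \alpha_a > 0,\ \beta_a \ge 0 && \text{(affine link delay)}
\end{aligned}
```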

2.
The intuitionistic fuzzy set is capable of handling uncertainty together with the counterpart falsities that exist in nature. A proximity measure is a convenient way to demonstrate the impractical significance of values of memberships in the intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in Zadeh's fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for intuitionistic fuzzy sets and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. δ-equalities have important applications in fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in previous works are also investigated. We apply δ-equalities to medical diagnosis, investigating a patient's diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzy sets with a certain degree of equality or similarity and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check its efficiency and applicability to real-world problems. The results obtained are also better than those of 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016), and the Support Vector Machine (SVM).
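To make the δ-equality idea concrete, the following is a minimal sketch for intuitionistic fuzzy sets over a finite universe. It assumes the equality degree is one minus the largest discrepancy in membership or non-membership values; the paper's exact definition, and its extension to relations, may differ.

```python
# Toy sketch of a delta-equality degree for intuitionistic fuzzy sets on a
# finite universe: the degree is taken here as 1 minus the largest
# membership/non-membership discrepancy (an assumption for illustration).
def delta_equality(A, B):
    """A, B: dicts mapping element -> (membership mu, non-membership nu)."""
    assert A.keys() == B.keys(), "sets must share the same universe"
    worst = 0.0
    for x in A:
        mu_a, nu_a = A[x]
        mu_b, nu_b = B[x]
        worst = max(worst, abs(mu_a - mu_b), abs(nu_a - nu_b))
    return 1.0 - worst  # A and B are delta-equal for every delta up to this value

if __name__ == "__main__":
    A = {"fever": (0.8, 0.1), "cough": (0.6, 0.3)}
    B = {"fever": (0.7, 0.2), "cough": (0.6, 0.25)}
    print(delta_equality(A, B))  # 0.9
```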

3.
Consider a random graph model where each possible edge e is present independently with some probability \(p_e\). Given these probabilities, we want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex i is allowed to be queried at most \(t_i\) times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems (a toy simulation of the underlying query-commit model is sketched after the list below). Our main results are the following:
  • We give a 4-approximation for weighted stochastic matching on general graphs, and a 3-approximation on bipartite graphs. This answers an open question from Chen et al. (ICALP'09, LNCS, vol. 5555, pp. 266–278, [2009]).
  • We introduce a generalization of the stochastic online matching problem (Feldman et al. in FOCS'09, pp. 117–126, [2009]) that also models preference uncertainty and timeouts of buyers, and give a constant-factor approximation algorithm.
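The sketch below only illustrates the query-commit model itself: edges are probed in a fixed greedy order, a probed edge that turns out to be present must be taken, and each probe consumes one unit of the endpoints' patience. It is a naive baseline for intuition, not the LP-rounding algorithms of the paper, and the greedy ordering is an assumption made here.

```python
import random

# Toy simulation of the query-commit model described above: edges are probed
# one at a time, a probed edge that is present must be added to the matching,
# and each probe uses one unit of patience t_i at both endpoints.  The greedy
# probing order (by weight x probability) is a naive baseline for illustration,
# not the LP-rounding algorithms of the paper.
def greedy_query_commit(edges, patience, seed=0):
    """edges: list of (u, v, weight, prob); patience: dict vertex -> t_i."""
    rng = random.Random(seed)
    matched, matching = set(), []
    remaining = dict(patience)
    for u, v, w, p in sorted(edges, key=lambda e: e[2] * e[3], reverse=True):
        if u in matched or v in matched:
            continue
        if remaining[u] <= 0 or remaining[v] <= 0:
            continue
        remaining[u] -= 1
        remaining[v] -= 1
        if rng.random() < p:        # probe the edge; if it exists, commit to it
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

if __name__ == "__main__":
    edges = [("a", "b", 3.0, 0.5), ("a", "c", 2.0, 0.9), ("b", "d", 1.0, 0.8)]
    print(greedy_query_commit(edges, {"a": 1, "b": 2, "c": 1, "d": 1}))
```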

4.
The Hybrid Search for Minimal Perturbation Problems algorithm for Dynamic CSP (HS_MPP) (Zivan, Constraints, 16(3), 228–249, 2011) guarantees, for a given dynamic problem and the solution of the previous CSP, that the optimal solution to the newly generated CSP is found. The method exploits the fact that its reported solution must satisfy two requirements: first, that it is a complete assignment that solves the derived CSP, and second, that it is as close as possible to the solution of the former CSP. Unfortunately, the pseudo-code of the algorithm in Zivan (Constraints, 16(3), 228–249, 2011) is confusing and may lead to an implementation in which HS_MPP does not produce the expected outcome for a given instance of a Dynamic CSP. In this erratum, we demonstrate the possible undesired outcomes and give corrections to HS_MPP's pseudo-code.

5.
We show that the NP-hard optimization problems minimum and maximum weight exact satisfiability (XSAT) for a CNF formula C over n propositional variables equipped with arbitrary real-valued weights can be solved in \(O(\|C\| \cdot 2^{0.2441n})\) time. To the best of our knowledge, the algorithms presented here are the first handling weighted XSAT optimization versions in non-trivial worst case time. We also investigate the corresponding weighted counting problems, namely we show that the number of all minimum, resp. maximum, weight exact satisfiability solutions of an arbitrarily weighted formula can be determined in \(O(n^2 \cdot \|C\| + 2^{0.40567n})\) time. In recent years only the unweighted counterparts of these problems have been studied (Dahllöf and Jonsson, An algorithm for counting maximum weighted independent sets and its applications. In: Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms, pp. 292–298, 2002; Dahllöf et al., Theor Comp Sci 320:373–394, 2004; Porschen, On some weighted satisfiability and graph problems. In: Proceedings of the 31st Conference on Current Trends in Theory and Practice of Informatics (SOFSEM 2005). Lecture Notes in Comp. Science, vol. 3381, pp. 278–287. Springer, 2005).
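For readers unfamiliar with weighted XSAT, the brute-force sketch below spells out the problem the stated bounds improve on: every clause must contain exactly one true literal, and the total weight of the true variables is minimized. It enumerates all 2^n assignments and is purely illustrative.

```python
from itertools import product

# Brute-force illustration of minimum-weight exact satisfiability (XSAT):
# every clause must contain exactly one true literal, and among such
# assignments we minimise the total weight of variables set to true.  This is
# the trivial 2^n baseline, not the O(||C|| * 2^{0.2441 n}) algorithm.
def min_weight_xsat(clauses, weights):
    """clauses: lists of non-zero ints (i = variable i, -i = its negation);
    weights: dict variable -> real weight."""
    variables = sorted(weights)
    best = None
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(sum(assign[abs(l)] == (l > 0) for l in c) == 1 for c in clauses):
            cost = sum(weights[v] for v in variables if assign[v])
            if best is None or cost < best[0]:
                best = (cost, assign)
    return best  # None if no exact-satisfying assignment exists

if __name__ == "__main__":
    clauses = [[1, 2], [-1, 3]]
    print(min_weight_xsat(clauses, {1: 2.5, 2: 1.0, 3: 4.0}))
```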

6.
Idempotency requires any phonotactically licit forms to be faithfully realized. Output-drivenness requires any discrepancies between underlying and output forms to be driven exclusively by phonotactics. These formal notions are relevant for phonological theory (they capture counter-feeding and counter-bleeding opacity) and play a crucial role in learnability. Tesar (Output-driven phonology: theory and learning. Cambridge studies in linguistics, 2013) and Magri (J of Linguistics, 2017) provide tight guarantees for OT output-drivenness and idempotency through conditions on the faithfulness constraints. This paper derives analogous faithfulness conditions for HG idempotency and output-drivenness and develops an intuitive interpretation of the various OT and HG faithfulness conditions thus obtained. The intuition is that faithfulness constraints measure the phonological distance between underlying and output forms. They should thus comply with a crucial axiom of the definition of distance, namely that any side of a triangle is shorter than the sum of the other two sides. This intuition leads to a faithfulness triangle inequality which is shown to be equivalent to the faithfulness conditions for idempotency and output-drivenness. These equivalences hold under various assumptions, crucially including McCarthy’s (Phonology 20(1):75–138, 2003b) generalization that (faithfulness) constraints are all categorical.
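The distance intuition can be written out as a triangle inequality on faithfulness; here F(a, b) informally denotes the faithfulness cost of mapping form a to form b (the notation is illustrative, and the paper's formal statement is given constraint by constraint).

```latex
% Faithfulness read as a distance between underlying and output forms:
% no "detour" through an intermediate form b may be cheaper than the
% direct mapping from a to c.
F(a, c) \;\le\; F(a, b) + F(b, c) \qquad \text{for all forms } a, b, c .
```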

7.
A product variant table is a table that lists legal combinations of product features. Variant tables can be used to constrain the variability offered for a personalized product. The concept of such a table is easy to understand. Hence, variant tables are natural to use when ensuring the completeness and correctness of a quote/order for a customizable product. They are also used to filter out inadmissible choices for features in an interactive specification (configuration) process. Variant tables can be maintained as relational (database) tables, using spreadsheets, or in proprietary ways offered by the product modeling environment. Variant tables can become quite large. A way of compressing them is then sought that supports a space-efficient representation and a time-efficient evaluation. The motivation of this work is to develop a simple approach to compress/compile a variant table into an easy-to-read, but possibly hard-to-write, form that can be deployed in a business setting at acceptable cost and risk in a similar manner to a database. The main result is a simple compression and evaluation scheme for an individual variant table called a Variant Decomposition Diagram (VDD). A VDD supports efficient consistency checks, the filtering of inadmissible features, and iteration over the table. A simple static heuristic for decomposition order is proposed that suggests itself from a “column-oriented viewpoint”. This heuristic is not always optimal, but it has the advantage of allowing fast compilation of a variant table into a VDD. Compression results for a publicly available model of a Renault Megane are given. With the proposed heuristic the VDD is a specialization of a Zero-suppressed (binary) Decision Diagram (ZDD) (Knuth 2011) and also maps to a Multi-valued Decision Diagram (MDD) (Andersen et al. 2007; Berndt et al. 2012).
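As a rough illustration of column-by-column compression with node sharing, the sketch below builds a small decision-diagram-like DAG from a variant table; it is a toy in the spirit of an MDD and does not implement the paper's VDD construction or its decomposition-order heuristic.

```python
# Toy column-by-column compression of a variant table into a decision-diagram-
# like DAG with shared sub-nodes.  Only in the spirit of an MDD/VDD; the
# paper's VDD construction, ordering heuristic and encoding are richer.
def compress(rows, col=0, cache=None):
    """rows: non-empty list of equal-length tuples of feature values."""
    if cache is None:
        cache = {}
    if col == len(rows[0]):
        return True                      # accepting terminal: a complete legal row
    children = {}
    for value in sorted({r[col] for r in rows}):
        children[value] = compress([r for r in rows if r[col] == value], col + 1, cache)
    # hash-cons: structurally identical (column, children) nodes are shared
    key = (col, tuple(sorted((v, id(c)) for v, c in children.items())))
    return cache.setdefault(key, ("node", col, children))

def admits(node, assignment):
    """Check whether a full assignment (one value per column) is in the table."""
    while node is not True:
        _, col, children = node
        if assignment[col] not in children:
            return False
        node = children[assignment[col]]
    return True

if __name__ == "__main__":
    table = [("red", "S"), ("red", "M"), ("blue", "M")]
    root = compress(table)
    print(admits(root, ("red", "M")), admits(root, ("blue", "S")))  # True False
```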

8.
A flow-shop batching problem with consistent batches is considered in which the processing times of all jobs on each machine are equal to p and all batch set-up times are equal to s. In such a problem, one has to partition the set of jobs into batches and to schedule the batches on each machine. The processing time of a batch \(B_i\) is the sum of the processing times of the operations in \(B_i\), and the earliest start of \(B_i\) on a machine is the finishing time of \(B_i\) on the previous machine plus the set-up time s. Cheng et al. (Naval Research Logistics 47:128–144, 2000) provided an O(n) pseudopolynomial-time algorithm for solving the special case of the problem with two machines. Mosheiov and Oron (European Journal of Operational Research 161:285–291, 2005) developed an algorithm of the same time complexity for the general case with more than two machines. Ng and Kovalyov (Journal of Scheduling 10:353–364, 2007) improved the pseudopolynomial complexity to \(O(\sqrt{n})\). In this paper, we provide a polynomial-time algorithm of time complexity \(O(\log^3 n)\).
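The recursive timing rule quoted above translates directly into a small makespan calculator. The sketch below assumes a set-up of s is paid before every batch on every machine and that set-ups are non-anticipatory; the exact set-up conventions of the cited papers may differ.

```python
# Batch completion times for the flow-shop batching model as read off the
# abstract: each batch B_i takes p * |B_i| on every machine and may start on a
# machine only after it has finished on the previous machine plus the set-up
# time s (and after the machine is free).  A non-anticipatory set-up of s
# before every batch on every machine is assumed here for illustration.
def makespan(batch_sizes, machines, p, s):
    """batch_sizes: |B_1|, ..., |B_k| in processing order."""
    completion = [[0.0] * machines for _ in batch_sizes]
    for i, size in enumerate(batch_sizes):
        for j in range(machines):
            done_prev_machine = completion[i][j - 1] if j > 0 else 0.0
            done_prev_batch = completion[i - 1][j] if i > 0 else 0.0
            completion[i][j] = max(done_prev_machine, done_prev_batch) + s + p * size
    return completion[-1][-1]

if __name__ == "__main__":
    # 6 equal jobs split into batches of sizes 2, 3, 1 on 3 machines, p = 2, s = 1
    print(makespan([2, 3, 1], machines=3, p=2.0, s=1.0))  # 29.0
```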

9.
On conditional diagnosability and reliability of the BC networks
An n-dimensional bijective connection network (in brief, BC network), denoted by \(X_n\), is an n-regular graph with \(2^n\) nodes and \(n2^{n-1}\) edges. Hypercubes, crossed cubes, twisted cubes, and Möbius cubes all belong to the class of BC networks (Fan and He in Chin. J. Comput. 26(1):84–90, [2003]). We prove that the super connectivity of \(X_n\) is \(2n-2\) for n ≥ 3 and the conditional diagnosability of \(X_n\) is \(4n-7\) for n ≥ 5. As a corollary of this result, we obtain the super connectivity and conditional diagnosability of the hypercubes, twisted cubes, crossed cubes, and Möbius cubes.
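As a quick numeric check of the stated formulas, take the 5-dimensional hypercube, a BC network with n = 5:

```latex
% For the 5-dimensional hypercube (a BC network, n = 5):
\text{super connectivity: } 2n - 2 = 2 \cdot 5 - 2 = 8, \qquad
\text{conditional diagnosability: } 4n - 7 = 4 \cdot 5 - 7 = 13 .
```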

10.
Liouville numbers were the first class of real numbers which were proven to be transcendental. It is easy to construct non-normal Liouville numbers. Kano (1993) and Bugeaud (2002) have proved, using analytic techniques, that there are normal Liouville numbers. Here, for a given base k ≥ 2, we give a new construction of a Liouville number which is normal to the base k. This construction is combinatorial, and is based on de Bruijn sequences.
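The combinatorial ingredient, a de Bruijn sequence, can be generated with the standard Lyndon-word (FKM) construction sketched below; how the paper interleaves such sequences to obtain a normal Liouville number is not reproduced here.

```python
# Standard Lyndon-word (FKM) construction of a de Bruijn sequence B(k, n): a
# cyclic sequence of length k**n over {0, ..., k-1} in which every length-n
# word appears exactly once.
def de_bruijn(k, n):
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

if __name__ == "__main__":
    print("".join(map(str, de_bruijn(2, 3))))  # 00010111
```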

11.
12.
In this paper, we present a method for Hamiltonian simulation in the context of eigenvalue estimation problems, which improves earlier results dealing with Hamiltonian simulation through the truncated Taylor series. In particular, we present a fixed quantum circuit design for the simulation of the Hamiltonian dynamics, \({\mathcal {H}}(t)\), through the truncated Taylor series method described by Berry et al. (Phys Rev Lett 114:090502, 2015). The circuit is general and can be used to simulate any given matrix in the phase estimation algorithm by only changing the angle values of the quantum gates implementing the time variable t in the series. The circuit complexity depends on the number of summation terms composing the Hamiltonian and requires O(Ln) quantum gates for the simulation of a molecular Hamiltonian. Here, n is the number of states of a spin orbital, and L is the number of terms in the molecular Hamiltonian, which is generally bounded by \(O(n^4)\). We also discuss how to use the circuit in adaptive processes and eigenvalue-related problems along with a slightly modified version of the iterative phase estimation algorithm. In addition, a simple divide-and-conquer method is presented for mapping matrices that are not given as sums of unitary matrices into the circuit. The complexity of the circuit is directly related to the structure of the matrix and can be bounded by \(O(\mathrm{poly}(n))\) for a matrix with \(\mathrm{poly}(n)\)-sparsity.
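For context, the truncated-Taylor-series idea that the circuit implements can be summarized as follows, with K the truncation order and the Hamiltonian given as a weighted sum of L implementable terms (standard notation, not taken verbatim from the paper):

```latex
% Truncated Taylor series behind this family of simulation methods
% (Berry et al. 2015): the short-time evolution under H is approximated by
% the first K terms of the exponential series, with H decomposed into a
% weighted sum of L implementable terms.
e^{-i\mathcal{H}t} \;\approx\; \sum_{k=0}^{K} \frac{(-i\mathcal{H}t)^{k}}{k!},
\qquad
\mathcal{H} \;=\; \sum_{l=1}^{L} \alpha_{l}\, \mathcal{H}_{l}.
```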

13.
In this paper, a steganographic scheme adopting the concept of generalized \(K_d\)-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, where the order-d Minkowski-distance-measured embedding distortion is no larger than K. In contrast to other pixel-matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy which is suitable for arbitrary relative capacity can be developed. Additionally, an optimization algorithm, namely the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, the proposed scheme performs comparably to state-of-the-art content-based approaches after combining with image models.
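A toy version of reference-table-style pixel matching is sketched below: a B-ary digit is embedded by moving the cover vector, within a Minkowski ball of radius K, to the nearest vector whose extraction value equals the digit. The weighted-sum-modulo-B extraction function used here is a stand-in chosen for illustration; the paper instead optimizes the codeword assignment of an N-dimensional reference table.

```python
from itertools import product

# Toy pixel matching: embed a B-ary digit d into a cover vector x of length N
# by moving to the nearest vector y (order-q Minkowski distance at most K)
# whose extraction value equals d.  The extraction function below is a simple
# stand-in, not the paper's optimised reference table.
def extract(y, B):
    return sum((i + 1) * v for i, v in enumerate(y)) % B

def embed(x, digit, B, K, q):
    best = None
    for offset in product(range(-K, K + 1), repeat=len(x)):
        dist = sum(abs(o) ** q for o in offset) ** (1.0 / q)
        if dist > K:
            continue
        y = tuple(xi + o for xi, o in zip(x, offset))
        if extract(y, B) == digit and (best is None or dist < best[0]):
            best = (dist, y)
    return best[1] if best else None

if __name__ == "__main__":
    x = (120, 57, 200)
    y = embed(x, digit=5, B=8, K=2, q=2)
    print(y, extract(y, 8))  # the extraction value of the stego vector is 5
```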

14.
The development of dedicated numerical codes has recently pushed forward the study of N-body gravitational dynamics, leading to a better and wider understanding of processes involving the formation of natural bodies in the Solar System. A major branch includes the study of asteroid formation: evidence from recent studies and observations supports the idea that small and medium-size asteroids between 100 m and 100 km may be gravitational aggregates with no cohesive force other than gravity. This evidence implies that asteroid formation depends on gravitational interactions between different boulders and that asteroid aggregation processes can be naturally modeled with N-body numerical codes implementing gravitational interactions. This work presents a new implementation of an N-body numerical solver. The code is based on Chrono::Engine (2006). It handles the contact and collision of large numbers of complex-shaped objects, while simultaneously evaluating the effect of N-to-N gravitational interactions. A specific case study is considered, investigating the relative dynamics between the N bodies and highlighting favorable conditions for the formation of a stable gravitationally bound aggregate from a cloud of N boulders. The code is successfully validated for the case study by comparing relevant results obtained for typical known dynamical scenarios. The outcome of the numerical simulations shows good agreement with theory and observation, and suggests the ability of the developed code to predict natural aggregation phenomena.
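The gravitational part of such a solver reduces to evaluating pairwise Newtonian accelerations at every step; a minimal direct-summation sketch is given below (the contact and collision handling provided by Chrono::Engine is not shown).

```python
import numpy as np

# Minimal illustration of the N-to-N gravitational interactions such a solver
# must evaluate at every step: direct O(N^2) summation of pairwise Newtonian
# accelerations, with a small softening term to avoid singularities.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses, softening=1e-3):
    """positions: (N, 3) array in metres; masses: (N,) array in kilograms."""
    acc = np.zeros_like(positions)
    for i in range(len(masses)):
        r = positions - positions[i]                       # vectors from body i
        dist3 = (np.einsum("ij,ij->i", r, r) + softening**2) ** 1.5
        dist3[i] = np.inf                                  # exclude self-interaction
        acc[i] = G * np.sum((masses / dist3)[:, None] * r, axis=0)
    return acc

if __name__ == "__main__":
    pos = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 50.0, 0.0]])
    m = np.array([1e10, 1e10, 5e9])
    print(accelerations(pos, m))
```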

15.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of reaction automata (RAs) studied in recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a, Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs working in the sequential manner coincides with that of the class of Petri nets, which is in marked contrast to the result that RAs (in both maximally parallel and sequential manners) have the computing power of Turing universality (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous result on the Turing computability of RAs but also clarify the computing power of inhibitors in RA computation.

16.
A 3D binary image I can be naturally represented by a combinatorial-algebraic structure called a cubical complex and denoted by Q(I), whose basic building blocks are vertices, edges, square faces and cubes. In Gonzalez-Diaz et al. (Discret Appl Math 183:59–77, 2015), we presented a method to “locally repair” Q(I) to obtain a polyhedral complex P(I) (whose basic building blocks are vertices, edges, specific polygons and polyhedra), homotopy equivalent to Q(I), satisfying that its boundary surface is a 2D manifold. P(I) is called a well-composed polyhedral complex over the picture I. Besides, we developed a new codification system for P(I), encoding geometric information of the cells of P(I) in the form of a 3D grayscale image, and the boundary face relations of the cells of P(I) in the form of a set of structuring elements. In this paper, we build upon Gonzalez-Diaz et al. (2015) and prove that, to retrieve topological and geometric information of P(I), it is enough to store just one 3D point per polyhedron, and hence neither the grayscale image nor the set of structuring elements is needed. From this “minimal” codification of P(I), we finally present a method to compute the 2-cells in the boundary surface of P(I).

17.
Kaltofen (Randomness in computation, vol 5, pp 375–412, 1989) proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen’s work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in Kopparty et al. (2014), and the question of bounding the depth of the circuits computing the factors of a polynomial. We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if \({P(x_{1},\ldots,x_{n})}\) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor \({f(x_{1},\ldots, x_{n})}\) of \({P(x_{1},\ldots,x_{n})}\) can be computed by a formula of size \({\textsf{poly}((rn)^{r},s)}\) and depth d + 5. This partially answers the question above posed in Kopparty et al. (2014), who asked if this result holds without the dependence on r. Our work generalizes the main factorization theorem from Dvir et al. (SIAM J Comput 39(4):1279–1293, 2009), who proved it for the special case when the factors are of the form \({f(x_{1}, \ldots, x_{n}) \equiv x_{n} - g(x_{1}, \ldots, x_{n-1})}\). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).

18.
This paper considers a conflict situation on the plane as follows. A fast evader E has to break out of the encirclement of slow pursuers \(P_{j_1,\ldots,j_n} = \{P_{j_1},\ldots,P_{j_n}\}\), n ≥ 3, with a miss distance not smaller than r ≥ 0. First, we estimate the minimum guaranteed miss distance from E to a pursuer \(P_a\), \(a \in \{j_1,\ldots,j_n\}\), when the former moves along a given straight line. Then the obtained results are used to calculate the guaranteed estimates to a group of two pursuers \(P_{b,c} = \{P_b, P_c\}\), \(b, c \in \{j_1,\ldots,j_n\}\), \(b \ne c\), when E maneuvers by crossing the rectilinear segment \(P_b P_c\), and the state passes to the domain of the game space where E applies a strategy under which the miss distance to any of the pursuers is not decreased. In addition, we describe an approach to the games with a group of pursuers \(P_{j_1,\ldots,j_n}\), n ≥ 3, in which E seeks to break out of the encirclement by passing between two pursuers \(P_b\) and \(P_c\), entering the domain of the game space where E can increase the miss distance to all pursuers by straight motion. By comparing the guaranteed miss distances with r for all alternatives \(b, c \in \{j_1,\ldots,j_n\}\), \(b \ne c\), and \(a \notin \{b, c\}\), it is possible to choose the best alternative and also to extract the histories of the game in which the designed evasion strategies guarantee a safe break out from the encirclement.

19.
We present a method to construct a theoretically fast algorithm for computing the discrete Fourier transform (DFT) of order \(N = 2^n\). We show that the DFT of a complex vector of length N is performed with a complexity of \(3.76875\,N\log_2 N\) real operations of addition, subtraction, and scalar multiplication.
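As a worked instance of the stated operation count (pure arithmetic, for illustration):

```latex
% Worked instance of the operation count for N = 2^{10} = 1024 (n = 10):
3.76875 \, N \log_2 N \;=\; 3.76875 \cdot 1024 \cdot 10 \;=\; 38{,}592
\quad \text{real additions, subtractions and scalar multiplications.}
```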

20.
In the Fixed Cost k-Flow problem, we are given a graph G = (V, E) with edge capacities \(\{u_e : e \in E\}\) and edge costs \(\{c_e : e \in E\}\), a source-sink pair \(s, t \in V\), and an integer k. The goal is to find a minimum-cost subgraph H of G such that the minimum capacity of an \(st\)-cut in H is at least k. By an approximation-preserving reduction from the Group Steiner Tree problem to Fixed Cost k-Flow, we obtain the first polylogarithmic lower bound for the problem; this also implies the first non-constant lower bounds for the Capacitated Steiner Network and Capacitated Multicommodity Flow problems. We then consider two special cases of Fixed Cost k-Flow. In the Bipartite Fixed-Cost k-Flow problem, we are given a bipartite graph \(G = (A \cup B, E)\) and an integer k > 0. The goal is to find a node subset \(S \subseteq A \cup B\) of minimum size |S| such that G has k pairwise edge-disjoint paths between \(S \cap A\) and \(S \cap B\). We give an \(O(\sqrt{k \log k})\) approximation for this problem. We also show that we can compute a solution of optimum size with \(\Omega(k/\mathrm{polylog}(n))\) paths, where n = |A| + |B|. In the Generalized-P2P problem we are given an undirected graph G = (V, E) with edge costs and integer charges \(\{b_v : v \in V\}\). The goal is to find a minimum-cost spanning subgraph H of G such that every connected component of H has non-negative charge. This problem originated in a practical project for shift design [11]. Besides that, it generalizes many problems such as Steiner Forest, k-Steiner Tree, and Point to Point Connection. We give a logarithmic approximation algorithm for this problem. Finally, we consider a related problem called Connected Rent or Buy Multicommodity Flow and give a \(\log^{3+\epsilon} n\) approximation scheme for it using Group Steiner Tree techniques.
