Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
The Deutsch–Jozsa problem is one of the most basic ways to demonstrate the power of quantum computation. Consider a Boolean function $f : \{0,1\}^n \to \{0,1\}$ and suppose we have a black-box to compute f. The Deutsch–Jozsa problem is to determine whether f is constant (i.e. $f(x) = \text{const}$ for all $x \in \{0,1\}^n$) or balanced (i.e. $f(x) = 0$ for exactly half of the possible input strings $x \in \{0,1\}^n$) using as few calls to the black-box computing f as possible, assuming f is guaranteed to be constant or balanced. Classically this appears to require at least $2^{n-1} + 1$ black-box calls in the worst case, whereas the well-known quantum solution solves the problem with probability one in exactly one black-box call. It has been found that in some cases the algorithm can be de-quantised into an equivalent classical, deterministic solution. We explore the ability to extend this de-quantisation to further cases, and examine in more detail when de-quantisation is possible, both with respect to the Deutsch–Jozsa problem and in more general cases.
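As a concrete illustration of the promise problem itself (not of the de-quantisation technique studied in the paper), the following minimal NumPy sketch simulates the textbook quantum algorithm on a classical state vector; the oracle functions and variable names are illustrative assumptions.

import numpy as np

def deutsch_jozsa_statevector(f, n):
    # Uniform superposition over all n-bit inputs (Hadamards applied to |0...0>).
    N = 2 ** n
    state = np.full(N, 1.0 / np.sqrt(N))
    # Single phase-oracle call: |x> -> (-1)^f(x) |x>.
    state = np.array([(-1) ** f(x) for x in range(N)]) * state
    # After the final Hadamard layer, the amplitude of |0...0> is the normalized
    # sum of amplitudes: magnitude 1 for a constant f, 0 for a balanced f.
    amp0 = state.sum() / np.sqrt(N)
    return 'constant' if np.isclose(abs(amp0), 1.0) else 'balanced'

# Hypothetical test oracles on n = 3 bits.
constant_f = lambda x: 1
balanced_f = lambda x: bin(x).count('1') % 2   # parity is balanced
print(deutsch_jozsa_statevector(constant_f, 3))   # -> constant
print(deutsch_jozsa_statevector(balanced_f, 3))   # -> balanced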

2.
We propose a protocol to construct the 35 $f$-controlled phase gates of a three-qubit refined Deutsch–Jozsa (DJ) algorithm, using single-qubit $\sigma_z$ gates, two-qubit controlled-phase gates, and two-target-qubit controlled-phase gates. Using this protocol, we discuss how to implement the three-qubit refined DJ algorithm with superconducting transmon qutrits resonantly coupled to a single cavity. Our numerical calculation shows that implementation of this quantum algorithm is feasible with present circuit-QED technology. The experimental realization of this algorithm would be an important step toward more complex quantum computation in circuit QED.

3.
The Deutsch–Jozsa algorithm has been implemented via quantum adiabatic evolution by Das et al. (Phys Rev A 65:062310, 2002) and Wei et al. (Phys Lett A 354:271, 2006). In the latter work, the authors presented a modified adiabatic evolution that improves the running time of Das et al.'s algorithm to constant time. In this paper, we also improve Das et al.'s algorithm to constant time, but using a different construction of the adiabatic evolution, namely by adding ancillary qubits. The algorithm in this paper provides an alternative option for potential users.

4.
D. R. Simon stated a problem, the so-called Simon's problem, whose computational complexity is in the class BQP$^f$ but not in BPP$^f$, where $f$ is the function (oracle) given in the problem. This result indicates that BPP may be strictly included in its quantum counterpart, BQP. Later, G. Brassard and P. Høyer showed that Simon's problem and its extended version can be solved by a deterministic polynomial-time quantum algorithm; that is, these problems are in the class EQP. In this paper, we show that Simon's problem and its extended version can be solved deterministically in a simpler and more concrete way than that proposed by G. Brassard and P. Høyer.
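For readers unfamiliar with Simon's problem, the hedged Python sketch below states it operationally: a classical brute-force search finds the hidden period s by waiting for a collision, which can take exponentially many oracle queries, whereas the quantum algorithms discussed above need only polynomially many. The oracle construction and names are illustrative assumptions, not taken from the cited works.

def find_simon_period(f, n):
    # f maps n-bit integers to integers with the promise
    # f(x) = f(y) iff y = x XOR s for a hidden period s (s may be 0).
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen:               # a collision reveals the period
            return seen[y] ^ x
        seen[y] = x
    return 0                        # no collision: f is injective, so s = 0

# Hypothetical example with n = 3 and hidden period s = 0b101.
s = 0b101
table = {}
def f(x):
    key = min(x, x ^ s)             # x and x XOR s share the same output
    return table.setdefault(key, len(table))

print(bin(find_simon_period(f, 3)))  # -> 0b101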

5.
In partitioned fluid–structure interaction simulations, the flow equations and the structural equations are solved separately. As a result, a coupling algorithm is needed to enforce the equilibrium on the fluid–structure interface in cases with strong interaction. This coupling algorithm performs coupling iterations between the solver of the flow equations and the solver of the structural equations. Current coupling algorithms couple one flow solver with one structural solver. Here, a new class of multi-solver quasi-Newton coupling algorithms for unsteady fluid–structure interaction simulations is presented. More than one flow solver and more than one structural solver are used for a single simulation. The numerical experiments demonstrate that the duration of a simulation decreases as the number of solvers is increased.
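The multi-solver quasi-Newton algorithms themselves are not reproduced here, but the toy Python sketch below shows what a partitioned coupling iteration looks like in its simplest form: a Gauss-Seidel pass over two black-box solvers with dynamic Aitken under-relaxation of the interface unknowns. The linear "solvers" and all names are illustrative assumptions, not the paper's scheme.

import numpy as np

def aitken_coupled_solve(flow, structure, x0, tol=1e-10, max_iter=50):
    # Fixed-point coupling x = structure(flow(x)) on the interface vector x.
    x = np.asarray(x0, dtype=float)
    omega = 0.5                            # initial relaxation factor
    r_prev = None
    for it in range(max_iter):
        x_tilde = structure(flow(x))       # one Gauss-Seidel pass over both solvers
        r = x_tilde - x                    # interface residual
        if np.linalg.norm(r) < tol:
            return x, it
        if r_prev is not None:             # Aitken update of the relaxation factor
            dr = r - r_prev
            omega = -omega * np.dot(r_prev, dr) / np.dot(dr, dr)
        x = x + omega * r
        r_prev = r
    return x, max_iter

# Hypothetical linear "solvers" acting on a 3-component interface vector.
A = np.array([[0.4, 0.1, 0.0], [0.0, 0.3, 0.1], [0.1, 0.0, 0.5]])
flow = lambda x: A @ x + 1.0
structure = lambda y: 0.8 * y
x, iters = aitken_coupled_solve(flow, structure, np.zeros(3))
print(x, iters)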

6.
7.
Dijkstra (Commun ACM 17(11):643–644, 1974) introduced the notion of self-stabilizing algorithms and presented three such algorithms for the problem of mutual exclusion on a ring of n processors. The third algorithm is the most interesting of the three but is rather non-intuitive. In Dijkstra (Distrib Comput 1:5–6, 1986) a proof of its correctness was presented, but the question of determining its worst-case complexity, that is, an upper bound on the number of moves of this algorithm until it stabilizes, remained open. In this paper we settle this question and prove an upper bound of $3\frac{13}{18} n^2 + O(n)$ for the complexity of this algorithm. We also show a lower bound of $1\frac{5}{6} n^2 - O(n)$ for the worst-case complexity. For computing the upper bound, we use two techniques: potential functions and amortized analysis. We also present a new three-state self-stabilizing algorithm for mutual exclusion and show a tight bound of $\frac{5}{6} n^2 + O(n)$ for its worst-case complexity. Beauquier and Debas (Proceedings of the Second Workshop on Self-Stabilizing Systems, pp 17.1–17.13, 1995) presented a similar three-state algorithm, with an upper bound of $5\frac{3}{4} n^2 + O(n)$ and a lower bound of $\frac{1}{8} n^2 - O(n)$ for its stabilization time. For this algorithm we prove an upper bound of $1\frac{1}{2} n^2 + O(n)$ and show a lower bound of $n^2 - O(n)$. As far as worst-case performance is considered, the algorithm of Beauquier and Debas is better than that of Dijkstra (1974), and our algorithm is better than both.
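The three-state algorithm analyzed above is intricate; as a hedged illustration of what self-stabilizing mutual exclusion on a ring means, the Python sketch below simulates Dijkstra's simpler first (K-state) algorithm from the same 1974 paper and counts the moves until exactly one privilege remains. With k >= n the system is guaranteed to reach, and then stay in, such legitimate configurations. The daemon model and names are illustrative choices, not taken from the paper.

import random

def dijkstra_k_state_moves(n, k, seed=0, max_steps=100_000):
    rng = random.Random(seed)
    x = [rng.randrange(k) for _ in range(n)]     # arbitrary (possibly corrupted) start

    def privileged():
        p = [0] if x[0] == x[n - 1] else []      # machine 0 compares with its left neighbour
        p += [i for i in range(1, n) if x[i] != x[i - 1]]
        return p

    for step in range(max_steps):
        p = privileged()
        if len(p) == 1:                          # legitimate: exactly one privilege
            return step
        i = rng.choice(p)                        # a central daemon picks one privileged machine
        if i == 0:
            x[0] = (x[0] + 1) % k                # the bottom machine increments modulo k
        else:
            x[i] = x[i - 1]                      # every other machine copies its neighbour
    return None

print(dijkstra_k_state_moves(n=7, k=7))          # number of moves until stabilization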

8.
This paper addresses the problem of representing the intruder’s knowledge in the formal verification of cryptographic protocols, whose main challenges are to represent the intruder’s knowledge efficiently and without artificial limitations on the structure and size of messages. The new knowledge representation strategy proposed in this paper achieves both goals and leads to practical implementation because it is incrementally computable and is easily amenable to work with various term representation languages. In addition, it handles associative and commutative term composition operators, thus going beyond the free term algebra framework. An extensive computational complexity analysis of the proposed representation strategy is included in the paper. This work was partially supported by the Italian National Council of Research, grant number CNRC00FE45, and by the Center for Multimedia Radio Communications of Politecnico di Torino.
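As a toy illustration of what "intruder knowledge" means in this setting (not of the incremental representation developed in the paper), the sketch below saturates a term set under pair projection and decryption with known keys, then checks synthesis recursively; the term encoding and example values are assumptions.

def analyze(terms):
    # Terms: atoms are strings; ('pair', a, b) is concatenation; ('enc', m, k) is
    # symmetric encryption of m under key k.  Saturate under projection and decryption.
    known = set(terms)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == 'pair':
                new = [s for s in t[1:] if s not in known]
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                new = [t[1]] if t[1] not in known else []
            else:
                new = []
            if new:
                known.update(new)
                changed = True
    return known

def derivable(term, known):
    # The intruder can synthesize a term if it is known or can be composed
    # from derivable subterms (pairing, encryption with a derivable key).
    if term in known:
        return True
    if isinstance(term, tuple) and term[0] in ('pair', 'enc'):
        return all(derivable(s, known) for s in term[1:])
    return False

knowledge = analyze({('enc', 'secret', 'k1'), 'k1', ('pair', 'na', 'nb')})
print(derivable(('pair', 'secret', 'na'), knowledge))   # -> True
print(derivable('k2', knowledge))                       # -> False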

9.
Based on the dipole-source method, all components of the spectral-domain Green's functions are restructured concisely using four basis functions, and, in terms of the two-level discrete complex image method (DCIM) with high-order Sommerfeld identities, an efficient algorithm for closed-form spatial-domain Green's functions in multilayered media is presented. This approach has the advantage that the surface-wave pole extraction is carried out directly along a generalized integration path, without having to reformulate each spectral-domain Green's function component separately in the transmission-line network analogy; the Green's functions for mixed-potential integral equation (MPIE) analysis in both the near field and the far field in multilayered media are then obtained. In addition, the curl operator for the coupled field in the MPIE is conveniently avoided. The method is especially applicable and useful for characterizing electromagnetic scattering by, and radiation in the presence of, electrically large 3-D objects in multilayered media. Numerical results for the S-parameters of a microstrip periodic bandgap (PBG) filter, the radar cross section (RCS) of a large microstrip antenna array, and the scattering and radiation characteristics of three-dimensional (3-D) targets in multilayered media are presented to demonstrate the effectiveness and accuracy of this technique.

10.
A smart Information and Communication Technology (ICT) enables a synchronized interplay of different key factors, aligning infrastructures, consumers, and governmental policy-making needs. In the context of harbor logistics, smart ICT has been driving a multi-year wave of growth. Although there is standalone value in the technological innovation of a single task, the impact of a new smart technology cannot be assessed without quantitative analysis of the end-to-end process. In this paper, we first present a review of smart ICT for marine container terminals, and then we propose to evaluate the impact of such smart ICT via business process model and notation (BPMN) modeling and simulation. The proposed approach is discussed in a real-world modeling and simulation analysis carried out on a pilot terminal of the Port of Leghorn (Italy).

11.
The flow behaviors of nanofluids were studied in this paper using molecular dynamics (MD) simulation. Two MD simulation systems were built: a near-wall model and a main-flow model. The nanofluid model consisted of one copper nanoparticle with liquid argon as the base liquid. In the near-wall model, a nanoparticle very close to the wall would not move with the main flow, owing to the overlap between the solid-like layer near the wall and the adsorbed layer around the nanoparticle, but it still had rotational motion. When the nanoparticle was farther from the wall (d > 11 Å), it not only had rotational motion but also translational motion. In the main-flow model, the nanoparticle rotated and translated in addition to following the main flow. There was a slip velocity between the nanoparticle and the liquid argon in both simulation models. The flow behavior of the nanofluids thus exhibited obvious characteristics of two-phase flow. Because of the irregular motion of the nanoparticles and the slip velocity between the two phases, the velocity fluctuation in the nanofluids was enhanced.

12.
International Journal of Control, Automation and Systems - A range of motion (ROM) has been measured using several indices to judge the progress of ankylosing spondylitis (AS). However, measuring...

13.
We are concerned with a variation of the standard 0–1 knapsack problem, where the values of items differ under possible S scenarios. By applying the ‘pegging test’ the ordinary knapsack problem can be reduced, often significantly, in size; but this is not directly applicable to our problem. We introduce a kind of surrogate relaxation to derive upper and lower bounds quickly, and show that, with this preprocessing, the similar pegging test can be applied to our problem. The reduced problem can be solved to optimality by the branch-and-bound algorithm. Here, we make use of the surrogate variables to evaluate the upper bound at each branch-and-bound node very quickly by solving a continuous knapsack problem. Through numerical experiments we show that the developed method finds upper and lower bounds of very high accuracy in a few seconds, and solves larger instances to optimality faster than the previously published algorithms.
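To make the branch-and-bound ingredient concrete, the Python sketch below computes the classical continuous (fractional) knapsack upper bound by a single greedy pass; this is the kind of cheap per-node bound referred to above, not the surrogate-relaxation bound of the paper, and the instance data are made up.

def continuous_knapsack_bound(values, weights, capacity):
    # Dantzig bound: sort items by value/weight and allow the last item
    # to be taken fractionally; an upper bound on the 0-1 optimum.
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    bound, remaining = 0.0, capacity
    for i in order:
        if weights[i] <= remaining:
            bound += values[i]
            remaining -= weights[i]
        else:
            bound += values[i] * remaining / weights[i]   # fractional part
            break
    return bound

# Hypothetical 4-item instance with capacity 50.
print(continuous_knapsack_bound([60, 100, 120, 30], [10, 20, 30, 5], 50))   # -> 165.0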

14.
It is proved that Yablo’s paradox and the Liar paradox are equiparadoxical, in the sense that their paradoxicality is based upon exactly the same circularity condition. For any frame $\mathcal{K}$, the following are equivalent: (1) Yablo’s sequence leads to a paradox in $\mathcal{K}$; (2) the Liar sentence leads to a paradox in $\mathcal{K}$; (3) $\mathcal{K}$ contains odd cycles. This result does not conflict with Yablo’s claim that his sequence is non-self-referential. Rather, it gives Yablo’s paradox a new significance: his construction contributes a method by which we can eliminate the self-reference of a paradox without changing its circularity condition.

15.
The Heckscher–Ohlin (H–O) theorem is one of the classical results in international trade theory. In the real world, however, this tendency has not been observed. Two restrictive assumptions are required for the theorem to hold: one is the identity of utility functions between the two trading countries, and the other relates to production functions. In this paper, simulations are conducted to identify which assumption is more important for the H–O theorem to hold. Production functions (with constant returns to scale) and utility functions are assumed to be of Cobb–Douglas type. In the first simulation, 10,000 pairs of parameters for the production and utility functions are selected randomly, with the production functions of both countries identical and the utility functions allowed to differ; the H–O property is observed for approximately 70% of the solutions. In the second simulation, the same procedure is carried out with identical utility functions in both countries and production functions allowed to differ; the H–O property is then observed for approximately 50% of the solutions.

16.
This work focuses on the most commonly used binarization method, Sauvola’s. It performs relatively well on classical documents; however, three main defects remain: the window parameter of Sauvola’s formula does not adapt automatically to the contents, the method is not robust to low contrast, and it is not invariant with respect to contrast inversion. Thus, on documents such as magazines, the contents may not be retrieved correctly, which is crucial for indexing purposes. In this paper, we describe an efficient multiscale implementation of Sauvola’s algorithm that guarantees good binarization for both small and large objects inside a single document without manually adjusting the window size to the contents, and we explain step by step how to implement it efficiently. The algorithm remains notably fast compared to the original one. For fixed parameters, text recognition rates and binarization quality are equal to or better than those of other methods on text with low and medium x-height, and are significantly improved on text with large x-height. Pixel-based accuracy and OCR evaluations are performed on more than 120 documents. Compared to the award-winning methods of the latest binarization contests, Sauvola’s formula does not give the best results on historical documents; on clean magazines, on the other hand, it outperforms those methods. This implementation improves the robustness of Sauvola’s algorithm by making the results almost insensitive to the window size, whatever the object sizes. These properties make it usable in full document-analysis toolchains.
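For reference, Sauvola's pixelwise threshold is T = m · (1 + k · (s/R − 1)), where m and s are the local mean and standard deviation, k is a user parameter (commonly around 0.2–0.5) and R is the dynamic range of the standard deviation (128 for 8-bit images). The Python sketch below implements this plain single-scale version with SciPy box filters; it is not the multiscale implementation described in the paper, and the parameter values are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(image, window=31, k=0.2, R=128.0):
    # A pixel is foreground (ink) when it is darker than the local threshold
    # T = m * (1 + k * (s / R - 1)) computed over a window x window neighbourhood.
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (img <= threshold).astype(np.uint8)   # 1 = ink, 0 = background

# Hypothetical usage on a grayscale page image loaded as a 2-D uint8 array:
# binary = sauvola_binarize(page, window=31, k=0.34)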

17.
18.
Let $\tau(n)$ be the minimum number of arithmetic operations required to build the integer $n \in \mathbb{N}$ from the constants 1 and 2. A sequence $x_n$ is said to be easy to compute if there exists a polynomial p such that $\tau(x_n) \leq p(\log n)$ for all n. It is natural to conjecture that sequences such as $\lfloor 2^n \ln 2 \rfloor$ or $n!$ are not easy to compute. In this paper we show that a proof of this conjecture for the first sequence would imply a superpolynomial lower bound for the arithmetic circuit size of the permanent polynomial. For the second sequence, a proof would imply a superpolynomial lower bound for the permanent or $\mathsf{P} \neq \mathsf{PSPACE}$.
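To make the definition of τ concrete, here is a small brute-force Python sketch that computes τ(n) for tiny n by iterative deepening over straight-line programs built from the constants 1 and 2; the operation set {+, −, ×} is an assumption consistent with the abstract, and the search blows up quickly, so it is only for illustration.

from itertools import product

def tau(target, max_ops=6):
    # tau(n): least number of +, -, * operations needed to build n from 1 and 2,
    # where every intermediate result may be reused.
    def search(values, ops_left):
        if target in values:
            return True
        if ops_left == 0:
            return False
        for a, b in product(values, repeat=2):
            for c in (a + b, a - b, a * b):
                if c not in values and search(values | {c}, ops_left - 1):
                    return True
        return False

    for k in range(max_ops + 1):
        if search(frozenset({1, 2}), k):
            return k
    return None

print(tau(7))    # -> 3, e.g. 2*2 = 4, 4*2 = 8, 8 - 1 = 7
print(tau(11))   # -> 3, e.g. 1+2 = 3, 3*3 = 9, 9 + 2 = 11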

19.
The neutralization of contrasts in form or meaning that is sometimes observed in language production and comprehension is at odds with the classical view that language is a systematic one-to-one pairing of forms and meanings. This special issue is concerned with patterns of forms and meanings in language. The papers in this special issue arose from a series of workshops that were organized to explore variants of bidirectional Optimality Theory and Game Theory as models of the interplay between the speaker’s and the hearer’s perspective.

20.
This review investigates the landscapes of hybrid quantum–classical optimization algorithms that are prevalent in many rapidly developing quantum technologies, where the objective function is computed by either a natural quantum system or an engineered quantum ansatz, but the optimizer is classical. In any particular case, the nature of the underlying control landscape is fundamentally important for systematic optimization of the objective. In early studies on the optimal control of few-body dynamics, the optimizer could take full control of the relatively low-dimensional quantum systems to be manipulated. Stepping into the noisy intermediate-scale quantum (NISQ) era, the experimentally growing computational power of the ansatz expressed as quantum hardware may bring quantum advantage over classical computers, but the classical optimizer is often limited by the available control resources. Across these different scales, we show that the landscape's geometry undergoes morphological changes, from favorable trap-free landscapes to easily trapping rugged landscapes, and eventually to barren-plateau landscapes on which the optimizer can hardly move. This unified view provides a basis for understanding systems ranging from those that may be readily controlled to those requiring special consideration, including the difficulties and potential advantages of NISQ technologies, as well as possible ways to escape traps or plateaus in particular circumstances.
