Similar Documents
20 similar documents found (search time: 46 ms)
1.
In Schlingemann (J Math Phys 45:4322, 2004) it was proved that for any calculated error syndrome of a quantum graph code there exists an appropriate local correction operation. In this paper we propose an explicit operator that performs the syndrome calculation for these codes. Our method makes use of the inverse quantum Fourier transform.
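As a rough illustration of the main tool mentioned above, the following is a minimal NumPy sketch of the inverse quantum Fourier transform as an explicit matrix; it is a generic construction, not the paper's syndrome operator, and the sign and normalization convention are assumptions.

import numpy as np

def inverse_qft_matrix(n_qubits):
    """2^n x 2^n inverse QFT matrix: entries omega^(-jk) / sqrt(N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

# Sanity check: the matrix is unitary and undoes the forward QFT (its conjugate transpose).
n = 3
iqft = inverse_qft_matrix(n)
assert np.allclose(iqft @ iqft.conj().T, np.eye(2 ** n))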

2.
Ensembles of binary random LDPC block codes constructed using Hamming codes as constituent codes are studied for communicating over the binary symmetric channel. These ensembles are known to contain codes that asymptotically almost meet the Gilbert-Varshamov bound. It is shown that these ensembles contain codes which, when decoded with a low-complexity iterative decoder, can correct a number of errors that grows linearly with the code length, using a number of iterations that grows only logarithmically with the code length. The results are supported by numerical examples for various choices of the code parameters.
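For reference, here is a minimal sketch of single-error syndrome decoding for a (7,4) Hamming constituent code over the binary symmetric channel; the parity-check layout below is one common convention, and the ensemble's iterative decoder is not reproduced here.

import numpy as np

# Parity-check matrix of the (7,4) Hamming code in one common convention:
# column i is the 3-bit binary representation of i+1, so the syndrome value
# directly indexes the flipped position.
H = np.array([[int(b) for b in format(i, "03b")] for i in range(1, 8)]).T

def correct_single_error(received):
    """Correct at most one bit flip in a length-7 word."""
    r = np.array(received) % 2
    syndrome = (H @ r) % 2
    pos = int("".join(map(str, syndrome)), 2)   # 0 means no error detected
    if pos:
        r[pos - 1] ^= 1
    return r

codeword = np.zeros(7, dtype=int)               # the all-zero word is a codeword
noisy = codeword.copy()
noisy[4] ^= 1                                   # one bit flipped by the BSC
assert np.array_equal(correct_single_error(noisy), codeword)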

3.
We show how to convert an arbitrary stabilizer code into a bipartite quantum code. A bipartite quantum code is one that involves two senders and one receiver. The two senders exploit both nonlocal and local quantum resources to encode quantum information with local encoding circuits. They transmit their encoded quantum data to a single receiver who then decodes the transmitted quantum information. The nonlocal resources in a bipartite code are ebits and nonlocal information qubits, and the local resources are ancillas and local information qubits. The technique of bipartite quantum error correction is useful both in the quantum communication scenario described above and in fault-tolerant quantum computation. It has application in fault-tolerant quantum computation because we can prepare nonlocal resources offline and exploit local encoding circuits. In particular, we derive an encoding circuit for a bipartite version of the Steane code that is local and additionally requires only nearest-neighbor interactions. We have simulated this encoding in the CNOT extended rectangle with publicly available fault-tolerant simulation software. The result is an improvement in the “pseudothreshold” with respect to the baseline Steane code, under the assumption that quantum memory errors occur less frequently than quantum gate errors.

4.
We present a method of concatenated quantum error correction in which improved classical processing is used with existing quantum codes and fault-tolerant circuits to more reliably correct errors. Rather than correcting each level of a concatenated code independently, our method uses information about the likelihood of errors having occurred at lower levels to maximize the probability of correctly interpreting error syndromes. Results of simulations of our method applied to the [[4,1,2]] subsystem code indicate that it can correct a number of discrete errors up to half of the distance of the concatenated code, which is optimal.

5.
We apply a Linear Error Correction (LEC) code to a novel encoding scheme to assure two fundamental requirements for transmission channels and storage units: security and dependability. Our design can adapt itself to different applications and their varying characteristics, such as availability, error rate, and vulnerabilities. Based on simple logic operations, our scheme affords fast encryption, scalability (dual or more column erasures), and flexibility (the LEC encoder can be employed as a front end to any conventional compression scheme). Performance results are very promising: experiments on dual erasures outperform conventional compression algorithms, including Arithmetic Coding, Huffman, and LZ77.
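To make the "simple logic operations" concrete, here is a minimal sketch of XOR-based recovery of a single known column erasure; the paper's LEC design covers dual or more erasures with additional parity structure that is not shown here, and the column contents are placeholders.

from functools import reduce

def xor_bytes(blocks):
    """Bitwise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_columns = [b"colA", b"colB", b"colC"]       # hypothetical column contents
parity = xor_bytes(data_columns)                 # single XOR parity column

# Lose one known column and rebuild it from the survivors plus the parity.
lost = 1
survivors = [c for i, c in enumerate(data_columns) if i != lost]
assert xor_bytes(survivors + [parity]) == data_columns[lost]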

6.
Quantum annealing is a promising approach for solving optimization problems, but like all other quantum information processing methods, it requires error correction to ensure scalability. In this work, we experimentally compare two quantum annealing correction (QAC) codes in the setting of antiferromagnetic chains, using two different quantum annealing processors. The lower-temperature processor gives rise to higher success probabilities. The two codes differ in a number of interesting and important ways, but both require four physical qubits per encoded qubit. We find significant performance differences, which we explain in terms of the effective energy boost provided by the respective redundantly encoded logical operators of the two codes. The code with the higher energy boost results in improved performance, at the expense of a lower-degree encoded graph. Therefore, we find that there exists an important trade-off between encoded connectivity and performance for quantum annealing correction codes.

7.
A method is presented for the construction of binary linear burst-error-correcting recurrent codes of type B2, viz. those codes which correct any burst of length l = rb or less, provided it is confined to r successive blocks of length b. The proposed codes are optimal in the sense that they meet the lower bound obtained by Wyner and Ash, and are constructed through a particular bordering of the characteristic matrices of analogous codes of shorter block length. Finally, existing linear recurrent codes are compared for efficiency. (Work carried out at the University of Rome.)

8.
The success probability of an ancilla-based circuit generally decreases exponentially with the number of qubits in the ancilla. Although the probability can be amplified through amplitude amplification, the input dependence of amplitude amplification makes it difficult to sequentially combine two or more ancilla-based circuits. A version of amplitude amplification known as oblivious amplitude amplification runs independently of the input to the system register. This allows us to sequentially combine two or more ancilla-based circuits. However, this type of amplification only works when the considered system is unitary, or non-unitary but somehow close to a unitary. In this paper, we present a general framework to simulate non-unitary processes on ancilla-based quantum circuits in which the success probability is maximized by using oblivious amplitude amplification. In particular, we show how to extend a non-unitary matrix to an almost unitary matrix. We then employ the extended matrix in an ancilla-based circuit design along with oblivious amplitude amplification. By measuring the distance of the produced matrix from the closest unitary matrix, we give a lower bound for the fidelity of the final state obtained from the oblivious amplitude amplification process. Numerical simulations for random matrices of different sizes show that, independent of the system size, the final amplified probabilities are generally around 0.75 and the fidelity of the final state is mostly high, around 0.95. Furthermore, we discuss the complexity analysis and show that, by combining two such ancilla-based circuits, a matrix product can be implemented. This may lead us to efficiently implement matrix functions represented as infinite matrix products on quantum computers.
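For illustration, a minimal NumPy/SciPy sketch of one standard way to embed a suitably scaled non-unitary matrix A into a larger unitary via the block dilation U = [[A, (I - AA†)^{1/2}], [(I - A†A)^{1/2}, -A†]]; this generic construction assumes the spectral norm of A is at most 1 and is not necessarily the paper's specific "almost unitary" extension.

import numpy as np
from scipy.linalg import sqrtm

def dilate_to_unitary(A):
    """Embed a square matrix with spectral norm <= 1 into a unitary of twice the size."""
    n = A.shape[0]
    I = np.eye(n)
    top = np.hstack([A, sqrtm(I - A @ A.conj().T)])
    bottom = np.hstack([sqrtm(I - A.conj().T @ A), -A.conj().T])
    return np.vstack([top, bottom])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A /= 1.1 * np.linalg.norm(A, 2)          # scale so the spectral norm is below 1
U = dilate_to_unitary(A)
assert np.allclose(U @ U.conj().T, np.eye(8), atol=1e-6)
# Acting with U on |0>|psi> and keeping the ancilla-0 outcome applies A to |psi>,
# which is the kind of ancilla-based block the amplification then boosts.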

9.
With the emergence and popularity of identity verification by biometrics, biometric systems that can assure security and privacy have received increasing attention from both the research and industry communities. In the field of secure biometric authentication, one branch combines biometrics and cryptography. Among the solutions in this branch, the fuzzy commitment scheme is a pioneering and effective security primitive. In this paper, we propose a novel fixed-length binary feature generation method for fingerprints. The alignment procedure, considered a difficult task in the encrypted domain, is avoided in the proposed method thanks to the use of minutiae triplets. Using the generated binary feature as input and based on the fuzzy commitment scheme, we construct biometric cryptosystems combining various error correction codes, including a BCH code, a concatenation of a BCH code and a Reed-Solomon code, and an LDPC code. Experiments conducted on three fingerprint databases, one in-house and two in the public domain, demonstrate that the proposed binary feature generation method is effective and promising, and that the biometric cryptosystem constructed from this feature outperforms most existing biometric cryptosystems in terms of ZeroFAR and security strength. For instance, on the whole FVC2002 DB2, a 4.58% ZeroFAR is achieved by the proposed biometric cryptosystem with a security strength of 48 bits.
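As a toy illustration of the fuzzy commitment primitive (not the paper's construction), the sketch below uses a 3x repetition code in place of BCH/RS/LDPC codes; the key length, feature length, and noise positions are illustrative only.

import numpy as np

def repeat_encode(key_bits, r=3):
    """Encode each key bit r times (stand-in for the BCH/RS/LDPC encoder)."""
    return np.repeat(key_bits, r)

def repeat_decode(noisy, r=3):
    """Majority-vote each group of r bits."""
    return (noisy.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

rng = np.random.default_rng(1)
key = rng.integers(0, 2, 16)                     # secret key bits (hypothetical length)
enrol_feature = rng.integers(0, 2, 48)           # binary fingerprint feature at enrolment
commitment = repeat_encode(key) ^ enrol_feature  # only this (plus a hash of the key) is stored

# At verification a slightly different feature of the same finger arrives;
# here we flip a few bits, at most one per repetition group, so decoding succeeds.
query_feature = enrol_feature.copy()
query_feature[[0, 5, 10, 20]] ^= 1

recovered = repeat_decode(commitment ^ query_feature)
assert np.array_equal(recovered, key)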

10.
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states on an IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combination of all these error types.

11.
In this work, we explore the accuracy of quantum error correction as a function of the order of the implemented syndrome measurements. CSS codes require that bit-flip and phase-flip syndromes be measured separately. To comply with fault-tolerance demands and to maximize accuracy, this set of syndrome measurements should be repeated, allowing for flexibility in the order of their implementation. We examine different possible orders of Shor-state and Steane-state syndrome measurements for the [[7,1,3]] quantum error correction code. We find that the best choice of syndrome order, determined by the fidelity of the state after noisy error correction, depends on the error environment. We also compare the fidelity when syndrome measurements are done with Shor states versus Steane states and find that Steane states generally, but not always, lead to final states with higher fidelity. Together, these results allow a quantum computer programmer to choose the optimal syndrome measurement scheme based on the system's error environment.

12.
13.
In this paper, the problem of simulating quantum error correction by means of error-correcting codes is discussed. Examples of error correction using quantum circuits constructed with the help of the QuantumCircuit package, written in the language of the computer algebra system Mathematica, are presented.
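The package named above is Mathematica-based; as an independent illustration of the same kind of simulation, here is a minimal NumPy sketch of the 3-qubit bit-flip code detecting and correcting a single X error (the state amplitudes and error location are arbitrary choices).

import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode |psi> = a|0> + b|1> into a|000> + b|111>.
a, b = 0.6, 0.8
encoded = np.zeros(8)
encoded[0], encoded[7] = a, b

# Introduce a bit flip on the middle qubit.
corrupted = kron_all([I2, X, I2]) @ encoded

# Read the Z1Z2 and Z2Z3 stabilizer syndromes as +/-1 expectation values
# (the corrupted state is an eigenstate, so the expectations are exactly +/-1).
s1 = float(np.sign(corrupted @ kron_all([Z, Z, I2]) @ corrupted))
s2 = float(np.sign(corrupted @ kron_all([I2, Z, Z]) @ corrupted))

# Single-X-error syndrome table: which qubit to flip back.
error_qubit = {(-1.0, 1.0): 0, (-1.0, -1.0): 1, (1.0, -1.0): 2}.get((s1, s2))
recovery = [I2, I2, I2]
if error_qubit is not None:
    recovery[error_qubit] = X
assert np.allclose(kron_all(recovery) @ corrupted, encoded)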

14.
Hung-Yu Chien, Computer Networks, 2013, 57(14): 2705-2717
Secure authentication of low-cost Radio Frequency Identification (RFID) tags with limited computing capacity is a big challenge, owing to their constrained resources and to the privacy concerns raised by their mobility and traceability. Here, we address not only authentication but also privacy (anonymity and untraceability) to protect these mobile devices and their holders. In this paper, we carefully combine the Rabin cryptosystem and error correction codes to design a lightweight authentication scheme with anonymity and untraceability. Compared to its previous counterpart [4], the proposed scheme improves the number of supported tags from O(k) to O(2^k), where k is the dimension of the codes. The scheme is attractive for low-end devices, especially low-cost cryptographic RFID tags. Additionally, we show the security weaknesses of a recently published Rabin-cryptosystem-based RFID authentication scheme.
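For background, a toy sketch of the Rabin cryptosystem with tiny illustrative primes (both congruent to 3 mod 4 so that modular square roots are easy to compute); real deployments use much larger moduli with extra redundancy to disambiguate roots, and this is not the paper's protocol.

# Toy Rabin cryptosystem: encryption is a single modular squaring, which is why it
# suits very constrained tags; decryption needs the secret factors p and q.
p, q = 127, 131              # tiny illustrative primes, both congruent to 3 mod 4
n = p * q

def rabin_encrypt(m):
    return (m * m) % n

def rabin_decrypt(c):
    # Square roots modulo p and q (valid because p, q = 3 mod 4), combined
    # via the Chinese Remainder Theorem; four candidate plaintexts result.
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    yp = pow(p, -1, q)       # p^{-1} mod q  (Python 3.8+)
    yq = pow(q, -1, p)       # q^{-1} mod p
    r = (yp * p * mq + yq * q * mp) % n
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, n - s}

message = 2013
cipher = rabin_encrypt(message)
assert message in rabin_decrypt(cipher)  # the true plaintext is one of the four roots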

15.
Wang Junli, Li Ruihu, Liu Yang, Guo Guanmin, Quantum Information Processing, 2020, 19(2): 1-12
Any quantum communication task requires a common reference frame (i.e., phase, coordinate system). In particular, quantum key distribution requires different bases...

16.
To address the heavy packet loss experienced by real-time video transmitted over datagram-based protocols in dynamic network environments, an adaptive forward error correction (FEC) method based on a BP neural network improved by a genetic algorithm is proposed. The method recovers lost packets through adaptively designed forward error correction coding. First, a frame-level FEC framework is built, consisting mainly of a video encoder/decoder, an RS FEC encoding/decoding module, and an FEC redundancy calculation module, to model a typical video transmission environment. Then, a GA-BP model is designed and applied to the redundancy calculation of RS-FEC, yielding a frame-level unequal protection scheme. Finally, packet-loss data are generated on a network simulated with a GE channel, the offline model is trained, and the approach is verified experimentally. The experimental results show that, compared with StaticRS and DeepRS, the proposed method recovers lost packets at a higher rate under lower redundancy and yields higher-quality video.
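For context, a minimal sketch of the kind of two-state Gilbert-Elliott (GE) packet-loss simulator used to generate such training data; the transition and loss probabilities below are placeholders, not values from the paper.

import random

def gilbert_elliott_losses(num_packets, p_gb=0.05, p_bg=0.3,
                           loss_good=0.01, loss_bad=0.5, seed=0):
    """Simulate packet losses from a two-state Gilbert-Elliott channel.

    p_gb / p_bg: Good->Bad and Bad->Good transition probabilities;
    loss_good / loss_bad: per-packet loss probability in each state.
    All numbers here are illustrative placeholders.
    """
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(num_packets):
        losses.append(rng.random() < (loss_bad if bad else loss_good))
        if rng.random() < (p_bg if bad else p_gb):
            bad = not bad
    return losses

trace = gilbert_elliott_losses(10000)
print("overall loss rate: %.3f" % (sum(trace) / len(trace)))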

17.
The dual of an entanglement-assisted quantum error-correcting (EAQEC) code is the code resulting from exchanging the original code’s information qubits with its ebits. To introduce this notion, we show how entanglement-assisted repetition codes and accumulator codes are dual to each other, much like their classical counterparts, and we give an explicit, general quantum shift-register circuit that encodes both classes of codes. We later show that our constructions are optimal, and this result completes our understanding of these dual classes of codes. We also establish the Gilbert–Varshamov bound and the Plotkin bound for EAQEC codes, and we use these to examine the existence of some EAQEC codes. Finally, we provide upper bounds on the block error probability when transmitting maximal-entanglement EAQEC codes over the depolarizing channel, and we derive variations of the hashing bound for EAQEC codes, which is a lower bound on the maximum rate at which reliable communication over Pauli channels is possible with the use of pre-shared entanglement.

18.
The efficiency of reconciliation in continuous-variable key distribution is the main factor limiting the secret key rate. However, the efficiency depends on the computational complexity of the algorithm. This paper optimizes the two main aspects of the reconciliation process in continuous-variable key distribution: the partition of intervals and the estimation of bits. We use a Gaussian approximation to effectively speed up the convergence of the algorithm. We design the estimation function as the estimato...

19.
It is suggested that Gray codes be used to improve the performance of methods for partial-match and range queries. Specifically, the author illustrates the improved clustering of similar records that Gray codes can achieve with multiattribute hashing. Gray codes are used instead of binary codes to map record signatures to buckets. In Gray codes, successive codewords differ in the value of exactly one bit position; thus, successive buckets hold records with similar record signatures. The proposed method achieves better clustering of similar records, thereby reducing the I/O time. A mathematical model is developed to derive formulas giving the average performance of both methods, and it is shown that the proposed method achieves 0-50% relative savings over binary codes. The author also discusses how Gray codes could be applied to retrieval methods designed for range queries, such as the grid file and the approach based on the so-called z-ordering. Gray codes are also used to design good distance-preserving functions, which map a k-dimensional (k-D) space into a one-dimensional one in such a way that points that are close in the k-D space are likely to be close in the 1-D space.
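A minimal sketch of the binary-to-Gray and Gray-to-binary conversions underlying this signature-to-bucket mapping; the paper's hashing and bucket layout details are not reproduced here.

def binary_to_gray(b: int) -> int:
    """Reflected binary Gray code of a non-negative integer."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping: XOR-fold the value over all right shifts."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Successive Gray codewords differ in exactly one bit, so adjacent buckets
# hold records whose signatures are similar.
codes = [binary_to_gray(i) for i in range(8)]
assert all(bin(codes[i] ^ codes[i + 1]).count("1") == 1 for i in range(7))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(64))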

20.
In this paper, a bias-compensation-based recursive least-squares (LS) estimation algorithm with a forgetting factor is proposed for output error models. First, for the unknown white noise, the so-called weighted average variance is introduced. With this weighted average variance, a bias-compensation term is formulated to achieve bias-eliminated estimates of the system parameters. Then, the weighted average variance is estimated. Finally, the final estimation algorithm is obtained by combining the estimate of the weighted average variance with the recursive LS estimation algorithm with a forgetting factor. The effectiveness of the proposed identification algorithm is verified by a numerical example.
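For context, a minimal sketch of the standard recursive least-squares update with a forgetting factor that such algorithms build on; the bias-compensation term and the weighted-average-variance estimate described above are not included, and all data in the example are synthetic placeholders.

import numpy as np

def rls_with_forgetting(X, y, lam=0.98, delta=100.0):
    """Standard recursive least squares with forgetting factor lam
    (no bias compensation). X is (N, d); y is (N,)."""
    d = X.shape[1]
    theta = np.zeros(d)
    P = delta * np.eye(d)                 # large initial covariance
    for x, yk in zip(X, y):
        Px = P @ x
        gain = Px / (lam + x @ Px)
        theta = theta + gain * (yk - x @ theta)
        P = (P - np.outer(gain, Px)) / lam
    return theta

# Hypothetical example: recover a two-parameter model from noisy observations.
rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7])
X = rng.standard_normal((500, 2))
y = X @ theta_true + 0.05 * rng.standard_normal(500)
print(rls_with_forgetting(X, y))          # close to [1.5, -0.7]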
