Similar Documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
This study presents a partial-parallel decoder architecture for π-rotation low-density parity-check (LDPC) codes, which have a regular rotation structure and a linear-time encoding architecture. An improved construction method, which deletes the parity-check bit corresponding to the actually redundant weight-1 column, is proposed, and an effective encoding algorithm, which utilises only the index of one permutation sub-matrix, is then presented. Based on the group-structured and permutation characteristics, two-dimensional arrays are used to store the check/variable node information during iterations, and a cycle reuse mapping architecture is proposed for message passing among memories, bit functional units (BFUs) and check functional units (CFUs). A partial-parallel decoder with this mapping architecture is reconfigurable by changing only four mapping patterns, and needs no address generators, which exist in some architecture-aware (AA) LDPC decoders such as quasi-cyclic LDPC (QC-LDPC) decoders. Simulation results show that the proposed methods are feasible and effective.

2.
The authors deal with the sum-product algorithm (SPA) based on the hyperbolic tangent (tanh) rule when it is applied to decoding low-density parity-check (LDPC) codes. Motivated by the finding that, because of the large number of multiplications required by the algorithm, an overflow in the decoder may occur, two novel modifications of the tanh function (and its inverse) are proposed. By means of computer simulations, both methods are evaluated using random-based LDPC codes with binary phase shift keying (BPSK) signals transmitted over the additive white Gaussian noise (AWGN) channel. It is shown that the proposed modifications improve the bit error rate (BER) performance by up to 1 dB with respect to the conventional SPA. These results also show that the error floor is removed at BERs below 10^-6. Furthermore, two novel approximations are presented to reduce the computational complexity of the tanh function (and its inverse), based on either a piecewise linear function or a quantisation table. It is shown that the proposed approximations can slightly improve the BER performance (by up to 0.13 dB) in the former case, whereas a small BER performance degradation is observed (<0.25 dB) in the latter case. In both cases, however, the decoding complexity is reduced significantly.
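As a rough illustration of the check-node computation this abstract refers to, the sketch below implements the standard tanh-rule update with simple magnitude clipping as an overflow guard. The clipping threshold and the helper name are illustrative assumptions, not the authors' specific modifications.

```python
import numpy as np

def check_node_update_tanh(llrs_in, clip=19.0):
    """Tanh-rule (SPA) check-node update for one check node.

    llrs_in : incoming variable-to-check LLR messages (one per edge).
    clip    : illustrative magnitude limit applied around the tanh/atanh
              pair to guard against numerical overflow.
    Returns the outgoing check-to-variable messages.
    """
    llrs_in = np.clip(np.asarray(llrs_in, dtype=float), -clip, clip)
    t = np.tanh(llrs_in / 2.0)
    out = np.empty_like(llrs_in)
    for i in range(len(llrs_in)):
        # extrinsic principle: product over all edges except edge i
        prod = np.prod(np.delete(t, i))
        # keep the atanh argument away from +/-1 to avoid infinities
        prod = np.clip(prod, -0.999999, 0.999999)
        out[i] = 2.0 * np.arctanh(prod)
    return np.clip(out, -clip, clip)
```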

3.
Wireless Sensor Networks (WSNs) have hardware and software limitations and are deployed in hostile environments, so energy consumption in WSNs has become a very important axis of research. To obtain good performance in terms of network lifetime, several routing protocols have been proposed in the literature. Hierarchical routing is considered the most favorable approach in terms of energy efficiency. It is based on the concept of a parent-child hierarchy, where child nodes forward their messages to their parent, and the parent node then forwards them, directly or via other parent nodes, to the base station (sink). In this paper, we present a new Energy-Efficient clustering protocol for WSNs using an Objective Function and Random Search with Jumps (EEOFRSJ) in order to reduce sensor energy consumption. First, the objective function is used to find an optimal cluster formation, taking into account the ratio of the mean Euclidean distance of the nodes to their associated cluster heads (CHs) and their residual energy. Then, we find the best path to transmit data from the CH nodes to the base station (BS) using a random search with jumps. We simulated our proposed approach and compared it with the Energy-Efficient Fuzzy C-Means clustering (EEFCM) protocol for WSNs using Matlab Simulink. Simulation results show that our proposed protocol excels in terms of energy consumption, resulting in an extended network lifetime.
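The abstract combines the mean node-to-CH distance with the CHs' residual energy in a single objective, but the exact formula is not given; the function below is only a hypothetical sketch of such a ratio-based objective for scoring one candidate cluster formation.

```python
import numpy as np

def clustering_objective(node_pos, ch_pos, assignment, residual_energy):
    """Hypothetical ratio-based objective for one cluster formation.

    node_pos        : (N, 2) node coordinates
    ch_pos          : (K, 2) cluster-head coordinates
    assignment      : length-N array of CH indices, one per node
    residual_energy : length-K residual energy of each cluster head

    Lower is better: short mean node-to-CH distance relative to the
    residual energy of the chosen cluster heads (assumed combination).
    """
    dists = np.linalg.norm(node_pos - ch_pos[assignment], axis=1)
    return dists.mean() / (residual_energy.mean() + 1e-9)
```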

4.
Improved parallel weighted bit-flipping decoding algorithm for LDPC codes
《Communications, IET》2009,3(1):91-99
Aiming at a low-complexity decoder with fast decoding convergence for short and medium-length low-density parity-check (LDPC) codes, an improved parallel weighted bit-flipping (IPWBF) algorithm, which can be applied flexibly to two classes of codes, is presented here. For LDPC codes with low column weight in their parity-check matrix, both the bootstrapping and loop-detection procedures described in the existing literature are included in IPWBF. Furthermore, a novel delay-handling procedure is introduced to prevent codeword bits of high reliability from being flipped too hastily. For finite-geometry LDPC codes with large column weight, only the delay-handling procedure is included in IPWBF to show its effectiveness. Extensive simulation results demonstrate that the proposed algorithm achieves a good tradeoff between performance and complexity.
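To make the flavour of weighted bit-flipping with a delay-handling step concrete, here is a minimal sketch of one parallel iteration. The flipping metric, the threshold, and the consecutive-hit counter are illustrative assumptions, not the IPWBF procedure itself.

```python
import numpy as np

def pwbf_iteration(hard_bits, channel_mag, H, delay, hold=2):
    """One illustrative parallel weighted bit-flipping iteration.

    hard_bits   : current hard-decision vector (0/1)
    channel_mag : per-bit channel reliability (e.g. |received sample|)
    H           : parity-check matrix as a 0/1 numpy array
    delay       : per-bit counter; a bit is flipped only after its metric
                  flags it in `hold` consecutive iterations (an assumed
                  stand-in for the delay-handling idea)
    """
    syndrome = H.dot(hard_bits) % 2                     # failed checks
    # weighted vote of failed vs. satisfied checks, offset by reliability
    flip_metric = H.T.dot(2 * syndrome - 1) - channel_mag
    candidates = flip_metric > 0
    delay = np.where(candidates, delay + 1, 0)
    flip = delay >= hold
    hard_bits = np.where(flip, 1 - hard_bits, hard_bits)
    return hard_bits, delay
```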

5.
We report a silicon-area-efficient method for designing a quasi-cyclic (QC) low-density parity-check (LDPC) code decoder. Our design method is geared to magnetic recording, which demands a high code rate and very high decoding throughput under stringent silicon cost constraints. The key to the design is to transform the conventional formulation of the min-sum decoding algorithm in such a way that we can readily develop a hardware architecture with several desirable features: 1) the silicon area saving potential inherent in the min-sum algorithm for high-rate codes can be fully exploited; 2) the decoder circuit critical path may be greatly reduced; and 3) check node processing and variable node processing can operate concurrently. For the purpose of demonstration, we designed application-specific integrated circuit decoders for four rate-8/9 regular-(4, 36) QC-LDPC codes that contain 512-byte, 1024-byte, 2048-byte, and 4096-byte user data per codeword, respectively. Synthesis results show that our design method can meet the beyond-2-Gb/s throughput requirement of future magnetic recording at minimal silicon area cost.
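For reference, the min-sum check-node rule that such architectures build on is sketched below; it needs only the overall sign product plus the two smallest incoming magnitudes, which is where the silicon savings come from. This is the standard rule, not the authors' transformed formulation.

```python
import numpy as np

def min_sum_check_update(msgs_in):
    """Standard min-sum check-node update (software reference).

    For each edge, the outgoing magnitude is the minimum of the other
    incoming magnitudes, so hardware only has to track the two smallest
    magnitudes, the index of the smallest, and the overall sign product.
    """
    msgs_in = np.asarray(msgs_in, dtype=float)
    mags = np.abs(msgs_in)
    signs = np.where(msgs_in < 0, -1.0, 1.0)
    sign_prod = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(msgs_in)
    for i in range(len(msgs_in)):
        mag = min2 if i == order[0] else min1
        out[i] = sign_prod * signs[i] * mag   # extrinsic sign and magnitude
    return out
```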

6.
By implementing a field-programmable gate array (FPGA)-based simulator, we investigate the performance of randomly constructed high-rate quasi-cyclic (QC) low-density parity-check (LDPC) codes for the magnetic recording channel at very low block (sector) error rates. On the basis of extensive simulations, we conjecture guidelines for designing randomly constructed high-rate regular QC-LDPC codes with a low error floor for the magnetic recording channel. Experimental results show that our high-rate regular QC-LDPC codes do not suffer from an error floor, at least down to block error rates of 10^-9, and can realize significant coding gains over the Reed-Solomon codes used in current practice. Furthermore, we develop a QC-LDPC decoder hardware architecture that is well suited to achieving high decoding throughput. Finally, to evaluate the implementation feasibility of LDPC codes for the magnetic recording channel, using 0.13 μm standard cell and memory libraries, we designed a read-channel signal processing datapath consisting of a parallel max-log-MAP detector and a QC-LDPC decoder, which can achieve a throughput of up to 1.8 Gb/s.

7.
Powerful rate-compatible codes are essential for achieving high throughput in hybrid automatic repeat request (ARQ) systems for networks utilising packet data transmission. The paper focuses on the construction of efficient rate-compatible low-density parity-check (RC-LDPC) codes over a wide range of rates. Two LDPC code families are considered: regular LDPC codes, which are known for good performance and a low error floor, and semi-random LDPC codes, which offer performance similar to regular LDPC codes with the additional property of linear-time encoding. An algorithm for the design of punctured regular RC-LDPC codes that have a low error floor is presented. Furthermore, systematic algorithms for the construction of semi-random RC-LDPC codes are proposed based on puncturing and extending. The performance of a type-II hybrid ARQ system employing the proposed RC-LDPC codes is investigated. Compared with existing hybrid ARQ systems based on regular LDPC codes, the proposed ARQ system based on semi-random LDPC codes offers the advantages of linear-time encoding and higher throughput.

8.
Esmaeili, M., Gholami, M. 《Communications, IET》2008, 2(10): 1251-1262
A class of maximum-girth, geometrically structured, regular (n, 2, k ⩾ 5) (column-weight 2 and row-weight k) quasi-cyclic low-density parity-check (LDPC) codes is presented. The method is based on cylinder graphs and the slope concept. It is shown that the maximum girth achieved by these codes is 12. A low-complexity algorithm producing all such maximum-girth LDPC codes is given. The shortest constructed code has a length of 105. The minimum length n of a regular (2, k) LDPC code with girth 12, as determined by the Gallager bound, is achieved by the constructed codes. From a performance perspective, these codes outperform the column-weight-2 LDPC codes constructed by previously reported methods. These codes can be encoded using an erasure decoding process.

9.
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to substantially reduce the bit error rate. Prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulations show that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.

10.
We present a novel algorithm for imposing the maximum-transition-run (MTR) code constraint in the decoding of low-density parity-check (LDPC) codes over a partial-response channel. This algorithm provides a gain of about 0.2 dB. We also develop log and max-log versions of the MTR enforcer, similar to the well-known "log-MAP" (maximum a posteriori) and "max-log-MAP" variants of the LDPC decoder, that have performance equivalent to that of the original version.

11.
Wireless sensor networks (WSNs) consist of small nodes that are capable of sensing, computing, and communication. One of the greatest challenges in WSNs is the limited energy resources of the nodes. This limitation affects all of the protocols and algorithms used in these networks, and routing protocols should be designed with it in mind. Many papers have been published on reducing energy consumption in such networks. One technique that has been used in this context is cross-layering: to reduce energy consumption, the layers are not independent but are related to each other and exchange information. In this paper, a cross-layer design is presented to reduce energy consumption in WSNs. In this design, communication between the network layer and the medium access layer is established to help control channel-access attempts and reduce the number of failed attempts. To evaluate our proposed design, we used the NS2 software for simulation and compared our method with a cross-layer design based on the Ad-hoc On-demand Distance Vector routing algorithm. Simulation results show that our proposed idea reduces energy consumption, improves the packet delivery ratio, and decreases the end-to-end delay in WSNs.

12.
Free-space optical (FSO) communication is of supreme importance for designing next-generation networks. Over the past decades, the radio frequency (RF) spectrum has been the main focus of wireless technology, but it is becoming denser and more heavily used, making additional channels difficult to accommodate. Optical communication, exploited for messages or signals in historical times, is now becoming popular and useful in combination with error-correcting codes (ECC) to mitigate the effects of fading caused by atmospheric turbulence. We consider a free-space communication system (FSCS) based on hybrid FSO and RF technology; such a system is a capable solution for overcoming the downsides of current schemes and enhancing overall link reliability and availability. The proposed FSCS with regular low-density parity-check (LDPC) coding is described and evaluated in terms of signal-to-noise ratio (SNR) in this paper. The extrinsic information transfer (EXIT) methodology is a powerful technique for investigating the sum-product decoding algorithm of LDPC codes, and the EXIT chart is optimized by applying curve fitting. In this work, we also analyze the behavior of the EXIT chart of regular/irregular LDPC codes for the FSCS and investigate the error performance of the LDPC code for the proposed FSCS.

13.
Optimal code rates for the Lorentzian channel: Shannon codes and LDPC codes
We take an information-theoretic approach to obtaining optimal code rates for error-control codes on a magnetic storage channel approximated by the Lorentzian channel. Code rate optimality is in the sense of maximizing the information-theoretic user density along a track. To arrive at such results, we compute the achievable information rates for the Lorentzian channel as a function of signal-to-noise ratio and channel density, and then use these information rate calculations to obtain optimal code rates and maximal linear user densities. We call such (hypothetical) optimal codes "Shannon codes." We then examine optimal code rates on a Lorentzian channel assuming low-density parity-check (LDPC) codes instead of Shannon codes. We employ as our tool extrinsic information transfer (EXIT) charts, which provide a simple way of determining the capacity limit (or decoding threshold) for an LDPC code. We demonstrate that the optimal rates for LDPC codes coincide with those of Shannon codes and, more importantly, that LDPC codes are essentially capacity-achieving codes on the Lorentzian channel. Finally, we use the above results to estimate the optimal bit-aspect ratio, where optimality is in the sense of maximizing areal density.

14.
A new low-complexity generating method is given for the construction of long low-density parity-check (LDPC) codes. The method is based on performing a combinatorial operation between two given configurations. Combinatorial structures such as lattices and affine and projective planes are considered as the constituent configurations. Using this method, we present several classes of well-structured, four-cycle-free LDPC codes of high rate, most of which are quasi-cyclic. Among the main advantages of this approach are its low complexity and the fact that, from a performance perspective, the constructed codes compete with pseudorandom LDPC codes.

15.
Decoder design and VLSI implementation for reversible variable length codes
Variable length codes (VLCs) are widely used in multimedia compression because of their efficient data compression capability, but their inherent structure makes them very weak at recovering from channel errors. With the growing demand for video transmission over unreliable channels, such as wireless channels and networks, error control and error resilience techniques for video communication are becoming increasingly important. Reversible variable length codes (RVLCs) make full use of the available data when transmission errors occur, and their error resilience is stronger than that of VLCs. Many video standards, such as ITU H.263 and ISO MPEG-4, have adopted RVLCs. This paper describes in detail the decoding algorithm and architecture design of an RVLC decoder, and presents a VLSI implementation of the decoder based on MPEG-4 ASP@L5. The results show that this implementation is fully suitable for real-time MPEG-4 codec systems.

16.
Wireless Sensor Networks (WSNs) are large-scale, high-density networks that typically have overlapping coverage areas. In addition, a random deployment of sensor nodes cannot fully guarantee coverage of the sensing area, which leads to coverage holes in WSNs. Thus, coverage control plays an important role in WSNs. To alleviate unnecessary energy wastage and improve network performance, we consider both energy efficiency and coverage rate for WSNs. In this paper, we present a novel coverage control algorithm based on Particle Swarm Optimization (PSO). Firstly, the sensor nodes are randomly deployed in a target area and remain static after deployment. Then, the whole network is partitioned into grids, and we calculate each grid's coverage rate and energy consumption. Finally, each sensor node's sensing radius is adjusted according to the coverage rate and energy consumption of its grid. Simulation results show that our algorithm can effectively improve the coverage rate and reduce energy consumption.
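As a small illustration of the grid-based coverage computation described above, the sketch below estimates the fraction of grid points covered by at least one sensor under a binary disc sensing model; the area size, grid resolution and sensing model are assumptions for the example only.

```python
import numpy as np

def grid_coverage_rate(sensors, radii, area=100.0, cells=50):
    """Fraction of grid points covered by at least one sensor.

    sensors : (N, 2) sensor coordinates in a square area of side `area`
    radii   : length-N sensing radii (binary disc model, an assumption)
    """
    xs = np.linspace(0.0, area, cells)
    ys = np.linspace(0.0, area, cells)
    gx, gy = np.meshgrid(xs, ys)
    points = np.stack([gx.ravel(), gy.ravel()], axis=1)      # grid points
    # distance from every grid point to every sensor
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=2)
    covered = (d <= radii[None, :]).any(axis=1)
    return covered.mean()
```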

17.
Wireless sensor networks (WSNs) and the Internet of Things (IoT) have gained popularity in recent years as an underlying infrastructure for connected devices and sensors in smart cities. The data generated by these sensors are used by smart cities to strengthen their infrastructure, utilities, and public services. WSNs are suitable for long periods of data acquisition in smart cities. To make the networks of smart cities more reliable for sensitive information, a blockchain mechanism has been proposed. A key issue and challenge for WSNs in smart cities is efficiently scheduling resources, which leads to an extended network lifetime of the sensors. In this paper, linear network coding (LNC) for WSNs with blockchain-enabled IoT devices is proposed. The energy consumption of each node is reduced by applying LNC. The efficiency and reliability of the proposed model are evaluated and compared with those of existing models. Simulation results demonstrate that the proposed model increases efficiency in terms of the number of live nodes, packet delivery ratio, throughput, and optimized residual energy compared with other current techniques.
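To show the basic idea behind linear network coding that the abstract builds on, here is a toy GF(2) example in which a relay XORs two packets into one coded transmission; it is independent of the blockchain layer discussed in the paper, and the packet contents are made up for the example.

```python
import numpy as np

def xor_combine(pkt_a, pkt_b):
    """GF(2) linear network coding: one coded packet instead of two."""
    return np.bitwise_xor(pkt_a, pkt_b)

# Toy relay scenario: two source packets, one coded broadcast.
a = np.frombuffer(b"sensor-A", dtype=np.uint8)
b = np.frombuffer(b"sensor-B", dtype=np.uint8)
coded = xor_combine(a, b)

# A sink that already holds packet a recovers b from the coded packet.
recovered = xor_combine(coded, a)
assert recovered.tobytes() == b"sensor-B"
```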

18.
Zhao, L., Guo, L., Zhang, J., Zhang, H. 《Communications, IET》2009, 3(8): 1274-1283
In traditional medium access control (MAC) protocols for wireless sensor networks (WSNs), energy consumption is traded for throughput and delay. However, in future WSNs, throughput and delay performance should not be sacrificed for energy conservation. Here, an incompletely cooperative game-theoretic, heuristic-based constraint optimisation framework is first introduced to achieve the goals of throughput, delay and energy conservation simultaneously. Then a simplified game-theoretic MAC (G-MAC) protocol is presented, which can be easily implemented in WSNs. Simulation results show that, compared with two typical MAC protocols for WSNs, sensor MAC and timeout MAC, G-MAC can increase system throughput and decrease delay and packet loss rate, while maintaining relatively low energy consumption.

19.
In the past few decades, energy efficiency (EE) has been a significant challenge in Wireless Sensor Networks (WSNs). WSNs require reduced transmission delay and higher throughput with high-quality services, and much attention is therefore paid to energy consumption in order to improve the network lifetime. Clustering-based routing algorithms are considered an effective way to collect and transmit data. The Cluster Head (CH) plays an essential role in network connectivity and performs data transmission and data aggregation, so its energy consumption is higher than that of non-CH nodes. Conventional clustering approaches attempt to form clusters of the same size; however, owing to the random distribution of nodes, clusters with equal numbers of nodes do not obviously reduce energy consumption. To resolve this issue, this paper provides a novel Balanced-Imbalanced Cluster Algorithm (B-IBCA) with a Stabilized Boltzmann Approach (SBA) that attempts to balance the energy dissipation across uneven clusters in WSNs. B-IBCA utilizes stabilizing logic to maintain consistent energy consumption among sensor nodes. To handle the changing topological characteristics of the sensor nodes, the stability-based Boltzmann estimation algorithm allocates a proper radius to each sensor node. Simulations show that the proposed B-IBCA outperforms other approaches in terms of energy efficiency, lifetime, network stability, average residual energy and so on.

20.
In this paper, we propose to leverage the simple and explicit parity checks inherent in low-density parity-check (LDPC) codes to detect dominant error events without code rate penalty. This is enabled by enforcing a very weak constraint on the LDPC code parity-check matrix structure. Such a constraint can be readily satisfied by most structured LDPC codes reported in the open literature, such as quasi-cyclic (QC) LDPC codes. Moreover, this zero-redundancy dominant error event detection can be extended to handle the bit errors that occur when deliberate bit-flipping is used to enforce -constraints. We have demonstrated the effectiveness of the proposed method in computer simulations.
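The detection mechanism rests on checking whether the decoder output still satisfies the code's parity checks; the sketch below shows that syndrome test with a tiny parity-check matrix that is purely illustrative and not taken from the paper.

```python
import numpy as np

def error_detected(H, hard_bits):
    """True if the hard decisions violate any parity check (nonzero syndrome)."""
    return bool((H.dot(hard_bits) % 2).any())

# Illustrative (not from the paper): a tiny parity-check matrix.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])
codeword = np.zeros(6, dtype=int)      # the all-zero word is always valid
corrupted = codeword.copy()
corrupted[2] ^= 1                      # inject a single-bit error event
print(error_detected(H, codeword))     # False: all checks satisfied
print(error_detected(H, corrupted))    # True: error event detected
```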
