Similar Literature (20 results)
1.
Aziz, Ahmed; Osamy, Walid; Alfawaz, Oruba; Khedr, Ahmed M. Wireless Networks, 2022, 28(6): 2375-2391
Wireless Networks - The problem of data acquisition in large-scale distributed Wireless Sensor Networks (WSNs) is a hindrance to the growth of the Internet of Things (IoT). Recently, the...

2.
Unreliable links and packet loss are ubiquitous in WSNs, and the performance of compressive sensing (CS) based data collection is sensitive to packet loss. First, the relationship between the packet loss rate and CS-based reconstruction precision is analyzed, and the sparsest block measurement (SBM) matrix is formulated to keep the data gathering cost at a minimum while ensuring the low-rank property of the measurements. Then, combining matrix completion (MC) with compressive sensing (CS), a CS data gathering algorithm based on the sparsest block measurement matrix (CS-SBM) is proposed. CS-SBM gathers data over a period and recovers the lost data via MC to weaken the impact of packet loss on data gathering; it reconstructs data via CS to reduce the number of measurements and the energy consumption and to prolong the network lifetime. Simulation analysis indicates that the proposed algorithm reconstructs the whole data set with high accuracy under a 50% packet loss rate, resisting unreliable links effectively.
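To make the matrix-completion step above concrete, here is a minimal Python sketch (not the authors' CS-SBM implementation): lost packets are treated as missing entries of a low-rank measurement matrix and filled in by singular value thresholding. The function name, the threshold `tau`, the step size, and the toy rank-1 data are all illustrative assumptions.

```python
# Illustrative sketch: recover missing measurement-matrix entries by
# singular value thresholding (SVT); parameters are assumptions.
import numpy as np

def svt_complete(M, observed_mask, tau=5.0, step=1.2, iters=200):
    """Fill in missing entries of M (zeros where packets were lost)."""
    X = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vt
        residual = observed_mask * (M - X)     # only penalize observed entries
        Y = Y + step * residual
    return X

# Toy usage: a rank-1 "measurement" matrix with roughly 50% of entries lost.
rng = np.random.default_rng(0)
truth = np.outer(rng.standard_normal(20), rng.standard_normal(30))
mask = rng.random(truth.shape) > 0.5
observed = np.where(mask, truth, 0.0)
recovered = svt_complete(observed, mask.astype(float))
print("relative error:", np.linalg.norm(recovered - truth) / np.linalg.norm(truth))
```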

3.
The high number of transmissions from sensor nodes with a limited amount of energy leads to a drastic decrease in the lifetime of wireless sensor networks. For dense sensor networks, the collected data potentially have spatial and temporal correlations. These correlations between the nodes' data make it possible to use compressive sensing theory during the data gathering phase; however, applying this technique introduces some error during the reconstruction phase. In this paper, a method based on weighted spatial-temporal compressive sensing is proposed to improve the accuracy of the reconstructed data. Simulation results confirm that the reconstruction error of the proposed method is approximately 16 times smaller than that of the closest compared method. It should be noted that applying weighted spatial-temporal compressive sensing imposes some extra transmissions on the network. However, considering both lifetime and accuracy as a compound metric, the proposed method yields a 12% improvement over the closest method in the literature.

4.
In big data wireless sensor networks, the volume of data increases sharply at an unprecedented rate, and the dense deployment of sensor nodes leads to high spatial-temporal correlation and redundancy among the sensors' readings. Compressive data aggregation can be an indispensable way to eliminate this redundancy. However, existing compressive data aggregation requires a large number of sensor nodes to take part in each measurement, which may cause a heavy data-transmission load. To solve this problem, this paper proposes a new compressive data aggregation scheme based on compressive sensing. A deterministic binary matrix based on low-density parity-check (LDPC) codes is used as the measurement matrix, and each row of the measurement matrix represents a projection process. Owing to the sparsity of the matrix, only the nodes whose corresponding elements in that row are non-zero take part in each projection, and each projection can form an aggregation tree with minimum energy consumption. After all the measurements are collected, the sink node can recover the original readings precisely. Simulation results show that the algorithm efficiently reduces the number of transmitted packets and the energy consumption of the whole network while reconstructing the original readings accurately.
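As a rough illustration of the projection idea described above, the following Python sketch builds a sparse binary measurement matrix in which each row defines one projection and only the nodes whose column entry is 1 contribute their reading; a random sparse 0/1 matrix stands in for the paper's deterministic LDPC-based construction, and all sizes and names are assumptions.

```python
# Illustrative sketch: sparse binary measurement matrix where each row is one
# projection and only nodes with a 1 in that row participate.
import numpy as np

def sparse_binary_measurement_matrix(m, n, ones_per_row=4, seed=0):
    """Build an m x n 0/1 matrix with a fixed small number of ones per row."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n), dtype=int)
    for row in range(m):
        cols = rng.choice(n, size=ones_per_row, replace=False)
        phi[row, cols] = 1
    return phi

n_nodes, n_measurements = 100, 25
phi = sparse_binary_measurement_matrix(n_measurements, n_nodes)
readings = np.random.default_rng(1).standard_normal(n_nodes)

# Each projection y_i only involves the few participating nodes.
y = phi @ readings
participants = [np.flatnonzero(phi[i]) for i in range(n_measurements)]
print("nodes in projection 0:", participants[0], "value:", y[0])
```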

5.
Zhou, Siwang; Zhong, Qian; Ou, Bo; Liu, Yonghe. Wireless Networks, 2019, 25(2): 675-687

The latest research progress in the theory of compressive sensing (CS) over graphs makes it possible for data ferries to exploit the advantages of CS when gathering data for wireless sensor networks. In this paper, we leverage the non-uniform distribution of the sensing data field to significantly reduce the required number of data ferries while ensuring the quality of the recovered data. Specifically, we propose an intelligent compressive data gathering scheme consisting of an efficient stopping criterion and a novel learning strategy. The proposed stopping criterion is based only on the gathered data, without relying on a priori knowledge of the sparsity of the unknown sensing data. Our learning strategy minimizes the number of data ferries while guaranteeing the data quality by learning the statistical distribution of the gathered data. Simulation results show that the proposed scheme improves reconstruction accuracy and stability compared to existing schemes.


6.
Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and input/output throughput, particularly when real-time processing is desired. In this paper, a proof-of-concept study is conducted on compressive sensing (CS) and unmixing for hyperspectral imaging. Specifically, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera, based on the principle of CS. To decode the compressed data, we propose a numerical procedure that directly computes the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model minimizes the total variation of the abundance fractions subject to a preprocessed fidelity equation of significantly reduced size and other side constraints. An augmented Lagrangian-type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidence obtained in this paper indicates that the proposed scheme has high potential in real-world applications.
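For readers who want to see the shape of such a model, one plausible way to write the reconstruction described above is the following; the paper's exact fidelity term and side constraints may differ, and the symbols used here ($S$ for the abundance fractions, $E$ for the given endmember signatures, $\mathcal{A}$ for the reduced-size compressive operator, $b$ for the compressed data) are this sketch's own notation.

```latex
\[
  \min_{S}\ \operatorname{TV}(S)
  \quad \text{s.t.}\quad
  \mathcal{A}(S E) = b, \qquad
  S \ge 0, \qquad
  S\,\mathbf{1} = \mathbf{1},
\]
```

i.e., total-variation-regularized abundances consistent with the compressed measurements, with nonnegativity and sum-to-one as typical side constraints.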

7.
虞晓韩, 董克明, 李霞, 陈超. 《电信科学》, 2019, 35(12): 67-78
Compressive sensing has great advantages in signal processing, image processing, and data collection and analysis, and has been a research hotspot in recent years. This paper studies how to apply compressive sensing securely and efficiently to data collection in wireless sensor networks. Traditional CS-based data collection methods do not consider the security of data collection, and all nodes in the network participate in gathering every measurement. By combining the El Gamal encryption algorithm with compressive sensing based on sparse random matrices, an El Gamal based sparse compressive data gathering (ESCDG) scheme is proposed. Theoretical analysis and numerical experiments show that ESCDG not only reduces the consumption of network resources but also resists internal and external attacks by adversaries with polynomial computing power.

8.
Wireless Personal Communications - Advances in communication technology have made WSN-based IoT attractive and applicable to various areas. It comprises IoT nodes that work on...

9.
A wireless sensor network (WSN) is a prominent technology that could assist in the fourth industrial revolution. The sensor nodes in a WSN are powered by batteries that cannot practically be recharged or replaced, so energy is the most important resource of a WSN. Many techniques have been devised over the years to conserve this scarce resource, and clustering has turned out to be one of the most efficient. This paper proposes an efficient technique for electing cluster heads in WSNs to increase the network lifespan. The grey wolf optimizer (GWO) is employed for this task, with the general GWO modified to suit cluster head selection in WSNs. The objective function of the proposed formulation considers the average intra-cluster distance, the sink distance, the residual energy, and a CH balancing factor. Simulations are carried out under diverse conditions. Comparison of the proposed GWO-C protocol with some well-known clustering protocols shows that it outperforms them with respect to energy consumption, throughput, and network lifespan, and that it forms energy-efficient and scalable clusters.
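The abstract's objective function can be pictured with a small Python sketch of a cluster-head fitness of this general kind; the weights, the exact way the four terms are combined, and the toy data are assumptions, not the GWO-C formula.

```python
# Hypothetical cluster-head fitness combining intra-cluster distance, sink
# distance, residual energy, and a balancing term (lower is better).
import numpy as np

def ch_fitness(ch_positions, node_positions, node_energy, ch_indices,
               sink, w=(0.3, 0.2, 0.3, 0.2)):
    # Assign each node to its nearest candidate cluster head.
    d = np.linalg.norm(node_positions[:, None, :] - ch_positions[None, :, :], axis=2)
    assignment = d.argmin(axis=1)
    intra = d[np.arange(len(node_positions)), assignment].mean()
    sink_dist = np.linalg.norm(ch_positions - sink, axis=1).mean()
    residual = node_energy[ch_indices].mean()
    sizes = np.bincount(assignment, minlength=len(ch_positions))
    balance = sizes.std() / (sizes.mean() + 1e-9)
    # Penalize long distances and imbalance; reward high residual energy.
    return w[0]*intra + w[1]*sink_dist + w[2]*(1.0/(residual + 1e-9)) + w[3]*balance

# Toy usage with random node positions (purely illustrative).
rng = np.random.default_rng(0)
pos = rng.random((50, 2)) * 100
energy = rng.random(50)
ch_idx = np.array([3, 17, 29, 41])
print(ch_fitness(pos[ch_idx], pos, energy, ch_idx, sink=np.array([50.0, 150.0])))
```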

10.
To address the problem that ubiquitous unreliable links in WSNs degrade the performance of compressive sensing (CS) based data gathering, the relationship between the reconstruction SNR of CS-based data gathering and the bit error ratio (BER) is first simulated quantitatively. Two cases, light payload and heavy payload, are then distinguished based on an analysis of the packet-loss characteristics of wireless links. A random packet-loss model is devised to describe packet loss in the light-payload scenario, and a neighbor-topology spatial-correlation prediction based CS data gathering (CS-NTSC) algorithm is proposed, which exploits the spatial correlation between nodes to reduce the impact of errors. Additionally, a node pseudo-failure model is devised to describe packet loss caused by network congestion, and a sparse schedule-aided CS data gathering (CS-SSDG) algorithm is proposed, which changes the sparsity of the measurement matrix and avoids measurements among nodes affected by unreliable links, thereby weakening the impact of errors and losses on data reconstruction. Simulation analysis indicates that the proposed algorithms improve the accuracy of data reconstruction without extra energy consumption and effectively reduce the impact of unreliable links on CS-based data gathering.

11.
An improved compressive sensing (CS) based data collection scheme is proposed for wireless sensor networks (WSNs) used in large-scale environmental monitoring. The scheme improves on the joint sparsity model (JSM) for data analysis and adopts cluster-based routing to collect and transmit data: within each cluster, a CS-based data collection method is used, while the cluster heads use shortest-path routing to reach the sink node. Simulation results show that the scheme not only narrows the distribution range of data processing and lowers the recovery error, but also greatly reduces the number of data transmissions and keeps the energy consumption of the whole network balanced.

12.
Data collection is a fundamental operation in wireless sensor networks (WSNs). In prior work, a quick convergecast solution has been proposed for data collection in a ZigBee beacon-enabled tree-based WSN; however, it does not consider the network repair issue. When a ZigBee router loses its link to its parent, all of its descendants have to rejoin the network, and the rejoining procedure is time-consuming and may incur high communication overhead. The proposed network repair scheme consists of a regular repair scheme and an instant repair scheme. Periodically, the network coordinator can issue a regular repair to refresh the network and keep it in good shape. During normal operation, if a router loses its parent, it attempts an instant repair to reconnect to a new parent. The design thus improves over ZigBee in that nodes can continue their operations even during instant repair.

13.
14.
In underwater acoustic data transmission systems, the underwater channel is resource-constrained and the amount of data that can be transmitted is very limited, so transmitting large volumes of data poses a great challenge to the data-processing capability of the sending node. As compressive sensing (CS) theory has matured and found wide application, this paper applies it to efficient underwater acoustic data transmission and proposes a practical transmission scheme, comparing it with a traditional system on key parameters such as network delay, channel utilization, and bit error rate. Simulation results show that compressive sensing can deliver the data to the receiver while effectively compressing the amount of data transmitted, and greatly improves transmission performance.

15.
Compressive sensing image fusion
徐静. 《现代电子技术》, 2012, 35(18): 119-121
Most current image fusion methods are based on the wavelet transform: different fusion rules are applied to the low-frequency and high-frequency coefficients after the transform to obtain the image needed for subsequent processing. These methods require the original images to be available, which places high demands on the hardware. With compressive sensing (CS) based image fusion, only the projection values of the original images under some transform need to be known: a fusion rule is applied to the known projection values to obtain fused projection values, and a reconstruction algorithm then reconstructs the fused image, greatly reducing the hardware requirements. Experimental results comparing the CS fusion method with wavelet-based image fusion are presented. Using entropy as the measure of fusion quality and comparing the fused images produced by the two methods, the results show that the CS fusion method outperforms wavelet-based image fusion, greatly reducing hardware cost without degrading the fusion quality or visual effect.
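A tiny Python sketch of the measurement-domain fusion idea follows (an illustrative assumption, not the paper's code): both source images are measured with the same matrix, the two measurement vectors are fused with a simple max-absolute-value rule, and the fused measurements would then be passed to any CS reconstruction algorithm.

```python
# Illustrative sketch: fuse two images in the compressive-measurement domain.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                       # signal length and number of measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)

img_a = rng.standard_normal(n)       # stand-ins for two source image blocks
img_b = rng.standard_normal(n)

y_a, y_b = phi @ img_a, phi @ img_b
y_fused = np.where(np.abs(y_a) >= np.abs(y_b), y_a, y_b)   # max-abs fusion rule
# y_fused would now be reconstructed (e.g., with OMP or TV minimization).
print(y_fused[:5])
```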

16.
Starting from the optimization and adjustment of call detail record (CDR) collection at the gateway office, this paper gives a preliminary discussion of how to improve collection efficiency, save system storage space, and rapidly increase the processing capability of the settlement system. Through a series of optimizations and adjustments, not only is a large amount of physical storage space saved for the gateway office and the collection and settlement systems, but the processing capability of the settlement system is also significantly improved.

17.
Compressive sensing (CS) is a new theory for sparse or compressible signals in which the signal data are appropriately compressed at the same time as they are sampled; with this theory, a signal can be reconstructed accurately from only a small number of observations. This paper outlines the CS theoretical framework and its key technical problems, and introduces sparse signal representation, measurement matrices, and reconstruction algorithms. Finally, CS-based signal reconstruction is implemented in simulation and the performance of the orthogonal matching pursuit (OMP) reconstruction algorithm is analyzed.
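Since the abstract highlights OMP, a compact Python sketch of orthogonal matching pursuit is given below; the fixed-sparsity stopping rule and all parameter values are assumptions for illustration rather than the paper's experimental setup.

```python
# Compact orthogonal matching pursuit (OMP): greedily pick the column most
# correlated with the residual, then re-fit by least squares on the support.
import numpy as np

def omp(phi, y, k):
    """Recover a k-sparse x from y = phi @ x."""
    m, n = phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        correlations = np.abs(phi.T @ residual)
        support.append(int(np.argmax(correlations)))     # best-matching column
        phi_s = phi[:, support]
        coef, *_ = np.linalg.lstsq(phi_s, y, rcond=None)  # least squares on support
        residual = y - phi_s @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Toy test: a 5-sparse signal of length 256 recovered from 64 random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(phi, phi @ x, k)
print("recovery error:", np.linalg.norm(x_hat - x))
```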

18.
蒋伟, 杨俊杰. 《电视技术》, 2016, 40(11): 12-17
For image coding systems based on compressive sensing, the relationship between the coding parameters, the bit rate, and the distortion is analyzed, and on this basis a rate-distortion model for CS-based image coding systems is proposed. A rate-distortion optimized CS image coding algorithm is then designed from the proposed model: for a given bit rate, the coding parameters are optimized so that the encoder distortion is minimized. The algorithm was simulated and tested on a Matlab coding platform, and the results demonstrate that the proposed rate-distortion model fits the actual rate-distortion curve well and that the model-based rate-distortion optimization algorithm effectively improves the performance of the CS image coding system.

19.
A new approach to compressing non-stationary signals is proposed in this paper. The sparse basis of the non-stationary signal is constructed first, and compressive sensing is then used to drastically reduce the number of samples required. The reconstructed signal approximates the original signal well in the time domain, the frequency domain, and the time-frequency domain. Computer simulation on a linear frequency modulated (LFM) signal shows the validity of this method.

20.
To reduce the effect of the noise folding (NF) phenomenon on the performance of sparse signal reconstruction, a new denoising recovery algorithm based on selective measurement is proposed. First, the NF phenomenon in compressive sensing (CS) is explained theoretically. Second, a new statistic based on the compressive measurement data is proposed, and its probability density function (PDF) is derived and analyzed. A noise filter matrix is then constructed from the PDF to guide the optimization of the measurement matrix; the optimized measurement matrix can selectively sense the sparse signal and suppress the noise, improving the SNR of the measurement data and hence the sparse reconstruction performance. Finally, it is pointed out that increasing the number of measurements can further enhance the denoising reconstruction performance. Simulation results show that the proposed denoising reconstruction algorithm improves the reconstruction of noisy signals, especially at low SNR.
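As background for the noise-folding effect the abstract refers to, one standard way to state it (general CS background, not a formula quoted from this paper) is:

```latex
% Assume \Phi has i.i.d. N(0, 1/m) entries, so its columns have roughly unit norm.
\[
  y \;=\; \Phi\,(x + w) \;=\; \Phi x + \Phi w,
  \qquad \Phi \in \mathbb{R}^{m \times n},\ m \ll n .
\]
```

For white signal-domain noise $w$ with per-entry variance $\sigma^2$, each entry of the folded noise $\Phi w$ then has variance of roughly $(n/m)\,\sigma^2$, an amplification by $n/m$ compared with noise added after measurement, which is what motivates suppressing the noise at the measurement stage.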
