Similar Articles
20 similar articles found.
1.
To address the problems wireless sensor networks face in forest-fire monitoring applications, a hierarchical clustering data fusion algorithm is proposed. Sensor nodes within a cluster apply weighted averaging to perform data-level fusion on the raw readings, eliminating redundant components and reducing the traffic from cluster members to the cluster head; the cluster head uses D-S evidence theory to build a frame of discernment and performs decision-level fusion on the feedback signals of its cluster members, improving both the accuracy of fire-event recognition and the robustness of the network. Experimental results show that the algorithm effectively eliminates redundant data in the wireless sensor network and continues to operate correctly as long as the number of failed nodes does not exceed 40% of the total.
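
As an illustration of the two fusion stages this abstract describes, here is a minimal Python sketch: weighted averaging for data-level fusion and Dempster's rule over a two-hypothesis frame for decision-level fusion. The weights, mass values, and three-element frame are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the two-stage fusion scheme described above (illustrative values).

def weighted_average(readings, weights):
    """Data-level fusion inside a cluster: weighted mean of raw readings."""
    return sum(r * w for r, w in zip(readings, weights)) / sum(weights)

def dempster_combine(m1, m2):
    """Decision-level fusion at the cluster head: Dempster's rule for two
    mass functions over the frame {'fire', 'no_fire', 'uncertain'}."""
    # Conflict: mass assigned to contradictory singleton hypotheses.
    k = m1['fire'] * m2['no_fire'] + m1['no_fire'] * m2['fire']
    combined = {}
    for h in ('fire', 'no_fire'):
        combined[h] = (m1[h] * m2[h] + m1[h] * m2['uncertain']
                       + m1['uncertain'] * m2[h]) / (1 - k)
    combined['uncertain'] = m1['uncertain'] * m2['uncertain'] / (1 - k)
    return combined

# Two cluster members report evidence masses for the fire event.
m_a = {'fire': 0.6, 'no_fire': 0.1, 'uncertain': 0.3}
m_b = {'fire': 0.7, 'no_fire': 0.2, 'uncertain': 0.1}
print(dempster_combine(m_a, m_b))  # combined belief sharpens toward 'fire'
```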

2.
Design and implementation of an energy-efficient routing protocol for wireless sensor networks   Total citations: 1 (self: 0, others: 1)
Among routing protocols for wireless sensor networks, cluster-based protocols offer advantages in topology management, energy utilization, and data fusion. To address the high energy consumption and short network lifetime of existing protocols, this paper proposes EA-HEED, an energy-aware wireless sensor network protocol based on a distributed clustering algorithm. The protocol improves the distributed cluster-head election algorithm, allocates TDMA slots, and builds a routing tree over the cluster heads, raising the efficiency of cluster-head election; it designs an intra-cluster active-node scheduling algorithm that puts redundant nodes to sleep to reduce energy consumption; and it optimizes routing by organizing the cluster-head routing tree with regard to both node energy and node-to-base-station distance, minimizing network overhead and balancing the energy load, which effectively prolongs network lifetime. Simulation results show that, compared with the LEACH and HEED protocols, EA-HEED further extends network lifetime.
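
The abstract only outlines the improved election step; as a hedged sketch of the HEED-style, energy-aware cluster-head election it builds on, candidacy probability can scale with residual energy. The constants and energy values below are assumptions, not EA-HEED's actual parameters.

```python
import random

# Hedged sketch of a HEED-style energy-aware cluster-head election round.
C_PROB = 0.05      # baseline fraction of nodes expected to become heads
P_MIN = 1e-4       # lower bound keeping the probability strictly positive

def ch_probability(residual_energy, max_energy):
    """Nodes with more residual energy announce candidacy more eagerly."""
    return max(C_PROB * residual_energy / max_energy, P_MIN)

def elect_heads(nodes, max_energy):
    """One probabilistic election round over (node_id, residual_energy)."""
    return [nid for nid, e in nodes
            if random.random() < ch_probability(e, max_energy)]

nodes = [(i, random.uniform(0.2, 1.0)) for i in range(100)]
print(elect_heads(nodes, max_energy=1.0))
```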

3.
A region-partitioning method is used to build an optimized scheduling mechanism for redundant nodes in the covered area, putting redundant nodes inside fully covered regions to sleep. The mechanism is then incorporated into the clustered structure of a wireless sensor network, yielding a node-scheduling optimization algorithm based on a clustering topology. By putting redundant nodes within each cluster to sleep, the algorithm reduces the cluster head's communication load and the number of active redundant cluster members, lowering the network's energy consumption. Simulation results show that, compared with a clustering algorithm that does not schedule redundant nodes to sleep, the algorithm markedly improves the network's energy efficiency and prolongs its lifetime.
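
A minimal sketch of the kind of redundancy test such sleep scheduling relies on, assuming a binary disk sensing model: a node may sleep if sample points of its sensing region are already covered by awake neighbors. The radius and sampling resolution are illustrative.

```python
import math

R = 10.0  # sensing radius (binary disk model, an assumption)

def covered(p, nodes):
    return any(math.dist(p, n) <= R for n in nodes)

def is_redundant(node, neighbors, samples=12):
    """Check the node's center and sample points on its disk boundary."""
    cx, cy = node
    points = [(cx, cy)] + [
        (cx + R * math.cos(2 * math.pi * k / samples),
         cy + R * math.sin(2 * math.pi * k / samples))
        for k in range(samples)]
    return all(covered(p, neighbors) for p in points)

# Four close neighbors fully cover this node's disk, so it may sleep.
print(is_redundant((0, 0), [(-6, 0), (6, 0), (0, 6), (0, -6)]))  # True
```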

4.
Wireless sensor networks contain a great deal of data redundancy. Data fusion compresses the sampled data to remove this redundancy, effectively reducing the amount of data nodes must transmit and prolonging the network's lifetime. This paper proposes a data fusion algorithm that combines compressive sensing with plain data forwarding: while sampled data are collected across the network, each node decides, based on its number of child nodes, whether to compress its data with compressive sensing or to forward it directly. Simulation results show that, compared with a fusion algorithm based on compressive sensing alone, the combined algorithm balances the load among nodes while reducing the amount of data they transmit.
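
A hedged sketch of the per-node decision rule: nodes with at least a threshold number of children compress with a random-projection compressive-sensing step, while others forward raw data. The threshold, measurement count, and Gaussian measurement matrix are assumptions.

```python
import numpy as np

M, N = 20, 100          # measurements vs. original sample length (assumed)
CHILD_THRESHOLD = 3

rng = np.random.default_rng(0)
phi = rng.normal(size=(M, N)) / np.sqrt(M)  # random measurement matrix

def handle(data, num_children):
    if num_children >= CHILD_THRESHOLD:
        return ('cs', phi @ data)   # send M measurements instead of N values
    return ('forward', data)        # few children: relaying raw data is cheaper

kind, payload = handle(rng.normal(size=N), num_children=5)
print(kind, payload.shape)  # ('cs', (20,))
```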

5.
Coverage control in wireless sensor networks effectively alleviates the problem of limited node energy. The usual approach computes which redundant nodes to put to sleep geometrically under a binary sensing model, which is imprecise and of limited use in practice. To address this, taking energy efficiency as the key objective, a new coverage-control algorithm (PSMC) based on a probabilistic sensing model is proposed. Simulation results show that PSMC maintains network coverage well while switching off a large number of redundant nodes, effectively prolonging network lifetime.
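
The abstract does not give PSMC's exact model; a commonly used probabilistic sensing model of the kind it adopts lets detection probability decay with distance, and a point counts as covered with the probability that at least one node detects it. The radii and decay constant below are illustrative assumptions.

```python
import math

R_IN, R_OUT, LAM = 5.0, 15.0, 0.4  # assumed inner/outer radii and decay rate

def detect_prob(d):
    if d <= R_IN:
        return 1.0                      # certain detection close to the node
    if d >= R_OUT:
        return 0.0                      # no detection beyond the outer radius
    return math.exp(-LAM * (d - R_IN))  # exponential decay in between

def point_coverage(dists):
    """Probability that at least one of several nodes detects the point."""
    miss = 1.0
    for d in dists:
        miss *= 1.0 - detect_prob(d)
    return 1.0 - miss

print(point_coverage([6.0, 8.0, 14.0]))
```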

6.
孙环  陈宏滨 《计算机应用》2021,41(2):492-497
Node deployment is one of the key problems in wireless sensor network research. To address the energy-hole problem that arises during node deployment, a node redeployment strategy based on the firefly algorithm (NRBFA) is proposed. First, in a sensor network with randomly deployed nodes, the k-means algorithm is used for clustering and redundant nodes are introduced; then the FA is used to move redundant nodes, so as to share the cluster-head (CH) load and balance the energy consumption of nodes across the network; finally, the FA is used again to find...
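
As a sketch of the firefly-algorithm step used here to move redundant nodes: a dimmer firefly moves toward a brighter one under the standard FA update x_i ← x_i + β0·exp(−γr²)·(x_j − x_i) + α·ε. The constants and the interpretation of brightness as cluster-head load are assumptions, not the paper's settings.

```python
import numpy as np

BETA0, GAMMA, ALPHA = 1.0, 0.1, 0.05   # usual FA constants (values assumed)
rng = np.random.default_rng(1)

def move_toward(xi, xj):
    """Move firefly (node position) xi toward the brighter xj."""
    r2 = np.sum((xj - xi) ** 2)
    beta = BETA0 * np.exp(-GAMMA * r2)          # attraction fades with distance
    return xi + beta * (xj - xi) + ALPHA * (rng.random(xi.shape) - 0.5)

redundant_node = np.array([2.0, 3.0])
overloaded_ch = np.array([8.0, 7.0])            # "brighter": load worth sharing
print(move_toward(redundant_node, overloaded_ch))
```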

7.
洪利  王国强  徐顺杰  周正 《计算机工程》2010,36(1):102-103,
In wireless sensor networks, when the directed diffusion algorithm has multiple source nodes, link redundancy arises during data propagation and path reinforcement, causing unnecessary energy consumption. To address this, a new routing algorithm that clusters the source nodes is proposed. All source nodes in the network form a single cluster, a cluster head is elected according to node centripetal degree, and the sink communicates only with the cluster head, avoiding excessive link redundancy in the network. Theoretical analysis and simulation show that the algorithm's energy consumption is lower than that of directed diffusion routing, with the improvement depending on network size and running time.
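
The abstract does not define "node centripetal degree"; one plausible reading, sketched below under that assumption, elects as head the source node with the smallest mean distance to the other sources, so the sink needs to talk to only one node.

```python
import math

# Assumed interpretation: the most central source becomes the cluster head.
def elect_head(sources):
    def mean_dist(s):
        return sum(math.dist(s, t) for t in sources if t is not s) / (len(sources) - 1)
    return min(sources, key=mean_dist)

sources = [(0, 0), (2, 1), (1, 2), (8, 8)]
print(elect_head(sources))  # (2, 1): closest on average to the other sources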

8.

9.
Building on an analysis of spatio-temporal correlation models for wireless sensor networks, a dynamic sampling strategy based on sensing grids is proposed. The monitored region is partitioned into sensing grids; within each grid only the cluster-head node stays active, and the remaining nodes in the grid are activated to gather more detailed information only when anomalous data appear. By reducing the upload of identical or near-identical samples from neighboring sensor nodes, the strategy cuts the transmission of redundant information. Simulation results show that the strategy significantly improves the energy efficiency of the network.
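
A minimal sketch of the event-triggered activation: only the grid's cluster head samples continuously, and it wakes the remaining members when a reading deviates strongly from the recent baseline. The z-score test and window size are assumptions.

```python
from collections import deque

WINDOW, K = 20, 3.0   # baseline window and anomaly threshold (assumed)

class SensingGrid:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)
        self.members_active = False

    def head_sample(self, value):
        h = self.history
        if len(h) >= 5:
            mean = sum(h) / len(h)
            std = (sum((x - mean) ** 2 for x in h) / len(h)) ** 0.5 or 1e-9
            if abs(value - mean) > K * std:
                self.members_active = True   # wake the rest of the grid
        h.append(value)

grid = SensingGrid()
for v in [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 35.0]:
    grid.head_sample(v)
print(grid.members_active)  # True: the outlier triggers detailed sensing
```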

10.
To address the "hot zone" and data-redundancy problems caused by multi-hop communication in wireless sensor networks (WSNs), an energy-efficient clustered data fusion algorithm (EECDA) is proposed. In its clustering phase, the algorithm jointly considers each node's residual energy, distance to the base station, and number of neighbors, periodically electing cluster heads and forming clusters of different sizes. Intra-cluster data are fused, and Simpson's rule is used to predict the data to be received, reducing data redundancy and communication load and improving the network's energy efficiency, while preserving the timeliness and accuracy of the collected data. Simulation results show that the algorithm predicts data effectively, reduces network traffic, and prolongs the network's lifetime compared with existing algorithms.
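
Simpson's rule itself is standard: a quantity sampled at three equally spaced times t, t+h, t+2h integrates to approximately (h/3)(f0 + 4f1 + f2), which can stand in for data not yet received. A worked example with illustrative readings:

```python
# Simpson's-rule step of the kind EECDA uses for prediction (sample values assumed).
def simpson(f0, f1, f2, h):
    """Integral of a quantity sampled at t, t+h, t+2h."""
    return h / 3.0 * (f0 + 4.0 * f1 + f2)

# Readings taken every 10 s; estimate the accumulated value over the 20 s window.
print(simpson(2.0, 2.4, 2.1, h=10.0))  # (10/3)*(2.0 + 9.6 + 2.1) ≈ 45.67
```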

11.
12.
With the rapid development of the Internet, and in particular the recent emergence of new technologies such as cloud computing and the Internet of Things and the wide adoption of services such as social networking, the volume of data in human society is growing rapidly: the era of big data has arrived. How to acquire and analyze big data has become a widespread question, but the data security issues this brings must be taken very seriously. Starting from the concept and characteristics of big data, this paper describes the security challenges big data faces and proposes strategies for addressing them.

13.
The optimization capabilities of RDBMSs make them attractive for executing data transformations. However, despite the fact that many useful data transformations can be expressed as relational queries, an important class of data transformations that produce several output tuples for a single input tuple cannot be expressed in that way.

To overcome this limitation, we propose to extend Relational Algebra with a new operator named data mapper. In this paper, we formalize the data mapper operator and investigate some of its properties. We then propose a set of algebraic rewriting rules that enable the logical optimization of expressions with mappers and prove their correctness. Finally, we experimentally study the proposed optimizations and identify the key factors that influence the optimization gains.
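
A minimal sketch of the mapper's one-to-many semantics, using an assumed Python encoding of relations as iterables of tuples (the paper itself works at the level of Relational Algebra):

```python
# Unlike a projection, a mapper may emit several output tuples per input tuple.
def mapper(relation, fn):
    """Apply fn to each tuple; fn yields zero or more output tuples."""
    for t in relation:
        yield from fn(t)

# Example: normalize a denormalized row (id, 'a;b') into one row per item.
def split_items(t):
    rid, items = t
    for item in items.split(';'):
        yield (rid, item)

rows = [(1, 'a;b'), (2, 'c')]
print(list(mapper(rows, split_items)))  # [(1, 'a'), (1, 'b'), (2, 'c')]
```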


14.
As the amount of multimedia data increases day by day thanks to cheaper storage devices and a growing number of information sources, machine learning algorithms are faced with large datasets. When the original data is huge, small sample sizes are preferred for various applications; this is typically the case for multimedia applications. But using a simple random sample may not give satisfactory results, because such a sample may not adequately represent the entire data set due to random fluctuations in the sampling process. The difficulty is particularly apparent when small sample sizes are needed. Fortunately, using a good sampling set for training can improve the final results significantly. In KDD'03 we proposed EASE, which outputs a sample based on its 'closeness' to the original sample. Reported results show that EASE outperforms simple random sampling (SRS). In this paper we propose EASIER, which extends EASE in two ways. (1) EASE is a halving algorithm: to achieve the required sample ratio it starts from a suitably large initial sample and iteratively halves it. EASIER, on the other hand, does away with the repeated halving by obtaining the required sample ratio directly in one iteration. (2) EASE was shown to work on the IBM QUEST dataset, which is a categorical count data set. EASIER, in addition, is shown to work on continuous image and audio features. We have successfully applied EASIER to image classification and audio event identification applications. Experimental results show that EASIER outperforms SRS significantly.

Surong Wang received the B.E. and M.E. degrees from the School of Information Engineering, University of Science and Technology Beijing, China, in 1999 and 2002 respectively. She is currently studying toward the Ph.D. degree at the School of Computer Engineering, Nanyang Technological University, Singapore. Her research interests include multimedia data processing, image processing and content-based image retrieval.

Manoranjan Dash obtained Ph.D. and M.Sc. (Computer Science) degrees from the School of Computing, National University of Singapore. He has worked extensively in academic and research institutes and has published more than 30 research papers (mostly refereed) in various reputable machine learning and data mining journals, conference proceedings, and books. His research interests include machine learning and data mining, and their applications in bioinformatics, image processing, and GPU programming. Before joining the School of Computer Engineering (SCE), Nanyang Technological University, Singapore, as Assistant Professor, he worked as a postdoctoral fellow at Northwestern University. He is a member of IEEE and ACM. He has served as a program committee member of many conferences and is on the editorial board of the "International Journal of Theoretical and Applied Computer Science."

Liang-Tien Chia received the B.S. and Ph.D. degrees from Loughborough University, in 1990 and 1994, respectively. He is an Associate Professor in the School of Computer Engineering, Nanyang Technological University, Singapore. He has recently been appointed Head, Division of Computer Communications, and he also holds the position of Director, Centre for Multimedia and Network Technology. His research interests include image/video processing & coding, multimodal data fusion, multimedia adaptation/transmission and multimedia over the Semantic Web. He has published over 80 research papers.
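
The abstract describes EASE/EASIER only as selecting a sample "close" to the original data; the sketch below illustrates that general idea under an assumed closeness measure (L1 distance between category frequencies), which is not necessarily the papers' actual criterion.

```python
import random
from collections import Counter

def l1_freq_distance(sample, data):
    """Assumed closeness: L1 gap between sample and full-data frequencies."""
    fs, fd = Counter(sample), Counter(data)
    n, m = len(sample), len(data)
    return sum(abs(fs[c] / n - fd[c] / m) for c in set(fd) | set(fs))

def close_sample(data, size, candidates=50, seed=0):
    """Among several random candidates, keep the most representative one."""
    rng = random.Random(seed)
    trials = [rng.sample(data, size) for _ in range(candidates)]
    return min(trials, key=lambda s: l1_freq_distance(s, data))

data = ['a'] * 60 + ['b'] * 30 + ['c'] * 10
print(Counter(close_sample(data, 10)))  # near 6/3/1, unlike a lucky-dip SRS
```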

15.
Time series analysis has always been an important and interesting research field due to its frequent appearance in different applications. In the past, many approaches based on regression, neural networks and other mathematical models were proposed to analyze time series. In this paper, we attempt to use data mining techniques to analyze time series. Many previous studies on data mining have focused on handling binary-valued data, whereas time series data are usually quantitative values. We thus extend our previous fuzzy mining approach to handle time-series data and find linguistic association rules. The proposed approach first uses a sliding window to generate continuous subsequences from a given time series and then analyzes the fuzzy itemsets from these subsequences. Appropriate post-processing is then performed to remove redundant patterns. Experiments are also made to show the performance of the proposed mining algorithm. Since the final results are represented by linguistic rules, they are friendlier to humans than a quantitative representation.
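
A minimal sketch of the first two steps, with assumed triangular membership functions for the linguistic terms (the paper's own term definitions may differ):

```python
def sliding_windows(series, w):
    """Generate contiguous subsequences of length w."""
    return [series[i:i + w] for i in range(len(series) - w + 1)]

def triangular(x, a, b, c):
    """Triangular membership with peak at b, support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

TERMS = {'low': (0, 0, 50), 'mid': (25, 50, 75), 'high': (50, 100, 100)}

def fuzzify(x):
    """Map a quantitative value to degrees in each linguistic term."""
    return {t: round(triangular(x, *p), 2) for t, p in TERMS.items()}

for win in sliding_windows([12, 48, 73, 90], w=3):
    print(win, [fuzzify(v) for v in win])
```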

16.
Compression-based data mining of sequential data   Total citations: 3 (self: 1, others: 2)
The vast majority of data mining algorithms require the setting of many input parameters. The dangers of working with parameter-laden algorithms are twofold. First, incorrect settings may cause an algorithm to fail to find the true patterns. Second, a perhaps more insidious problem is that the algorithm may report spurious patterns that do not really exist, or greatly overestimate the significance of the reported patterns. This is especially likely when the user fails to understand the role of parameters in the data mining process. Data mining algorithms should have as few parameters as possible. A parameter-light algorithm would limit our ability to impose our prejudices, expectations, and presumptions on the problem at hand, and would let the data itself speak to us. In this work, we show that recent results in bioinformatics, learning, and computational theory hold great promise for a parameter-light data mining paradigm. The results are strongly connected to Kolmogorov complexity theory. However, as a practical matter, they can be implemented using any off-the-shelf compression algorithm with the addition of just a dozen lines of code. We will show that this approach is competitive with or superior to many state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering, with empirical tests on time series/DNA/text/XML/video datasets. As further evidence of the advantages of our method, we demonstrate its effectiveness in solving a real-world classification problem in recommending printing services and products. Responsible editor: Johannes Gehrke
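
The dissimilarity measure this line of work rests on compares the compressed size of a concatenation against the parts compressed separately, and is implementable with any off-the-shelf compressor. A sketch using zlib (the choice of compressor and the test strings are ours):

```python
import zlib

def c(s: bytes) -> int:
    """Compressed size under an off-the-shelf compressor."""
    return len(zlib.compress(s, 9))

def cdm(x: bytes, y: bytes) -> float:
    """Compression-based dissimilarity: C(xy) / (C(x) + C(y)).
    Near 0.5 when the sequences share structure; near 1 when they do not."""
    return c(x + y) / (c(x) + c(y))

a = b"abcabcabcabcabcabc" * 20
b_ = b"abcabcabcabcabcabc" * 20
d = b"xqzwvuxqzwvuxqzwvu" * 20
print(cdm(a, b_), cdm(a, d))  # similar pair scores lower than dissimilar pair
```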

17.
18.
Linear combinations of translates of a given basis function have long been successfully used to solve scattered data interpolation and approximation problems. We demonstrate how the classical basis function approach can be transferred to the projective space ℙ^(d−1). To be precise, we use concepts from harmonic analysis to identify positive definite and strictly positive definite zonal functions on ℙ^(d−1). These can then be applied to solve problems arising in tomography, since the data given there consists of integrals over lines. Here, enhancing known reconstruction techniques with a scattered data interpolant in the "space of lines" naturally leads to reconstruction algorithms well suited to limited angle and limited range tomography. In the medical setting, algorithms for such incomplete-data problems are desirable, as using them can limit radiation dosage.
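
As a hedged sketch of the basis-function form the abstract alludes to (the concrete zonal functions for tomography are in the paper, not reproduced here): with data sites x_1, …, x_N and a strictly positive definite zonal function φ, the interpolant and its defining conditions read

```latex
% Zonal basis-function interpolant on the projective space (sketch);
% "zonal" means \varphi sees points only through |<x, x_j>|.
s(x) = \sum_{j=1}^{N} c_j \, \varphi\bigl(\lvert \langle x, x_j \rangle \rvert\bigr),
\qquad s(x_i) = f_i, \quad i = 1, \dots, N,
```

so the coefficients c_j solve the linear system with matrix entries φ(|⟨x_i, x_j⟩|), which strict positive definiteness makes uniquely solvable.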

19.
Data protection has been a difficult problem ever since the Internet appeared. From the moment social media sites began to flex their muscles in the digital market, protecting user data and information has kept policymakers on alert. In the digital economy era, data has gradually become a key factor in enterprises' competitiveness, and more and more market competition revolves around data. Enterprises' emphasis on, and contention for, data resources has pushed disputes and conflicts between platform rights and the protection of users' personal information, and unfair-competition disputes over data between Internet companies, into the spotlight. How to coordinate and balance the reasonable use and the protection of data, and how to regulate acts of unfair competition so as to gain a competitive advantage amid the rapid development of the digital economy, has therefore become especially important. By analyzing the dual nature of data, this article discusses the value of data in the digital economy era and, drawing on the Anti-Unfair Competition Law and practical cases, further examines the relationship between data utilization and data protection.

20.
Existing automated test data generation techniques tend to start from scratch, implicitly assuming that no pre‐existing test data are available. However, this assumption may not always hold, and where it does not, there may be a missed opportunity; perhaps the pre‐existing test cases could be used to assist the automated generation of additional test cases. This paper introduces search‐based test data regeneration, a technique that can generate additional test data from existing test data using a meta‐heuristic search algorithm. The proposed technique is compared to a widely studied test data generation approach in terms of both efficiency and effectiveness. The empirical evaluation shows that test data regeneration can be up to 2 orders of magnitude more efficient than existing test data generation techniques, while achieving comparable effectiveness in terms of structural coverage and mutation score. Copyright © 2010 John Wiley & Sons, Ltd.
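
A hedged sketch of the regeneration idea: neighborhood search mutates existing test inputs and keeps mutants that add structural coverage, rather than searching from scratch. The toy program, mutation operator, and fitness are illustrative assumptions, not the paper's exact operators.

```python
import random

def program_under_test(x, y):
    """Toy program; returns the set of branches the input exercises."""
    branches = set()
    if x > 10:
        branches.add('A')
        if y < x:
            branches.add('B')
    else:
        branches.add('C')
    return branches

def coverage(tests):
    return set().union(*(program_under_test(*t) for t in tests))

def regenerate(seed_test, rounds=200, rng=random.Random(0)):
    """Grow a test suite by mutating existing tests, keeping useful mutants."""
    tests = {seed_test}
    for _ in range(rounds):
        x, y = rng.choice(sorted(tests))
        mutant = (x + rng.randint(-5, 5), y + rng.randint(-5, 5))
        if not coverage(tests) >= coverage({mutant}):   # mutant adds coverage
            tests.add(mutant)
    return tests

print(coverage(regenerate((12, 20))))  # grows from {'A'} toward {'A','B','C'}
```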
