Found 20 similar documents; search took 62 ms.
1.
2.
Design and Implementation of an Energy-Efficient Routing Protocol for Wireless Sensor Networks    Total citations: 1 (self: 0, by others: 1)
Among wireless sensor network routing protocols, cluster-based protocols have advantages in topology management, energy utilization, and data aggregation. To address the high energy consumption and short network lifetime of existing protocols, this paper proposes EA-HEED, an energy-aware wireless sensor network protocol based on a distributed clustering algorithm. The protocol improves the distributed cluster-head election algorithm, allocates TDMA slots, and builds a routing tree over the cluster-head nodes, raising the efficiency of cluster-head election. It also designs an intra-cluster active-node scheduling algorithm that puts redundant nodes to sleep, reducing energy consumption. Finally, it optimizes routing by organizing the cluster-head routing tree according to node energy and node-to-base-station distance, minimizing network overhead, and balancing the energy load, which effectively prolongs network lifetime. Simulation results show that, compared with the LEACH and HEED protocols, EA-HEED further extends the network lifetime.
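The cluster-head election step this abstract builds on can be illustrated with a HEED-style probabilistic sketch, in which a node's election probability scales with its residual energy. The constant `p_init`, the energy model, and the function names below are illustrative assumptions, not EA-HEED's actual rule:

```python
import random

def elect_cluster_heads(nodes, p_init=0.05, e_max=1.0):
    """HEED-style probabilistic cluster-head election (sketch).

    `nodes` maps node id -> residual energy; nodes with more residual
    energy are proportionally more likely to elect themselves head.
    """
    heads = []
    for nid, energy in nodes.items():
        ch_prob = min(1.0, p_init * energy / e_max)  # energy-weighted probability
        if random.random() < ch_prob:
            heads.append(nid)
    return heads
```

In HEED proper this draw is repeated over several rounds with doubling probabilities until every node is either a head or covered by one; the single-pass version above only shows the energy-aware weighting.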
3.
4.
Wireless sensor networks contain a great deal of data redundancy. Data fusion techniques compress the sampled data to eliminate redundancy, effectively reducing the amount of data nodes must transmit and prolonging the network lifetime. This paper proposes a data fusion algorithm that combines compressive sensing with plain data forwarding: while sampled data are being collected, each node decides, based on its number of child nodes, whether to compress the data with compressive sensing or forward it directly. Simulation results show that, compared with a fusion algorithm based on compressive sensing alone, the combined algorithm balances load across nodes while effectively reducing the amount of data each node sends.
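The per-node decision rule described in the abstract can be sketched as a simple cost comparison: a node compresses with compressive sensing (sending a fixed number of measurements `m`) only when forwarding raw readings would cost more. The threshold logic and the parameter `m` are assumptions for illustration, not the paper's exact rule:

```python
def choose_mode(num_children: int, m: int) -> str:
    """Decide between compressive sensing and raw forwarding (sketch).

    A node relays its children's data plus its own reading, so raw
    forwarding costs (num_children + 1) packets; compressive sensing
    caps the payload at m measurements regardless of subtree size.
    """
    raw_cost = num_children + 1
    return "cs" if raw_cost > m else "forward"
```

This matches the intuition in the abstract: leaf-heavy nodes near the edge forward cheaply, while hub nodes with many descendants benefit from compression.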
5.
6.
Node deployment is one of the key problems in wireless sensor network research. To address the energy-hole problem that arises during node deployment, this paper proposes a node redeployment strategy based on the firefly algorithm (FA), called NRBFA. First, in a sensor network with randomly deployed nodes, the k-means algorithm is used for clustering and redundant nodes are introduced; then the FA moves the redundant nodes to share the cluster-head (CH) load and balance node energy consumption across the network; finally, the FA is used again to find...
7.
8.
9.
Based on an analysis of the spatio-temporal correlation model of wireless sensor networks, this paper proposes a dynamic sampling strategy based on sensing grids. The monitored area is divided into multiple sensing grids; within each grid only the cluster-head node stays active, and the other nodes in the grid are activated to obtain more detailed information only when anomalous data appear. The strategy reduces the transmission of redundant information by cutting uploads of identical or near-identical samples from neighboring sensor nodes. Simulation results show that the strategy significantly improves the energy efficiency of the network.
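The activation rule in this strategy can be sketched in a few lines: by default only the grid's cluster head samples, and the rest of the grid wakes up when a reading deviates anomalously. The grid layout, the deviation test, and the threshold are illustrative assumptions:

```python
def nodes_to_activate(grid, reading, mean, threshold):
    """Grid-based dynamic sampling (sketch).

    `grid` is {'head': id, 'members': [ids]}. Only the head is active
    until its reading deviates from the running mean by more than
    `threshold`, at which point the whole grid is woken for detail.
    """
    active = [grid["head"]]
    if abs(reading - mean) > threshold:   # anomalous data detected
        active.extend(grid["members"])    # wake the rest of the grid
    return active
```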
10.
11.
12.
Yang Weidun (杨炜墩), 《网络安全技术与应用》 (Network Security Technology & Application), 2014, (8): 136
With the rapid development of the Internet, and in particular the emergence of new technologies such as cloud computing and the Internet of Things in recent years, together with the widespread adoption of services such as social networking, the volume of data in human society is growing rapidly: the era of big data has arrived. How to acquire and analyze big data has become a widespread concern, but the data security issues this brings must also be taken seriously. Starting from the concept and characteristics of big data, this article describes the security challenges big data faces and proposes strategies for addressing them.
13.
The optimization capabilities of RDBMSs make them attractive for executing data transformations. However, although many useful data transformations can be expressed as relational queries, an important class of data transformations, those that produce several output tuples for a single input tuple, cannot be expressed in that way.
To overcome this limitation, we propose to extend Relational Algebra with a new operator named data mapper. In this paper, we formalize the data mapper operator and investigate some of its properties. We then propose a set of algebraic rewriting rules that enable the logical optimization of expressions with mappers, and prove their correctness. Finally, we experimentally study the proposed optimizations and identify the key factors that influence the optimization gains.
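The one-to-many behavior that motivates the mapper operator can be sketched as a flatMap over tuples; the example relation and its attribute layout are illustrative, not from the paper:

```python
def mapper(relation, f):
    """Data mapper (sketch): applies f to each input tuple, where f may
    return zero, one, or several output tuples -- something an ordinary
    relational projection cannot express."""
    return [out for t in relation for out in f(t)]

# Example: unnest a tuple carrying a list of phone numbers.
people = [("ann", ["555-1", "555-2"]), ("bob", [])]
split = mapper(people, lambda t: [(t[0], ph) for ph in t[1]])
# split == [("ann", "555-1"), ("ann", "555-2")]
```

Rewrite rules of the kind the paper studies exploit structure in `f`; for instance, a selection that only reads attributes the mapper copies through unchanged can be pushed below the mapper, shrinking its input.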
14.
As the amount of multimedia data increases day by day, thanks to cheaper storage devices and a growing number of information sources, machine learning algorithms are faced with large datasets. When the original data is huge, small sample sizes are preferred for various applications; this is typically the case for multimedia applications. But a simple random sample may not give satisfactory results, because such a sample may not adequately represent the entire data set due to random fluctuations in the sampling process. The difficulty is particularly apparent when small sample sizes are needed. Fortunately, using a good sampling set for training can improve the final results significantly. In KDD'03 we proposed EASE, which outputs a sample based on its 'closeness' to the original sample. Reported results show that EASE outperforms simple random sampling (SRS). In this paper we propose EASIER, which extends EASE in two ways. (1) EASE is a halving algorithm: to achieve the required sample ratio it starts from a suitably large initial sample and iteratively halves it. EASIER, on the other hand, does away with the repeated halving by directly obtaining the required sample ratio in one iteration. (2) EASE was shown to work on the IBM QUEST dataset, a categorical count data set. EASIER, in addition, is shown to work on continuous image and audio feature data. We have successfully applied EASIER to image classification and audio event identification applications. Experimental results show that EASIER outperforms SRS significantly.
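The control-flow difference between the two algorithms can be sketched as follows. Note that EASE and EASIER select points by distributional closeness to the full data, not randomly; the random selection below is a placeholder that only contrasts repeated halving with a single-step draw:

```python
import random

def halving_sample(data, ratio):
    """EASE-style schedule (sketch): repeatedly halve a large sample
    until the requested ratio of the original data is reached."""
    data = list(data)
    s = list(data)
    while len(s) > ratio * len(data):
        random.shuffle(s)
        s = s[: len(s) // 2]
    return s

def direct_sample(data, ratio):
    """EASIER-style schedule (sketch): obtain the required sample
    ratio in a single iteration."""
    data = list(data)
    return random.sample(data, int(ratio * len(data)))
```

The halving schedule can only hit sizes on the halving grid and pays for every intermediate pass, which is the overhead EASIER removes.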
Surong Wang received the B.E. and M.E. degrees from the School of Information Engineering, University of Science and Technology Beijing, China, in 1999 and 2002 respectively. She is currently working toward the Ph.D. degree at the School of Computer Engineering, Nanyang Technological University, Singapore. Her research interests include multimedia data processing, image processing and content-based image retrieval.
Manoranjan Dash obtained Ph.D. and M.Sc. (Computer Science) degrees from the School of Computing, National University of Singapore. He has worked extensively in academic and research institutes and has published more than 30 research papers (mostly refereed) in various reputable machine learning and data mining journals, conference proceedings, and books. His research interests include machine learning and data mining, and their applications in bioinformatics, image processing, and GPU programming. Before joining the School of Computer Engineering (SCE), Nanyang Technological University, Singapore, as Assistant Professor, he worked as a postdoctoral fellow at Northwestern University. He is a member of IEEE and ACM. He has served as a program committee member of many conferences and is on the editorial board of the International Journal of Theoretical and Applied Computer Science.
Liang-Tien Chia received the B.S. and Ph.D. degrees from Loughborough University, in 1990 and 1994, respectively. He is an Associate Professor in the School of Computer Engineering, Nanyang Technological University, Singapore. He has recently been appointed as Head, Division of Computer Communications, and he also holds the position of Director, Centre for Multimedia and Network Technology. His research interests include image/video processing & coding, multimodal data fusion, multimedia adaptation/transmission and multimedia over the Semantic Web. He has published over 80 research papers.
15.
Time series analysis has always been an important and interesting research field due to its frequent appearance in different applications. In the past, many approaches based on regression, neural networks and other mathematical models were proposed to analyze time series. In this paper, we attempt to use data mining techniques to analyze time series. Many previous studies on data mining have focused on handling binary-valued data, whereas time series data are usually quantitative. We thus extend our previous fuzzy mining approach to handle time-series data and find linguistic association rules. The proposed approach first uses a sliding window to generate continuous subsequences from a given time series and then analyzes the fuzzy itemsets in these subsequences. Appropriate post-processing is then performed to remove redundant patterns. Experiments are made to show the performance of the proposed mining algorithm. Since the final results are represented as linguistic rules, they are friendlier to humans than a quantitative representation.
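The first two steps of this approach, sliding-window subsequence generation and mapping quantitative values to fuzzy linguistic terms, can be sketched as below. The triangular membership function and the term boundaries are common defaults, assumed here rather than taken from the paper:

```python
def sliding_subsequences(series, w):
    """All length-w subsequences a sliding window produces from a series."""
    return [series[i:i + w] for i in range(len(series) - w + 1)]

def membership(x, low, mid, high):
    """Triangular fuzzy membership: degree to which value x belongs to
    the linguistic term centered at `mid` (boundaries are illustrative)."""
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (mid - low) if x <= mid else (high - x) / (high - mid)
```

Each subsequence's values would then contribute membership degrees to terms such as "low"/"middle"/"high", and frequent fuzzy itemsets over these terms yield the linguistic rules.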
16.
Compression-based data mining of sequential data    Total citations: 3 (self: 1, by others: 2)
Eamonn Keogh, Stefano Lonardi, Chotirat Ann Ratanamahatana, Li Wei, Sang-Hee Lee, John Handley. Data Mining and Knowledge Discovery, 2007, 14(1): 99-129
The vast majority of data mining algorithms require the setting of many input parameters. The dangers of working with parameter-laden algorithms are twofold. First, incorrect settings may cause an algorithm to fail in finding the true patterns. Second, a perhaps more insidious problem is that the algorithm may report spurious patterns that do not really exist, or greatly overestimate the significance of the reported patterns. This is especially likely when the user fails to understand the role of parameters in the data mining process. Data mining algorithms should have as few parameters as possible. A parameter-light algorithm would limit our ability to impose our prejudices, expectations, and presumptions on the problem at hand, and would let the data itself speak to us. In this work, we show that recent results in bioinformatics, learning, and computational theory hold great promise for a parameter-light data-mining paradigm. The results are strongly connected to Kolmogorov complexity theory. However, as a practical matter, they can be implemented using any off-the-shelf compression algorithm with the addition of just a dozen lines of code. We will show that this approach is competitive or superior to many state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering, with empirical tests on time series/DNA/text/XML/video datasets. As further evidence of the advantages of our method, we will demonstrate its effectiveness on a real-world classification problem in recommending printing services and products.
Responsible editor: Johannes Gehrke
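The "dozen lines of code" claim is easy to make concrete: the authors' compression-based dissimilarity measure, CDM(x, y) = C(xy) / (C(x) + C(y)), where C(·) is the compressed size, can be implemented with any off-the-shelf compressor. The sketch below uses zlib; the DNA strings are made-up illustrations:

```python
import zlib

def c(x: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed size."""
    return len(zlib.compress(x, 9))

def cdm(x: bytes, y: bytes) -> float:
    """Compression-based Dissimilarity Measure: C(xy) / (C(x) + C(y)).
    Values close to 1 indicate unrelated sequences; similar sequences
    score noticeably lower, since their concatenation compresses well."""
    return c(x + y) / (c(x) + c(y))

dna_a = b"ACGTACGGTACCGTACGTAC" * 30
dna_b = b"ACGTACGGTACCGTACGTAC" * 30   # same motif: low CDM vs. dna_a
other = b"TTGAGCATCCGAGTTAGCAA" * 30   # different motif: higher CDM
assert cdm(dna_a, dna_b) < cdm(dna_a, other)
```

The only "parameter" left is the choice of compressor, which is the sense in which the paradigm is parameter-light.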
17.
18.
Linear combinations of translates of a given basis function have long been successfully used to solve scattered data interpolation and approximation problems. We demonstrate how the classical basis function approach can be transferred to the projective space ℙ^{d−1}. To be precise, we use concepts from harmonic analysis to identify positive definite and strictly positive definite zonal functions on ℙ^{d−1}. These can then be applied to solve problems arising in tomography, since the data given there consists of integrals over lines. Here, enhancing known reconstruction techniques with the use of a scattered data interpolant in the "space of lines" naturally leads to reconstruction algorithms well suited to limited angle and limited range tomography. In the medical setting, algorithms for such incomplete data problems are desirable, as using them can limit radiation dosage.
19.
《信息安全与技术》 (Information Security and Technology), 2020, (1)
Data protection has been a difficult problem ever since the Internet appeared. From the moment social media sites began to flex their muscles in the digital marketplace, the protection of user data and information has kept policymakers on alert. Against the backdrop of the digital economy, data has gradually become a key factor in enterprise competitiveness, and more and more market competition revolves around data. In the digital-economy era, enterprises' emphasis on and contention over data resources have pushed into the spotlight disputes and conflicts between platform rights and the protection of users' personal information, and between Internet enterprises over unfair competition involving data. It is therefore especially important to coordinate the relationship between the reasonable use and the protection of data and to regulate acts of unfair competition, so as to secure a competitive advantage amid the rapid development of the digital economy. By analyzing the dual nature of data, this article discusses the value of data in the digital-economy era and, drawing on the Anti-Unfair Competition Law and practical cases, further examines the relationship between data utilization and data protection.
20.
Existing automated test data generation techniques tend to start from scratch, implicitly assuming that no pre-existing test data are available. However, this assumption may not always hold, and where it does not, there may be a missed opportunity; perhaps the pre-existing test cases could be used to assist the automated generation of additional test cases. This paper introduces search-based test data regeneration, a technique that can generate additional test data from existing test data using a meta-heuristic search algorithm. The proposed technique is compared to a widely studied test data generation approach in terms of both efficiency and effectiveness. The empirical evaluation shows that test data regeneration can be up to two orders of magnitude more efficient than existing test data generation techniques, while achieving comparable effectiveness in terms of structural coverage and mutation score. Copyright © 2010 John Wiley & Sons, Ltd.
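The core idea, searching outward from existing test data rather than from scratch, can be sketched as a hill climb over numeric test inputs. The neighborhood operator, the fitness function, and all parameter names below are illustrative assumptions; the paper's actual modification operators differ:

```python
import random

def regenerate(seed_inputs, fitness, n_iter=200, step=5):
    """Search-based test data regeneration (sketch).

    Starting from each pre-existing test input, hill-climb through a
    simple numeric neighborhood, keeping any candidate that improves
    `fitness` (e.g., proximity to an uncovered branch predicate).
    """
    new_tests = []
    for seed in seed_inputs:
        best = list(seed)
        for _ in range(n_iter):
            cand = [x + random.randint(-step, step) for x in best]
            if fitness(cand) > fitness(best):
                best = cand
        if best != list(seed):          # keep only genuinely new inputs
            new_tests.append(best)
    return new_tests
```

Seeding the search with existing tests is what yields the efficiency gain the abstract reports: the climb starts near already-interesting regions of the input space instead of from random points.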