Similar Articles
20 similar articles found; search took 15 ms.
1.
《Computer Networks》2008,52(14):2713-2727
Secure wireless sensor networks (WSNs) must be able to associate a set of reported data with a valid location. Many algorithms exist for the localization service that determines a WSN node's location, and research is ongoing into location verification, where the network must determine whether a node's claimed location is valid. The interaction of these two services creates a further challenge, however, since there is no method to distinguish between benign errors, e.g., errors inherent to the localization technique, and malicious errors, e.g., errors due to a node's deceptive location report. In this paper, we study the problem of inherent localization errors and their impact on the location verification service. We propose a localization and location verification (LLV) server model and define categories of LLV schemes for discrete and continuous resolution. We then designate two metrics to measure the impact of inherent localization errors: the probability of verification (for discrete location verification schemes) and the CDF of the deviation distance (for continuous location verification schemes), and use them to analyze the performance of each LLV category. Numerical results show that a proper tuning mechanism is needed to tolerate even small inherent estimation errors; otherwise, location verification can result in the rejection of almost all nodes. In addition, we propose several location verification feedback (LV-FEED) algorithms to improve localization accuracy. Analysis of these algorithms shows that a significant improvement in localization accuracy can be achieved within a few iterations of the location verification feedback schemes.
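The tension between inherent localization error and the verification tolerance can be illustrated with a small Monte Carlo sketch (a hypothetical illustration, not the paper's LLV model): an honest node's position estimate carries zero-mean Gaussian error, and a discrete verifier accepts the claim only if the estimate lands within a tolerance radius.

```python
import math
import random

def probability_of_verification(sigma, radius, trials=20000, seed=0):
    """Monte Carlo estimate of the chance that an honest node's claimed
    position is accepted, given zero-mean Gaussian localization error with
    standard deviation `sigma` per axis and a verification radius `radius`."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        # The estimated position deviates from the true (claimed) one by noise.
        dx, dy = rng.gauss(0, sigma), rng.gauss(0, sigma)
        if math.hypot(dx, dy) <= radius:
            accepted += 1
    return accepted / trials

# A radius that is small relative to the inherent localization error
# rejects honest nodes most of the time; a tuned radius accepts them.
strict = probability_of_verification(sigma=5.0, radius=2.0)
tuned = probability_of_verification(sigma=5.0, radius=15.0)
```

This mirrors the paper's observation that, without tuning, verification can reject almost all (honest) nodes.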

2.
With the rapid development of the Internet of Things (IoT), massive numbers of embedded devices are widely used in modern life, and security and privacy have become major challenges for the IoT. Interconnected IoT devices form cluster networks, and device swarm attestation, a security technique that verifies the trusted state of all devices in a cluster, is an important open problem in IoT security research. Traditional attestation techniques target a single prover and cannot meet the global attestation needs of large-scale clusters, while naively extended swarm attestation mechanisms are usually inefficient and unable to resist collusion attacks. To address these problems, this paper proposes an efficient swarm attestation scheme based on device grouping. The scheme groups homogeneous devices and designates a manager node in each group that is responsible for verifying the nodes within that group. During remote attestation, since each manager node already knows the trust state of its group's nodes, only the manager nodes in the global cluster need to be verified, which improves efficiency. The scheme is not only efficient but also highly secure, resisting collusion and other attacks. Experiments with our prototype implementation show that the scheme's attestation efficiency increases as the number of homogeneous devices grows and the number of manager nodes shrinks.
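The two-level verification idea can be sketched as follows (a minimal illustration under assumed details: each device reports a firmware measurement hash, and `GOOD` is the hypothetical reference measurement; the paper's actual protocol and its collusion-resistance machinery are not reproduced here):

```python
import hashlib

GOOD = hashlib.sha256(b"trusted-firmware-v1").hexdigest()  # reference measurement

def manager_attest(group_reports):
    """Manager node: pre-verify every device in its own group and summarise
    the group state as one bit plus a digest of the collected reports."""
    ok = all(r == GOOD for r in group_reports.values())
    digest = hashlib.sha256("".join(sorted(group_reports.values())).encode()).hexdigest()
    return {"ok": ok, "digest": digest}

def verifier_attest(manager_summaries):
    """Remote verifier: instead of touching every device, it only checks
    the (far fewer) manager summaries."""
    return all(s["ok"] for s in manager_summaries)

group_a = {f"dev{i}": GOOD for i in range(100)}
group_b = {f"dev{i}": GOOD for i in range(100)}
group_b["dev7"] = hashlib.sha256(b"tampered").hexdigest()  # one compromised device
cluster_ok = verifier_attest([manager_attest(group_a), manager_attest(group_b)])
```

The verifier inspects 2 summaries instead of 200 device reports, which is the efficiency gain the abstract describes.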

3.
Federated learning (FL) has emerged to break data silos and protect clients' privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients' data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they impose negative effects on communication overhead or model performance. Moreover, these schemes change the original distribution of the local gradient, which makes it difficult to defend against adversarial attack. In this paper, we propose FedDAA, a novel federated learning framework with model decomposition, aggregation, and assembling, along with a training algorithm, in which the local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation. To give FedDAA better privacy protection, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we give defense schemes against adversarial attack in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining model convergence accuracy at 0.952, thus providing the best privacy protection and model training performance. More importantly, the defenses against adversarial attack are compatible with privacy protection in FedDAA, and their effects are no weaker than in traditional FL. Moreover, the verification algorithm for aggregation results adds only negligible overhead to FedDAA.
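The decompose/aggregate/assemble pipeline can be sketched in a few lines (a simplified stand-in for FedDAA: gradients are plain lists, the block split is naive contiguous chunking, and the privacy indicator and verification algorithm are omitted):

```python
import random

def decompose(gradient, n_proxies):
    """Split a flat gradient into contiguous blocks, one per proxy, so that
    no single proxy ever sees a client's whole gradient."""
    size = -(-len(gradient) // n_proxies)  # ceiling division
    return [gradient[i * size:(i + 1) * size] for i in range(n_proxies)]

def proxy_aggregate(blocks_from_clients):
    """Each proxy sums its own block position-wise across all clients."""
    return [sum(vals) for vals in zip(*blocks_from_clients)]

def assemble(aggregated_blocks):
    """The assembler concatenates per-proxy sums back into one gradient."""
    return [v for block in aggregated_blocks for v in block]

# Three clients, each with a 10-dimensional local gradient.
clients = [[random.Random(s).uniform(-1, 1) for _ in range(10)] for s in range(3)]
n_proxies = 2
per_proxy = [[decompose(g, n_proxies)[p] for g in clients] for p in range(n_proxies)]
aggregate = assemble([proxy_aggregate(b) for b in per_proxy])
expected = [sum(col) for col in zip(*clients)]  # plain FL aggregation result
```

The assembled result equals ordinary FL aggregation, which is why model convergence is preserved while each proxy's view is partial.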

4.
Understanding the weaknesses and limitations of existing digital fingerprinting schemes and designing effective collusion attacks play an important role in the development of digital fingerprinting. In this paper, we propose a collusion attack optimization framework for spread-spectrum (SS) fingerprinting, based on closed-loop feedback control theory. In the framework, we first define a measure function to test whether a fingerprint is present in the attacked signal after collusion. Then, an optimization mechanism is introduced to attenuate the fingerprints in the forgery. We evaluate the performance of the proposed framework for three different SS-based embedding methods. The experimental results show that the proposed framework is more effective than the other examined collusion attacks: about three pieces of fingerprinted content are able to defeat a fingerprinting system that accommodates about 1000 users, if defeat is defined as bringing the detection probability below 0.9. Meanwhile, high fidelity of the attacked content is retained.
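The baseline that such optimized attacks improve upon is simple averaging collusion, which can be shown directly (a generic textbook sketch, not the paper's feedback-controlled attack; embedding strength and signal sizes are illustrative assumptions):

```python
import random

def embed(host, fingerprint, strength=0.1):
    """Additive spread-spectrum embedding: host + alpha * fingerprint."""
    return [h + strength * f for h, f in zip(host, fingerprint)]

def correlate(signal, host, fingerprint):
    """Detector statistic: correlation of the residual with one fingerprint."""
    residual = [s - h for s, h in zip(signal, host)]
    return sum(r * f for r, f in zip(residual, fingerprint)) / len(signal)

rng = random.Random(1)
n = 4096
host = [rng.uniform(-1, 1) for _ in range(n)]
fingerprints = [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(3)]
copies = [embed(host, f) for f in fingerprints]

# Averaging collusion: three colluders average their copies, attenuating
# each embedded fingerprint by a factor of 1/3.
forged = [sum(vals) / len(vals) for vals in zip(*copies)]
single_stat = correlate(copies[0], host, fingerprints[0])
colluded_stat = correlate(forged, host, fingerprints[0])
```

Already with three colluders the detector statistic drops to roughly a third of its single-copy value; the paper's feedback loop pushes it further below the detection threshold while preserving fidelity.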

5.
To address the inability of self-healing group key distribution schemes based on traditional bidirectional hash chains to resist collusion attacks, this paper proposes an enhanced bidirectional hash chain structure that introduces a sliding window and a lightweight sub-chain (LiBHC), and presents a self-healing group key distribution scheme based on this structure. The scheme effectively solves the problem of seamless group key switching and greatly reduces the security threat that collusion attacks pose to the system. Analysis shows that, while retaining a favorable resource overhead, the scheme achieves better security and reliability and is better suited to application scenarios where node capture attacks are frequent.
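For context, the underlying (traditional) bidirectional hash chain construction that the paper enhances can be sketched as follows; this is the generic textbook form, not the sliding-window/LiBHC variant, and the seed values are placeholders:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(seed: bytes, length: int):
    """One-way hash chain: seed, H(seed), H(H(seed)), ..."""
    out = [seed]
    for _ in range(length - 1):
        out.append(h(out[-1]))
    return out

# Forward chain runs one way, backward chain is used in reverse; the
# session-j group key binds one element of each.
m = 8
forward = chain(b"fwd-seed", m)
backward = chain(b"bwd-seed", m)[::-1]
group_keys = [h(forward[j] + backward[j]) for j in range(m)]

def recover(fwd_at_j1, j1, bwd_at_j2, j2, j):
    """Self-healing: a member holding the forward value of session j1 and the
    backward value of session j2 recomputes any missed key with j1 <= j <= j2
    by hashing toward j from both ends."""
    f = fwd_at_j1
    for _ in range(j - j1):
        f = h(f)
    b = bwd_at_j2
    for _ in range(j2 - j):
        b = h(b)
    return h(f + b)

recovered = recover(forward[2], 2, backward[6], 6, 5)
```

The collusion weakness the paper targets is visible here: a member evicted after session j1 and one admitted at session j2 can pool their forward and backward values to recover keys between them, which motivates the sliding window and sub-chain structure.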

6.
In recent years, graph neural networks (GNNs) have performed well in graph representation learning and are widely applied in daily life, for example in e-commerce, social media, and biology. However, studies have shown that GNNs are easily misled by carefully crafted adversarial attacks, which can prevent them from working properly, so improving the robustness of GNNs is essential. Existing work has proposed defense methods to improve GNN robustness, but lowering the success rate of adversarial attacks while preserving the model's performance on its main task remains challenging. By observing adversarial examples produced by different attacks, we find that the node pairs joined by adversarial edges typically exhibit both low structural similarity and low node-feature similarity. Based on this finding, we propose GRD-GNN, a graph-reconstruction defense method for graph neural networks. Considering both graph structure and node features, it uses two similarity metrics, the number of common neighbors and node similarity, to detect adversarial edges and reconstruct the graph, so that the reconstructed graph removes adversarial edges and adds edges that strengthen key structural features, achieving an effective defense. Finally, defense experiments on three real-world datasets verify that GRD-GNN achieves the best defensive performance among the compared defense methods without affecting the classification of clean graph data. In addition, visualization is used to interpret the defense results and explain the method's effectiveness.
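The structural half of the edge-filtering idea can be sketched with a common-neighbor (Jaccard) test (a hypothetical simplification of GRD-GNN: only the structural similarity is used, the feature similarity and edge-addition steps are omitted, and the toy graph is invented for illustration):

```python
def jaccard(adj, u, v):
    """Structural similarity of an edge's endpoints: shared neighbors
    divided by the union of their neighborhoods (excluding the edge itself)."""
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def reconstruct(edges, nodes, threshold=0.0):
    """Drop edges whose endpoints share no neighbors (similarity <= threshold),
    the typical signature of adversarially inserted long-range edges."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return [(u, v) for u, v in edges if jaccard(adj, u, v) > threshold]

# Two triangle communities plus one adversarial cross-community edge (0, 4).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (0, 4)]
clean = reconstruct(edges, nodes=range(6))
```

The cross-community edge has zero common neighbors with either endpoint and is pruned, while every intra-triangle edge survives.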

7.
In a node capture attack, the adversary intelligently captures nodes and extracts the cryptographic keys from their memories to destroy the security, reliability, and confidentiality of a wireless sensor network. The attack, however, suffers from low attacking efficiency and high resource expenditure. In this paper, we approach the attack from the adversary's view and develop a matrix-based method to model the process of the node capture attack, establishing a matrix that indicates the compromising relationship between nodes and paths. We propose a Matrix-based node capture attack Algorithm (MA for short), which maximizes destructiveness while consuming the minimum resource expenditure. We conduct several experiments to show the performance of MA. Experimental results show that MA can reduce the number of attacking rounds, shorten the execution time, enhance the attacking efficiency, and conserve the energy cost.
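The node-path incidence idea lends itself to a greedy max-coverage sketch (a hypothetical illustration of the general approach, not the paper's MA algorithm; the toy incidence data is invented):

```python
def greedy_capture(matrix, n_paths):
    """Greedy capture schedule over a node-path incidence matrix:
    matrix[node] is the set of path indices compromised by capturing that
    node. Each round, pick the node that compromises the most new paths."""
    captured, compromised = [], set()
    while len(compromised) < n_paths:
        best = max(matrix, key=lambda n: len(matrix[n] - compromised))
        gain = matrix[best] - compromised
        if not gain:
            break  # the remaining paths cannot be compromised
        captured.append(best)
        compromised |= gain
    return captured, compromised

# Toy network: 4 candidate nodes covering 5 key paths.
incidence = {"n1": {0, 1, 2}, "n2": {2, 3}, "n3": {3, 4}, "n4": {1}}
order, paths = greedy_capture(incidence, n_paths=5)
```

Two captures ("n1", then "n3") compromise all five paths, whereas a naive order could need three or four, which is the round-reduction effect the abstract reports.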

8.
徐丽 《计算机应用》2012,32(5):1379-1380
Zhang Xuejun et al. proposed two multi-service collusion-resistant traitor tracing schemes and asserted that it is computationally infeasible for any number of colluding users to construct a decryption key. This paper gives a detailed cryptanalysis of both schemes and presents a concrete attack based on simple linear combinations, proving that three or more users can collude to construct multiple valid decryption keys while the manager cannot trace any of the colluding traitors via black-box tracing. The analysis shows that both of Zhang et al.'s tracing schemes have design flaws and achieve neither collusion resistance nor traceability.

9.
Source location privacy, which means protecting source sensors' locations from being leaked through observed network traffic, is an emerging research topic in wireless sensor networks, because it cannot be fully addressed by traditional cryptographic mechanisms such as encryption and authentication. Current source location privacy schemes, assuming either a local or a global attack model, have limitations. For example, schemes under a global attack model are subject to a so-called "01" attack, through which an attacker can potentially identify the sources of real messages. To tackle this attack, we propose two perturbation schemes, one based on the Uniform distribution and the other on the Gaussian distribution, and analyze their security properties. We also simulate and compare them with previous schemes; the results show that the proposed perturbation schemes can improve sensor source location privacy significantly. Furthermore, an attacker may employ more intelligent statistical tools, such as Univariate Distribution based Reconstruction (UDR), to analyze traffic generation patterns and find the real sources. We propose a Risk Region (RR) based technique to prevent the attacker from succeeding at this. Performance evaluation shows that the RR-based scheme increases the attacker's errors, so that the attacker cannot accurately derive real messages or their sources.
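The flavor of the two perturbation schemes can be illustrated with jittered transmission schedules (a loose sketch under assumed parameters, not the paper's exact scheme; the jitter widths are arbitrary choices):

```python
import random

def perturbed_schedule(n, mean_interval, dist, rng):
    """Per-node transmission times: each node keeps sending at roughly
    `mean_interval`, with per-message jitter drawn from a Uniform or
    Gaussian perturbation, so that slotting in a real report does not
    create a recognizable timing signature."""
    t, times = 0.0, []
    for _ in range(n):
        if dist == "uniform":
            jitter = rng.uniform(-0.5, 0.5) * mean_interval
        else:  # "gaussian"
            jitter = rng.gauss(0, 0.17 * mean_interval)
        t += max(0.0, mean_interval + jitter)
        times.append(t)
    return times

rng = random.Random(7)
uni = perturbed_schedule(1000, 1.0, "uniform", rng)
gau = perturbed_schedule(1000, 1.0, "gaussian", rng)
mean_uni = uni[-1] / len(uni)
mean_gau = gau[-1] / len(gau)
```

Both schedules keep the long-run sending rate of every node statistically indistinguishable, which is what denies the global observer a clean "01" signal.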

10.
Based on the RSA signature and RSA encryption schemes from cryptography, this paper proposes an adversarial attack method that lets a specific classifier output the correct classification for the adversarial examples. Using the idea of the one-pixel attack, a normal image can embed additional information while gaining the ability to be misclassified by all other classifiers. The proposed method can be applied in fields such as classifier authorization management and online image anti-counterfeiting. Experimental results show that the adversarial examples generated by the method are imperceptible to the human eye and can still be recognized by the designated classifier.

11.
Wireless Sensor Networks (WSNs) used for monitoring applications such as pipelines carrying oil, water, and gas; perimeter surveillance; border monitoring; and subway tunnel monitoring form linear WSNs: the infrastructure being monitored inherently imposes linearity (a straight line through the placement of sensor nodes), hence the name. These applications are security-critical because the data being communicated can be used for malicious purposes. Contemporary research on WSN data security does not apply directly to linear WSNs, because by capturing only a few nodes an adversary can disrupt the entire service of a linear WSN. We therefore propose a data aggregation scheme that takes care of the privacy, confidentiality, and integrity of data. In addition, the scheme is resilient against node capture and collusion attacks. Several schemes exist for detecting malicious nodes; the proposed scheme, however, also identifies malicious nodes with lower key storage requirements. Moreover, we provide an analysis of communication cost in terms of the number of messages communicated. To the best of our knowledge, the proposed data aggregation scheme is the first lightweight scheme that achieves privacy and verification of data, resistance against node capture and collusion attacks, and malicious node identification in linear WSNs.

12.
Deep learning is now widely applied in fields such as computer vision, robotics, and natural language processing. However, prior work has shown that deep neural networks are fragile in the face of adversarial examples: a carefully crafted adversarial example can cause a deep learning model to make a wrong decision. Most existing research misleads classifiers by generating small Lp-norm perturbations, but the results achieved are not ideal. This paper proposes a new adversarial attack method, the image colorization attack, which converts an input sample to a grayscale image, designs a colorization method to guide the recoloring of the grayscale image, and finally uses the recolored image to fool the classifier, achieving an unrestricted attack. Experiments show that adversarial examples crafted this way perform well at fooling several state-of-the-art deep neural network image classifiers and pass a human perception study.

13.
Resisting collusion attacks by multiple users is a primary consideration in the design of digital fingerprinting schemes. This paper proposes a multimedia fingerprinting scheme based on random codes, in which random codes are used to construct user fingerprints so as to resist collusion among multiple users. Experiments show that the scheme has good collusion-resistance performance.

14.
Deep-learning-based code vulnerability detection models, with their high detection efficiency and accuracy, have gradually become an important means of detecting software vulnerabilities and play an important role in the code auditing services of the code hosting platform GitHub. However, deep neural networks have been shown to be vulnerable to adversarial attacks, which puts deep-learning-based vulnerability detection models at risk of being attacked and losing detection accuracy. Constructing adversarial attacks against vulnerability detection models therefore not only uncovers the security flaws of such models but also helps evaluate their robustness so that their performance can be improved accordingly. Existing adversarial attack methods for vulnerability detection models rely on generic code transformation tools and propose no targeted code perturbation operations or decision algorithms, so they struggle to generate effective adversarial examples, and the validity of those examples depends on manual inspection. To address these problems, this paper proposes a reinforcement-learning-based adversarial attack method for vulnerability detection models. The method first designs a set of semantics-constrained, vulnerability-preserving code perturbation operations as the perturbation set; next, taking vulnerable code samples as input, it uses a reinforcement learning model to select a concrete sequence of perturbation operations; finally, it locates potential perturbation positions from the node types of the code sample's syntax tree and performs the code transformations to generate adversarial examples. Based on SARD and NVD, we built two experimental datasets totaling 14,278 code samples and trained four vulnerability detection models with different characteristics as attack targets, training one reinforcement learning network per target model to carry out the attack. The results show that our attack reduces model recall by 74.34% and achieves an attack success rate of 96.71%, an average improvement of 68.76% over the baseline methods. The experiments demonstrate that current vulnerability detection models are at risk of being attacked and that further research is needed to improve their robustness.

15.
By introducing mobility to some or all of the nodes in a wireless sensor network (WSN), a WSN can enhance its capability and flexibility to support multiple missions. In mobile wireless sensor networks, mobile nodes collect data and send it to a sink station. When the sink station employs directional antennas to send and receive data, its communication capability increases; however, using directional antennas implies that transmitters must know the direction or location of the receiver, so it is necessary to predict a mobile receiver's movement to keep the transmitter's antenna pointing in the right direction. This paper proposes a mobility prediction algorithm based on knowledge extracted from real vehicle traces. Validation experiments indicate that the algorithm's prediction accuracy is 96.5% and that communication using a directional antenna with movement prediction saves about 92.6% of energy consumption with a suitable beam width and handshake interval.

16.
Graph neural networks achieve remarkable performance on semi-supervised node classification. Studies have shown that graph neural networks are susceptible to perturbation, and existing work has therefore examined their adversarial robustness. However, gradient-based attacks cannot guarantee an optimal perturbation. This paper proposes an adversarial attack method based on both gradient and structure, which strengthens gradient-based perturbation. The method first uses first-order optimization of the training loss to generate a candidate perturbation set, then evaluates the similarity of the candidates, ranks them by the evaluation results, and selects a fixed budget of modifications to carry out the attack. The proposed attack is evaluated on semi-supervised node classification tasks over five datasets. Experimental results show that, with only a small number of perturbations, node classification accuracy drops significantly, clearly outperforming existing attack methods.

17.
To address the problems that traditional anonymous questionnaire systems cannot resist collusion attacks and cannot protect user privacy when publishing data, this paper proposes a new privacy-preserving anonymous questionnaire scheme. It introduces a cluster of questionnaire worker nodes of which only a minority may collude, uses threshold signatures to register users, and generates the questionnaire's user list from those threshold signatures, thereby resisting collusion attacks. User responses are homomorphically encrypted and uploaded to a public tamper-proof platform to prevent data repudiation, and differential privacy combined with secure multi-party computation is used to output privacy-preserving aggregated questionnaire results. On this basis, zero-knowledge proofs are woven into the questionnaire process to guarantee the robustness of the ciphertexts and the correctness of the scheme. Performance analysis shows that the scheme's security model satisfies anonymity, verifiability, confidentiality, and privacy preservation; compared with schemes such as ANONIZE and Prio, it has advantages in collusion-attack resistance and privacy protection, and its time and storage overheads meet practical application requirements.
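The multi-party aggregation step can be illustrated with additive secret sharing (a simplified stand-in for the paper's combination of homomorphic encryption and secure multi-party computation; the differential-privacy noise, threshold signatures, and zero-knowledge proofs are omitted, and the field modulus is an arbitrary choice):

```python
import random

PRIME = 2**61 - 1  # field modulus for the additive shares

def share(value, n_workers, rng):
    """Split one questionnaire response (0/1) into additive shares mod PRIME;
    any coalition smaller than n_workers learns nothing about the response."""
    shares = [rng.randrange(PRIME) for _ in range(n_workers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each worker sums its own share column; the published tally is the sum
    of the per-worker partial sums. No individual response is ever revealed."""
    partials = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partials) % PRIME

rng = random.Random(42)
responses = [rng.randint(0, 1) for _ in range(200)]  # yes/no answers
tally = aggregate([share(r, 3, rng) for r in responses])
```

Each worker's view of any single response is a uniformly random field element, yet the published tally equals the true count; a deployed scheme would add calibrated noise to the tally before release.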

18.
Image classification models based on deep neural networks have made great improvements on various tasks, but they are still vulnerable to adversarial examples that increase the possibility of misclassification. Various methods have been proposed to generate adversarial examples under white-box conditions and have achieved high success rates. However, most existing adversarial attacks achieve only poor transferability when attacking other, unknown models in black-box settings. In this paper, we propose a new method that generates adversarial examples based on an affine-shear transformation from the perspective of the deep model's input layer and maximizes the loss function during each iteration. This method improves the transferability and input diversity of adversarial examples, and we further optimize the generation process with Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset indicate that our method exhibits higher transferability and achieves higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image-transformation-based methods to build still more powerful attacks.

19.
The FIRE trust and reputation model is a decentralized trust model that can be applied for trust management in unstructured Peer-to-Peer (P2P) overlays. The FIRE model does not, however, consider malicious activity or possible collusive behavior among network nodes, and it is therefore susceptible to collusion attacks. This investigation reveals that FIRE is vulnerable to lying and cheating attacks, and it presents a trust management approach that detects collusion in direct and witness interactions among nodes based on a colluding node's history of interactions. A witness-ratings-based graph-building approach is used to determine possibly collusive behavior among nodes, and various interaction policies are defined to detect and prevent collaborative behavior among colluding nodes. Finally, a multidimensional trust model, FIRE+, is devised to avoid collusion attacks in direct and witness-based interactions. The credibility of the proposed trust management scheme as an enhancement of the FIRE trust model is verified by extensive simulation experiments.

20.
The computer vision field tends to use deep neural networks for recognition tasks, but adversarial examples cause abnormal network decisions. The mainstream defense against adversarial examples is adversarial training, which suffers from high computational cost and long training time, limiting its applicable scenarios. This paper proposes an adversarial example defense method based on knowledge distillation, which reuses the defense experience learned on large datasets for new classification tasks. In the distillation process, the teacher model and the student model share the same architecture, using the …


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号