Found 20 similar documents; search took 109 ms
1.
2.
Recently, network data-breach incidents have been occurring with high frequency. They have become a hot topic of online life and a major real-world threat to information security, and deserve due attention from all sectors.
3.
Every year, several major data-breach incidents are reported at home and abroad, and reports of attacks on Internet giants such as Facebook and Twitter are commonplace. The larger the enterprise, the greater the danger: with so many users, it makes an attractive target for attackers. Just last month, the non-profit Wikimedia Foundation suffered a personal-information leak: the personal information of 37,000 individuals could be accessed by volunteers with access rights to Wikimedia's "Labs-DB".
4.
马峥 《网络安全技术与应用》2021,(4):80-81
With the rapid development of university informatization in recent years, universities have carried out a series of efforts around data-standard formulation, business-system data integration, data governance, and big-data analytics. Data breaches occur frequently, and data security has drawn growing attention. Taking data leakage prevention (DLP) technology as its research object, this paper describes the data-security situation facing universities, analyzes and compares current DLP technologies, and explores their application in university scenarios in light of actual campus environments.
5.
To address the low efficiency of existing dynamic detection methods for private-data leakage on the Android platform, this paper designs and implements a dynamic detection method based on permission analysis. The method combines the permission analysis used in Android static detection with dynamic taint tracking: the permissions requested by an application determine which types of private data (taint sources) and which leakage channels (taint sinks) the taint tracker monitors. Detection options are stored in system properties. Experimental results show that the method improves the efficiency of dynamic taint tracking while preserving the effectiveness of leakage detection.
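The core idea — using requested permissions to narrow what the taint tracker must monitor — can be sketched as follows. The permission-to-source and permission-to-sink tables are illustrative assumptions, not the paper's actual configuration:

```python
# Map each Android permission to the private-data types it unlocks
# (taint sources) and to the leakage channels it enables (taint sinks).
# These tables are hand-written examples for illustration.
PERMISSION_SOURCES = {
    "android.permission.READ_CONTACTS": {"contacts"},
    "android.permission.ACCESS_FINE_LOCATION": {"location"},
    "android.permission.READ_PHONE_STATE": {"imei", "phone_number"},
}
PERMISSION_SINKS = {
    "android.permission.INTERNET": {"network"},
    "android.permission.SEND_SMS": {"sms"},
}

def taint_config(requested_permissions):
    """Return the (sources, sinks) a taint tracker should enable for
    an app, based only on the permissions the app requests."""
    sources, sinks = set(), set()
    for perm in requested_permissions:
        sources |= PERMISSION_SOURCES.get(perm, set())
        sinks |= PERMISSION_SINKS.get(perm, set())
    return sources, sinks

# An app that can read contacts and reach the network only needs
# contact data tracked toward the network sink.
sources, sinks = taint_config([
    "android.permission.READ_CONTACTS",
    "android.permission.INTERNET",
])
print(sources, sinks)  # {'contacts'} {'network'}
```

Restricting the tracked source and sink types this way is what yields the efficiency gain over tracking every possible data type for every app.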
6.
Domestically produced information data has high utilization and mining value and easily becomes a target of leakage and theft, so its security has drawn increasing attention. Against this background, a leakage-traceability system for such data based on a trusted blockchain is designed. Building on the MVC architecture, the system is organized into three layers: a data layer, a functional-unit layer, and a presentation layer. Three functional units are designed according to the traceability steps. Leakage determination: identifies leakage behavior from traffic data. Source localization: uses trusted-blockchain technology to partition the information data into blocks and, on that basis, locate the source of the leak. Path tracing: starting from the source, builds a B-M tree, converts it into a directed acyclic graph, and draws the leakage path. Results show that with this system, the localization error (1.27), path-overlap index (9.82), and time cost (42.27 s) are all the best among the compared values, indicating that the designed system performs better, locates sources more accurately, and traces leaks more efficiently.
7.
To address the problems that the deep network-communication information obtained by traditional methods is incomplete and that data transmission is insecure, an autonomous sensing system for network-communication data leakage based on the Kinect sensor is designed. Fuzzy inference combined with expert experience builds a brittleness-assessment system for Kinect sensor data; grey relational analysis and the ideal-optimal-point method compute the sensitivity of communication data, yielding targets for mining sensitive-data leakage vulnerabilities; based on an open-source data component set, a system integrating data collection and organization, data storage, ...
8.
冯凯 《自动化技术与应用》2020,39(9):135-138
With the development of cloud computing and big data, traditional security-monitoring techniques cannot meet the demands of uninterrupted services. The system designed in this paper is based on a detection model for assessing the risks of a big-data platform; the model defends an intrusion-detection and protection system in a host-managed environment against distributed DDoS attacks. The model design uses principal component analysis and linear discriminant analysis together with a metaheuristic algorithm known as Ant Lion optimization, performing feature selection via a neural network to classify and configure cloud servers. Test results show that the model performs well in predicting security risks for cloud-based big-data platforms.
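The abstract gives no implementation details. As a hedged sketch of just one ingredient it names — PCA-based dimensionality reduction of a feature matrix before classification — the following uses plain NumPy on synthetic data:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature matrix X (n_samples x n_features) onto its
    top-k principal components."""
    Xc = X - X.mean(axis=0)            # center each feature
    cov = np.cov(Xc, rowvar=False)     # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors
    return Xc @ top

# Synthetic traffic-feature matrix: 100 samples, 8 raw features,
# reduced to 3 components before feeding a downstream classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
Z = pca_reduce(X, 3)
print(Z.shape)  # (100, 3)
```

The LDA, Ant Lion optimization, and neural-network stages the abstract mentions would sit downstream of a reduction step like this; their details are not specified in the source.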
9.
10.
Windows IT Pro Magazine, 2010, (3): 44-45
In a typical network environment, you do your best, within budget constraints and the resources at hand, to build the best security solution you can using firewalls, antivirus programs, intrusion detection systems, authentication, and so on. But how do you lock down your USB ports?
11.
Information Security Journal: A Global Perspective, 2013, 22(5-6): 247-252
In the wake of undiscovered data breaches and subsequent public exposure, regulatory compliance and security audit standards are becoming more important to protecting critical assets. Despite the increase in the number of data breaches via illicit means, internal controls seem to fail when it comes to the assurance that critical assets remain uncompromised. According to the Identity Theft Resource Center, 336 breaches have been reported in 2008 alone, 69% greater than this time last year. This is a concern for security teams, especially since a lack of dedicated resources exists to combat and revert this trend. This is significantly important to take into consideration when going through the formal audit process to certify adherence to Sarbanes-Oxley (SOX), Gramm-Leach-Bliley (GLBA), the Payment Card Industry (PCI) standard, or the Health Insurance Portability and Accountability Act (HIPAA). With the significant increase in data exposure, corporations cannot afford to take shortcuts when it comes to information assurance; otherwise it is almost certain that one will become a victim of a serious exposure of sensitive information. This paper explores the several disconnects between established and accepted security audit frameworks and the variable of hidden infections.
12.
Advanced persistent threat (APT) attacks pose a great challenge to data protection in enterprises and governments. Crafting malware from 0-day vulnerabilities is a common avenue of APT attacks, and traditional signature-based security systems struggle to detect it. To detect malware that leaks sensitive information, we first analyze known APT malware and outline the steps of the information-stealing attack, and on that basis propose a malware-detection scheme targeting data-exfiltration behavior, aimed at malware of the same attack type. The scheme combines anomaly detection and misuse detection, continuously monitoring the protected hosts and network at low overhead, and defines a set of inference rules describing the high-level malicious events observable at each attack step. Once a suspicious event is observed, low-level host and network behaviors are collected and correlated with high-level malicious events via the inference rules, reconstructing the steps of the information-stealing attack and thereby detecting it. Simulation experiments verify the effectiveness of the scheme.
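The correlation step — matching observed low-level behaviors against inference rules to reconstruct high-level malicious events — might be sketched as follows. The rule representation and all event and behavior names are invented for illustration; the paper's actual rule language is not given here:

```python
# Each inference rule maps a high-level malicious event to the set of
# low-level behaviors that must all be observed to conclude it occurred.
# These rules and names are hypothetical examples.
RULES = {
    "credential_theft": {"read_browser_store", "access_keychain"},
    "exfiltration": {"archive_files", "outbound_https_burst"},
}

def infer_events(observed_behaviors):
    """Return the high-level events whose required low-level behaviors
    have all been observed."""
    observed = set(observed_behaviors)
    return sorted(event for event, required in RULES.items()
                  if required <= observed)

# Archiving files followed by a burst of outbound HTTPS traffic
# satisfies the (hypothetical) exfiltration rule.
print(infer_events(["archive_files", "outbound_https_burst", "ping"]))
# ['exfiltration']
```

A real system would additionally order matched events in time to reconstruct the full attack sequence; this sketch shows only the rule-matching core.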
13.
Adlyn Adam Teoh, Norjihan Binti Abdul Ghani, Muneer Ahmad, NZ Jhanjhi, Mohammed A. AlZain, Mehedi Masud. Computer Systems Science and Engineering, 2022, 40(2): 505-515
Organizational and end-user data breaches are highly implicated by the role of information security conscious care behaviour in the respective incident responses. This research study draws upon the literature in the areas of information security, incident response, the theory of planned behaviour, and protection motivation theory to expand and empirically validate a modified framework of information security conscious care behaviour formation. The applicability of the theoretical framework is shown through a case study labelled a cyber-attack of unprecedented scale and sophistication in Singapore's history to date, the 2018 SingHealth data breach. The single in-depth case study observed information security awareness, policy, experience, attitude, subjective norms, perceived behavioural control, threat appraisal, and self-efficacy as emerging prominently in the framework's applicability to incident handling. The data analysis did not support a relationship between threat severity and conscious care behaviour. The findings from the above-mentioned observations are presented as possible key drivers in shaping information security conscious care behaviour in real-world cyber incident management.
14.
From the perspective of enterprise goals or business tasks, this paper proposes a requirements-elicitation framework based on insider threats. The framework includes a method for identifying and assessing insider threats, together with a requirements-elicitation method for mitigating the risks. Finally, it describes how the defensive requirements of an organizational system are collected.
15.
Threat intelligence in the network-security domain is described in many different ways, so a standard for formatted description is urgently needed to convert unformatted intelligence into structured data and support knowledge-graph visualization of intelligence. Based on the STIX 2.0 specification, this paper extracts the ontology elements suited to cyber threat intelligence and builds a shareable, reusable, and extensible threat-intelligence ontology model, with a detailed classification into domain ontologies, application ontologies, and atomic ontologies. Applying the model to the PoisonIvy attack, 61 entities and 102 relations were extracted from the PoisonIvy research report, and the extracted structured data were imported into Gephi for visualization. The ontology model completes the conversion of intelligence from unstructured to structured form, describes it in a unified syntax, and finally expresses the key elements of the intelligence as a knowledge graph, enabling rapid localization of the core elements of a security incident and the relationships among them, and providing an important basis for security analysts and decision makers.
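Turning extracted entity-relation pairs into a graph structure for visualization can be sketched as follows. The PoisonIvy-related triples shown are hypothetical placeholders, not data from the cited report:

```python
from collections import defaultdict

# Hypothetical STIX-style (source, relation, target) triples extracted
# from a threat report; real extraction would yield SDO/SRO objects.
triples = [
    ("PoisonIvy", "uses", "RAT-backdoor"),
    ("PoisonIvy", "targets", "gov-sector"),
    ("RAT-backdoor", "communicates-with", "c2.example.com"),
]

# Build a labelled adjacency list: entity -> [(relation, entity), ...]
graph = defaultdict(list)
for src, rel, dst in triples:
    graph[src].append((rel, dst))

def neighbors(entity):
    """Return the outgoing (relation, target) edges of an entity."""
    return graph.get(entity, [])

print(neighbors("PoisonIvy"))
# [('uses', 'RAT-backdoor'), ('targets', 'gov-sector')]
```

From such an adjacency list, an edge list (Source, Target, Label columns) can be written out for import into a tool like Gephi, which is how the abstract describes the visualization step.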
16.
In Web log mining, data preprocessing is the foundation of the whole mining process and directly affects the quality of the mining results. Most web pages today use frames, yet traditional preprocessing techniques do not filter frame pages; even when they do, the page structure is left scrambled, so correct information cannot be provided for path completion. This paper therefore proposes a data-preprocessing method for Web log mining based on reconstructing the site structure, together with a path-completion method built on it.
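The frame-filtering idea can be sketched as follows, assuming Apache combined-format log lines and a hand-written site-structure table standing in for the reconstructed site structure the paper builds:

```python
# Hypothetical site-structure table: each URL's role in the reconstructed
# site. Frameset containers and navigation frames are not real visits.
SITE_STRUCTURE = {
    "/index.html": "frameset",   # container page that only defines frames
    "/nav.html": "frame",        # navigation frame loaded automatically
    "/news/a.html": "content",   # page the user actually reads
}

def preprocess(log_lines):
    """Keep only requests for content pages, discarding frame noise."""
    kept = []
    for line in log_lines:
        # Combined format: the quoted request line is 'METHOD URL PROTO'.
        url = line.split('"')[1].split()[1]
        if SITE_STRUCTURE.get(url) == "content":
            kept.append(url)
    return kept

logs = [
    '1.2.3.4 - - [t] "GET /index.html HTTP/1.1" 200 512',
    '1.2.3.4 - - [t] "GET /nav.html HTTP/1.1" 200 128',
    '1.2.3.4 - - [t] "GET /news/a.html HTTP/1.1" 200 2048',
]
print(preprocess(logs))  # ['/news/a.html']
```

The paper's contribution is reconstructing the site structure automatically rather than writing this table by hand; the filtering step it feeds looks essentially like the above.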
17.
Daiyue Weng Jun Hong David A. Bell 《International Journal of Software and Informatics》2012,6(3):453-472
A rapidly increasing number of Web databases have now become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements, navigation bars, etc. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied, with a number of approaches proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content, which may affect the accuracy of data-record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also represent several content and visual features of visual blocks in a data section and use them to filter out noisy blocks. Second, it measures the similarity between data items in different data records based on their visual and content features and aligns them into groups so that the data in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approaches are highly effective.
18.
19.
Research on a Service-Oriented Architecture for Heterogeneous Web Data Integration
Research on heterogeneous data integration using traditional middleware technology has achieved great results, but several problems remain: (1) the requirement that systems be homogeneous; (2) the inability to pass smoothly through firewalls; (3) interoperability between different component models. This paper introduces Web service technology, proposes a Web-service-based approach to heterogeneous data integration, and finally presents a service-oriented architecture for heterogeneous Web data integration that better resolves the three problems of traditional middleware listed above.
20.
Extracting Web Data Using Instance-Based Learning
This paper studies structured data extraction from Web pages. Existing approaches to data extraction include wrapper induction and automated methods. In this paper, we propose an instance-based learning method, which performs extraction by comparing each new instance to be extracted with labeled instances. The key advantage of our method is that it does not require an initial set of labeled pages to learn extraction rules, as in wrapper induction. Instead, the algorithm is able to start extraction from a single labeled instance. Only when a new instance cannot be extracted does it need labeling. This avoids unnecessary page labeling, which solves a major problem with inductive learning (or wrapper induction): the set of labeled instances may not be representative of all other instances. The instance-based approach is very natural because structured data on the Web usually follow some fixed templates, and pages of the same template can usually be extracted based on a single page instance of the template. A novel technique is proposed to match a new instance with a manually labeled instance and, in the process, to extract the required data items from the new instance. The technique is also very efficient. Experimental results based on 1,200 pages from 24 diverse Web sites demonstrate the effectiveness of the method. It also significantly outperforms existing state-of-the-art systems.
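The core instance-based idea — landmarks taken from one labeled instance driving extraction on same-template pages, with labeling requested only on a mismatch — can be sketched as follows. The landmarks and pages are invented, and the real system matches much richer page structure than flat strings:

```python
# Landmarks derived from a single (hypothetical) labeled page instance:
# each data item is bounded by a prefix string and an optional suffix.
landmarks = {
    "title": ("<b>Title:</b> ", " <i>"),
    "price": ("<i>Price:</i> ", None),   # None: item runs to end of page
}

def extract(page, landmarks):
    """Extract data items from a page of the same template; return None
    on a template mismatch (i.e., the new instance needs labeling)."""
    items = {}
    for field, (prefix, suffix) in landmarks.items():
        start = page.find(prefix)
        if start == -1:
            return None                   # cannot extract: ask for a label
        start += len(prefix)
        end = page.find(suffix, start) if suffix else len(page)
        items[field] = page[start:end]
    return items

# A new page of the same (hypothetical) template extracts directly.
print(extract("<b>Title:</b> Hotel Deals <i>Price:</i> $45", landmarks))
# {'title': 'Hotel Deals', 'price': '$45'}
```

A page from a different template makes `extract` return `None`, which is exactly the point at which the paper's method asks for one more labeled instance instead of requiring a labeled training set up front.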