19 similar documents found; search took 343 ms
1.
Because networks are insecure, servers are attacked with increasing frequency. To ensure that a server retains a complete log file even after an attack, so that the system administrator can analyze it and take appropriate measures, this paper designs a method for protecting log files based on practical working conditions and gives the steps for implementing it.
2.
A Method for Protecting the Data Integrity of Network File Systems Based on Secure Audit Logs (cited 2 times: 0 self-citations, 2 by others)
Network file systems facilitate data sharing but also introduce new security risks. Audit logs track and record changes to the data on a file server and are valuable for analyzing and evaluating system security. Existing systems cannot prevent insider attacks on the audit log itself and therefore fail to meet user needs; for example, an attacker can delete sensitive data by modifying the on-disk audit log directly through the driver. This paper proposes a method, based on secure audit logs, for protecting the integrity of data on a network file server. Each file and directory on the server is associated with an authenticator that guarantees its integrity; activity on the file server is recorded into a log, and the data is later audited using the authenticators and the logged information. In addition, a trusted component is introduced to generate the authenticators and audit logs and to guarantee their security. A prototype system, Nfsd-log, was implemented on an NFS server according to this method and benchmarked. The SSH-build test shows that Nfsd-log's total time overhead is only 9.2% higher than that of the original NFS server without audit-log protection.
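As an illustration of the per-file authenticator idea in the abstract above, the sketch below keeps an HMAC for each file under a key held by the trusted component, together with an append-only audit log. All names, the key, and the log format are hypothetical; this is not the paper's Nfsd-log implementation.

```python
import hashlib
import hmac

# Secret key held by the trusted component (illustrative value).
SECRET_KEY = b"trusted-component-key"

def authenticator(data: bytes) -> str:
    """Compute the integrity authenticator for a file's contents."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

audit_log = []       # append-only log kept by the trusted component
authenticators = {}  # path -> current authenticator

def write_file(path: str, data: bytes) -> None:
    """Record a write: update the authenticator and append to the log."""
    authenticators[path] = authenticator(data)
    audit_log.append(("write", path, authenticators[path]))

def verify_file(path: str, data: bytes) -> bool:
    """Audit step: does the data on disk match the recorded authenticator?"""
    return hmac.compare_digest(authenticators[path], authenticator(data))

write_file("/export/report.txt", b"quarterly numbers")
assert verify_file("/export/report.txt", b"quarterly numbers")
assert not verify_file("/export/report.txt", b"tampered numbers")
```

Because only the trusted component knows the key, an attacker who rewrites the on-disk data through the driver cannot forge a matching authenticator.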
3.
4.
This paper proposes a network security processing model for cloud environments. Each cloud server in the model runs its own intrusion detection system, and all servers share an anomaly management platform responsible for receiving and processing alerts and for log management. The model uses dynamic adjustment of alert levels and sharing of attack information to minimize both the false-negative rate and the chance that servers suffer the same attack, effectively improving detection efficiency and overall system security.
5.
6.
In recent years, as Web application technology has advanced, demand for Web application services has grown, and attacks on Web applications have risen accordingly. Defensive techniques against network attacks abound, but most focus on detection before and protection during an attack; post-incident detection and maintenance receive comparatively little attention. A network center contains many servers, and Web log files, as part of each server, record in detail the events occurring on the system every day, such as client access requests and hackers' intrusion attempts against the site. To manage and maintain devices effectively and to reduce risk promptly after an attack occurs, auditing these logs is essential for post-incident detection and device security. This paper therefore studies and designs a security log audit system for Web applications, comprising three subsystems: log collection, an analysis engine, and log alerting. The collection subsystem gathers logs via multi-protocol analysis and performs normalization and de-duplication. The analysis engine combines a rule base with statistical methods, extracting log features, setting the corresponding statistical parameters, and performing comparative analysis. The alerting subsystem configures policies and dispatches tasks, presenting audit results in the interface or generating reports sent to users by e-mail.
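The collection subsystem's normalization and de-duplication steps might look like the sketch below. The log format, field names, and sample lines are assumptions for illustration, not taken from the paper.

```python
import re

# Sample raw access-log lines (illustrative, common-log-like format).
RAW_LOGS = [
    '10.0.0.5 - - [01/Mar/2024:12:00:01] "GET /admin.php HTTP/1.1" 404',
    '10.0.0.5 - - [01/Mar/2024:12:00:01] "GET /admin.php HTTP/1.1" 404',  # duplicate
    '10.0.0.9 - - [01/Mar/2024:12:00:03] "GET /index.html HTTP/1.1" 200',
]

LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+)'
)

def normalize(line: str) -> tuple:
    """Parse a raw line into a normalized (ip, timestamp, request, status) tuple."""
    m = LINE_RE.match(line)
    return (m["ip"], m["ts"], m["req"], int(m["status"]))

def collect(lines):
    """Normalize and de-duplicate, preserving first-seen order."""
    seen, out = set(), []
    for line in lines:
        rec = normalize(line)
        if rec not in seen:
            seen.add(rec)
            out.append(rec)
    return out

records = collect(RAW_LOGS)
assert len(records) == 2  # the duplicate line was dropped
```

The normalized tuples would then feed the analysis engine's rule matching and statistics.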
7.
8.
9.
10.
In cloud storage systems that perform client-side de-duplication, proofs of ownership prevent an attacker from obtaining an entire file using only its digest. Deduplication schemes based on proofs of ownership remain vulnerable to side-channel attacks, however: by uploading a file and observing whether de-duplication occurs, an attacker can determine whether that file already exists on the cloud server. This paper proposes an improved proof-of-ownership de-duplication scheme based on a storage gateway. The gateway interacts with the cloud server on the user's behalf, making de-duplication transparent to the user, and employs traffic obfuscation to resist side-channel and related-file attacks. Analysis and comparison show that the scheme reduces client-side computation and improves security.
11.
A log is a text message that is generated in various services, frameworks, and programs. The majority of log data mining tasks rely on log parsing as the first step, which transforms raw logs into formatted log templates. Existing log parsing approaches often fail to effectively handle the trade-off between parsing quality and performance. In view of this, in this paper, we present Multi-Layer Parser (ML-Parser), an online log parser that runs in a streaming manner. Specifically, we present a multi-layer structure in log parsing to strike a balance between efficiency and effectiveness: coarse-grained tokenization and a fast similarity measure are applied for efficiency, while fine-grained tokenization and an accurate similarity measure are used for effectiveness. In experiments, we compare ML-Parser with two existing online log parsing approaches, Drain and Spell, on ten real-world datasets, five labeled and five unlabeled. On the five labeled datasets, we use the proportion of correctly parsed logs to measure accuracy, and ML-Parser achieves the highest accuracy on four datasets. On the whole ten datasets, we use the Loss metric to measure parsing quality. ML-Parser achieves the highest quality on seven of the ten datasets while maintaining relatively high efficiency.
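The coarse-then-fine idea in the abstract above can be sketched as follows: a cheap coarse-grained key (here, the token count) narrows down candidate templates before an accurate fine-grained similarity decides the match. This is an illustration of the general multi-layer approach, not ML-Parser's actual algorithm.

```python
def coarse_key(log: str) -> int:
    """Fast coarse-grained layer: group logs by token count."""
    return len(log.split())

def fine_similarity(a: str, b: str) -> float:
    """Accurate fine-grained layer: fraction of positionally equal tokens."""
    ta, tb = a.split(), b.split()
    same = sum(1 for x, y in zip(ta, tb) if x == y)
    return same / max(len(ta), len(tb))

templates = {}  # coarse key -> list of template strings

def parse(log: str, threshold: float = 0.5) -> str:
    """Match a log against existing templates, merging variable tokens to <*>."""
    group = templates.setdefault(coarse_key(log), [])
    for i, tpl in enumerate(group):
        if fine_similarity(log, tpl) >= threshold:
            merged = [x if x == y else "<*>"
                      for x, y in zip(tpl.split(), log.split())]
            group[i] = " ".join(merged)
            return group[i]
    group.append(log)  # no match: the log becomes a new template
    return log

parse("Connection from 10.0.0.1 closed")
template = parse("Connection from 10.0.0.2 closed")
assert template == "Connection from <*> closed"
```

The coarse key lets a streaming parser skip whole groups of templates without computing the expensive similarity.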
12.
Given two strings, X and Y, both of length O(n) over alphabet Σ, a basic problem (local alignment) is to find pairs of similar substrings, one from X and one from Y. For substrings X' and Y' from X and Y, respectively, the metric we use to measure their similarity is the normalized alignment value: LCS(X', Y')/(|X'| + |Y'|). Given an integer M, we consider only those substrings whose LCS length is at least M. We present an algorithm that reports the pairs of substrings with the highest normalized alignment value in O(n log|Σ| + rM log log n) time, where r is the number of matches between X and Y. We also present an O(n log|Σ| + rL log log n) algorithm, where L = LCS(X, Y), that reports all substring pairs with a normalized alignment value above a given threshold.
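As a worked example of the metric above, the sketch below computes the normalized alignment value LCS(X', Y')/(|X'| + |Y'|) for two substrings using a standard dynamic-programming LCS; the paper's algorithm is much faster, so this only illustrates the definition.

```python
def lcs_length(x: str, y: str) -> int:
    """Classic O(|x|*|y|) dynamic-programming LCS length."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, cx in enumerate(x, 1):
        for j, cy in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if cx == cy else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def normalized_alignment(x_sub: str, y_sub: str) -> float:
    """The paper's similarity metric: LCS(X', Y') / (|X'| + |Y'|)."""
    return lcs_length(x_sub, y_sub) / (len(x_sub) + len(y_sub))

# Identical substrings achieve the maximum possible value, 1/2.
assert normalized_alignment("acgt", "acgt") == 0.5
assert lcs_length("acgtacgt", "cgtt") == 4
```

Note the normalization by |X'| + |Y'| rather than by max length, which is why a perfect match scores 1/2 rather than 1.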
13.
14.
Robert E. Tarjan, Information Processing Letters, 1983, 17(1): 37-41
In 1982 the author presented an O(m(log n)²) time algorithm for hierarchically decomposing a directed n-vertex, m-edge graph with weighted edges into strong components. Such an algorithm is useful in cluster analysis of data with an asymmetric similarity measure. The present paper gives a simpler algorithm with the faster running time of O(m log n).
15.
16.
Personalized recommendation based on Web log mining is widely used on e-commerce sites. To address the limited accuracy of existing recommender systems, this paper proposes a personalized recommendation system based on Web log mining and relevance measurement. First, users' access logs are extracted and preprocessed to obtain compact structured data. Next, the logs are analyzed and feature sequences are extracted. Then the relevance between pages and transaction text documents is computed from the frequency of features and the time spent on each page. Finally, the relevance between users and pages is computed with the cosine similarity formula, and a recommendation list is formed from it. Experimental results show that the scheme produces accurate personalized recommendations according to user preferences.
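The final recommendation step above (cosine similarity between a user's vector and each page's vector) can be sketched as follows. The feature weights, user, and page names are made-up illustrative values, not data from the paper.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical interest weights derived from visit frequency and dwell time.
user = [0.8, 0.1, 0.5]
pages = {"A": [0.9, 0.0, 0.4], "B": [0.1, 0.9, 0.1]}

# Rank pages by similarity to the user to form the recommendation list.
ranked = sorted(pages, key=lambda p: cosine(user, pages[p]), reverse=True)
assert ranked[0] == "A"
```

Pages whose feature profile points in the same direction as the user's interests rank first, regardless of the vectors' magnitudes.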
17.
Approximately 40% of mobile phone use studies published in scholarly communication journals base their findings on self-report data about how frequently respondents use their mobile phones. Using a subset of a larger representative sample, we examine the validity of this type of self-report data by comparing it to server log data. The self-report data correlate only moderately with the server log data, indicating low criterion validity. The categorical self-report measure asking respondents to estimate "how often" they use their mobile phones fared better than the continuous self-report measure asking them to estimate their mobile phone activity "yesterday." A multivariate exploratory analysis further suggests that it may be difficult to identify under- and overreporting using demographic variables alone.
18.
Clustering Web Users Based on Generalized Sessions (cited 7 times: 0 self-citations, 7 by others)
To discover Web users with similar access interests, this paper investigates the problem of clustering Web users. Users' access information is extracted from the server log files and organized into session vectors, where a session is described as a series of access requests a user sends to the server within a period of time. To reduce the dimensionality of the session vectors, the sessions are generalized using attribute-oriented induction, exploiting the hierarchical structure of Web pages, and a new distance measure is defined to describe the similarity between two sessions. Finally, a non-Euclidean relational clustering algorithm is applied to cluster the generalized sessions. Experiments show that this approach is efficient and feasible for mining meaningful classes of Web users from large log files.
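The generalization step can be sketched as rolling each URL up to a coarser level of the page hierarchy and then measuring distance on the generalized sets. The distance below is a simple Jaccard-style measure chosen for illustration; the paper defines its own measure, and the URLs are invented.

```python
def generalize(session, level=2):
    """Roll each URL up to its first `level` path components
    (attribute-oriented induction over the page hierarchy)."""
    return {"/".join(url.strip("/").split("/")[:level]) for url in session}

def distance(s1, s2):
    """Jaccard-style distance between two generalized sessions."""
    g1, g2 = generalize(s1), generalize(s2)
    return 1 - len(g1 & g2) / len(g1 | g2)

a = ["/sports/football/news1.html", "/sports/tennis/live.html"]
b = ["/sports/football/news2.html", "/finance/stocks/quote.html"]

assert distance(a, a) == 0.0   # identical sessions are at distance 0
assert 0 < distance(a, b) < 1  # partial overlap after generalization
```

Generalizing first means two users who read different articles in the same section still look similar, which both shrinks the vectors and captures shared interests.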
19.
Genetic process mining: an experimental evaluation (cited 4 times: 0 self-citations, 4 by others)
A. K. A. de Medeiros, A. J. M. M. Weijters, W. M. P. van der Aalst, Data Mining and Knowledge Discovery, 2007, 14(2): 245-304
One of the aims of process mining is to retrieve a process model from an event log. The discovered models can be used as objective starting points during the deployment of process-aware information systems (Dumas et al., eds., Process-Aware Information Systems: Bridging People and Software Through Process Technology. Wiley, New York, 2005) and/or as a feedback mechanism to check prescribed models against enacted ones. However, current techniques have problems when mining processes that contain non-trivial constructs and/or when dealing with the presence of noise in the logs. Most of the problems happen because many current techniques are based on local information in the event log. To overcome these problems, we try to use genetic algorithms to mine process models. The main motivation is to benefit from the global search performed by this kind of algorithm. The non-trivial constructs are tackled by choosing an internal representation that supports them. The problem of noise is naturally tackled by the genetic algorithm because, by definition, these algorithms are robust to noise. The main challenge in a genetic approach is the definition of a good fitness measure, because it guides the global search performed by the genetic algorithm. This paper explains how the genetic algorithm works. Experiments with synthetic and real-life logs show that the fitness measure indeed leads to the mining of process models that are complete (can reproduce all the behavior in the log) and precise (do not allow for extra behavior that cannot be derived from the event log). The genetic algorithm is implemented as a plug-in in the ProM framework.