Similar Documents
20 similar documents found (search time: 0 ms)
1.
Today, Information and Communications Technology (ICT) networks are a dominant component of our daily life. Centralized logging keeps track of events occurring in ICT networks, so a central log store is essential for timely detection of problems such as service-quality degradation, performance issues and, especially, security-relevant cyber attacks. Various software tools exploit log data to this end, such as security information and event management (SIEM) systems, log analysis tools and anomaly detection systems. While many products based on different approaches are on the market, identifying the most efficient solution for a specific infrastructure, and its optimal configuration, remains an unsolved problem. Today's general-purpose test environments do not sufficiently account for the specific properties of individual infrastructure setups, so tests in these environments are usually not representative. Testing on the real, running production systems, however, exposes the network infrastructure to dangerous or unstable situations. The solution to this dilemma is the design and implementation of a highly realistic test environment, i.e. a sandbox, that follows a different, novel approach: generate realistic network event sequence (NES) data that reflects the actual system behavior, and then use it to challenge network analysis software tools with varying configurations, safely and realistically, offline. In this paper we define a model based on log-line clustering and Markov chain simulation to create this synthetic log data. The model requires only a small set of real network data as input to capture the complex behavior of the real system; based on the input's characteristics, highly realistic, customer-specific NES data is generated.
To prove the applicability of the concept developed in this work, we conclude the paper with an illustrative example of evaluating and testing an existing anomaly detection system using generated NES data.
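The core of the approach, clustering log lines into event types and simulating their order with a Markov chain, can be sketched as follows (a minimal Python illustration under assumed event names, not the authors' implementation):

```python
import random
from collections import defaultdict

def build_markov_model(event_sequence):
    """Count transitions between consecutive event types."""
    transitions = defaultdict(list)
    for current, nxt in zip(event_sequence, event_sequence[1:]):
        transitions[current].append(nxt)
    return transitions

def generate_events(transitions, start, length, seed=0):
    """Walk the chain to emit a synthetic event sequence."""
    rng = random.Random(seed)
    events = [start]
    for _ in range(length - 1):
        choices = transitions.get(events[-1])
        if not choices:  # no observed successor: stop the walk
            break
        events.append(rng.choice(choices))
    return events

# Toy input: event types of real log lines (e.g. after template clustering).
real = ["login", "query", "query", "logout", "login", "query", "logout"]
model = build_markov_model(real)
synthetic = generate_events(model, "login", 10)
print(synthetic)
```

Because successors are sampled in proportion to their observed frequency, the synthetic sequence reproduces the transition statistics of the real log while remaining safe to replay offline.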

2.
The STDF (Standard Test Data Format) file format is a simple, well-organized standard for sharing and exchanging test data across semiconductor test steps. Once the basic structure of the standard is understood, a Java program can be written to read the data contained in such files.
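A minimal record-header parser illustrates the idea (sketched in Python rather than Java for consistency with the other examples here; the 4-byte header of REC_LEN, REC_TYP and REC_SUB follows the STDF specification, while the sample record below is made up):

```python
import struct
from io import BytesIO

def iter_stdf_records(stream, little_endian=True):
    """Yield (rec_typ, rec_sub, payload) for each STDF record.

    Every STDF record starts with a 4-byte header: a 2-byte record
    length (payload only), a 1-byte record type and a 1-byte subtype.
    """
    fmt = "<HBB" if little_endian else ">HBB"
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break
        rec_len, rec_typ, rec_sub = struct.unpack(fmt, header)
        yield rec_typ, rec_sub, stream.read(rec_len)

# Toy stream: a FAR-like record (type 0, sub 10) with a 2-byte payload.
data = struct.pack("<HBB", 2, 0, 10) + b"\x02\x04"
records = list(iter_stdf_records(BytesIO(data)))
print(records)
```

Decoding each payload into named fields then depends on the (type, subtype) pair, which is where a full reader spends most of its code.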

3.
To address the low efficiency of existing security-checking tools when applied to file-integrity checking on government websites, this paper proposes a CityHash-based file-integrity checking system for UNIX environments and describes its architecture and modules. The characteristics and advantages of the modern hash function CityHash are analyzed and summarized, and CityHash is applied to generating the file hash values in the system. Performance tests show that the approach is feasible and efficient; applying the system to file-integrity checking of government e-government websites allows tampered files to be discovered quickly, minimizing the damage.
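The baseline-and-recheck workflow of such an integrity checker can be sketched as follows (hashlib's blake2b stands in for CityHash, which requires a third-party binding; the file names are hypothetical):

```python
import hashlib
import os
import tempfile

def file_digest(path):
    """Hash a file's contents (blake2b as a stand-in for CityHash)."""
    h = hashlib.blake2b(digest_size=16)
    with open(path, "rb") as f:
        h.update(f.read())
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted digest for every file to be monitored."""
    return {p: file_digest(p) for p in paths}

def find_tampered(baseline):
    """Re-hash each file and report those whose digest changed."""
    return [p for p, d in baseline.items() if file_digest(p) != d]

# Demo: baseline a page, tamper with it, and detect the change.
with tempfile.TemporaryDirectory() as root:
    page = os.path.join(root, "index.html")
    with open(page, "w") as f:
        f.write("<h1>official site</h1>")
    baseline = build_baseline([page])
    with open(page, "w") as f:
        f.write("<h1>defaced</h1>")
    print(find_tampered(baseline))
```

In a deployment the baseline would be stored out of reach of the web server, and the recheck would run periodically over the whole document root.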

4.
The effectiveness of traditional local context analysis depends heavily on the results of the initial retrieval. To address this limitation, statistical analysis and filtering of user query logs are used to obtain the articles users are most likely interested in; these replace the top N articles returned by the initial retrieval as the source document set for query-expansion terms, and local context analysis is then applied to compute term-to-term correlation. Experimental results show that this method considerably improves query precision.
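A simplified version of the co-occurrence scoring at the heart of local context analysis might look like this (a toy scoring rule, not the exact LCA formula; the documents and query are made up):

```python
from collections import Counter

def expansion_terms(query_terms, documents, k=3):
    """Rank candidate expansion terms by co-occurrence with query terms.

    Each document contributes its overlap with the query as the score
    for every non-query term it contains.
    """
    query = set(query_terms)
    scores = Counter()
    for doc in documents:
        words = set(doc.split())
        overlap = len(words & query)
        if overlap == 0:
            continue
        for w in words - query:
            scores[w] += overlap
    return [w for w, _ in scores.most_common(k)]

docs = ["linux kernel scheduler", "linux kernel module", "cooking pasta recipe"]
print(expansion_terms(["linux"], docs, k=2))
```

The method described above simply swaps the document set fed to this scoring step: log-derived articles of likely interest instead of the top-N initial results.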

5.
This paper designs a comprehensive inspection and recovery tool for Oracle databases that does not depend on log files. The tool can extract detailed information about all data tables (including "deleted" tables) directly from Oracle data files (*.dbf), such as the table name, column names, column types, column lengths, column ordinals, and the numbers of the data blocks where the table data is actually stored. Even when the log files have been wiped, the tool can recover deleted records directly from the data files with 100% accuracy, and it can also recover certain "deleted" tables from the data files.

6.
Model checking is an important technique for developing highly dependable systems. This paper proposes a model-verification method based on theorem proving and implements tool support for it. The method describes an infinite-state system in the algebraic specification language CafeOBJ and translates it into a finite-state SMV specification. By observing the transition system, it proves that a counterexample of the generated SMV specification is also a counterexample of the CafeOBJ specification, thereby uncovering potential system errors in the early stages of development and avoiding wasted time, money, and repeated work.

7.
This paper analyzes the problem of recording the operations of authorized users in an information system, concludes that implementing operation logging at the database layer has considerable advantages, and gives a concrete implementation scheme.
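The paper's concrete scheme is not reproduced here, but the general idea of database-layer logging, using triggers so that no application path can bypass the audit trail, can be sketched with SQLite (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE op_log (
    op         TEXT,
    table_name TEXT,
    row_id     INTEGER,
    at         TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Triggers capture every change at the database layer,
-- so the audit trail cannot be bypassed by any client.
CREATE TRIGGER log_insert AFTER INSERT ON accounts BEGIN
    INSERT INTO op_log (op, table_name, row_id)
    VALUES ('INSERT', 'accounts', NEW.id);
END;
CREATE TRIGGER log_update AFTER UPDATE ON accounts BEGIN
    INSERT INTO op_log (op, table_name, row_id)
    VALUES ('UPDATE', 'accounts', NEW.id);
END;
""")

conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
conn.execute("UPDATE accounts SET balance = 50.0 WHERE id = 1")
print(conn.execute("SELECT op, row_id FROM op_log").fetchall())
```

Production databases such as Oracle or PostgreSQL offer the same trigger mechanism, typically extended with the session user and old/new column values.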

8.
Written text is an important component in the process of knowledge acquisition and communication. Poorly written text fails to deliver clear ideas to the reader no matter how revolutionary and ground-breaking these ideas are. Providing text with good writing style is essential to transfer ideas smoothly. While we have sophisticated tools to check for stylistic problems in program code, we do not apply the same techniques for written text. In this paper we present TextLint, a rule-based tool to check for common style errors in natural language. TextLint provides a structural model of written text and an extensible rule-based checking mechanism.
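A toy rule-based checker in the same spirit (these are illustrative rules, not TextLint's actual rule set or its structural model):

```python
import re

# Each rule pairs a pattern with a human-readable description.
RULES = [
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), "doubled word"),
    (re.compile(r" {2,}"), "multiple spaces"),
]

def lint(text):
    """Return (rule description, matched text) for each style violation."""
    problems = []
    for pattern, description in RULES:
        for match in pattern.finditer(text):
            problems.append((description, match.group(0)))
    return problems

print(lint("This is is a  test."))
```

New checks are added by appending to the rule table, which is the extensibility property the paper argues for, here in its simplest possible form.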

9.
10.
Characterizing users' intent and behaviour while they use an information retrieval tool (e.g. a search engine) is a key question in web research, as it holds the key to knowing how users interact, what they expect, and how we can provide them with information in the most beneficial way. Previous research has focused on identifying the average characteristics of user interactions. This paper proposes a stratified method for analyzing query logs that groups queries and sessions according to their hit frequency and analyzes the characteristics of each group in order to find out how representative the average values are. The findings show that behaviours typically associated with the average user do not fit most of these groups.
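The stratification step can be sketched as follows (the bucketing scheme, order-of-magnitude hit counts, is an assumption for illustration, as are the sample queries):

```python
from collections import Counter

def stratify_queries(query_log):
    """Group queries into strata by order of magnitude of hit count."""
    counts = Counter(query_log)
    strata = {}
    for query, hits in counts.items():
        bucket = len(str(hits)) - 1  # 1-9 hits -> 0, 10-99 -> 1, ...
        strata.setdefault(bucket, []).append(query)
    return strata

log = ["cats"] * 12 + ["rare query"] + ["dogs"] * 3
print(stratify_queries(log))
```

Per-stratum statistics (session length, click depth, and so on) can then be compared against the global average to see which groups the "average user" actually describes.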

11.
This study addresses the construction of a preset checking sequence that does not pose controllability (synchronization) or observability (undetectable output shift) problems when applied in distributed test architectures that use remote testers. The controllability problem arises when a tester is required to send the current input but, because it neither sent the previous input nor received the previous output, cannot determine when to send it. The observability problem arises when a tester expects an output in response to either the previous or the current input but, not being the one to send the current input, cannot determine when to start and stop waiting for that output. Based on UIO sequences, a checking-sequence construction method is proposed that yields a sequence free from controllability and observability problems.

12.
Local model checking and protocol analysis
This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol. The protocol considered is RETHER, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking results show that (for a particular network configuration) RETHER makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller than the total amount that would result from a global analysis of the protocol. In the course of specifying and verifying RETHER, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time overhead in servicing requests for data transmission. Again, using local model checking, we showed that this alternative design also possesses the properties of interest. This observation points out one of the often-overlooked benefits of formal verification: by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to uncover interesting design alternatives.

13.
Li Guofu, Zhu Pengjia, Cao Ning, Wu Mei, Chen Zhiyi, Cao Guangsheng, Li Hongjun, Gong Chenjing. Multimedia Tools and Applications (2019) 78(15): 21521-21535
Mining the vast amount of server-side logging data is an essential step to boost business intelligence, as well as to facilitate system maintenance for...

14.
15.
Scientific data analysis and visualization have become key components of today's large-scale simulations. Owing to rapidly increasing data volumes and the awkward I/O patterns of highly structured files, known serial methods and tools do not scale well and usually perform poorly on traditional architectures. In this paper, we propose a new framework, ParSA (parallel scientific data analysis), for high-throughput, scalable scientific analysis on a distributed file system. ParSA presents optimization strategies for grouping and splitting logical units to exploit the distributed I/O properties of the file system, for scheduling the distribution of block replicas to reduce network reads, and for maximizing the overlap of data reading, processing, and transfer during computation. In addition, ParSA provides interfaces similar to the NetCDF Operators (NCO), which are used in most climate-data diagnostic packages, making the framework easy to use. We use ParSA to accelerate well-known analysis methods for climate models on the Hadoop Distributed File System (HDFS). Experimental results demonstrate the high efficiency and scalability of ParSA: it reaches a maximum throughput of 1.3 GB/s on a six-node Hadoop cluster with five disks per node, whereas a RAID-6 storage node achieves only 392 MB/s.

16.
In software development, testers often focus on functional testing to validate implemented programs against their specifications. In safety-critical software development, testers are also required to show that tests exercise, or cover, the structure and logic of the implementation. To achieve different types of logic coverage, various program artifacts such as decisions and conditions must be exercised during testing. Use of model checking for structural test generation has been proposed by several researchers. Its limited applicability to models used in practice and the state-space explosion can, however, hamper model checking and hence the process of deriving tests for logic coverage. These approaches therefore need to be validated against relevant industrial systems so that more knowledge is built on how to use them efficiently in practice. In this paper, we present a tool-supported approach that handles software written in the Function Block Diagram language, so that logic-coverage criteria can be formalized and used by a model checker to generate tests automatically. To this end, we conducted a study based on industrial use-case scenarios from Bombardier Transportation AB, showing how our toolbox CompleteTest can be applied to generate tests for software systems used in the safety-critical domain. To evaluate the approach, we applied the toolbox to 157 programs and found that it is efficient in terms of the time required to generate tests satisfying logic coverage, and that it scales well for most of the programs.

17.
We explore the automatic generation of test data that respect constraints expressed in the Object-Role Modeling (ORM) language. ORM is a popular conceptual modeling language, primarily targeting database applications, with significant uses in practice. The general problem of even checking whether an ORM diagram is satisfiable is quite hard: restricted forms are easily NP-hard and the problem is undecidable for some expressive formulations of ORM. Brute-force mapping to input for constraint and SAT solvers does not scale: state-of-the-art solvers fail to find data to satisfy uniqueness and mandatory constraints in realistic time even for small examples. We instead define a restricted subset of ORM that allows efficient reasoning yet contains most constraints overwhelmingly used in practice. We show that the problem of deciding whether these constraints are consistent (i.e., whether we can generate appropriate test data) is solvable in polynomial time, and we produce a highly efficient (interactive speed) checker. Additionally, we analyze over 160 ORM diagrams that capture data models from industrial practice and demonstrate that our subset of ORM is expressive enough to handle their vast majority.

18.
Microsystem Technologies - MEMS technology has been applied in many fields including deep oil exploration and seismic detection areas, of which poor performance has limited its development until...

19.
Compared with the Windows operating system, Linux, which is open source, free, supports SMP and NUMA architectures, and runs on the widest range of platforms, has held distinct advantages since its birth in 1991, and today it is widely used in embedded systems and on servers. Its multi-user, multi-tasking nature allows functionality to be tailored to users' needs and applied flexibly in many scenarios. Although early versions were less convenient to operate than Windows, relying mainly on a character interface with typed commands, most current systems adopt graphical interfaces that users appreciate. The file system is a prominent feature of Linux; taking Linux's typical file system, ext2, as an example, this paper introduces the composition and working principles of the file system and designs a simple simulation implementing basic file-system functionality.
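A minimal in-memory simulation in the spirit of the paper's exercise, with fixed-size blocks, an inode table and a single root directory (this models the concepts only, not ext2's actual on-disk layout):

```python
class ToyFS:
    """A tiny in-memory model of an ext2-style filesystem."""
    BLOCK_SIZE = 16

    def __init__(self, n_blocks=32):
        self.blocks = [None] * n_blocks  # data block area
        self.inodes = {}                 # inode number -> list of block indices
        self.root_dir = {}               # filename -> inode number
        self.next_inode = 1

    def _alloc_block(self):
        for i, block in enumerate(self.blocks):
            if block is None:
                return i
        raise OSError("no free blocks")

    def create(self, name, data):
        """Split data into blocks and register them under a new inode."""
        ino = self.next_inode
        self.next_inode += 1
        block_list = []
        for off in range(0, len(data) or 1, self.BLOCK_SIZE):
            idx = self._alloc_block()
            self.blocks[idx] = data[off:off + self.BLOCK_SIZE]
            block_list.append(idx)
        self.inodes[ino] = block_list
        self.root_dir[name] = ino

    def read(self, name):
        """Look up the inode via the directory and reassemble its blocks."""
        ino = self.root_dir[name]
        return b"".join(self.blocks[i] for i in self.inodes[ino])

fs = ToyFS()
fs.create("motd", b"welcome to toy ext2!")
print(fs.read("motd"))
```

The directory-to-inode-to-blocks lookup chain is the essential structure ext2 shares with this toy; the real filesystem adds block groups, bitmaps, indirect blocks and superblock metadata on top of it.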

20.