Retrieved 20 similar records.
1.
郑立钧 《数字社区&智能家居》2007,1(2):1017-1018
The STDF file format is a simple and clearly structured standard for sharing and exchanging test data across semiconductor test operations. With an understanding of the standard's basic structure, a Java program can be written to read the data stored in these files.
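As a rough illustration (not the paper's program): an STDF V4 file is a flat sequence of records, each beginning with a four-byte header giving the record length, record type, and record subtype. A minimal Java sketch that walks these headers, assuming the common little-endian byte order (real readers detect endianness from the leading FAR record):

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Minimal sketch: walk the record headers of an STDF V4 file.
public class StdfRecordWalker {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            while (in.available() >= 4) {
                int b0 = in.readUnsignedByte();     // REC_LEN, low byte
                int b1 = in.readUnsignedByte();     // REC_LEN, high byte
                int recLen = b0 | (b1 << 8);        // little-endian U*2 length
                int recTyp = in.readUnsignedByte(); // record type
                int recSub = in.readUnsignedByte(); // record subtype
                byte[] body = new byte[recLen];
                in.readFully(body);                 // record payload, not decoded here
                System.out.printf("type=%d sub=%d len=%d%n", recTyp, recSub, recLen);
            }
        }
    }
}
```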
2.
The effectiveness of traditional local context analysis depends heavily on the results of the initial retrieval. To address this limitation, user query logs are statistically analyzed and filtered to obtain the documents users are most likely to be interested in; these replace the N documents returned by the initial retrieval as the source document set for query expansion terms, and local context analysis is then used to compute term-term correlations. Experimental results show that the method considerably improves retrieval precision.
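A toy sketch of the co-occurrence scoring step, with a simple additive count standing in for the full local-context-analysis weighting (which the abstract does not spell out); the documents here play the role of the log-derived document set:

```java
import java.util.*;

// Toy sketch: score candidate expansion terms by co-occurrence with the
// query terms over the (log-derived) document set.
public class LcaExpansion {
    static Map<String, Double> score(List<Set<String>> docs, Set<String> query) {
        Map<String, Double> scores = new HashMap<>();
        for (Set<String> doc : docs) {
            long hits = query.stream().filter(doc::contains).count(); // query terms in doc
            if (hits == 0) continue;
            for (String term : doc) {
                if (!query.contains(term)) {
                    scores.merge(term, (double) hits, Double::sum);   // accumulate co-occurrence
                }
            }
        }
        return scores; // highest-scoring terms become expansion candidates
    }

    public static void main(String[] args) {
        List<Set<String>> docs = List.of(
                Set.of("query", "log", "expansion", "retrieval"),
                Set.of("retrieval", "precision", "expansion"));
        System.out.println(score(docs, Set.of("retrieval")));
    }
}
```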
3.
徐国天 《网络安全技术与应用》2014,(11):32-33
This paper designs a comprehensive Oracle database examination and recovery tool that does not depend on log files. The tool can detect detailed information about all data tables, including "deleted" ones, directly from Oracle data files (*.dbf): table names, column names, column types, column lengths, column ordinals, and the numbers of the data blocks in which table data is actually stored. Even when the log files have been cleared, the tool can recover deleted records directly from the data files, with 100% accuracy, and it can also recover certain "deleted" data tables from the data files.
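The abstract does not describe the tool's internals; purely as a hedged sketch of the first step such a tool performs, the following walks a data file block by block, assuming Oracle's default 8 KB block size and the conventional type byte 0x06 for table data blocks (real tools then parse row headers inside each block to find rows flagged as deleted):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Rough sketch: locate candidate table-data blocks in an Oracle data file.
public class DbfBlockScanner {
    public static void main(String[] args) throws IOException {
        final int BLOCK = 8192; // assumed default Oracle block size
        try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
            byte[] buf = new byte[BLOCK];
            for (long off = 0; off + BLOCK <= f.length(); off += BLOCK) {
                f.seek(off);
                f.readFully(buf);
                if ((buf[0] & 0xFF) == 0x06) { // conventional "trans data" block type
                    System.out.println("data block at offset " + off);
                }
            }
        }
    }
}
```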
4.
This paper analyzes the problem of logging the operations of authorized users in information systems, concludes that implementing operation logging at the database layer has significant advantages, and gives a concrete implementation scheme.
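A sketch of the database-layer approach, using a generic audit table and trigger created over JDBC; the table, trigger, and SQL dialect are illustrative, not the paper's scheme:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: an audit table plus a trigger, so every change is logged at the
// database layer regardless of which application issued it.
public class AuditTriggerSetup {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(args[0]); // JDBC URL
             Statement s = c.createStatement()) {
            s.execute("CREATE TABLE op_log (op_user VARCHAR(64), op_type VARCHAR(16), "
                    + "op_time TIMESTAMP, table_name VARCHAR(64))");
            // Trigger syntax varies by vendor; this is generic SQL for illustration.
            s.execute("CREATE TRIGGER orders_audit AFTER UPDATE ON orders "
                    + "FOR EACH ROW INSERT INTO op_log "
                    + "VALUES (CURRENT_USER, 'UPDATE', CURRENT_TIMESTAMP, 'orders')");
        }
    }
}
```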
5.
Model checking is an important approach to developing highly dependable systems. This paper proposes a model-verification method based on theorem proving and implements tool support for it. The method describes an infinite-state system in the algebraic specification language CafeOBJ and translates it into a finite-state SMV specification. By means of observational transition systems, a counterexample of the generated SMV specification is proved to be a counterexample of the CafeOBJ specification as well, so that potential errors can be found in the early stages of development, avoiding wasted time, money, and repeated work.
6.
7.
Characterizing users' intent and behaviour while using an information retrieval tool (e.g. a search engine) is a key question in web research, as it holds the key to knowing how users interact, what they expect, and how we can provide them information in the most beneficial way. Previous research has focused on identifying the average characteristics of user interactions. This paper proposes a stratified method for analyzing query logs that groups queries and sessions according to their hit frequency and analyzes the characteristics of each group in order to find how representative the average values are. Findings show that behaviours typically associated with the average user do not fit most of the aforementioned groups.
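A toy sketch of the stratification idea, bucketing queries by hit frequency before computing per-group statistics (the bucket boundaries here are invented, not the paper's):

```java
import java.util.*;
import java.util.stream.*;

// Sketch: group logged queries into strata by occurrence frequency, then
// inspect each stratum instead of reporting one global average.
public class StratifiedLogAnalysis {
    public static void main(String[] args) {
        List<String> log = List.of("news", "news", "news", "weather", "weather", "rare query");
        Map<String, Long> freq = log.stream()
                .collect(Collectors.groupingBy(q -> q, Collectors.counting()));
        Map<String, List<String>> strata = freq.entrySet().stream()
                .collect(Collectors.groupingBy(
                        e -> e.getValue() >= 3 ? "head" : e.getValue() == 2 ? "torso" : "tail",
                        Collectors.mapping(Map.Entry::getKey, Collectors.toList())));
        strata.forEach((group, queries) ->
                System.out.println(group + ": " + queries)); // per-group characteristics
    }
}
```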
8.
Li Guofu Zhu Pengjia Cao Ning Wu Mei Chen Zhiyi Cao Guangsheng Li Hongjun Gong Chenjing 《Multimedia Tools and Applications》2019,78(15):21521-21535
Mining the vast amount of server-side logging data is an essential step to boost business intelligence, as well as to facilitate system maintenance for...
9.
10.
Local model checking and protocol analysis  Cited: 1 (self-citations: 1, others: 1)
Xiaoqun Du Scott A. Smolka Rance Cleaveland 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):219-241
This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol. The protocol considered is RETHER, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking results show that (for a particular network configuration) RETHER makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller than the total amount that would result from a global analysis of the protocol. In the course of specifying and verifying RETHER, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time overhead in servicing requests for data transmission. Again, using local model checking, we showed that this alternative design also possesses the properties of interest. This observation points out one of the often-overlooked benefits of formal verification: by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to uncover interesting design alternatives.
11.
Scientific data analysis and visualization have become key components of today's large-scale simulations. Owing to rapidly increasing data volumes and awkward I/O patterns over highly structured files, known serial methods and tools do not scale well and usually perform poorly on traditional architectures. In this paper, we propose a new framework, ParSA (parallel scientific data analysis), for high-throughput, scalable scientific analysis on a distributed file system. ParSA presents optimization strategies for grouping and splitting logical units to exploit the distributed I/O properties of the file system, for scheduling the distribution of block replicas to reduce network reads, and for maximizing the overlap of data reading, processing, and transfer during computation. Besides, ParSA provides interfaces similar to the NetCDF Operators (NCO), which are used in most climate data diagnostic packages, making the framework easy to use. We use ParSA to accelerate well-known analysis methods for climate models on the Hadoop Distributed File System (HDFS). Experimental results demonstrate the efficiency and scalability of ParSA, which reaches a maximum throughput of 1.3 GB/s on a six-node Hadoop cluster with five disks per node, whereas a RAID-6 storage node reaches only 392 MB/s.
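ParSA's own API is not shown in the abstract; the sketch below only illustrates the HDFS facility such a framework builds on, querying block-replica locations so computation can be scheduled near the data (requires the Hadoop client libraries on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: ask the HDFS namenode where each block replica of a file lives,
// the information a scheduler uses to place reads locally.
public class BlockLocality {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus st = fs.getFileStatus(new Path(args[0]));
        for (BlockLocation b : fs.getFileBlockLocations(st, 0, st.getLen())) {
            System.out.println("offset " + b.getOffset() + " on "
                    + String.join(",", b.getHosts()));
        }
    }
}
```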
12.
In software development, testers often focus on functional testing to validate implemented programs against their specifications. In safety-critical software development, testers are also required to show that tests exercise, or cover, the structure and logic of the implementation. To achieve different types of logic coverage, various program artifacts such as decisions and conditions are required to be exercised during testing. Use of model checking for structural test generation has been proposed by several researchers. The limited application to models used in practice and the state space explosion can, however, impact model checking and hence the process of deriving tests for logic coverage. Thus, there is a need to validate these approaches against relevant industrial systems such that more knowledge is built on how to efficiently use them in practice. In this paper, we present a tool-supported approach to handle software written in the Function Block Diagram language such that logic coverage criteria can be formalized and used by a model checker to automatically generate tests. To this end, we conducted a study based on industrial use-case scenarios from Bombardier Transportation AB, showing how our toolbox CompleteTest can be applied to generate tests in software systems used in the safety-critical domain. To evaluate the approach, we applied the toolbox to 157 programs and found that it is efficient in terms of time required to generate tests that satisfy logic coverage and scales well for most of the programs.
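The usual mechanism behind such tools is the "trap property" encoding: each coverage obligation is asserted to be unreachable, and the model checker's counterexample is then a test that reaches it. A minimal sketch emitting NuSMV-style trap properties for some hypothetical condition obligations:

```java
import java.util.List;

// Sketch of the trap-property trick behind model-checker-based test
// generation: negate each coverage obligation; a counterexample to the
// negated property is a concrete test reaching the obligation.
public class TrapProperties {
    public static void main(String[] args) {
        for (String cond : List.of("pump_on", "!pump_on", "valve_open & pump_on")) {
            System.out.println("LTLSPEC G !(" + cond + ")"); // NuSMV-style trap property
        }
    }
}
```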
13.
Yannis Smaragdakis Christoph Csallner Ranjith Subramanian 《Automated Software Engineering》2009,16(1):73-99
We explore the automatic generation of test data that respect constraints expressed in the Object-Role Modeling (ORM) language. ORM is a popular conceptual modeling language, primarily targeting database applications, with significant uses in practice. The general problem of even checking whether an ORM diagram is satisfiable is quite hard: restricted forms are easily NP-hard, and the problem is undecidable for some expressive formulations of ORM. Brute-force mapping to input for constraint and SAT solvers does not scale: state-of-the-art solvers fail to find data to satisfy uniqueness and mandatory constraints in realistic time even for small examples. We instead define a restricted subset of ORM that allows efficient reasoning yet contains most constraints overwhelmingly used in practice. We show that the problem of deciding whether these constraints are consistent (i.e., whether we can generate appropriate test data) is solvable in polynomial time, and we produce a highly efficient (interactive-speed) checker. Additionally, we analyze over 160 ORM diagrams that capture data models from industrial practice and demonstrate that our subset of ORM is expressive enough to handle the vast majority of them.
14.
Microsystem Technologies - MEMS technology has been applied in many fields including deep oil exploration and seismic detection areas, of which poor performance has limited its development until...
15.
段海梦 《网络安全技术与应用》2014,(8):30-30
Compared with the Windows operating system, Linux, with its open source code, free availability, support for SMP and NUMA architectures, and the widest range of supported platforms, has enjoyed irreplaceable advantages since its birth in 1991, and it is now widely used in embedded systems and on servers. Its multi-user, multi-tasking nature allows its functionality to be tailored to users, so it can be applied flexibly in many settings. Although early versions were less convenient to use than Windows, relying mainly on commands typed at a character interface, most current systems also offer graphical interfaces that users appreciate. The file system is a distinctive feature of Linux. Taking ext2, a typical Linux file system, as an example, this paper introduces the file system's structure and working principles and designs a simple simulation that implements basic file-system functions.
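In the spirit of the abstract's simulated file system, a toy sketch of the ext2-style split between inodes and directory entries (an illustration only, not the real on-disk layout):

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory file system: an inode holds metadata plus data-block
// numbers; a directory maps names to inode numbers, as ext2 does.
public class MiniFs {
    static class Inode {
        int size;                   // file size in bytes
        int[] blocks = new int[12]; // direct block numbers, echoing ext2's 12 direct pointers
    }

    Map<Integer, Inode> inodeTable = new HashMap<>();
    Map<String, Integer> rootDir = new HashMap<>(); // name -> inode number
    int nextInode = 1;

    int create(String name) {
        int ino = nextInode++;
        inodeTable.put(ino, new Inode());
        rootDir.put(name, ino); // directory entry links the name to the inode
        return ino;
    }

    public static void main(String[] args) {
        MiniFs fs = new MiniFs();
        System.out.println("created inode " + fs.create("readme.txt"));
    }
}
```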
16.
This paper presents a complete implementation scheme for a firewall log analysis system, with an in-depth analysis of the characteristics of the log format and of log preprocessing methods. A background processing program automatically imports and maintains the log data, and an ASP.NET-based user query interface is provided. In addition, driven by user-defined rules, the background program raises automatic alerts based on statistics computed over the firewall logs, or automatically configures the firewall device by invoking VBScript scripts from C#, so that the firewall's parameter configuration is tied to its logs and the system is, to a degree, self-adaptive.
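The original system uses ASP.NET and C#; purely to illustrate the user-defined alert rules, a small Java sketch where each rule is a predicate over per-source statistics from the log (the statistics and threshold are invented):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch: user-defined alert rules evaluated against firewall-log statistics.
public class AlertRules {
    public static void main(String[] args) {
        Map<String, Integer> deniedPerSource = Map.of("10.0.0.5", 120, "10.0.0.9", 3);
        List<Predicate<Map.Entry<String, Integer>>> rules =
                List.of(e -> e.getValue() > 100); // rule: too many denied packets
        deniedPerSource.entrySet().forEach(e -> {
            if (rules.stream().anyMatch(r -> r.test(e)))
                System.out.println("ALERT: " + e.getKey() + " denied " + e.getValue() + " times");
        });
    }
}
```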
17.
For equipment maintenance and overhaul in urban rail transit, emerging data mining techniques are applied to association analysis of logs, with the aim of preventing accidents before they happen. The paper describes the design of the association rules, a demonstration of the algorithm, and the mining process, proposes an improved algorithm, and finally carries out an analysis study on PMI logs.
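A minimal sketch of the association-analysis step: counting co-occurring event pairs across log sessions and keeping those above a support threshold (event names and the threshold are invented for illustration):

```java
import java.util.*;

// Sketch: frequent event-pair counting over maintenance-log "transactions",
// the first level of Apriori-style association rule mining.
public class LogAssociation {
    public static void main(String[] args) {
        List<Set<String>> sessions = List.of(
                Set.of("door_fault", "sensor_warn"),
                Set.of("door_fault", "sensor_warn", "reboot"),
                Set.of("sensor_warn", "reboot"));
        int minSupport = 2;
        Map<List<String>, Integer> pairCounts = new HashMap<>();
        for (Set<String> s : sessions) {
            List<String> events = new ArrayList<>(s);
            Collections.sort(events); // canonical order so pairs hash consistently
            for (int i = 0; i < events.size(); i++)
                for (int j = i + 1; j < events.size(); j++)
                    pairCounts.merge(List.of(events.get(i), events.get(j)), 1, Integer::sum);
        }
        pairCounts.forEach((pair, n) -> {
            if (n >= minSupport) System.out.println(pair + " support=" + n);
        });
    }
}
```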
18.
19.
A transaction log analysis of a digital library  Cited: 1 (self-citations: 0, others: 1)
Steve Jones Sally Jo Cunningham Rodger McNab Stefan Boddie 《International Journal on Digital Libraries》2000,3(2):152-169
As experimental digital library testbeds gain wider acceptance and develop significant user bases, it becomes important to investigate the ways in which users interact with the systems in practice. Transaction logs are one source of usage information, and the information on user behavior can be culled from them both automatically (through calculation of summary statistics) and manually (by examining query strings for semantic clues on search motivations and searching strategy). We have conducted a transaction log analysis of user activity in the Computer Science Technical Reports Collection of the New Zealand Digital Library, and report insights gained and identify resulting search interface design issues. Specifically, we present the user demographics available with our library, discuss the use of operators and search options in queries, and examine patterns in query construction and refinement. We also describe common mistakes in searching, and examine the distribution of query terms appearing in the logs.
Received: 17 December 1998 / Revised: 25 May 1999
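One of the automatic steps mentioned, computing summary statistics over logged query strings, might look like the following sketch (the queries are invented; real transaction-log formats differ):

```java
import java.util.List;

// Sketch: the share of logged queries that use boolean operators, one of
// the summary statistics a transaction log analysis reports.
public class QueryLogStats {
    public static void main(String[] args) {
        List<String> queries = List.of(
                "java AND compiler", "digital library", "search NOT engine", "ethernet");
        long withOps = queries.stream()
                .filter(q -> q.matches(".*\\b(AND|OR|NOT)\\b.*")) // crude operator detection
                .count();
        System.out.printf("%.1f%% of %d queries use boolean operators%n",
                100.0 * withOps / queries.size(), queries.size());
    }
}
```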
20.
Strom R.E. Yellin D.M. 《IEEE Transactions on Software Engineering》1993,19(5):478-485
The authors present a practical extension to typestate checking that is capable of proving programs free of uninitialized-variable errors even when the programs contain conditionally initialized variables, where the initialization of a variable depends upon the equality of one or more tag variables to a constant. The user need not predeclare the relationship between a conditionally initialized variable and its tags, and this relationship may change from one point in the program to another. The technique generalizes liveness analysis to conditional liveness analysis. Like typestate checking, the technique incorporates a dataflow analysis algorithm in which each point in a program is labeled with a lattice point describing statically tracked information, including the initialization of variables. The labeling is then used to check for programming errors such as referencing a variable that may be uninitialized.
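The pattern the analysis handles can be shown in a few lines. Notably, Java's own definite-assignment rule is too conservative to track the tag correlation (it would reject `result` if left uninitialized), which is exactly the gap a conditional liveness analysis closes:

```java
// Illustration of a conditionally initialized variable: `result` is written
// only when the tag variable `hasResult` is set, and read only under the
// same tag test. The tag relationship is never declared anywhere.
public class ConditionallyInitialized {
    public static void main(String[] args) {
        boolean hasResult = false; // tag variable
        int result = 0;            // Java's definite-assignment rule forces this
                                   // initializer; think of it as an uninitialized slot
        if (args.length > 0) {
            result = args[0].length();
            hasResult = true;      // initialization tied to the tag
        }
        if (hasResult) {
            System.out.println(result); // safe: guarded by the same tag
        }
    }
}
```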