Similar documents
20 similar documents found.
1.
Objective: The authenticity of digital images has attracted wide attention, and passive forensics is an effective way to address it. However, if a forger uses anti-forensic techniques to erase or forge the traces of tampering while manipulating an image, a large number of existing passive forensic techniques will fail. This paper reviews the state of research on image anti-forensics (including its origins, working principles, technical characteristics, and application prospects) and, based on the existing literature, summarizes the main challenges and opportunities anti-forensics faces.
Method: Because most existing passive forensic techniques judge image authenticity from differences in residual traces and intrinsic features, this paper reviews and compares the principles and strategies of anti-forensic techniques organized by the forensic features they target.
Results: According to the forensic features involved, anti-forensic techniques are grouped into three classes: residual-trace hiding, intrinsic-feature forgery, and anti-forensics detection; the current difficulties and challenges facing each class are presented.
Conclusion: Digital image anti-forensics is summarized and its outlook discussed, and it is pointed out that the generality, security, and reliability of these algorithms will require further in-depth research.

2.
ABSTRACT

Forensic analysis is the science of collecting, examining, and presenting evidence in order to support or refute a hypothesis. With the increasing size of storage devices and the growing popularity of digital hand-held devices connecting to the Internet, performing an effective digital forensic investigation is becoming more challenging for investigators. In this article, we evaluate the existing tools for timeline analysis and identify the need for a solid timeline analysis tool. For this reason, the article studies an existing but discontinued project called Zeitline, presents its features and shortcomings, and develops new capabilities to overcome these limitations. A case study is then presented in which the application’s functionality is tested.
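The core operation of a timeline tool such as Zeitline is importing timestamped events from heterogeneous sources and arranging them into a single ordered timeline. As a minimal illustration of that idea only (not Zeitline's actual Java implementation; the event sources and timestamps below are invented), a merge of per-source event lists might look like this:

```python
from datetime import datetime
from heapq import merge

# Hypothetical event lists, each already sorted by timestamp and tagged
# with its source (e.g. filesystem MAC times, web-browser history).
fs_events = [
    (datetime(2023, 5, 1, 9, 58, 12), "fs", "report.docx modified"),
    (datetime(2023, 5, 1, 10, 2, 40), "fs", "usb.inf created"),
]
browser_events = [
    (datetime(2023, 5, 1, 10, 0, 5), "browser", "visited file-sharing site"),
]

# Merge the per-source timelines into one chronologically ordered timeline.
timeline = list(merge(fs_events, browser_events, key=lambda e: e[0]))
for ts, source, desc in timeline:
    print(f"{ts.isoformat()}  [{source:7}]  {desc}")
```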

3.
In this paper we discuss how to generate inductive invariants for safety verification of hybrid systems. A hybrid symbolic-numeric method is presented to compute inequality inductive invariants of the given systems. A numerical invariant of the given system can be obtained by solving a parameterized polynomial optimization problem via sum-of-squares (SOS) relaxation. A method based on Gauss-Newton refinement and rational vector recovery is then used to obtain invariants with rational coefficients that exactly satisfy the invariant conditions. Several examples are given to illustrate our algorithm.
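As a sketch of what inequality invariant conditions look like (one standard formulation for a single-mode system with dynamics $\dot{x}=f(x)$, initial set $\mathit{Init}$ and unsafe set $\mathit{Unsafe}$; the paper's exact mode-by-mode conditions may differ), a polynomial $g$ defines an inductive invariant $\{x \mid g(x) \ge 0\}$ if:

```latex
\begin{align*}
  & g(x) \ge 0          && \text{for all } x \in \mathit{Init}, \\
  & \nabla g(x) \cdot f(x) \ge 0 && \text{whenever } g(x) = 0, \\
  & g(x) < 0            && \text{for all } x \in \mathit{Unsafe}.
\end{align*}
```

Each condition is then relaxed to a sum-of-squares constraint over the parameterized coefficients of $g$, which is what makes the search amenable to semidefinite programming.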

4.
Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research.
Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process.
Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices and whether they proposed the use of textual analysis tools.
Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches and that assessing the quality of empirical studies of different types is difficult.
Conclusion: We recommend removing advice to use structured questions to construct search strings and including advice to use a quasi-gold standard based on a limited manual search to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.

5.
Several automation tools have been developed over the years for forensic document examination (FDE) of handwritten items. Integrating the developed tools into a unified framework is considered and the essential role of the human in the process is discussed. The task framework is developed by considering the approach of computational thinking whose components are abstraction, algorithms, mathematical models and ability to scale. Beginning with the human FDE procedure expressed in algorithmic form, mathematical and software implementations of individual steps of the algorithm are described. Advantages of the framework are discussed, including efficiency (ability to scale to tasks with many handwritten items), reproducibility and validation/improvement of existing manual procedures. It is indicated that as with other expert systems, such as for medical diagnosis, current automation tools are useful only as part of a larger manually intensive procedure. This viewpoint is illustrated with a well-known FDE case, concerning the Lindbergh kidnapping with a new hypothesis – in this case, there are multiple questioned documents, possibility of multiple writers of the same document, determining whether the writing is disguised, known writing is formal while questioned writing is informal, etc. Observations are made for future developments, where human examiners provide handwriting characteristics while computational methods provide the necessary statistical analysis.  相似文献   

6.
In digital forensics, evidence images are stored on disk by a forensic tool. However, the stored images can be damaged by unexpected internal and external electromagnetic effects. Existing forensic tools provide integrity and authenticity of evidence images only by utilizing legacy cryptographic methods, i.e., applying hash values and digital signatures. Accordingly, such integrity and authenticity applied to those evidence images can easily be broken when the disk is damaged. In this paper, we focus on these limitations of existing forensic tools and introduce a new scheme that can recover and protect evidence images on the disk. Specifically, evidence images are divided into blocks; linkage relations between those blocks are formed; and a meta-block is applied to restore the damaged blocks. Blocks in damaged areas detected using CRC information are subjected to a multi-dimensional block operation to recover the damaged blocks and protect the evidence images.
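The abstract does not spell out the multi-dimensional block operation, so the following is only a minimal sketch of the general idea under a simplifying assumption: per-block CRC32 values detect damage, and a single XOR-parity meta-block rebuilds one damaged block (the paper's actual scheme is more elaborate).

```python
import zlib

BLOCK_SIZE = 4096

def split_blocks(data: bytes) -> list:
    # Pad the evidence image so it divides evenly into fixed-size blocks.
    data += b"\x00" * ((-len(data)) % BLOCK_SIZE)
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def make_metadata(blocks):
    # Per-block CRC32 for damage detection, plus one XOR-parity
    # "meta-block" that can rebuild any single damaged block.
    crcs = [zlib.crc32(b) for b in blocks]
    parity = bytearray(BLOCK_SIZE)
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return crcs, bytes(parity)

def recover(blocks, crcs, parity):
    # Locate blocks whose stored CRC no longer matches, then rebuild the
    # damaged one by XOR-ing the parity with every intact block.
    damaged = [i for i, b in enumerate(blocks) if zlib.crc32(b) != crcs[i]]
    if len(damaged) != 1:
        raise ValueError("single-parity sketch can only repair one damaged block")
    rebuilt = bytearray(parity)
    for i, block in enumerate(blocks):
        if i != damaged[0]:
            for j, byte in enumerate(block):
                rebuilt[j] ^= byte
    blocks[damaged[0]] = bytes(rebuilt)
    return blocks
```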

7.
Digital forensic examiners face challenges outside the technical aspects of collecting, investigating, and storing digital information. Rules about admissibility and the licensing requirements for forensic professionals must also be taken into account. The use of digital data in an expanding number of US court cases and business investigations has precipitated changes in evidence handling and admissibility requirements, most notably in the 2006 changes to the Federal Rules of Civil Procedure. Knowledge of these rules and the ensuing case law is an essential component of any examiner's toolkit because improper evidence handling can lead to inadmissible evidence. The court's acceptance of such evidence is also greatly affected by the examiner's proper licensure. Unfortunately, these requirements vary by state (sometimes even by city) and are constantly changing. Therefore, digital forensic investigators must heed both the court's rules regarding evidence handling and the state's rules for licensing in order to be most effective.  相似文献   

8.
LUKS is one of the most widely used disk encryption technologies on Linux and is common across Linux versions. Its features include support for multiple users/passphrases accessing the same device, an encryption key that does not depend on the passphrase, the ability to change a passphrase without re-encrypting the data, and the use of a data-splitting technique to store the encryption key and keep it secure. To address the problem that current forensic software cannot directly and quickly examine LUKS-encrypted partitions, this article first studies the encryption principles of LUKS partitions and, on that basis, proposes a LUKS decryption method that removes the dependence on a Linux system and greatly improves forensic efficiency.
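As a small illustration of the first step such a decryption method depends on (reading the on-disk metadata without a Linux system), the sketch below parses the global fields of a LUKS version 1 header. It assumes the standard LUKS1 on-disk layout and stops short of key-slot processing, key derivation and decryption, which the article's method would also have to cover.

```python
import struct

LUKS1_MAGIC = b"LUKS\xba\xbe"

def parse_luks1_header(device_path: str) -> dict:
    # The LUKS1 phdr is 592 bytes; only the first 208 bytes
    # (the global fields, before the eight key slots) are decoded here.
    with open(device_path, "rb") as f:
        hdr = f.read(208)
    (magic, version, cipher_name, cipher_mode, hash_spec,
     payload_offset, key_bytes, mk_digest, mk_salt,
     mk_iterations, uuid) = struct.unpack(">6sH32s32s32sII20s32sI40s", hdr)
    if magic != LUKS1_MAGIC or version != 1:
        raise ValueError("not a LUKS1 partition")
    return {
        "cipher": cipher_name.rstrip(b"\x00").decode(),
        "mode": cipher_mode.rstrip(b"\x00").decode(),
        "hash": hash_spec.rstrip(b"\x00").decode(),
        "payload_offset_sectors": payload_offset,
        "master_key_bytes": key_bytes,
        "mk_digest_iterations": mk_iterations,
        "uuid": uuid.rstrip(b"\x00").decode(),
    }
```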

9.
Context: Large-scale distributed systems are becoming commonplace with the large popularity of peer-to-peer and cloud computing. The increasing importance of these systems contrasts with the lack of integrated solutions to build trustworthy software. A key concern of any large-scale distributed system is the validation of global properties, which cannot be evaluated on a single node. Thus, it is necessary to gather data from distributed nodes and to aggregate these data into a global view. This turns out to be very challenging because of the system’s dynamism, which imposes very frequent changes in local values that affect global properties. This implies that the global view has to be frequently updated to ensure an accurate validation of global properties.
Objective: In this paper, we present a model-based approach to define a dynamic oracle for checking global properties. Our objective is to abstract relevant aspects of such systems into models. These models are updated at runtime by monitoring the corresponding distributed system.
Method: We conduct a real-scale experimental validation to evaluate the ability of our approach to check global properties. In this validation, we apply our approach to test two open-source implementations of distributed hash tables. The experiments are deployed on two clusters of 32 nodes.
Results: The experiments reveal an important defect in one implementation and show clear performance differences between the two implementations. The defect would not be detected without a global view of the system.
Conclusion: Testing global properties on distributed software consists of gathering data from different nodes and building a global view of the system, where properties are validated. This process requires a distributed test architecture and tools for representing and validating global properties. Model-based techniques are an expressive means for building oracles that validate global properties on distributed systems.
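To make the notion of a global-view oracle concrete, here is a minimal sketch (not the paper's tooling; the node snapshots and the checked property are invented for illustration): local state from each node is aggregated into a global model, on which a property no single node could verify, such as "every stored key is held by at least k nodes", is evaluated.

```python
from collections import defaultdict

REPLICATION_FACTOR = 3  # hypothetical replication requirement

def build_global_view(node_snapshots):
    # Aggregate per-node local state (node_id -> set of keys it stores)
    # into one global model: key -> set of nodes holding it.
    view = defaultdict(set)
    for node_id, keys in node_snapshots.items():
        for key in keys:
            view[key].add(node_id)
    return view

def check_replication(view, k=REPLICATION_FACTOR):
    # The global property: every key is held by at least k distinct nodes.
    return {key: nodes for key, nodes in view.items() if len(nodes) < k}

snapshots = {
    "node-1": {"a", "b"},
    "node-2": {"a", "c"},
    "node-3": {"a", "b", "c"},
}
violations = check_replication(build_global_view(snapshots))
print(violations or "global property holds")  # 'b' and 'c' are under-replicated
```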

10.
Context: The paper addresses the use of a Software Product Line approach in the context of developing software for a high-integrity, regulated domain such as civil aerospace. The success of a Software Product Line approach must be judged on whether useful products can be developed more effectively (lower cost, reduced schedule) than with traditional single-system approaches. When developing products for regulated domains, the usefulness of the product is critically dependent on the ability of the development process to provide approval evidence for scrutiny by the regulating authority.
Objective: The objective of the work described is to propose a framework for arguing that a product instantiated using a Software Product Line approach can be approved and used within a regulated domain, such that the development cost of that product would be less than if it had been developed in isolation.
Method: The paper identifies and surveys the issues relating to the adoption of Software Product Lines as currently understood (including related technologies such as feature modelling, component-based development and model transformation) when applied to high-integrity software development. We develop an argument framework using Goal Structuring Notation to structure the claims made and the evidence required to support the approval of an instantiated product in such domains. Any unsubstantiated claims or missing/sub-standard evidence is identified, and we propose potential approaches or pose research questions to help address this.
Results: The paper provides an argument framework supporting the use of a Software Product Line approach within a high-integrity regulated domain. It shows how lifecycle evidence can be collected, managed and used to credibly support a regulatory approval process, and provides a detailed example showing how claims regarding model transformation may be supported. Any attempt to use a Software Product Line approach in a regulated domain will need to provide evidence to support the approach in accordance with the argument outlined in the paper.
Conclusion: Product Line practices may complicate the generation of convincing evidence for approval of instantiated products, but it is possible to define a credible Trusted Product Line approach.

11.
ABSTRACT

Embedded devices are becoming ubiquitous in both domestic and commercial environments. Although smartphones, tablets, and video game consoles are all labeled by their primary function, most of these devices offer additional features and are capable of additional interactivity. Given the proprietary nature of such devices in terms of hardware and software and the protection mechanisms incorporated into these systems, it is and will continue to be extremely difficult to use “traditional digital forensics” methodologies to access storage media and acquire data for analysis. This paper examines how consumer law may be stifling research that the forensic community could ultimately depend upon to examine devices.  相似文献   

12.
《Software, IEEE》2006,23(4):98-99
An interesting issue facing software engineering relates to the evidence for adopting new techniques, tools, languages, methodologies, and so on. We shouldn't always reject new models based on pure argument and logic, but ideally, we should subject such developments to some form of validation. The software engineering community has addressed this issue in part by the establishment of specialist conferences. Two of these are merging, and the Technical Council on Software Engineering thought you would like to know why.

13.
刘在强, 林东岱, 冯登国. 《软件学报》(Journal of Software), 2007, 18(10): 2635-2644
Network forensics is a necessary extension of existing network security architectures and has increasingly become a research focus. However, many challenges remain in practice: the massive volume of data generated by networks, the comprehensibility of the evidence extracted from the collected data, and the effectiveness of evidence analysis methods. To address these problems, a network forensic analysis system based on fuzzy decision trees was developed, exploiting the technique's strong learning capability and the interpretability of its results, to assist network forensic investigators in analyzing computer crime incidents in a network environment. Experimental results of the method and a comparison with existing methods are given. The experiments show that the system can recognize most network events (with an average correct classification rate of 91.16%) and can provide investigators with understandable information, helping them conduct fast and efficient evidence analysis.
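The paper's classifier is a fuzzy decision tree; as a rough stand-in (an ordinary crisp decision tree rather than a fuzzy one, with invented toy features and labels), the sketch below shows the general workflow of training an interpretable tree on labelled network-event features and reading back its rules:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy feature vectors for network events:
# [duration_s, bytes_sent, bytes_received, distinct_ports, failed_logins]
X = [
    [0.2,  1200, 800,   1,  0],   # normal web request
    [0.1,   300, 200,   1,  0],   # normal web request
    [30.0,  900, 500, 120,  0],   # port scan
    [25.0,  700, 400,  95,  0],   # port scan
    [5.0,   400, 300,   1, 40],   # brute-force login
    [4.0,   350, 250,   1, 35],   # brute-force login
]
y = ["normal", "normal", "scan", "scan", "bruteforce", "bruteforce"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules stay readable for the examiner.
print(export_text(clf, feature_names=[
    "duration_s", "bytes_sent", "bytes_recv", "ports", "failed_logins"]))
print(clf.predict([[28.0, 800, 450, 110, 0]]))  # -> ['scan']
```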

14.
Agile software development methodologies are increasingly adopted by organizations because they focus on the client’s needs, thus safeguarding business value for the final product. At the same time, as the economy and society move toward globalization, more organizations shift to distributed development of software projects. From this perspective, while adopting agile techniques seems beneficial, there are still a number of challenges that need to be addressed, notably the effective cooperation between the stakeholders and the geographically distributed development team. In addition, data collection and validation for requirements engineering demand efficient processing techniques in order to handle the volume of data as well as to manage different inconsistencies when the data are collected using online tools. In this paper, we present “PBURC,” a patterns-based, unsupervised requirements clustering framework, which makes use of machine-learning methods for requirements validation and is able to overcome data inconsistencies and effectively determine appropriate requirements clusters for the optimal definition of software development sprints.
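PBURC's own patterns-based algorithm is not described in the abstract, so the following is only a generic sketch of unsupervised requirements clustering (TF-IDF vectors plus k-means, with invented requirement texts), the family of techniques such a framework builds on:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical user-story texts gathered through an online tool.
requirements = [
    "As a user I want to reset my password by email",
    "As a user I want to log in with two-factor authentication",
    "As an admin I want to export monthly sales reports",
    "As an admin I want to schedule automatic report generation",
]

# Vectorize the texts and group them into candidate sprint clusters.
vectors = TfidfVectorizer(stop_words="english").fit_transform(requirements)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, requirements)):
    print(cluster, text)
```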

15.
Currently, most records are produced and stored digitally using various types of media storage and computer systems. Unlike physical records such as paper-based records, identifying, collecting, and analyzing digital records require technical knowledge and tools that are not found in archival institutions. As a result, archival institutions face challenges in their attempts to collect digital archives. One approach to overcoming this problem is for archival institutions to use digital forensic knowledge and technologies. In this paper, we propose the Digital Archive Management System, which integrates digital forensic technologies and archival information management systems to acquire, identify, analyze, and manage digital records in archival institutions.

16.
With the spread of Microsoft's new-generation operating system Windows 7, computer forensic investigators will encounter Windows 7 systems more and more often in their casework. This paper briefly introduces and analyzes the forensic characteristics of the Windows 7 environment that investigators need to understand, and their impact on forensic examinations.

17.
In this paper, we use the UML MARTE profile to model high-performance embedded systems (HPES) in the GASPARD2 framework. We address the design correctness issue on the UML model by using the formal validation tools associated with synchronous languages, such as the SIGALI model checker. This modeling and validation approach benefits from the advantages of UML as a standard and from the number of validation tools built around synchronous languages. In our context, model transformations act as a bridge between UML and the chosen validation technologies. They are implemented according to a model-driven engineering approach. The modeling and validation are illustrated using the multimedia functionality of a new-generation cellular phone.

18.
Context: The questions of how many individuals and how much time to use for a single testing task are critical in software verification and validation. In software review and usability evaluation contexts, positive effects of using multiple individuals for a task have been found, but software testing has not been studied from this viewpoint.
Objective: We study how adding individuals and imposing time pressure affects the effectiveness and efficiency of manual testing tasks. We applied the group productivity theory from social psychology to characterize the type of software testing tasks.
Method: We conducted an experiment where 130 students performed manual testing under two conditions, one with a time restriction and pressure, i.e., a 2-h fixed slot, and another where the individuals could use as much time as they needed.
Results: We found evidence that manual software testing is an additive task with a ceiling effect, like software reviews and usability inspections. Our results show that a crowd of five time-restricted testers using 10 h in total detected 71% more defects than a single non-time-restricted tester using 9.9 h. Furthermore, we use the F-score measure from the information retrieval domain to analyze the optimal number of testers in terms of both effectiveness and validity of testing results. We suggest that future studies on verification and validation practices use the F-score to provide a more transparent view of the results.
Conclusions: The results seem promising for the time-pressured crowds by indicating that multiple time-pressured individuals deliver superior defect detection effectiveness in comparison to non-time-pressured individuals. However, caution is needed, as the limitations of this study need to be addressed in future works. Finally, we suggest that the size of the crowd used in software testing tasks should be determined based on the share of duplicate and invalid reports produced by the crowd and by the effectiveness of the duplicate handling mechanisms.
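For reference, the F-score mentioned here is the standard information-retrieval measure; how the study maps testing outcomes onto precision and recall is an assumption in the sketch below (valid, non-duplicate reports for precision; distinct known defects found for recall):

```latex
\[
  \mathrm{precision} = \frac{\#\,\text{valid, non-duplicate defect reports}}{\#\,\text{all submitted reports}}, \qquad
  \mathrm{recall} = \frac{\#\,\text{distinct defects detected}}{\#\,\text{defects known to exist}},
\]
\[
  F_1 \;=\; 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.
\]
```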

19.
Context: The development of distributed software systems has become an important problem for the software engineering community. Service-based applications are a common solution for this kind of system. Services provide a uniform mechanism for discovering, integrating and using these resources. In the development of service-based applications, not only the functionality of services and compositions should be considered, but also the conditions in which the system operates. These conditions are called non-functional requirements (NFR). The conformance of applications to NFR is crucial to deliver software that meets the expectations of its users.
Objective: This paper presents the results of a systematic mapping carried out to analyze how NFR have been addressed in the development of service-based applications in the last years, according to different points of view.
Method: Our analysis applies the systematic mapping approach. It focuses on the analysis of publications organized by categories called facets, which are combined to answer specific research questions. The facets compose a classification schema which is part of the contribution and results.
Results: This paper presents our findings on how NFR have been supported in the development of service-based applications by proposing a classification scheme consisting of five facets: (i) programming paradigm (object/service oriented); (ii) contribution (methodology, system, middleware); (iii) software process phase; (iv) technique or mathematical model used for expressing NFR; and (v) the types of NFR addressed by the papers, based on the classification proposed by the ISO/IEC 9126 specification. The results of our systematic mapping are presented as bubble charts that provide a quantitative analysis to show the frequencies of publications for each facet. The paper also proposes a qualitative analysis based on these plots. This analysis discusses how NFR (quality properties) have been addressed in the design and development of service-based applications, including methodologies, languages and tools devised to support different phases of the software process.
Conclusion: This systematic mapping showed that NFR are not fully considered in all software engineering phases for building service-based applications. The study also allowed us to conclude that work has been done on providing models and languages for expressing NFR and associated middleware for enforcing them at run time. An important finding is that NFR are not fully considered along all software engineering phases, and this leaves room for proposing methodologies that fully model NFR. The data collected by our work and used for this systematic mapping are available at https://github.com/placidoneto/systematic-mapping_service-based-app_nfr.

20.
Digital forensic data from volatile system memory possesses the following distinctive features: volatility, transience, phased stability, complexity, relevance of collected data, and phased behavior predictability. We present a computer forensic analysis model (CERM) for the reconstruction of a chain of evidence of volatile memory data. CERM frees analysts from being confined to the traditional analysis approach of digital forensic data that requires single evidence-oriented analysis. In CERM, they can focus on higher abstract levels involving the relationships of independent pieces of evidence and analyze patterns to construct a chain of evidence from the perspective of Evidence Law. In addition to CERM, we have designed a correlation analysis algorithm based on time series. Experimental tests have been conducted to verify the established model and designed algorithm. The experimental result shows that CERM is feasible and efficient, thus providing a new analysis perspective for digital forensic data from volatile system memory.  相似文献   
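CERM's correlation algorithm is only named, not specified, in this abstract, so the following is merely a hedged sketch of one time-series-style correlation step (the timestamps, artifact descriptions and time window are all invented): memory artifacts whose timestamps fall close together are paired as candidate links in a chain of evidence.

```python
from datetime import datetime, timedelta

def correlate(events_a, events_b, window=timedelta(seconds=5)):
    """Pair events from two volatile-memory artifact streams whose
    timestamps fall within `window`, as candidate links in a chain
    of evidence. Each event is a (timestamp, description) tuple."""
    pairs = []
    for ta, da in events_a:
        for tb, db in events_b:
            if abs(ta - tb) <= window:
                pairs.append((ta, da, tb, db))
    return pairs

procs = [(datetime(2023, 5, 1, 10, 0, 3), "process evil.exe started")]
conns = [(datetime(2023, 5, 1, 10, 0, 5), "TCP connection to 203.0.113.7:4444")]
print(correlate(procs, conns))
```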
