Similar Documents (20 results)
1.
This paper studies the problem of achieving watermark semifragility in watermark-based authentication systems through a composite hypothesis testing approach. Embedding a semifragile watermark serves to distinguish legitimate distortions caused by signal-processing manipulations from illegitimate ones caused by malicious tampering. This leads us to consider authentication verification as a composite hypothesis testing problem with the watermark as side information. Based on the hypothesis testing model, we investigate effective embedding strategies that assist the watermark verifier in making correct decisions. Our results demonstrate that quantization-based watermarking is more appropriate than spread-spectrum-based methods for achieving the semifragility tradeoff between the two error probabilities. This observation is confirmed by a case study of an additive white Gaussian noise channel with a Gaussian source, using two figures of merit: 1) the relative entropy of the two hypothesis distributions and 2) the receiver operating characteristic. Finally, we focus on common signal-processing distortions, such as JPEG compression and image filtering, and investigate the discrimination statistic and optimal decision regions for distinguishing legitimate from illegitimate distortions. Our approach provides insights for authentication watermarking and allows better control of semifragility in specific applications.
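
A minimal sketch of the quantization-based alternative the abstract favours, assuming a simple QIM (quantization index modulation) embedder and a residual-based decision statistic; the step size, noise levels, and threshold are illustrative values, not taken from the paper:

```python
import numpy as np

DELTA = 4.0  # quantizer step size (assumed)

def embed(x, bits):
    """Embed one bit per sample by quantizing onto one of two offset lattices."""
    offsets = bits * DELTA / 2.0
    return np.round((x - offsets) / DELTA) * DELTA + offsets

def decision_statistic(y, bits):
    """Mean squared distance to the watermark lattice (the discrimination statistic)."""
    offsets = bits * DELTA / 2.0
    residual = y - (np.round((y - offsets) / DELTA) * DELTA + offsets)
    return np.mean(residual ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0, 10, 1000)              # Gaussian host signal
bits = rng.integers(0, 2, 1000)          # watermark bits (side information)
w = embed(x, bits)

legit = w + rng.normal(0, 0.5, 1000)     # mild processing noise (legitimate, H0)
tampered = w + rng.normal(0, 3.0, 1000)  # heavy tampering (illegitimate, H1)

tau = (DELTA / 4.0) ** 2                 # illustrative decision threshold
for name, y in [("legitimate", legit), ("tampered", tampered)]:
    t = decision_statistic(y, bits)
    print(name, round(t, 3), "accept" if t < tau else "reject")
```

Because legitimate processing perturbs samples by much less than the quantizer step, while tampering wraps residuals across quantization cells, the mean squared residual cleanly separates the two hypotheses.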

2.
Program debugging ability is a very important aspect of students' programming competence. Using the phenomenographic method, this study examines students' overall self-assessment of their debugging ability, their analysis of the causes of program errors, and the strategies they adopt after errors occur. The study yields several important findings and proposes measures to improve students' debugging ability.

3.
4.
In this paper we introduce a statistical approach to estimating the performance of inventory systems. We briefly survey the existing methods, present a stratified sampling methodology, and describe a new technique to estimate seasonal factors and safety stocks. The paper concludes with an example based on real-life data.
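
The abstract names the two estimates without detail; the sketch below shows one conventional way to compute them (ratio-to-mean seasonal factors and the textbook z · σ · √(lead time) safety stock), which may well differ from the paper's actual technique:

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.tile(np.arange(12), 3)   # 3 years of synthetic monthly demand
demand = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 8, 36)

# Seasonal factor for month m: mean demand in month m / overall mean demand.
seasonal = np.array([demand[months == m].mean() for m in range(12)]) / demand.mean()

# Deseasonalize, then size safety stock for a 2-month lead time at ~97.7% service.
deseasonalized = demand / seasonal[months]
sigma = deseasonalized.std(ddof=1)
z, lead_time = 2.0, 2.0
safety_stock = z * sigma * np.sqrt(lead_time)
print(np.round(seasonal, 2), round(safety_stock, 1))
```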

5.
Computer, 1978, 11(12): 23-35
Given certain facts about a project that are known early, this macro-estimating technique generates an expected life-cycle curve of manpower against time.
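
One widely used functional form for such a manpower-versus-time life-cycle curve is the Rayleigh curve; the sketch below plots it under assumed effort and peak-time parameters and is illustrative rather than the article's exact model:

```python
import numpy as np

# Rayleigh-type manpower curve: m(t) = 2*K*a*t*exp(-a*t^2), where K is total
# life-cycle effort and t_d = 1/sqrt(2a) is the time of peak staffing.
K = 100.0   # assumed total effort, person-years
t_d = 2.0   # assumed time of peak manpower, years
a = 1.0 / (2.0 * t_d ** 2)

t = np.linspace(0.0, 8.0, 33)
manpower = 2.0 * K * a * t * np.exp(-a * t ** 2)
for ti, mi in zip(t[::4], manpower[::4]):
    print(f"t={ti:4.1f} yr  staff={mi:6.1f}")
```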

6.
7.
A Software Fault Debugging Method Based on Combinatorial Testing   Cited by: 13 (self: 3, others: 13)
Building on the basic model of combinatorial testing, this paper proposes a combinatorial-testing-based method for diagnosing the causes of faults. Starting from the results of a combinatorial test run, the method generates additional test cases for re-testing, then further analyzes and verifies their results, thereby quickly narrowing the cause of a fault down to a very small range. This provides more convenient and more valuable clues for software debugging and testing work.
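
A minimal sketch of the diagnosis idea, under our assumption (not necessarily the paper's exact algorithm) that the additional test cases flip one factor of a failing case at a time; all names and the toy system under test are hypothetical:

```python
def diagnose(failing_case, alternatives, run_test):
    """failing_case: dict factor -> value; alternatives: dict factor -> other values;
    run_test: callable returning True when the test passes."""
    suspects = []
    for factor, values in alternatives.items():
        # If flipping this factor to some other value makes the test pass,
        # the factor participates in the failure-causing interaction.
        if any(run_test({**failing_case, factor: v}) for v in values):
            suspects.append(factor)
    return suspects

# Hypothetical system under test: fails whenever os == "win" and net == "ipv6".
def run_test(case):
    return not (case["os"] == "win" and case["net"] == "ipv6")

failing = {"os": "win", "net": "ipv6", "db": "pg"}
alts = {"os": ["linux"], "net": ["ipv4"], "db": ["mysql"]}
print(diagnose(failing, alts, run_test))   # -> ['os', 'net']
```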

8.
The Boltzmann statistical state entropy is used to measure the degree of unfolding of a strange attractor, leading to a first-maximum-entropy criterion for selecting the delay-time variable. In practical use, one only needs the equivalent computation of counting the number of states visited in the reconstructed phase portrait. The effectiveness of the method is illustrated by reconstructing the Rossler attractor.
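
A minimal sketch of the state-counting form of the criterion, with an illustrative box-counting resolution and a toy signal standing in for Rossler data:

```python
import numpy as np

def occupied_states(x, tau, bins=32):
    """Count distinct boxes visited by the 2-D delay embedding (x_t, x_{t+tau})."""
    a, b = x[:-tau], x[tau:]
    ia = np.digitize(a, np.linspace(a.min(), a.max(), bins))
    ib = np.digitize(b, np.linspace(b.min(), b.max(), bins))
    return len(set(zip(ia, ib)))   # entropy ~ log(count), so maximizing either works

t = np.linspace(0, 100, 5000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)   # toy signal in place of Rossler data

counts = [occupied_states(x, tau) for tau in range(1, 200)]
first_max = next((i + 1 for i in range(1, len(counts) - 1)
                  if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]), None)
print("chosen delay:", first_max)
```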

9.
A Survey of Sensor Network Debugging   Cited by: 3 (self: 0, others: 3)
A growing number of sensor networks targeting different application domains are being deployed in real environments, helping people observe the surrounding physical world in new ways. However, these systems frequently suffer from various unexpected faults, and whether such faults can be detected, localized, and repaired quickly and effectively is a central question for sensor network debugging. After an overview of the sensor network debugging problem, this article summarizes and compares the techniques commonly used to acquire system state information during debugging, then surveys representative key techniques and related tools from three perspectives: fault detection, fault localization, and fault repair. It concludes by discussing future research directions in this area.

10.
An Adaptive Debugging Method for Wireless Sensor Network Applications   Cited by: 1 (self: 0, others: 1)
Li Feng, Huo Wei, Feng Xiaobing. Chinese Journal of Computers, 2011, 34(7): 1195-1213
Sensor network technology is an important foundation for realizing the Internet of Things. However, owing to limited resources and non-deterministic program behavior, programming and debugging wireless sensor networks are even harder than for ordinary distributed programs. This paper proposes a source-level fault diagnosis method for wireless sensor network programs. The method performs program tracing based on a global counter, then replays the faulty execution trace from the trace log, supporting the analysis and debugging of property-violation errors. In addition, …
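
A minimal sketch of what counter-based tracing and log replay could look like; this is our illustration of the idea, not the paper's implementation, and all names are hypothetical:

```python
trace_log = []
_counter = 0

def trace(site_id, **state):
    """Each instrumented program point logs a globally increasing counter value,
    so the log totally orders events across the execution."""
    global _counter
    _counter += 1
    trace_log.append((_counter, site_id, state))

def replay(log, invariant):
    """Replay a trace in counter order and report the first property violation."""
    for count, site, state in sorted(log):
        if not invariant(site, state):
            return count, site, state
    return None

# Hypothetical instrumented run: a buffer index must stay below 8.
for i in range(10):
    trace("sensor_read", idx=i)

print(replay(trace_log, lambda site, s: s.get("idx", 0) < 8))
# -> (9, 'sensor_read', {'idx': 8})
```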

11.
A Statistical Language Modeling Approach to Online Deception Detection   Cited by: 1 (self: 0, others: 1)
Online deception is disrupting our daily life, organizational processes, and even national security. Existing approaches to online deception detection follow a traditional paradigm by using a set of cues as antecedents for deception detection, which may be hindered by ineffective cue identification. Motivated by the strength of statistical language models (SLMs) in capturing the dependency of words in text without explicit feature extraction, we developed SLMs to detect online deception. We also addressed the data sparsity problem in building SLMs, in general and in deception detection in particular, using smoothing and vocabulary pruning techniques. The developed SLMs were evaluated empirically with diverse datasets. The results showed that the proposed SLM approach to deception detection outperformed a state-of-the-art text categorization method as well as traditional feature-based methods.
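
A minimal sketch of the SLM classifier, assuming per-class bigram models with Laplace smoothing; the two toy corpora and the decision rule are illustrative stand-ins:

```python
from collections import Counter
import math

def train(corpus):
    """Collect unigram and bigram counts for one class."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split()
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def logprob(sent, model, vocab_size):
    """Laplace-smoothed bigram log-likelihood of a sentence under one class model."""
    uni, bi = model
    toks = ["<s>"] + sent.split()
    return sum(math.log((bi[(a, b)] + 1) / (uni[a] + vocab_size))
               for a, b in zip(toks, toks[1:]))

deceptive = ["i swear it is true", "trust me it is true"]
truthful = ["the meeting is at noon", "the report is attached"]
vocab = {w for s in deceptive + truthful for w in s.split()} | {"<s>"}

m_dec, m_tru = train(deceptive), train(truthful)
msg = "trust me the report is true"
label = ("deceptive" if logprob(msg, m_dec, len(vocab)) > logprob(msg, m_tru, len(vocab))
         else "truthful")
print(label)
```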

12.
A Statistical Approach to Texture Classification from Single Images   Cited by: 9 (self: 0, others: 9)
We investigate texture classification from single images obtained under unknown viewpoint and illumination. A statistical approach is developed in which textures are modelled by the joint probability distribution of filter responses. This distribution is represented by the frequency histogram of filter response cluster centres (textons). Recognition proceeds from single, uncalibrated images, and the novelty here is that rotationally invariant filters are used and the filter response space is low dimensional. Classification performance is compared with the filter banks and methods of Leung and Malik [IJCV, 2001], Schmid [CVPR, 2001], and Cula and Dana [IJCV, 2004], and it is demonstrated that superior performance is achieved here. Classification results are presented for all 61 materials in the Columbia-Utrecht texture database. We also discuss the effects of various parameters on our classification algorithm, such as the choice of filter bank and rotational invariance, the size of the texton dictionary, and the number of training images used. Finally, we present a method of reliably measuring relative orientation co-occurrence statistics in a rotationally invariant manner, and discuss whether incorporating such information can enhance the classifier's performance.
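
A minimal sketch of the texton pipeline, with a tiny isotropic "filter bank" standing in for the paper's rotationally invariant filters and synthetic noise images standing in for real textures:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def responses(img):
    # Toy isotropic filter bank: the raw pixels plus Gaussians at two scales.
    return np.stack([img, gaussian_filter(img, 1), gaussian_filter(img, 2)], -1).reshape(-1, 3)

def histogram(img, textons):
    """Texton frequency histogram: the image model used for classification."""
    labels = textons.predict(responses(img))
    return np.bincount(labels, minlength=textons.n_clusters) / labels.size

def chi2(h1, h2):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))

rng = np.random.default_rng(0)
train_imgs = [rng.normal(0, s, (32, 32)) for s in (1, 1, 3, 3)]   # toy "textures"
train_labels = ["smooth", "smooth", "rough", "rough"]

# Cluster all training responses into textons, then model each image as a histogram.
textons = KMeans(n_clusters=8, n_init=4, random_state=0).fit(
    np.vstack([responses(im) for im in train_imgs]))
models = [histogram(im, textons) for im in train_imgs]

# Classify a novel image by nearest model under the chi-squared distance.
test = rng.normal(0, 3, (32, 32))
print(train_labels[int(np.argmin([chi2(histogram(test, textons), m) for m in models]))])
```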

13.
In this paper, we investigate material classification from single images obtained under unknown viewpoint and illumination. It is demonstrated that materials can be classified using the joint distribution of intensity values over extremely compact neighborhoods (starting from as small as 3×3 pixels square) and that this can outperform classification using filter banks with large support. It is also shown that the performance of filter banks is inferior to that of image patches with equivalent neighborhoods. We develop novel texton-based representations which are suited to modeling this joint neighborhood distribution for Markov random fields. The representations are learned from training images and then used to classify novel images (with unknown viewpoint and lighting) into texture classes. Three such representations are proposed and their performance is assessed and compared to that of filter banks. The power of the method is demonstrated by classifying 2,806 images of all 61 materials present in the Columbia-Utrecht database. The classification performance surpasses that of recent state-of-the-art filter bank-based classifiers such as Leung and Malik (IJCV 01), Cula and Dana (IJCV 04), and Varma and Zisserman (IJCV 05). We also benchmark performance by classifying all of the textures present in the UIUC, Microsoft Textile, and San Francisco outdoor data sets. We conclude with discussions on why features based on compact neighborhoods can correctly discriminate between textures with large global structure and why the performance of filter banks is not superior to that of the source image patches from which they were derived.
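
A minimal sketch of the compact-neighborhood feature itself: every 3×3 patch becomes a 9-dimensional vector which can then feed the same texton clustering as in the previous example. The mean removal and contrast normalization are common preprocessing choices assumed here, not taken from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_features(img, n=3):
    """Every n x n neighborhood becomes one feature vector (here 9-dimensional)."""
    patches = sliding_window_view(img, (n, n)).reshape(-1, n * n)
    patches = patches - patches.mean(axis=1, keepdims=True)   # remove local mean
    norms = np.linalg.norm(patches, axis=1, keepdims=True)
    return patches / np.maximum(norms, 1e-8)                  # unit contrast

img = np.random.default_rng(0).normal(size=(32, 32))
print(patch_features(img).shape)   # (900, 9): one vector per interior pixel
```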

14.
Statistical Approach for Voice Personality Transformation   Cited by: 1 (self: 0, others: 1)
A voice transformation method that changes the source speaker's utterances so as to sound similar to those of a target speaker is described. Speaker individuality transformation is achieved by altering the LPC cepstrum, average pitch period, and average speaking rate. The main objective of the work is to build a nonlinear relationship between the parameters of the acoustical features of two speakers, based on a probabilistic model. The conversion rules involve probabilistic classification and a cross-correlation probability between the acoustic features of the two speakers. The parameters of the conversion rules are obtained by maximum-likelihood estimation over the training data. To obtain transformed speech signals that are perceptually closer to the target speaker's voice, prosody modification is also applied; it is achieved by scaling the excitation spectrum and by time-scale modification with appropriate modification factors. Evaluation by objective tests and informal listening tests clearly indicated the effectiveness of the proposed transformation method. We also confirmed that the proposed method leads to smoothly evolving spectral contours over time, which, from a perceptual standpoint, produced results superior to conventional vector quantization (VQ)-based methods.
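
A minimal sketch of a GMM-based probabilistic spectral mapping in the spirit of the description above; the feature dimension, data, and component count are toy stand-ins for aligned LPC-cepstrum frames, and this is a generic formulation rather than the paper's exact rules:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

D = 4                                                   # toy cepstral dimension
rng = np.random.default_rng(0)
X = rng.normal(size=(500, D))                           # "source speaker" frames
Y = 0.8 * X + 0.3 + 0.05 * rng.normal(size=(500, D))    # aligned "target" frames

# Fit a GMM to the joint source/target features: p(x, y).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(np.hstack([X, Y]))

def convert(x):
    """Converted frame = sum_i p(i|x) * (mu_y_i + Syx_i Sxx_i^{-1} (x - mu_x_i))."""
    lik = np.array([multivariate_normal.pdf(x, gmm.means_[i][:D],
                                            gmm.covariances_[i][:D, :D])
                    for i in range(gmm.n_components)])
    post = gmm.weights_ * lik
    post /= post.sum()                                  # posterior p(i | x)
    y = np.zeros(D)
    for i, w in enumerate(post):
        mu_x, mu_y = gmm.means_[i][:D], gmm.means_[i][D:]
        Sxx = gmm.covariances_[i][:D, :D]
        Syx = gmm.covariances_[i][D:, :D]
        y += w * (mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x))
    return y

print(convert(np.ones(D)))   # should land near 0.8 * 1 + 0.3 = 1.1 per dimension
```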

15.
Large-scale data-intensive cloud computing with the MapReduce framework is becoming pervasive for the core business of many academic, government, and industrial organizations. Hadoop, a state-of-the-art open source project, is by far the most successful realization of the MapReduce framework. While MapReduce is easy-to-use, efficient, and reliable for data-intensive computations, the excessive configuration parameters in Hadoop impose unexpected challenges on running various workloads with a Hadoop cluster effectively. Consequently, developers who have less experience with the Hadoop configuration system may devote significant effort to writing an application with poor performance, either because they have no idea how these configurations would influence performance, or because they are not even aware that these configurations exist. There is a pressing need for comprehensive analysis and performance modeling to ease MapReduce application development and guide performance optimization under different Hadoop configurations. In this paper, we propose a statistical analysis approach to identify the relationships among workload characteristics, Hadoop configurations, and workload performance. We apply principal component analysis and cluster analysis to 45 different metrics, which derive relationships between workload characteristics and corresponding performance under different Hadoop configurations. Regression models are also constructed that attempt to predict the performance of various workloads under different Hadoop configurations. Several non-intuitive relationships between workload characteristics and performance are revealed through our analysis, and the experimental results demonstrate that our regression models accurately predict the performance of MapReduce workloads under different Hadoop configurations.
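
A minimal sketch of the analysis pipeline on synthetic data; the metric names, dimensions, and models are illustrative, whereas the paper works with 45 metrics from real Hadoop runs:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
metrics = rng.normal(size=(120, 45))   # 120 runs x 45 workload/config metrics
runtime = metrics[:, :3] @ np.array([5.0, -2.0, 1.0]) + 60 + rng.normal(0, 1, 120)

# PCA reduces the metrics to their dominant dimensions; clustering then
# groups runs with similar workload characteristics.
pcs = PCA(n_components=5).fit_transform(metrics)
groups = KMeans(n_clusters=4, n_init=4, random_state=0).fit_predict(pcs)

# A regression model predicts performance from the metrics.
model = LinearRegression().fit(metrics, runtime)
print("cluster sizes:", np.bincount(groups))
print("R^2 on training runs:", round(model.score(metrics, runtime), 3))
```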

16.
IEEE Micro, 1986, 6(3): 34-42
This specialized hardware assists program debugging and testing and program performance evaluation. It is installed like any other peripheral device.

17.
This research has investigated dynamic, execution-based rule analysis through the development of a Prototype Environment for Active Rule Debugging, called PEARD. PEARD simulates the execution of active database rules, supporting the Event-Condition-Action rule paradigm. Rule definition is flexible, where changes to rules can be applied immediately during a debugging session without recompiling the system. A breakpoint debugging tool allows breakpoints to be set so that the state of variables may be inspected and changed anytime a breakpoint is reached during rule execution. A rule visualization tool displays the rule triggering process in graph form, supporting different visualization granularities to help the user to understand rule execution. Color coding is also used as part of the visualization tool to help the user see where the different parts of an ECA rule are executed due to deferred coupling modes. Users can examine different parts of the rule graph display to inspect the state of a transaction at different rule execution points. Other debugging features include a means for detecting potential cycles in rule execution and a utility to examine different rule execution paths from the same point in the rule triggering process. Our experience with PEARD has helped to identify some of the useful functional components of an active rule debugging tool and to identify research directions for future active rule development environments. This research was partially supported by NSF Grant No. IRI-9410993.
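
A minimal sketch of an Event-Condition-Action rule loop with a breakpoint hook, to make the execution model concrete; everything here is a toy stand-in, not PEARD's design:

```python
rules = []            # list of (event, condition, action) triples
breakpoints = set()   # events to pause on

def on(event, condition, action):
    rules.append((event, condition, action))

def raise_event(event, db):
    """Process an event queue, firing matching ECA rules; actions may cascade."""
    queue = [event]
    while queue:
        ev = queue.pop(0)
        if ev in breakpoints:
            print(f"[break] event={ev} state={db}")   # inspect/alter state here
        for e, cond, act in rules:
            if e == ev and cond(db):
                queue.extend(act(db))                 # actions may raise new events

# Hypothetical rules: a deposit over 100 triggers an audit flag.
on("deposit", lambda db: db["amount"] > 100,
   lambda db: (db.__setitem__("audited", True), ["audit"])[1])
on("audit", lambda db: True, lambda db: [])

breakpoints.add("audit")
state = {"amount": 150, "audited": False}
raise_event("deposit", state)
print(state)   # {'amount': 150, 'audited': True}
```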

18.
Criteria defining the sign of entropy variation in an open system are considered. These criteria allow the origin of the system's properties to be explained and principles to be formulated that should be taken into account when modeling self-organizing systems.
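
For reference, the textbook decomposition behind such sign criteria (a standard relation of irreversible thermodynamics, not necessarily the author's exact formulation) splits the entropy change of an open system into an exchange term and an internal production term:

\[
dS = d_e S + d_i S, \qquad d_i S \ge 0,
\]

so \(dS\) can become negative, the signature of self-organization, only when the entropy exported to the environment outweighs the always non-negative internal production, i.e. \(d_e S < -d_i S\).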

19.
The use of information theory concepts in statistical decision problems is examined. It is shown that the average risk of decision-making is related to the amount of information contained in the observations. This is used as a basis for estimating the minimum average risk of decision-making. A number of examples are considered.
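
As an anchor for the quantities involved (standard definitions and a classical bound, not the paper's specific derivation), the average risk of a decision rule \(\delta\) under loss \(C\), and Fano's inequality linking the error probability \(P_e\) to the information the observation \(X\) carries about the state \(\theta\), are:

\[
R(\delta) = \mathbb{E}\left[\, C(\theta, \delta(X)) \,\right], \qquad
H(\theta \mid X) \le H_b(P_e) + P_e \log\left(|\Theta| - 1\right),
\]

where \(H_b\) is the binary entropy function: the more information the observations contain about \(\theta\), the smaller \(H(\theta \mid X)\), and hence the lower the achievable error probability and average risk.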

20.
We propose a convex optimization approach to solving the nonparametric regression estimation problem when the underlying regression function is Lipschitz continuous. This approach is based on the minimization of the sum of empirical squared errors, subject to the constraints implied by Lipschitz continuity. The resulting optimization problem has a convex objective function and linear constraints, and as a result is efficiently solvable. The estimated function computed by this technique is proven to converge to the underlying regression function uniformly and almost surely as the sample size grows to infinity, thus providing a very strong form of consistency. We also propose a convex optimization approach to the maximum likelihood estimation of unknown parameters in statistical models, where the parameters depend continuously on some observable input variables. For a number of classical distributional forms, the objective function in the underlying optimization problem is convex and the constraints are linear. These problems are, therefore, also efficiently solvable.
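
The problem described is a quadratic program with linear constraints; a minimal sketch for scalar inputs, assuming the Lipschitz constant L is known and using the cvxpy modeling library:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
y = np.abs(x - 0.5) + rng.normal(0, 0.05, 40)   # true function is 1-Lipschitz

L = 1.0
f = cp.Variable(40)   # fitted values of the regression function at the samples

# Lipschitz continuity implies |f(x_i) - f(x_j)| <= L |x_i - x_j| for all pairs.
constraints = [cp.abs(f[i] - f[j]) <= L * abs(x[i] - x[j])
               for i in range(40) for j in range(i + 1, 40)]

# Minimize the sum of empirical squared errors subject to those constraints.
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - f)), constraints)
prob.solve()
print(round(prob.value, 4), np.round(f.value[:5], 3))
```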
