Query returned 20 similar documents (search time: 0 ms)
1.
《Information Forensics and Security, IEEE Transactions on》2009,4(2):179-192
2.
Program-debugging skill is a very important component of students' programming ability. Using a phenomenographic approach, this study examines students' overall assessment of their debugging skills, their analyses of why programs fail, and the strategies they use to resolve errors. The study yields several important findings and proposes ways to improve students' debugging ability.
3.
4.
In this paper we introduce a statistical approach to estimating the performance of inventory systems. We briefly survey the existing methods, present a stratified sampling methodology, and describe a new technique to estimate seasonal factors and safety stocks. The paper concludes with an example based on real-life data.
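The stratified-sampling idea can be illustrated with a short sketch: partition items into strata, draw a simple random sample within each stratum, and combine the stratum means weighted by stratum size. This is a generic illustration, not the paper's procedure; the function names and the toy SKU data are invented.

```python
import random

def stratified_mean(population, strata_key, n_per_stratum, metric, seed=0):
    """Stratified-sampling estimate of the population mean of `metric`.

    Items are grouped by `strata_key`; each stratum contributes a
    simple random sample, and stratum means are combined weighted
    by stratum size.
    """
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(strata_key(item), []).append(item)
    total = len(population)
    estimate = 0.0
    for members in strata.values():
        sample = rng.sample(members, min(n_per_stratum, len(members)))
        stratum_mean = sum(metric(s) for s in sample) / len(sample)
        estimate += (len(members) / total) * stratum_mean
    return estimate

# Hypothetical SKUs: (annual_demand, fill_rate); fast movers fill better.
skus = [(d, 0.99 if d > 500 else 0.90) for d in range(1, 1001)]
est = stratified_mean(skus,
                      strata_key=lambda s: s[0] > 500,  # fast vs. slow movers
                      n_per_stratum=50,
                      metric=lambda s: s[1])
```

Because the metric is constant within each stratum here, the stratified estimate is exact (0.5 × 0.99 + 0.5 × 0.90 = 0.945) even with only 50 samples per stratum.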
5.
《Computer》1978,11(12):23-35
Given certain facts about a project that are known early, this macro-estimating technique generates an expected life-cycle curve of manpower against time.
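Manpower-versus-time life-cycle curves of this kind are commonly modeled with the Norden/Rayleigh form; a minimal sketch under that assumption (the parameter names are mine, not taken from the paper):

```python
import math

def rayleigh_manpower(t, total_effort, t_peak):
    """Norden/Rayleigh staffing curve: people applied at time t, given
    total life-cycle effort K and the time of peak staffing t_peak."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return total_effort * 2.0 * a * t * math.exp(-a * t * t)

def cumulative_effort(t, total_effort, t_peak):
    """Closed-form integral of the curve: effort expended through time t."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return total_effort * (1.0 - math.exp(-a * t * t))
```

By construction, staffing peaks exactly at `t_peak` and the cumulative effort approaches `total_effort` as the project life cycle completes.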
6.
7.
8.
The Boltzmann entropy of statistical states is used to measure the degree of unfolding of a strange attractor, leading to a first-maximum-entropy criterion for selecting the delay time. In practice, one only needs the equivalent computation of counting the number of states covered by the reconstruction map. Reconstruction of the Rössler attractor demonstrates the effectiveness of the method.
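A simplified version of the state-counting idea can be sketched as follows: coarse-grain the 2-D delay reconstruction into boxes, count the distinct boxes visited (ln of that count is the Boltzmann state entropy), and pick the delay at which the count peaks. The demo uses a sine wave rather than the Rössler system for brevity; the function names and bin count are illustrative choices.

```python
import math

def occupied_state_count(series, delay, bins=20):
    """Count distinct coarse-grained states (boxes) visited by the
    2-D delay reconstruction (x_t, x_{t+delay})."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0
    cell = lambda v: min(int((v - lo) / width), bins - 1)
    states = {(cell(series[t]), cell(series[t + delay]))
              for t in range(len(series) - delay)}
    return len(states)

def best_delay(series, max_delay=50, bins=20):
    """Pick the first delay attaining the maximum occupied-state count
    (a simplified reading of the maximum-entropy criterion)."""
    counts = [occupied_state_count(series, d, bins)
              for d in range(1, max_delay + 1)]
    return counts.index(max(counts)) + 1  # delays are 1-based

# Demo: for a sine wave, entropy of the reconstruction peaks near a
# quarter period, where the two coordinates are least redundant.
series = [math.sin(0.05 * t) for t in range(4000)]
tau = best_delay(series, max_delay=60)
```

For this sine wave (period ≈ 126 samples), the selected delay lands near the quarter period (≈ 31), where the reconstructed orbit unfolds from a degenerate diagonal into a circle.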
9.
10.
11.
Lina Zhou Yongmei Shi Dongsong Zhang 《Knowledge and Data Engineering, IEEE Transactions on》2008,20(8):1077-1081
Online deception is disrupting our daily life, organizational processes, and even national security. Existing approaches to online deception detection follow a traditional paradigm, using a set of cues as antecedents for deception detection, which may be hindered by ineffective cue identification. Motivated by the strength of statistical language models (SLMs) in capturing word dependencies in text without explicit feature extraction, we developed SLMs to detect online deception. We also addressed the data-sparsity problem in building SLMs in general, and for deception detection in particular, using smoothing and vocabulary-pruning techniques. The developed SLMs were evaluated empirically on diverse datasets. The results showed that the proposed SLM approach to deception detection outperformed a state-of-the-art text-categorization method as well as traditional feature-based methods.
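A minimal sketch of the approach: train one smoothed n-gram model per class and label a message by whichever model assigns it the higher likelihood. This uses bigrams with add-one smoothing as a stand-in for the paper's smoothing techniques; the toy training data and names are invented.

```python
import math
from collections import Counter

class BigramLM:
    """Bigram language model with add-one (Laplace) smoothing."""
    def __init__(self, texts):
        self.unigrams, self.bigrams = Counter(), Counter()
        self.vocab = set()
        for text in texts:
            tokens = ["<s>"] + text.lower().split() + ["</s>"]
            self.vocab.update(tokens)
            self.unigrams.update(tokens[:-1])   # context counts
            self.bigrams.update(zip(tokens, tokens[1:]))

    def log_prob(self, text):
        tokens = ["<s>"] + text.lower().split() + ["</s>"]
        V = len(self.vocab) + 1                 # +1 slot for unseen words
        return sum(
            math.log((self.bigrams[(a, b)] + 1) / (self.unigrams[a] + V))
            for a, b in zip(tokens, tokens[1:]))

def classify(text, lm_deceptive, lm_truthful):
    """Label a message by the class model with the higher likelihood."""
    return ("deceptive"
            if lm_deceptive.log_prob(text) > lm_truthful.log_prob(text)
            else "truthful")

# Toy training data (hypothetical, for illustration only).
deceptive = ["i would never lie to you", "trust me this is true"]
truthful = ["the meeting is at noon", "the report is attached"]
lm_d, lm_t = BigramLM(deceptive), BigramLM(truthful)
label = classify("trust me", lm_d, lm_t)
```

Smoothing is what keeps an unseen bigram from zeroing out a whole message's likelihood, which is the data-sparsity problem the abstract refers to.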
12.
We investigate texture classification from single images obtained under unknown viewpoint and illumination. A statistical approach is developed where textures are modelled by the joint probability distribution of filter responses. This distribution is represented by the frequency histogram of filter response cluster centres (textons). Recognition proceeds from single, uncalibrated images, and the novelty here is that rotationally invariant filters are used and the filter response space is low dimensional. Classification performance is compared with the filter banks and methods of Leung and Malik [IJCV, 2001], Schmid [CVPR, 2001], and Cula and Dana [IJCV, 2004], and it is demonstrated that superior performance is achieved. Classification results are presented for all 61 materials in the Columbia-Utrecht texture database. We also discuss the effects of various parameters on our classification algorithm, such as the choice of filter bank and rotational invariance, the size of the texton dictionary, and the number of training images used. Finally, we present a method of reliably measuring relative orientation co-occurrence statistics in a rotationally invariant manner, and discuss whether incorporating such information can enhance the classifier's performance.
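The texton pipeline described above (cluster filter responses into a dictionary, model each texture by its texton frequency histogram, compare histograms) can be sketched as follows. Two-dimensional Gaussian samples stand in for real filter-bank responses, and the χ² histogram distance is a common choice for such comparisons; all names and parameters are illustrative, not the paper's.

```python
import numpy as np

def learn_textons(responses, k, iters=20, seed=0):
    """K-means cluster centres of filter responses = the texton dictionary."""
    rng = np.random.default_rng(seed)
    centres = responses[rng.choice(len(responses), k, replace=False)]
    for _ in range(iters):
        d2 = ((responses[:, None, :] - centres[None]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = responses[labels == j].mean(axis=0)
    return centres

def texton_histogram(responses, centres):
    """Frequency histogram of nearest-texton assignments (the texture model)."""
    d2 = ((responses[:, None, :] - centres[None]) ** 2).sum(-1)
    hist = np.bincount(np.argmin(d2, axis=1),
                       minlength=len(centres)).astype(float)
    return hist / hist.sum()

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalised histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Two synthetic "textures" with different response distributions.
rng = np.random.default_rng(1)
tex_a = rng.normal(0.0, 0.5, size=(200, 2))   # stand-in filter responses
tex_b = rng.normal(3.0, 0.5, size=(200, 2))
centres = learn_textons(np.vstack([tex_a, tex_b]), k=8)
h_a = texton_histogram(tex_a, centres)
h_b = texton_histogram(tex_b, centres)
novel = rng.normal(0.0, 0.5, size=(100, 2))   # novel image from texture A
h_n = texton_histogram(novel, centres)
```

A nearest-neighbour classifier over these histograms then assigns the novel image to whichever training texture has the smaller χ² distance.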
13.
Varma M. Zisserman A. 《IEEE transactions on pattern analysis and machine intelligence》2009,31(11):2032-2047
In this paper, we investigate material classification from single images obtained under unknown viewpoint and illumination. It is demonstrated that materials can be classified using the joint distribution of intensity values over extremely compact neighborhoods (starting from as small as a 3×3 pixel square) and that this can outperform classification using filter banks with large support. It is also shown that the performance of filter banks is inferior to that of image patches with equivalent neighborhoods. We develop novel texton-based representations which are suited to modeling this joint neighborhood distribution for Markov random fields. The representations are learned from training images and then used to classify novel images (with unknown viewpoint and lighting) into texture classes. Three such representations are proposed and their performance is assessed and compared to that of filter banks. The power of the method is demonstrated by classifying 2,806 images of all 61 materials present in the Columbia-Utrecht database. The classification performance surpasses that of recent state-of-the-art filter bank-based classifiers such as Leung and Malik (IJCV 01), Cula and Dana (IJCV 04), and Varma and Zisserman (IJCV 05). We also benchmark performance by classifying all of the textures present in the UIUC, Microsoft Textile, and San Francisco outdoor data sets. We conclude with discussions on why features based on compact neighborhoods can correctly discriminate between textures with large global structure and why the performance of filter banks is not superior to that of the source image patches from which they were derived.
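The compact-neighborhood features here are simply raw pixel patches flattened into vectors, which then take the place of filter-bank responses in a texton pipeline. A minimal sketch of the patch extraction (the function name is mine):

```python
def patches_3x3(image):
    """Flatten every 3x3 neighbourhood of a 2-D image into a 9-vector.
    These raw patch vectors replace filter-bank responses as features."""
    h, w = len(image), len(image[0])
    return [[image[y + dy][x + dx] for dy in range(3) for dx in range(3)]
            for y in range(h - 2) for x in range(w - 2)]

# A 5x5 test image yields (5-2) x (5-2) = 9 overlapping patches.
image = [[r * 5 + c for c in range(5)] for r in range(5)]
patches = patches_3x3(image)
```

Each 9-dimensional patch vector is then clustered into textons exactly as filter responses would be, which is what makes the patch-versus-filter-bank comparison in the abstract an apples-to-apples one.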
14.
Statistical Approach for Voice Personality Transformation
A voice transformation method which changes the source speaker's utterances so as to sound similar to those of a target speaker is described. Speaker individuality transformation is achieved by altering the LPC cepstrum, the average pitch period, and the average speaking rate. The main objective of the work is to build a nonlinear relationship between the parameters of the acoustic features of two speakers, based on a probabilistic model. The conversion rules involve probabilistic classification and a cross-correlation probability between the acoustic features of the two speakers. The parameters of the conversion rules are obtained by maximum-likelihood estimation on the training data. To obtain transformed speech signals that are perceptually closer to the target speaker's voice, prosody modification is also applied; it is achieved by scaling the excitation spectrum and by time-scale modification with appropriate modification factors. An evaluation by objective tests and informal listening tests clearly indicated the effectiveness of the proposed transformation method. We also confirmed that the proposed method leads to smoothly evolving spectral contours over time, which, from a perceptual standpoint, produced results superior to conventional vector quantization (VQ)-based methods.
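A heavily simplified sketch of a posterior-weighted conversion rule in the spirit of the probabilistic classification described above (this is not the paper's exact formulation; in practice the per-class source/target means and variances would be learned from aligned training data):

```python
import math

def soft_convert(x, classes):
    """Map a source feature vector toward the target speaker using
    posterior-weighted class offsets (a soft-classification conversion
    rule). `classes` is a list of (source_mean, target_mean, variance)."""
    weights = []
    for mu_s, _, var in classes:
        d2 = sum((a - b) ** 2 for a, b in zip(x, mu_s))
        weights.append(math.exp(-d2 / (2.0 * var)))  # unnormalised posterior
    total = sum(weights) or 1.0
    out = list(x)
    for w, (mu_s, mu_t, _) in zip(weights, classes):
        for k in range(len(x)):
            out[k] += (w / total) * (mu_t[k] - mu_s[k])
    return out

# Hypothetical single-class model: a source vector at the class's
# source mean is mapped exactly onto the class's target mean.
converted = soft_convert([0.0, 0.0], [([0.0, 0.0], [1.0, 2.0], 1.0)])
```

The soft weighting is what yields the smoothly evolving spectral contours the abstract contrasts with hard VQ codebook mapping: nearby input frames receive nearby posterior weights and hence nearby outputs.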
15.
Large-scale data-intensive cloud computing with the MapReduce framework is becoming pervasive for the core business of many academic, government, and industrial organizations. Hadoop, a state-of-the-art open-source project, is by far the most successful realization of the MapReduce framework. While MapReduce is easy to use, efficient, and reliable for data-intensive computations, the excessive configuration parameters in Hadoop impose unexpected challenges on running various workloads with a Hadoop cluster effectively. Consequently, developers with little experience of the Hadoop configuration system may devote significant effort to writing an application with poor performance, either because they have no idea how these configurations would influence the performance, or because they are not even aware that these configurations exist. There is a pressing need for comprehensive analysis and performance modeling to ease MapReduce application development and guide performance optimization under different Hadoop configurations. In this paper, we propose a statistical-analysis approach to identify the relationships among workload characteristics, Hadoop configurations, and workload performance. We apply principal component analysis and cluster analysis to 45 different metrics, deriving relationships between workload characteristics and the corresponding performance under different Hadoop configurations. Regression models are also constructed to predict the performance of various workloads under different Hadoop configurations. Several non-intuitive relationships between workload characteristics and performance are revealed through our analysis, and the experimental results demonstrate that our regression models accurately predict the performance of MapReduce workloads under different Hadoop configurations.
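The analysis pipeline (PCA over workload metrics, then regression on the derived components to predict performance) can be sketched with synthetic data; the metric matrix, latent factors, and component count below are invented for illustration, not taken from the paper:

```python
import numpy as np

def pca(X, n_components):
    """Principal components via SVD of the centred metric matrix.
    Returns the component scores and the component directions."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

def fit_predict(scores, y, new_scores):
    """Ordinary least squares (with intercept) on the component scores."""
    A = np.column_stack([scores, np.ones(len(scores))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.column_stack([new_scores, np.ones(len(new_scores))]) @ coef

# Synthetic workloads: 5 observed metrics driven by 2 hidden factors,
# and a runtime that is a linear function of those factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 2))                  # hidden workload factors
mixing = np.array([[1.0, 0.5, 0.2, 0.0, 0.3],
                   [0.0, 1.0, 0.4, 0.7, 0.1]])
X = latent @ mixing                                # 5 observed metrics
y = latent @ np.array([2.0, -1.0])                 # synthetic runtime

scores, comps = pca(X[:50], 2)                     # train on 50 workloads
new_scores = (X[50:] - X[:50].mean(axis=0)) @ comps.T
pred = fit_predict(scores, y[:50], new_scores)     # predict the held-out 10
```

Because the two principal components span the latent factor space here, the regression on the scores recovers the runtime of held-out workloads essentially exactly; with real, noisy cluster metrics the same pipeline yields an approximate predictive model.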
16.
《Micro, IEEE》1986,6(3):34-42
This specialized hardware assists program debugging, testing, and performance evaluation. It is installed like any other peripheral device.
17.
Alexander Jähne Susan D. Urban Suzanne W. Dietrich 《Journal of Intelligent Information Systems》1996,7(2):111-128
This research has investigated dynamic, execution-based rule analysis through the development of a Prototype Environment for Active Rule Debugging, called PEARD. PEARD simulates the execution of active database rules, supporting the Event-Condition-Action rule paradigm. Rule definition is flexible, where changes to rules can be applied immediately during a debugging session without recompiling the system. A breakpoint debugging tool allows breakpoints to be set so that the state of variables may be inspected and changed anytime a breakpoint is reached during rule execution. A rule visualization tool displays the rule triggering process in graph form, supporting different visualization granularities to help the user to understand rule execution. Color coding is also used as part of the visualization tool to help the user see where the different parts of an ECA rule are executed due to deferred coupling modes. Users can examine different parts of the rule graph display to inspect the state of a transaction at different rule execution points. Other debugging features include a means for detecting potential cycles in rule execution and a utility to examine different rule execution paths from the same point in the rule triggering process. Our experience with PEARD has helped to identify some of the useful functional components of an active rule debugging tool and to identify research directions for future active rule development environments. This research was partially supported by NSF Grant No. IRI-9410993.
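The breakpoint-and-trace behaviour described for PEARD can be sketched as a tiny ECA rule executor; all class and rule names here are mine, and a real engine would additionally handle coupling modes and cascaded event raising:

```python
class Rule:
    """An Event-Condition-Action rule."""
    def __init__(self, name, event, condition, action):
        self.name, self.event = name, event
        self.condition, self.action = condition, action

class RuleDebugger:
    """Minimal ECA executor with PEARD-style breakpoints: when a rule
    with a breakpoint triggers, an inspector callback sees the state
    before the action runs."""
    def __init__(self, rules, inspector=None):
        self.rules = rules
        self.breakpoints = set()
        self.inspector = inspector
        self.trace = []              # fired-rule log (rule-graph data)

    def raise_event(self, event, state):
        for rule in self.rules:
            if rule.event == event and rule.condition(state):
                if rule.name in self.breakpoints and self.inspector:
                    self.inspector(rule.name, dict(state))  # pre-action snapshot
                self.trace.append(rule.name)
                rule.action(state)   # a full engine could raise further events here

# Hypothetical rule: deposits over 100 earn a bonus; a breakpoint on
# the rule lets the inspector observe the state before the action.
seen = []
rules = [Rule("bonus", "deposit",
              lambda s: s["balance"] > 100,
              lambda s: s.update(balance=s["balance"] + 10))]
dbg = RuleDebugger(rules, inspector=lambda name, s: seen.append((name, s["balance"])))
dbg.breakpoints.add("bonus")
state = {"balance": 150}
dbg.raise_event("deposit", state)
```

The `trace` list is the raw material for the triggering-graph visualization the abstract describes, and the snapshot passed to the inspector is what a breakpoint tool would present for inspection and modification.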
18.
V. I. Shapovalov 《Automation and Remote Control》2001,62(6):909-918
Criteria determining the sign of entropy variation in an open system were considered. They enabled the author to explain the origin of the system's properties and to formulate principles to be taken into account when modeling self-organizing systems.
19.
P. A. Bakut 《Cybernetics and Systems》2013,44(3):117-125
The use of information theory concepts in statistical decision problems is examined. It is shown that the average risk of decision-making is related to the amount of information contained in the observations. This is used as a basis for estimating the minimum average risk of decision-making. A number of examples are considered.
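One concrete instance of the risk-information link: for a finite joint distribution P(X, Y), both the minimum average error (the Bayes risk of the MAP decision rule) and the equivocation H(X|Y) vanish when the observation determines X, and both grow as the observation carries less information about X. A sketch (the function names are mine):

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def bayes_risk_and_equivocation(joint):
    """For joint[i][j] = P(X=i, Y=j): the minimum average probability
    of error (Bayes risk of the MAP rule) and the equivocation H(X|Y),
    the information about X still missing after observing Y."""
    risk = 1.0 - sum(max(col) for col in zip(*joint))   # MAP picks max per column
    py = [sum(col) for col in zip(*joint)]
    h_joint = entropy_bits([p for row in joint for p in row])
    return risk, h_joint - entropy_bits(py)             # H(X|Y) = H(X,Y) - H(Y)

# Noiseless observation: Y determines X, so risk and H(X|Y) are both 0.
risk0, eq0 = bayes_risk_and_equivocation([[0.5, 0.0], [0.0, 0.5]])
# Useless observation: Y independent of X, risk = 0.5 and H(X|Y) = H(X) = 1 bit.
risk1, eq1 = bayes_risk_and_equivocation([[0.25, 0.25], [0.25, 0.25]])
```

The two extreme channels bracket the general behaviour: the less information the observations contain (the larger H(X|Y)), the larger the minimum achievable average risk, which is the relationship the abstract exploits.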
20.
We propose a convex optimization approach to solving the nonparametric regression estimation problem when the underlying regression function is Lipschitz continuous. This approach is based on the minimization of the sum of empirical squared errors, subject to the constraints implied by Lipschitz continuity. The resulting optimization problem has a convex objective function and linear constraints and, as a result, is efficiently solvable. The estimated function computed by this technique is proven to converge to the underlying regression function uniformly and almost surely as the sample size grows to infinity, thus providing a very strong form of consistency. We also propose a convex optimization approach to the maximum likelihood estimation of unknown parameters in statistical models where the parameters depend continuously on some observable input variables. For a number of classical distributional forms, the objective function in the underlying optimization problem is convex and the constraints are linear. These problems are, therefore, also efficiently solvable.
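In one dimension with sorted inputs, the Lipschitz constraints reduce to bounds on consecutive differences, and the constrained least-squares problem becomes a Euclidean projection of the observations onto an intersection of slabs, which Dykstra's alternating-projection algorithm solves. This is a simplified solver for that special case, not the paper's general formulation:

```python
def lipschitz_fit(x, y, L, iters=500):
    """Least-squares fit subject to |f_i - f_j| <= L * |x_i - x_j|,
    solved in 1-D (x sorted ascending) by Dykstra's alternating
    projections onto the pairwise consecutive-difference slabs."""
    n = len(y)
    f = list(y)
    p = [[0.0] * n for _ in range(n - 1)]       # Dykstra correction terms
    for _ in range(iters):
        for i in range(n - 1):
            g = [f[k] + p[i][k] for k in range(n)]  # add back correction
            c = L * (x[i + 1] - x[i])
            d = g[i + 1] - g[i]
            proj = list(g)
            if d > c:                            # clip the difference,
                shift = (d - c) / 2.0            # preserving the pair's mean
                proj[i] += shift
                proj[i + 1] -= shift
            elif d < -c:
                shift = (-d - c) / 2.0
                proj[i] -= shift
                proj[i + 1] += shift
            p[i] = [g[k] - proj[k] for k in range(n)]
            f = proj
    return f

# Hypothetical data: the spike at x=1 must be flattened to satisfy L=1.
f = lipschitz_fit([0.0, 1.0, 2.0], [0.0, 10.0, 0.0], L=1.0)
```

For this toy instance the exact minimizer is [3, 4, 3] (both constraints active, minimizing 2a² + (a − 9)² with b = a + 1), and the iteration converges to it; on real data, the same projection view is why the problem is efficiently solvable as a QP.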