Similar Documents
1.
In this paper we derive five first-order likelihood-based confidence intervals for a population proportion parameter based on binary data subject to false-positive misclassification and obtained using a double sampling plan. We derive confidence intervals based on certain combinations of likelihood, Fisher-information types, and likelihood-based statistics. Using Monte Carlo methods, we compare the coverage properties and average widths of three new confidence intervals for a binomial parameter. We determine that an interval estimator derived from inverting a score-type statistic is superior in terms of coverage probabilities to three competing interval estimators for the parameter configurations examined here. Utilizing the expressions derived, we also determine confidence intervals for a binary parameter using real data subject to false-positive misclassification.
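The score-type interval the authors favor reduces, in the simplest no-misclassification case, to the familiar Wilson interval obtained by inverting the score statistic. Below is a minimal Python sketch of that simplified case; the function and the example counts are illustrative, not the authors' double-sampling estimator.

```python
import math
from scipy.stats import norm

def wilson_interval(x, n, alpha=0.05):
    """Score (Wilson) interval for a binomial proportion, obtained by
    inverting the score statistic (p_hat - p) / sqrt(p(1-p)/n).
    No misclassification correction, so this is only a simplified
    analogue of the paper's score-type estimator."""
    z = norm.ppf(1 - alpha / 2)
    p_hat = x / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_interval(37, 200))  # e.g. 37 apparent positives out of 200
```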

2.
Based on the preference ranking organization method for enrichment evaluations (PROMETHEE), the purpose of this paper is to develop a new multiple criteria decision-making method that uses the approach of likelihood-based outranking comparisons within the environment of interval type-2 fuzzy sets. Uncertain and imprecise assessment of information often occurs in multiple criteria decision analysis (MCDA). The theory of interval type-2 fuzzy sets is useful and convenient for modeling imprecision and quantifying the ambiguous nature of subjective judgments. Using the approach of likelihood-based outranking comparisons, this paper presents an interval type-2 fuzzy PROMETHEE method designed to address MCDA problems based on interval type-2 trapezoidal fuzzy (IT2TrF) numbers. This paper introduces the concepts of lower and upper likelihoods for acquiring the likelihood of an IT2TrF binary relationship and defines a likelihood-based outranking index to develop certain likelihood-based preference functions that correspond to several generalized criteria. The concept of comprehensive preference measures is proposed to determine IT2TrF exiting, entering, and net flows in the valued outranking relationships. In addition, this work establishes the concepts of a comprehensive outranking index, a comprehensive outranked index, and a comprehensive dominance index to induce partial and total preorders for the purpose of acquiring partial ranking and complete ranking, respectively, of the alternative actions. The feasibility and applicability of the proposed method are illustrated with two practical applications to the problem of landfill site selection and a car evaluation problem. Finally, a comparison with other relevant methods is conducted to validate the effectiveness of the proposed method.
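The lower and upper likelihoods in the paper generalize a standard interval-comparison idea: the likelihood that one interval is at least as large as another. A rough sketch of that building block, using plain (lower, upper) intervals rather than full IT2TrF numbers:

```python
def likelihood_geq(a, b):
    """Possibility-degree-style likelihood that interval a >= interval b.

    a, b are (lower, upper) pairs. This is the classical interval
    comparison formula; the paper's lower and upper likelihoods apply
    the same idea to the lower and upper membership functions of
    interval type-2 trapezoidal fuzzy numbers."""
    a_l, a_u = a
    b_l, b_u = b
    span = (a_u - a_l) + (b_u - b_l)
    if span == 0:                      # both intervals degenerate
        return float(a_l >= b_l)
    return max(0.0, min(1.0, (a_u - b_l) / span))

print(likelihood_geq((0.4, 0.7), (0.3, 0.6)))  # ~0.667: a tends to outrank b
```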

3.
Neural Computing and Applications - The purpose of this paper is to propose a useful likelihood measure for determining scalar function order relations and developing a novel likelihood-based...

4.
In this paper, we propose a new likelihood-based methodology to represent epistemic uncertainty described by sparse point and/or interval data for input variables in uncertainty analysis and design optimization problems. A worst-case maximum likelihood-based approach is developed for the representation of epistemic uncertainty, which is able to estimate the distribution parameters of a random variable described by sparse point and/or interval data. This likelihood-based approach is general and is able to estimate the parameters of any known probability distributions. The likelihood-based representation of epistemic uncertainty is then used in the existing framework for robustness-based design optimization to achieve computational efficiency. The proposed uncertainty representation and design optimization methodologies are illustrated with two numerical examples including a mathematical problem and a real engineering problem.
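The core construction (an interval datum contributes the probability mass the candidate distribution assigns to it) is easy to sketch. The following assumes a normal distribution and made-up data, and fits by ordinary maximum likelihood rather than the paper's worst-case variant:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

points = np.array([4.2, 5.1, 4.8])     # sparse point data (hypothetical)
intervals = [(3.5, 5.0), (4.0, 6.0)]   # interval data (hypothetical)

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # keep sigma positive
    ll = norm.logpdf(points, mu, sigma).sum()
    for a, b in intervals:             # an interval contributes P(a <= X <= b)
        ll += np.log(norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma))
    return -ll

res = minimize(neg_log_lik, x0=[4.5, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)
```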

5.

Advances in digital technology have made it easy to capture and share video. Increasingly, people record video footage rather than still images when exploring information, yet retrieving video from large databases remains challenging because of the sheer number of frames. To overcome these challenges, this research proposes a likelihood-based regression approach for video processing. To improve retrieval accuracy, the method integrates a likelihood estimation technique with a regression model: the likelihood estimate gives a rough pixel-level measure for estimating the pixel range, after which the regression step refines the pixel-level estimate to correct blurred and unwanted pixels. In the proposed approach, each video is converted into frames that are stored in a database, and query frames are matched against this database using the features extracted for the video to be retrieved. Simulation results show that the proposed likelihood-based regression model retrieves video significantly better than other state-of-the-art techniques.


6.
In this article we derive likelihood-based confidence intervals for the risk ratio using over-reported two-sample binary data obtained using a double-sampling scheme. The risk ratio is defined as the ratio of two proportion parameters. By maximizing the full likelihood function, we obtain closed-form maximum likelihood estimators for all model parameters. In addition, we derive four confidence intervals: a naive Wald interval, a modified Wald interval, a Fieller-type interval, and an Agresti-Coull interval. All four confidence intervals are illustrated using cervical cancer data. Finally, we conduct simulation studies to assess and compare the coverage probabilities and average lengths of the four interval estimators. We conclude that the modified Wald interval, unlike the other three intervals, produces close-to-nominal confidence intervals under various simulation scenarios examined here and, therefore, is preferred in practice.
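As a point of reference, the naive Wald interval for a risk ratio without any over-reporting correction is a few lines of code. The counts below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def wald_rr_ci(x1, n1, x2, n2, alpha=0.05):
    """Naive Wald interval for the risk ratio p1/p2 on the log scale.

    This is only the simplest of the four intervals the article
    discusses and ignores the double-sampling correction, so treat it
    as an illustrative baseline."""
    p1, p2 = x1 / n1, x2 / n2
    log_rr = np.log(p1 / p2)
    se = np.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    z = norm.ppf(1 - alpha / 2)
    return np.exp(log_rr - z * se), np.exp(log_rr + z * se)

print(wald_rr_ci(30, 100, 20, 120))
```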

7.
Similarity measurement is an important foundation of cluster analysis, and effectively measuring the similarity between categorical symbols is one of its difficult problems. This paper measures inter-symbol similarity using the kernel probability density of discrete symbols. Unlike traditional simple symbol matching and symbol frequency estimation, this similarity measure, through the action of the kernel bandwidth, no longer relies on the assumption of independence between symbols of the same attribute. A Bayesian clustering model for categorical data is then established, a likelihood-based object-to-cluster similarity measure is defined, and a model-based clustering algorithm is given. Using leave-one-out estimation and maximum likelihood estimation, three solution methods are proposed to dynamically determine the optimal kernel bandwidth during clustering. Experiments show that, compared with clustering algorithms using feature weighting or simple matching distance, the proposed algorithm achieves higher clustering accuracy, and the estimated kernel bandwidths are of practical significance in applications such as important-feature identification.
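A kernel-smoothed frequency estimate for categorical symbols can be sketched with an Aitchison-Aitken-style kernel, which shrinks raw frequencies toward the uniform distribution as the bandwidth h grows; the paper's exact kernel and its three bandwidth estimators may differ:

```python
import numpy as np

def aitchison_aitken(symbols, values, h):
    """Kernel-smoothed probability estimate for a categorical attribute.

    Each observed symbol keeps weight 1 - h and spreads h uniformly
    over the c categories, so the estimate is a shrinkage of the raw
    frequencies toward uniform. A stand-in for the kernel-density idea
    in the paper, not its exact kernel."""
    c = len(values)
    freq = np.array([(symbols == v).mean() for v in values])
    return (1 - h) * freq + h / c

x = np.array(list("aabbbcc"))
print(aitchison_aitken(x, np.array(["a", "b", "c"]), h=0.3))
```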

8.
This paper presents an interval-valued intuitionistic fuzzy permutation method with likelihood-based preference functions for managing multiple criteria decision analysis based on interval-valued intuitionistic fuzzy sets. First, certain likelihood-based preference functions are proposed using the likelihoods of interval-valued intuitionistic fuzzy preference relationships. Next, selected practical indices of concordance/discordance are established to evaluate all possible permutations of the alternatives. The optimal priority order of the alternatives is determined by comparing all comprehensive concordance/discordance values based on score functions. Furthermore, this paper considers various preference types and develops another interval-valued intuitionistic fuzzy permutation method using programming models to address multiple criteria decision-making problems with incomplete preference information. The feasibility and applicability of the proposed methods are illustrated in the problem of selecting a suitable bridge construction method. Moreover, certain comparative analyses are conducted to verify the advantages of the proposed methods compared with those of other decision-making methods. Finally, the practical effectiveness of the proposed methods is validated with a risk assessment problem in new product development.
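The permutation machinery can be sketched independently of the fuzzy-set details: enumerate candidate rankings and score each by its concordance with pairwise outranking likelihoods. The likelihood matrix below is hypothetical:

```python
from itertools import permutations

# Hypothetical pairwise "likelihood that row outranks column" values for
# three alternatives; the paper derives these from interval-valued
# intuitionistic fuzzy preference relationships.
L = {("A", "B"): 0.7, ("B", "A"): 0.3,
     ("A", "C"): 0.6, ("C", "A"): 0.4,
     ("B", "C"): 0.55, ("C", "B"): 0.45}

def concordance(order):
    # A permutation gains credit for every pair it ranks in agreement
    # with the likelihoods (centered at the indifference value 0.5).
    score = 0.0
    for i, x in enumerate(order):
        for y in order[i + 1:]:
            score += L[(x, y)] - 0.5
    return score

best = max(permutations("ABC"), key=concordance)
print(best, concordance(best))
```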

9.
Three likelihood-based tests (the likelihood ratio, Rao score, and Wald tests) and two additional asymptotic tests that use Srivastava's estimator of the intraclass correlation coefficient are considered for testing the null hypothesis that intraclass correlation coefficients are equal when families have unequal numbers of children. The methods are illustrated on Galton's data set. Using a simulation experiment, we compute and compare the sizes and powers of these tests. The proposed test using Srivastava's estimator and the score test perform best among all the tests.
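The asymptotic recipe shared by these tests is generic; a likelihood ratio test, for example, needs only the two maximized log likelihoods and the degrees of freedom. The numbers in the example are hypothetical, and the intraclass-correlation likelihoods themselves are model-specific:

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_null, df):
    """Generic likelihood ratio test: 2 * (l_full - l_null) ~ chi2(df).

    Only the asymptotic recipe shared by the classical tests the paper
    compares; the intraclass-correlation likelihoods are not
    reproduced here."""
    stat = 2.0 * (loglik_full - loglik_null)
    return stat, chi2.sf(stat, df)

print(lr_test(-120.4, -124.9, df=2))  # hypothetical log likelihoods
```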

10.
For the problem of recognizing amplitude-phase modulated signals with unknown frequency-offset and phase-offset parameters, this paper proposes a new likelihood-based method. The method introduces an adaptive Markov chain Monte Carlo (MCMC) algorithm, the adaptive Metropolis (AM) algorithm, which generates ergodic samples of the unknown parameters from the target distribution and thereby enables approximate computation of the likelihood function. Simulation experiments show that the algorithm converges well and achieves good recognition accuracy.
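A minimal adaptive Metropolis sampler in the style of Haario et al. looks as follows; this is a generic sketch with a toy Gaussian target, not the paper's modulation-classification likelihood:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=5000, eps=1e-6, seed=0):
    """Minimal adaptive Metropolis (AM) sampler: the proposal
    covariance is adapted from the chain's own history."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    s_d = 2.4 ** 2 / d                      # standard AM scaling factor
    chain, lp = [x], log_target(x)
    for t in range(1, n_iter):
        if t > 2 * d:                       # adapt once enough history exists
            cov = s_d * (np.cov(np.array(chain).T) + eps * np.eye(d))
        else:
            cov = 0.1 * np.eye(d)
        prop = rng.multivariate_normal(chain[-1], cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy target: Gaussian posterior over (frequency offset, phase offset)
samples = adaptive_metropolis(lambda x: -0.5 * np.sum(x ** 2), x0=[1.0, -1.0])
print(samples.mean(axis=0))
```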

11.
Prerau MJ, Eden UT. Neural Computation, 2011, 23(10): 2537-2566.
We develop a general likelihood-based framework for use in the estimation of neural firing rates, which is designed to choose the temporal smoothing parameters that maximize the likelihood of missing data. This general framework is algorithm-independent and thus can be applied to a multitude of established methods for firing rate or conditional intensity estimation. As a simple example of the use of the general framework, we apply it to the peristimulus time histogram and kernel smoother, the methods most widely used for firing rate estimation in the electrophysiological literature and practice. In doing so, we illustrate how the use of the framework can employ the general point process likelihood as a principled cost function and can provide substantial improvements in estimation accuracy for even the most basic of rate estimation algorithms. In particular, the resultant kernel smoother is simple to implement, efficient to compute, and can accurately determine the bandwidth of a given rate process from individual spike trains. We perform a simulation study to illustrate how the likelihood framework enables the kernel smoother to pick the bandwidth parameter that best predicts missing data, and we show applications to real experimental spike train data. Additionally, we discuss how the general likelihood framework may be used in conjunction with more sophisticated methods for firing rate and conditional intensity estimation and suggest possible applications.
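The idea of choosing the bandwidth that best predicts held-out spikes can be sketched in a few lines: estimate the rate from half the spikes and score candidate bandwidths by the point-process log likelihood of the other half. The toy spike train and candidate grid are assumptions:

```python
import numpy as np

def kernel_rate(spikes, t_grid, bw):
    """Gaussian-kernel firing-rate estimate (spikes/s) on a time grid."""
    d = t_grid[:, None] - spikes[None, :]
    return np.exp(-0.5 * (d / bw) ** 2).sum(axis=1) / (bw * np.sqrt(2 * np.pi))

def heldout_loglik(train, test, t_grid, bw):
    """Point-process log likelihood of held-out spikes under the rate
    estimated from the training spikes: sum log lambda(t_i) - integral
    of lambda. A simplified version of the paper's missing-data
    criterion."""
    lam = np.clip(kernel_rate(train, t_grid, bw), 1e-12, None)
    dt = t_grid[1] - t_grid[0]
    lam_at_test = np.interp(test, t_grid, lam)
    return np.sum(np.log(lam_at_test)) - np.sum(lam) * dt

rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0, 10, 200))   # toy spike train
train, test = spikes[::2], spikes[1::2]     # odd/even split as "missing data"
t = np.linspace(0, 10, 1000)
bws = [0.05, 0.1, 0.2, 0.5, 1.0]
best = max(bws, key=lambda b: heldout_loglik(train, test, t, b))
print("selected bandwidth:", best)
```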

12.
To compare two treatment effects, which can be described as the difference of the parameters in two linear models, we propose an empirical likelihood-based method to make inferences about the difference. Our method is free of the assumptions of normally distributed errors, homogeneous errors, and equal sample sizes. The empirical likelihood ratio for the difference of the parameters of interest is shown to be asymptotically chi-squared. Simulation experiments illustrate that our method outperforms previously published methods. Our method is used to analyze a data set from a drug study.
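For intuition, the empirical likelihood ratio for a single mean (a simpler cousin of the paper's statistic for a parameter difference) can be computed by solving the standard Lagrange condition numerically:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_mean(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0, via the
    Lagrange condition sum(z_i / (1 + lam * z_i)) = 0 with
    z_i = x_i - mu0; asymptotically chi-squared with 1 df."""
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                  # mu0 outside the convex hull
    g = lambda lam: np.sum(z / (1 + lam * z))
    lo = -1.0 / z.max() + 1e-8         # keep all 1 + lam * z_i > 0
    hi = -1.0 / z.min() - 1e-8
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log(1 + lam * z))

x = np.random.default_rng(0).normal(0.3, 1.0, 80)
stat = el_ratio_mean(x, 0.0)
print(stat, chi2.sf(stat, 1))
```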

13.
Minimum classification error training for online handwriting recognition
This paper describes an application of the minimum classification error (MCE) criterion to the problem of recognizing online unconstrained-style characters and words. We describe an HMM-based, character and word-level MCE training aimed at minimizing the character or word error rate while enabling flexibility in writing style through the use of multiple allographs per character. Experiments on a writer-independent character recognition task covering alpha-numerical characters and keyboard symbols show that the MCE criterion achieves more than 30 percent character error rate reduction compared to the baseline maximum likelihood-based system. Word recognition results, on vocabularies of 5k to 10k, show that MCE training achieves around 17 percent word error rate reduction when compared to the baseline maximum likelihood system.
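The MCE objective itself is compact: a smoothed misclassification measure passed through a sigmoid. The sketch below operates on plain per-class scores rather than the paper's HMM word and character scores:

```python
import numpy as np

def mce_loss(scores, label, eta=1.0, gamma=1.0):
    """Smoothed minimum classification error loss for one sample.

    d = (soft max of rival scores) - (true class score);
    loss = sigmoid(gamma * d). The generic MCE recipe of
    Juang and Katagiri, here on plain score vectors."""
    g_true = scores[label]
    others = np.delete(scores, label)
    g_comp = np.log(np.mean(np.exp(eta * others))) / eta   # soft max of rivals
    d = g_comp - g_true                                    # misclassification measure
    return 1.0 / (1.0 + np.exp(-gamma * d))

scores = np.array([2.1, 1.4, 0.3])   # hypothetical per-class log likelihoods
print(mce_loss(scores, label=0))     # small loss: correct class wins
```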

14.
An innovation-based adaptive filtering algorithm using maximum likelihood estimation
To address the degraded accuracy and even divergence of the conventional Kalman filter when noise statistics are unknown or time-varying, an innovation-based adaptive filtering algorithm built on maximum likelihood estimation is proposed. The algorithm applies a limited-memory, exponentially decaying weighting correction to the conventional maximum-likelihood innovation covariance estimator, increasing the weight of the most recent innovation covariances within the sliding window. Following the innovation-adaptive principle, the filter gain matrix is computed directly from the estimated innovation covariance, which accelerates filter convergence while improving estimation accuracy. The algorithm is applied to a strapdown inertial navigation system / Global Positioning System (SINS/GPS) integrated navigation system; simulations show that it is more robust and more accurate when the noise statistics are unknown or time-varying.
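The innovation-adaptive step can be sketched as follows: estimate the innovation covariance with exponentially decaying weights over a sliding window, then form the gain directly from it. The dimensions and the decay factor are illustrative, and the paper's exact weighting scheme may differ:

```python
import numpy as np

def innovation_adaptive_gain(innovations, H, P_pred, decay=0.95):
    """Weighted sample covariance of recent innovations, then the gain
    K = P_pred H^T C_hat^{-1}. A generic sketch of the
    innovation-adaptive idea."""
    n = len(innovations)
    w = decay ** np.arange(n - 1, -1, -1)   # newest innovation gets weight 1
    w = w / w.sum()
    C_hat = sum(wi * np.outer(v, v) for wi, v in zip(w, innovations))
    return P_pred @ H.T @ np.linalg.inv(C_hat)

H = np.array([[1.0, 0.0]])                  # hypothetical 1-D measurement of a 2-D state
P_pred = np.eye(2)
innovs = [np.array([v]) for v in (0.3, -0.5, 0.8, 0.1, -0.2)]
print(innovation_adaptive_gain(innovs, H, P_pred))
```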

15.
Estimation of longitudinal models of relationship status between all pairs of individuals (dyads) in social networks is challenging due to the complex inter-dependencies among observations and lengthy computation times. To reduce the computational burden of model estimation, a method is developed that subsamples the "always-null" dyads in which no relationships develop throughout the period of observation. The informative sampling process is accounted for by weighting the likelihood contributions of the observations by the inverses of the sampling probabilities. This weighted-likelihood estimation method is implemented using Bayesian computation and evaluated in terms of its bias, efficiency, and speed of computation under various settings. Comparisons are also made to a full information likelihood-based procedure that is only feasible to compute when limited follow-up observations are available. Calculations are performed on two real social networks of very different sizes. The easily computed weighted-likelihood procedure closely approximates the corresponding estimates for the full network, even when using low sub-sampling fractions. The fast computation times make the weighted-likelihood approach practical and able to be applied to networks of any size.
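The weighting itself is simple inverse-probability weighting of likelihood contributions. A minimal sketch with hypothetical per-dyad terms:

```python
import numpy as np

def weighted_loglik(loglik_terms, sampled, p_sample):
    """Inverse-probability-weighted log likelihood.

    Always-null dyads are subsampled with probability p_sample, and
    the ones kept are up-weighted by 1 / p_sample so the weighted
    score is unbiased for the full-network score. Inputs are
    hypothetical: one log likelihood term per dyad plus a boolean mask
    marking the subsampled always-null dyads that were kept."""
    w = np.where(sampled, 1.0 / p_sample, 1.0)
    return np.sum(w * loglik_terms)

ll = np.array([-0.2, -1.3, -0.05, -0.07])    # per-dyad contributions
mask = np.array([False, False, True, True])  # kept always-null dyads
print(weighted_loglik(ll, mask, p_sample=0.1))
```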

16.
We address the sequence classification problem using a probabilistic model based on hidden Markov models (HMMs). In contrast to commonly used likelihood-based learning methods such as the joint/conditional maximum likelihood estimator, we introduce a discriminative learning algorithm that focuses on class margin maximization. Our approach has two main advantages: (i) As an extension of support vector machines (SVMs) to sequential, non-Euclidean data, the approach inherits benefits of margin-based classifiers, such as provable generalization error bounds. (ii) Unlike many algorithms based on non-parametric estimation of similarity measures that enforce weak constraints on the data domain, our approach utilizes the HMM's latent Markov structure to regularize the model in the high-dimensional sequence space. We demonstrate significant improvements in classification performance of the proposed method in an extensive set of evaluations on time-series sequence data that frequently appear in data mining and computer vision domains.
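The class-margin idea can be illustrated with a hinge loss on per-class HMM log likelihoods; this captures only the spirit of the objective, since the paper's algorithm optimizes the HMM parameters directly:

```python
import numpy as np

def margin_loss(loglik_per_class, label, margin=1.0):
    """Hinge-style class-margin loss on per-class HMM log likelihoods.

    The margin is the gap between the true class score and the best
    rival; driving this hinge to zero enforces classification with a
    margin. Illustrative only, not the paper's training algorithm."""
    scores = np.asarray(loglik_per_class)
    rival = np.max(np.delete(scores, label))
    return max(0.0, margin - (scores[label] - rival))

print(margin_loss([-10.2, -13.5, -12.1], label=0))  # 0.0: margin satisfied
```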

17.
This paper presents a priori probability density function (pdf)-based time-of-arrival (TOA) source localization algorithms. Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function in which the prior distribution is combined with the likelihood function is minimized by the adaptive expectation maximization (EM) and space-alternating generalized expectation-maximization (SAGE) algorithms. The variance of the prior distribution need not be known a priori because it can be estimated using Bayes inference in the proposed adaptive EM algorithm; note that this variance must be known in the existing three-step WLS method [1]. The resulting positioning accuracy of the proposed methods is much better than that of existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also effectively perform localization in mixed line-of-sight (LOS)/non-line-of-sight (NLOS) situations.
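Combining a Gaussian position prior with the range likelihood gives a MAP cost that can be minimized directly; the sketch below uses a generic optimizer instead of the paper's EM/SAGE iterations, with made-up anchors and ranges:

```python
import numpy as np
from scipy.optimize import minimize

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known sensor positions
ranges = np.array([7.2, 7.5, 7.4])                          # noisy TOA range measurements
sigma2 = 0.25                                               # range-noise variance
mu0, tau2 = np.array([5.5, 5.5]), 4.0                       # Gaussian prior on position

def neg_map(x):
    # negative log posterior = range likelihood + Gaussian prior penalty
    r = np.linalg.norm(anchors - x, axis=1)
    return (np.sum((ranges - r) ** 2) / (2 * sigma2)
            + np.sum((x - mu0) ** 2) / (2 * tau2))

print(minimize(neg_map, x0=mu0).x)   # MAP position estimate
```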

18.
This paper discusses change-point analysis theory for software reliability and, in combination with the Schneidewind model, proposes a maximum likelihood method for change-point analysis of software reliability. The method is applied to a real software failure data set and checked using log-PLR plots and the u-plot criterion; the results demonstrate the effectiveness and statistical significance of change-point analysis in software reliability analysis.
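A maximum likelihood change-point search can be illustrated with a simple Poisson profile likelihood over candidate change points; this is a generic illustration with hypothetical fault counts, not the Schneidewind-model likelihood:

```python
import numpy as np
from scipy.stats import poisson

def poisson_changepoint_mle(counts):
    """Profile-likelihood search for a single change point in Poisson
    failure counts: fit separate MLE rates before and after each
    candidate tau and keep the tau with the highest likelihood."""
    n = len(counts)
    best_tau, best_ll = None, -np.inf
    for tau in range(1, n):
        lam1, lam2 = np.mean(counts[:tau]), np.mean(counts[tau:])
        ll = (poisson.logpmf(counts[:tau], lam1).sum()
              + poisson.logpmf(counts[tau:], lam2).sum())
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau, best_ll

faults = np.array([9, 11, 10, 12, 8, 4, 3, 5, 2, 3])  # hypothetical weekly fault counts
print(poisson_changepoint_mle(faults))
```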

19.
In comparing the mean count of two independent samples, some practitioners would use the t-test or the Wilcoxon rank sum test while others may use methods based on a Poisson model. It is not uncommon to encounter count data that exhibit overdispersion, where the Poisson model is no longer appropriate. This paper deals with methods for overdispersed data using the negative binomial distribution resulting from a Poisson-Gamma mixture. We investigate the small-sample properties of the likelihood-based tests and compare their performances to those of the t-test and of the Wilcoxon test. We also illustrate how these procedures may be used to compute power and sample sizes to design studies with response variables that are overdispersed count data. Although the methods are based on inference about two independent samples, the sample size calculations may also be applied to problems comparing more than two independent samples. It will be shown that there is a gain in efficiency when using the likelihood-based methods compared to the t-test and the Wilcoxon test. In studies where each observation is very costly, the ability to derive smaller sample size estimates with the appropriate tests is not only statistically, but also financially, appealing.
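A likelihood ratio test of equal means under a negative binomial (NB2) model takes a few lines with statsmodels; the simulated Poisson-Gamma counts below are illustrative:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
# Two overdispersed count samples (negative binomial via Poisson-Gamma mixture)
y1 = rng.poisson(rng.gamma(shape=2.0, scale=2.0, size=60))   # mean 4
y2 = rng.poisson(rng.gamma(shape=2.0, scale=3.0, size=60))   # mean 6
y = np.concatenate([y1, y2])
group = np.r_[np.zeros(60), np.ones(60)]

# Likelihood ratio test of the group effect under an NB2 model
full = sm.NegativeBinomial(y, sm.add_constant(group)).fit(disp=0)
null = sm.NegativeBinomial(y, np.ones((len(y), 1))).fit(disp=0)
lr = 2 * (full.llf - null.llf)
print("LR =", lr, "p =", chi2.sf(lr, df=1))
```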
