Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
An Approximate Bayesian Bootstrap (ABB) offers advantages in incorporating appropriate uncertainty when imputing missing data, but most implementations of the ABB have lacked the ability to handle nonignorable missing data where the probability of missingness depends on unobserved values. This paper outlines a strategy for using an ABB to multiply impute nonignorable missing data. The method allows the user to draw inferences and perform sensitivity analyses when the missing data mechanism cannot automatically be assumed to be ignorable. Results from imputing missing values in a longitudinal depression treatment trial as well as a simulation study are presented to demonstrate the method’s performance. We show that a procedure that uses a different type of ABB for each imputed data set accounts for appropriate uncertainty and provides nominal coverage.
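The ignorable version of the ABB referenced above can be sketched in a few lines: each imputed data set first resamples the observed values with replacement (injecting parameter uncertainty into the donor pool) and then draws donors from that resampled pool. This is a minimal illustration only — the paper's nonignorable variants modify how the donor pool is drawn, and the function name here is our own.

```python
import numpy as np

def abb_impute(y, n_imputations=5, rng=None):
    """Approximate Bayesian Bootstrap multiple imputation for a 1-D array.

    For each imputed copy, first resample the observed values with
    replacement to form a donor pool (this injects parameter uncertainty),
    then fill each missing entry with a random draw from that pool.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    observed = y[~np.isnan(y)]
    missing_idx = np.where(np.isnan(y))[0]
    imputations = []
    for _ in range(n_imputations):
        pool = rng.choice(observed, size=observed.size, replace=True)
        filled = y.copy()
        filled[missing_idx] = rng.choice(pool, size=missing_idx.size, replace=True)
        imputations.append(filled)
    return imputations
```

The between-imputation variability across the returned copies is what carries the "appropriate uncertainty" into downstream multiple-imputation inference.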

2.
Classification is one of the most important tasks in machine learning, with a huge number of real-life applications. In many practical classification problems, the information available for classifying an object is partial or incomplete because some attribute values are missing for various reasons. These missing values can significantly degrade the efficacy of the classification model, so it is crucial to develop effective techniques for imputing them. A number of methods have been introduced for classification with missing values, but each has drawbacks: existing correlation-based imputation methods work well only for categorical or only for numeric data, or are designed for one particular application, and they fail when every record has at least one missing attribute. Our method, Model-based Missing value Imputation using Correlation (MMIC), can effectively impute both categorical and numeric data. It fills missing values attribute-wise using a model-based technique and then reuses the model effectively. Extensive performance analyses show that the proposed approach achieves high imputation accuracy and thus increases the efficacy of the classifier. The experimental results also show that our method outperforms various existing methods for handling missing data in classification.

3.
When we have data with missing values, the assumption that data are missing at random is very convenient. It is, however, sometimes questionable, because some of the missing values could be strongly related to the underlying true values. We introduce methods for nonignorable multivariate missing data, which assume that missingness is related to the variables in question, and to the additional covariates, through a latent variable measured by the missingness indicators. The methodology developed here is useful for investigating the sensitivity of one’s estimates to untestable assumptions about the missing-data mechanism. A simulation study and data analysis are conducted to evaluate the performance of the proposed method and to compare it with MAR-based alternatives.

4.
Many data mining and data analysis techniques operate on dense matrices or complete tables of data. Real-world data sets, however, often contain unknown values. Even many classification algorithms that are designed to operate with missing values still exhibit deteriorated accuracy. One approach to handling missing values is to fill in (impute) the missing values. In this article, we present a technique for unsupervised learning called unsupervised backpropagation (UBP), which trains a multilayer perceptron to fit to the manifold sampled by a set of observed point vectors. We evaluate UBP with the task of imputing missing values in data sets and show that UBP is able to predict missing values with significantly lower sum of squared error than other collaborative filtering and imputation techniques. We also demonstrate with 24 data sets and nine supervised learning algorithms that classification accuracy is usually higher when randomly withheld values are imputed using UBP, rather than with other methods.

5.
Nonlinear mixed-effects (NLME) models are widely used for longitudinal data analyses. Time-dependent covariates are often introduced to partially explain inter-individual variation. These covariates often have missing data, and the missingness may be nonignorable. Likelihood inference for NLME models with nonignorable missing data in time-varying covariates can be computationally very intensive and may even run into difficulties such as non-convergence. We propose a computationally efficient method for approximate likelihood inference and illustrate it with a real data example.

6.
Regression models are proposed for the joint analysis of Poisson and continuous longitudinal data with nonignorable missing values under a fully parametric framework. Our primary interest is to evaluate the influence of the covariates on both the Poisson and continuous responses. First, we form the full likelihood for complete data using the multivariate Poisson model and a conditional multivariate normal distribution, and construct an ECM algorithm to find the maximum likelihood estimates of the model parameters. Then, under the assumption that the missingness mechanisms for the two responses are independent but nonignorable, namely, dependent on both observed and missing data of the two responses, we choose the logit model for the missingness mechanisms and a selection model for the full likelihood, and build two implementations of the Monte Carlo EM algorithm for estimating the model parameters. The Wald test is employed to test the significance of covariates. Finally, we present Monte Carlo simulation results to evaluate the performance of the proposed methodology and an application to the interstitial cystitis database (ICDB) cohort study. To the best of our knowledge, this is the first parametric model for the joint analysis of Poisson and continuous longitudinal data with nonignorable missing values.

7.
This paper investigates the characteristics of a clinical dataset using a combination of feature selection and classification methods to handle missing values and to understand the underlying statistical characteristics of a typical clinical dataset. A large clinical dataset typically presents challenges such as missing values, high dimensionality, and unbalanced classes, which pose inherent problems for feature selection and classification algorithms. With most clinical datasets, an initial exploration of the dataset is carried out, and attributes with more than a certain percentage of missing values are eliminated; prognostic and diagnostic models are then developed with the help of missing value imputation, feature selection, and classification algorithms. This paper has two main conclusions: 1) Despite the nature and large size of clinical datasets, the choice of missing value imputation method does not affect final performance; what is crucial is that the dataset is an accurate representation of the clinical problem. 2) Supervised learning has proven more suitable for mining clinical data than unsupervised methods, and non-parametric classifiers such as decision trees give better results than parametric classifiers such as radial basis function networks (RBFNs).

8.
Current algorithms for processing incomplete data fill missing values with low accuracy. To address this problem, an imputation algorithm for incomplete data based on CFS clustering and an improved denoising autoencoder model is proposed. The CFS clustering algorithm is used to cluster the incomplete dataset, the denoising autoencoder model is improved, and, based on the clustering results, the improved autoencoder is used to impute the missing data. To enable the CFS clustering algorithm to cluster incomplete datasets, a partial-distance strategy is proposed for measuring the distance between incomplete data objects. Experimental results show that the proposed algorithm imputes missing data effectively.

9.
马茜  谷峪  李芳芳  于戈 《软件学报》2016,27(9):2332-2347
In recent years, with the wide deployment of sensor networks, sensory data has grown explosively. However, owing to inherent hardware limitations, the randomness of deployment environments, and human error during data processing, sensory data usually contains a large number of missing values. Most existing analysis tools cannot handle datasets with missing values, so imputation is indispensable. Many imputation algorithms exist, but when missing data is dense their accuracy is hard to guarantee, and they do not consider the effect of imputation order on accuracy. This paper therefore proposes OMSMVI (order-sensitive missing value imputation framework for multi-source sensory data). The framework exploits the multi-dimensional correlations specific to sensory data (temporal, spatial, and attribute correlations) to measure the similarity between data sources; it then builds a similarity graph centered on the sources with missing data and treats already-imputed values as observations in subsequent imputation steps. Taking the overall distribution of the missing sources into account, it performs order-sensitive imputation: it first decides the order in which missing values are imputed, and then imputes them. Ordered imputation effectively alleviates the loss of accuracy that arises, when missingness is dense, from the low similarity between a missing source and its complete neighbors. Finally, the KNN imputation algorithm is improved into a new neighborhood-based imputation algorithm, NI, which uses the multi-dimensional similarity of sensory data to find all neighbor nodes of a missing source, thereby avoiding the difficulty of choosing K in KNN imputation and further improving accuracy. Experiments on two real datasets, with comparisons against baseline imputation algorithms, verify the accuracy and effectiveness of the algorithm.
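The order-sensitive idea behind OMSMVI/NI can be illustrated with a small sketch, assuming a precomputed source-to-source similarity matrix standing in for the paper's temporal/spatial/attribute correlations: the missing source with the most observed neighbor support is imputed first, and each imputed value is reused as an observation in later steps. The function name and the similarity threshold are our own simplifications.

```python
import numpy as np

def order_sensitive_impute(values, sim, threshold=0.5):
    """Order-sensitive neighborhood imputation (a sketch).

    `values` holds one reading per source (NaN = missing); `sim` is a
    similarity matrix between sources. A source's neighbors are all
    sources with similarity above `threshold` (no fixed K). Missing
    sources are imputed in decreasing order of available neighbor
    support, and imputed values count as observed afterwards.
    """
    values = np.asarray(values, dtype=float)
    filled = values.copy()
    missing = list(np.where(np.isnan(values))[0])

    def neighbors(i):
        return [j for j in range(len(filled))
                if j != i and sim[i][j] > threshold and not np.isnan(filled[j])]

    while missing:
        # support = total similarity mass of currently observed neighbors
        i = max(missing, key=lambda i: sum(sim[i][j] for j in neighbors(i)))
        nbrs = neighbors(i)
        w = np.array([sim[i][j] for j in nbrs])
        filled[i] = np.dot(w, filled[nbrs]) / w.sum() if nbrs else np.nanmean(filled)
        missing.remove(i)
    return filled
```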

10.
This paper proposes to utilize the information within incomplete instances (instances with missing values) when estimating missing values. Accordingly, a simple and efficient nonparametric iterative imputation algorithm, called NIIA, is designed for iteratively imputing missing target values. NIIA imputes each missing value several times until the algorithm converges. In the first iteration, only the complete instances are used to estimate missing values; the information within incomplete instances is utilized from the second iteration onward. Our experiments demonstrate that: (1) utilizing the information within incomplete instances helps capture the distribution of a dataset; and (2) NIIA outperforms existing methods in accuracy, an advantage that is clearly highlighted when datasets have a high missing ratio.
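A simplified sketch of such iterative imputation (not the authors' exact NIIA): initialize missing cells from the observed column means, then repeatedly re-estimate each missing cell from its nearest rows — including rows whose own gaps were filled in earlier iterations — until the imputed values stabilize. The function name, the choice of 3 neighbors, and the convergence rule are our own assumptions.

```python
import numpy as np

def iterative_impute(X, max_iter=20, tol=1e-4):
    """Iterative imputation sketch: start from column means of the
    observed entries, then repeatedly re-estimate each missing cell
    from the 3 nearest rows of the current filled matrix, so that
    information in incomplete rows is reused after the first pass.
    """
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    filled = X.copy()
    col_means = np.nanmean(X, axis=0)
    filled[mask] = np.take(col_means, np.where(mask)[1])
    for _ in range(max_iter):
        prev = filled[mask].copy()
        for i, j in zip(*np.where(mask)):
            # distance from row i to every other row in the filled matrix
            d = np.linalg.norm(filled - filled[i], axis=1)
            d[i] = np.inf
            nn = np.argsort(d)[:3]          # 3 nearest rows
            filled[i, j] = filled[nn, j].mean()
        if np.max(np.abs(filled[mask] - prev)) < tol:
            break
    return filled
```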

11.
Because the k nearest neighbors used by the k-nearest-neighbor imputation algorithm (kNNI) may contain noise, a new missing-value imputation algorithm, Mutual k-Nearest Neighbor Imputation (MkNNI), is proposed. A record used to impute a missing value must not only be among the k nearest neighbors of the incomplete record; its own k nearest neighbors must also include that incomplete record. This effectively prevents the k nearest neighbors selected by kNNI from containing noise. Experimental results show that the imputation accuracy of MkNNI is generally better than that of kNNI.
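A hedged sketch of the mutual-kNN idea, assuming numeric attributes and Euclidean distance computed on the dimensions observed in the incomplete record; the helper name and the fallback to plain kNN when no mutual neighbor exists are our own choices.

```python
import numpy as np

def mutual_knn_impute(X, k=2):
    """Mutual k-nearest-neighbor imputation (MkNNI), a sketch.

    A complete record donates to an incomplete one only if each record
    is among the other's k nearest neighbors. Missing entries are filled
    with the mean over mutual neighbors; plain kNN is the fallback.
    """
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    complete = np.where(~np.isnan(X).any(axis=1))[0]
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        cols = np.where(~np.isnan(X[i]))[0]
        rows = np.append(complete, i)          # candidate donors plus the query
        A = X[np.ix_(rows, cols)]
        D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=2)
        np.fill_diagonal(D, np.inf)
        q = len(rows) - 1                      # position of record i in `rows`
        knn_of = {r: set(np.argsort(D[r])[:k]) for r in range(len(rows))}
        donors = [rows[r] for r in knn_of[q] if q in knn_of[r]]  # mutual kNN
        if not donors:                         # fall back to plain kNN
            donors = [rows[r] for r in knn_of[q]]
        for j in np.where(np.isnan(X[i]))[0]:
            filled[i, j] = X[donors, j].mean()
    return filled
```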

12.
Material corrosion causes enormous losses, yet for most regions the atmospheric corrosion grade is unknown, so accurately compensating for missing atmospheric corrosion grade data is an urgent problem. Data models are built to compensate separately for the cases in which one of the two key factors of the corrosion grade — chloride ion concentration or sulfur dioxide concentration — is missing. Given the sparse distribution of corrosion-grade and environmental-parameter data across China, a CMAC-based compensation method for atmospheric corrosion data built on sparse-data reduction is proposed. In addition, for the case of missing sulfur dioxide concentration, an empirical formula for sulfur dioxide concentration is derived from the available valid data. Results show a prediction accuracy of 86.5% when chloride ion concentration alone is missing and 82.6% when sulfur dioxide concentration alone is missing. The algorithm improves the compensation accuracy of atmospheric corrosion grade data and provides an important basis for material selection.

13.
Nonlinear structural equation models with nonignorable missing outcomes from reproductive dispersion models are proposed to identify the relationship between manifest variables and latent variables in modern educational, medical, social and psychological studies. The nonignorable missing mechanism is specified by a logistic regression model. An EM algorithm is developed to obtain the maximum likelihood estimates of the structural parameters and the parameters of the logistic regression model. Local influence is assessed on the basis of the conditional expectation of the complete-data log-likelihood function, and local influence diagnostics are obtained, via the conformal normal curvature, from observations of missing data and latent variables generated by the Gibbs sampler and Metropolis-Hastings algorithm. A simulation study and a real example illustrate the application of the proposed methodologies.

14.
Imputation of missing links and attributes in longitudinal social surveys
The predictive analysis of longitudinal social surveys is highly sensitive to the effects of missing data in temporal observations. Such high sensitivity raises the need for accurate data imputation, because without it a large fraction of the collected data could not be used properly. Previous studies focused on the treatment of missing data in longitudinal social networks due to non-respondents and dealt with the problem largely by imputing missing links in isolation or analyzing the imputation effects on network statistics. We propose to account for changing network topology and the interdependence between actors’ links and attributes to construct a unified approach for the imputation of links and attributes in longitudinal social surveys. The new method, based on an exponential random graph model, is evaluated experimentally on five scenarios of missing data models, using synthetic and real-life datasets with 20%–60% of nodes missing. The obtained results outperformed all alternatives, four of which were link imputation methods and two of which were node attribute imputation methods. We further discuss the applicability and scalability of our approach to real-life problems and compare our model with the latest advancements in the field. Our findings suggest that the proposed method can be used as a viable imputation tool in longitudinal studies.

15.
In this study the authors analyse the International Software Benchmarking Standards Group data repository, Release 8.0, which comprises project data from several different companies. The repository exhibits missing data, which must be handled appropriately, otherwise inferences may be biased and misleading. The authors re-examine a statistical model that explained about 62% of the variability in actual software development effort (Summary Work Effort), conditioned on a sample of 339 observations from the repository. This model used the covariates Adjusted Function Points and Maximum Team Size, with dependence on Language Type (2nd, 3rd and 4th Generation Languages and Application Program Generators) and Development Type (enhancement, new development and re-development). The authors now use Bayesian inference and the Bayesian statistical simulation program BUGS to impute the missing data, avoiding deletion of observations with missing Maximum Team Size and increasing the sample size to 616. Provided that imputation does not introduce distributional biases, the accuracy of inferences made from models that fit the data will increase. As a consequence of imputation, models are identified that fit the data and explain about 59% of the variability in actual effort; these models enable new inferences to be made about Language Type and Development Type. The sensitivity of the inferences to alternative distributions for imputing missing data is also considered. Furthermore, the authors examine the impact of these distributions on the explained variability of actual effort and show how valid effort estimates can be derived to improve estimate consistency.

16.
苏毅娟  钟智 《计算机工程》2009,35(17):92-93,9
The quality of missing-data imputation affects the subsequent processing performed by learning and mining algorithms. Because cost-sensitive decision tree methods do not consider imputation order and imputation cost at the same time, an algorithm for imputing missing data in order is proposed, which jointly considers economic factors and the effective information needed to build the imputer. Experimental results show that its prediction accuracy and classification accuracy are higher than those of existing algorithms.

17.
DNA methylation is an important epigenetic mark that plays a vital role in many diseases, including cancers. With the development of high-throughput sequencing technology, much progress has been made in disclosing the relations between DNA methylation and disease. However, analyses of DNA methylation data are challenging because of missing values caused by the limitations of current techniques. While many methods have been developed to impute these missing values, they are mostly based on correlations between individual samples and are therefore limited for the abnormal samples found in cancers. In this study, we present a novel transfer-learning-based neural network to impute missing DNA methylation data, namely the TDimpute-DNAmeth method. The method learns common relations in DNA methylation from pan-cancer samples, and then fine-tunes the learned relations on each specific cancer type to impute the missing data. Tested on 16 cancer datasets, our method was shown to outperform other commonly used methods. Further analyses indicated that DNA methylation is related to cancer survival and thus can be used as a biomarker of cancer prognosis.

18.
A new matching procedure based on imputing missing data by means of a local linear estimator of the underlying population regression function (that is assumed not necessarily linear) is introduced. Such a procedure is compared to other traditional approaches, more precisely hot deck methods as well as methods based on kNN estimators. The relationship between the variables of interest is assumed not necessarily linear. Performance is measured by the matching noise given by the discrepancy between the distribution generating genuine data and the distribution generating imputed values.

19.
Data plays a vital role as a source of information for organizations, especially in the information age. Real-world databases are rarely perfect: data may be missing, and results obtained from such a database may be biased or misleading. Imputing missing data has therefore been regarded as one of the major steps in data mining. The present research used different data mining methods to construct imputation models for different types of missing data. When the missing data is continuous, regression models and neural networks are used to build imputation models; for categorical missing data, the logistic regression model, neural network, C5.0 and CART are employed. The results showed that the regression model provided the best estimates of continuous missing data, while for categorical missing data the C5.0 model proved the best method.
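The regression imputation that the study found best for continuous data can be sketched as ordinary least squares fitted on the complete cases, assuming for simplicity that only the target column has gaps; the function name is our own.

```python
import numpy as np

def regression_impute(X, target_col):
    """Regression imputation of one continuous column (a sketch): fit
    least-squares on rows where the target is observed, then predict it
    for rows where it is missing. Assumes the other columns are fully
    observed.
    """
    X = np.asarray(X, dtype=float)
    y = X[:, target_col]
    Z = np.delete(X, target_col, axis=1)
    Z = np.column_stack([np.ones(len(Z)), Z])      # intercept term
    obs = ~np.isnan(y)
    beta, *_ = np.linalg.lstsq(Z[obs], y[obs], rcond=None)
    filled = X.copy()
    filled[~obs, target_col] = Z[~obs] @ beta
    return filled
```

A categorical column would instead call for a classifier fitted the same way on the complete cases, as with the study's C5.0 models.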
