20 similar documents found.
1.
Shen Chen Lin Hongfei Guo Kai Xu Kan Yang Zhihao Wang Jian 《Neural computing & applications》2019,31(9):4799-4808
As one of the most important subjects in the medical field, adverse drug reactions seriously affect patients' lives, health, and safety. Although many methods...
2.
3.
Antonio J. Jara Miguel A. Zamora Antonio F. Skarmeta 《Personal and Ubiquitous Computing》2014,18(1):5-17
Drug compliance and adverse drug reactions (ADR) are two of the most important issues regarding patient safety throughout the worldwide healthcare sector. ADR prevalence is 6.7 % in hospitals worldwide, with an international death rate of 0.32 % of all patients. This rate is even higher in Ambient Assisted Living environments, where 15 % of patients suffer clinically significant interactions due to non-compliance with drug dosage and intake schedules, in addition to polypharmacy. These problems increase with age and raise the risk of drug interactions, adverse effects, and toxicity. However, with tight follow-up of the drug treatment, complications of incorrect drug use can be reduced. For that purpose, we propose an innovative system based on the Internet of Things (IoT) for drug identification and the monitoring of medication. IoT is applied to examine drugs in order to verify treatment compliance and to detect harmful side effects of pharmaceutical excipients, allergies, liver/renal contraindications, and harmful side effects during pregnancy. The IoT design acknowledges that the aforementioned problems are worldwide, so the solution supports several IoT identification technologies: barcode, Radio Frequency Identification, Near Field Communication, and a new solution developed for low-income countries based on IrDA in collaboration with the World Health Organization. These technologies are integrated in personal devices such as smartphones, PDAs, PCs, and in our IoT-based personal healthcare device called Movital.
4.
Hyun-Chong Cho Kenneth Clint Slatton Carolyn R. Krekeler 《International journal of remote sensing》2013,34(24):9571-9597
Airborne Laser Swath Mapping (ALSM) instruments and the associated processing algorithms have been used over the last several years to map the Earth's surface at remarkably high resolution. Since forested watersheds have commonly been problematic to study with remote sensing techniques, the ability of ALSM technology to densely sample ground elevations beneath forest canopies is especially valuable. Stream network detection from digital elevation models (DEMs) plays a key role in modelling spatially distributed hydrological processes. To detect stream channels, we have developed two approaches. The first approach is based on an encoding of mathematical morphological operators. In the second approach, a composition of geodesic top-hat and bottom-hat operations of different sizes is used to build a morphological profile (PM) that records the image's structural information. The two proposed methods perform well in terms of detection results and classification accuracies. The second approach is more general than the first, but it also requires training and more computation.
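A morphological profile of this kind can be assembled, for example, by stacking top-hat and bottom-hat responses computed with structuring elements of increasing size. A minimal sketch follows; it is illustrative only (not the authors' implementation), uses ordinary rather than geodesic operators, and the function name and radii are assumptions:

```python
import numpy as np
from skimage.morphology import disk, white_tophat, black_tophat

def morphological_profile(dem, radii=(2, 4, 8, 16)):
    """Stack top-hat and bottom-hat responses of a DEM at several scales.

    Each scale highlights bright narrow features (top-hat) and dark narrow
    features such as channels (bottom-hat) of a characteristic width.
    """
    layers = []
    for r in radii:
        se = disk(r)                           # circular structuring element
        layers.append(white_tophat(dem, se))   # bright, narrow structures
        layers.append(black_tophat(dem, se))   # dark, narrow structures (channels)
    return np.stack(layers, axis=-1)           # H x W x (2 * len(radii)) profile
```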
5.
Sriraam Natarajan Vishal Bangera Tushar Khot Jose Picado Anurag Wazalwar Vitor Santos Costa David Page Michael Caldwell 《Knowledge and Information Systems》2017,51(2):435-457
Adverse drug events (ADEs) are a major concern and point of emphasis for the medical profession, government, and society. A diverse set of techniques from epidemiology, statistics, and computer science are being proposed and studied for ADE discovery from observational health data (e.g., EHR and claims data), social network data (e.g., Google and Twitter posts), and other information sources. Methodologies are needed for evaluating, quantitatively measuring and comparing the ability of these various approaches to accurately discover ADEs. This work is motivated by the observation that text sources such as the Medline/Medinfo library provide a wealth of information on human health. Unfortunately, ADEs often result from unexpected interactions, and the connection between conditions and drugs is not explicit in these sources. Thus, in this work, we address the question of whether we can quantitatively estimate relationships between drugs and conditions from the medical literature. This paper proposes and studies a state-of-the-art NLP-based extraction of ADEs from text.
6.
Rolando De la Cruz-Mesía Fernando A. Quintana Guillermo Marshall 《Computational statistics & data analysis》2008,52(3):1441-1457
A model-based clustering method is proposed for clustering individuals on the basis of measurements taken over time. Data variability is taken into account through non-linear hierarchical models, leading to a mixture of hierarchical models. We study both frequentist and Bayesian estimation procedures. From a classical viewpoint, we discuss maximum likelihood estimation of this family of models through the EM algorithm. From a Bayesian standpoint, we develop appropriate Markov chain Monte Carlo (MCMC) sampling schemes for exploring the target posterior distribution of the parameters. The methods are illustrated with the identification of hormone trajectories that are likely to lead to adverse pregnancy outcomes in a group of pregnant women.
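In generic notation (ours, not necessarily the authors'), a mixture of nonlinear hierarchical models for the trajectory of subject $i$ with observations $y_{ij}$ at times $t_{ij}$ can be written as

$$
f(y_i \mid \theta) \;=\; \sum_{k=1}^{K} \pi_k \int \prod_{j=1}^{n_i}
\mathcal{N}\!\big(y_{ij} \mid g(t_{ij}, \beta_k, b_i),\, \sigma_k^2\big)\;
\mathcal{N}(b_i \mid 0, D_k)\, db_i ,
$$

where $g$ is a nonlinear mean function, $b_i$ are subject-specific random effects, and $(\pi_k, \beta_k, \sigma_k^2, D_k)$ are the parameters of cluster $k$; EM treats the cluster labels (and random effects) as missing data, while the Bayesian route samples them with MCMC.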
7.
Bellazzi R. Riva A. 《IEEE transactions on systems, man, and cybernetics. Part A, Systems and humans : a publication of the IEEE Systems, Man, and Cybernetics Society》1998,28(5):629-636
Many real applications of Bayesian networks (BN) concern problems in which several observations are collected over time on a certain number of similar plants. This situation is typical of medical monitoring, in which several measurements of the relevant physiological quantities are available over time for a population of patients under treatment, and the conditional probabilities that describe the model are usually obtained from the available data through a suitable learning algorithm. In situations with small data sets for each plant, it is useful to reinforce the parameter estimation process of the BN by taking into account the observations obtained from other similar plants. On the other hand, a desirable feature to be preserved is the ability to learn individualized conditional probability tables, rather than pooling together all the available data. In this work we apply a Bayesian hierarchical model that is able to preserve individual parameterization and, at the same time, to allow the conditionals of each plant to borrow strength from all the experience contained in the database. A testing example and an application in the context of diabetes monitoring are shown.
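One common way to formalize this borrowing of strength (a generic hierarchical sketch, not necessarily the exact model of the paper) is a Dirichlet–multinomial prior on each plant's conditional probability table:

$$
\theta^{(p)}_{\cdot \mid u} \;\sim\; \mathrm{Dirichlet}\big(\alpha\, m_{\cdot \mid u}\big),
\qquad
X^{(p)}_t \mid \mathrm{pa}\!\left(X\right)_t = u \;\sim\; \mathrm{Categorical}\big(\theta^{(p)}_{\cdot \mid u}\big),
$$

where $p$ indexes plants (patients), $u$ is a parent configuration, $m$ is a population-level mean table, and $\alpha$ controls how strongly each plant's individual table is shrunk toward the population table when its own data are scarce.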
8.
Amol Pande Liang Li Jeevanantham Rajeswaran John Ehrlinger Udaya B. Kogalur Eugene H. Blackstone Hemant Ishwaran 《Machine Learning》2017,106(2):277-305
Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically using P-splines with an estimated smoothing parameter. In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data on forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data.
9.
Kuo-Chin Lin 《Computational statistics & data analysis》2010,54(7):1872-1880
Longitudinal studies involving categorical responses are extensively applied in many fields of research and are often fitted by the generalized estimating equations (GEE) approach and generalized linear mixed models (GLMMs). The assessment of model fit is an important issue for model inference. The purpose of this article is to extend Pan's (2002a) goodness-of-fit tests for GEE models with longitudinal binary data to tests for logistic proportional odds models with longitudinal ordinal data. Two methods, based on the Pearson chi-squared test and the unweighted sum of squared residuals, are developed, and the approximate expectations and variances of the test statistics are easily computed. Four major variants of working correlation structures, independent, AR(1), exchangeable, and unspecified, are considered to estimate the variances of the proposed test statistics. Simulation studies of the type I error rate and the power of the proposed tests are presented for various sample sizes. Furthermore, the approaches are demonstrated on two real data sets.
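For orientation, the two statistics take the following familiar forms in the binary-GEE case that Pan (2002a) studied (the paper generalizes them to ordinal responses under proportional odds; notation here is generic):

$$
X^2 \;=\; \sum_{i=1}^{n}\sum_{j=1}^{t_i} \frac{(y_{ij}-\hat{\mu}_{ij})^2}{\hat{\mu}_{ij}\,(1-\hat{\mu}_{ij})},
\qquad
U \;=\; \sum_{i=1}^{n}\sum_{j=1}^{t_i} (y_{ij}-\hat{\mu}_{ij})^2 ,
$$

with approximate means and variances of $X^2$ and $U$ computed under the fitted GEE model (for each working correlation structure) to calibrate the tests.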
10.
Longitudinal data refer to the situation where repeated observations are available for each sampled object. Clustered data, where observations are nested in a hierarchical structure within objects (without time necessarily being involved), represent a similar type of situation. Methodologies that take this structure into account allow for the possibilities of systematic differences between objects that are not related to attributes and autocorrelation within objects across time periods. A standard methodology in the statistics literature for this type of data is the mixed effects model, where these differences between objects are represented by so-called “random effects” that are estimated from the data (population-level relationships are termed “fixed effects,” together resulting in a mixed effects model). This paper presents a methodology that combines the structure of mixed effects models for longitudinal and clustered data with the flexibility of tree-based estimation methods. We apply the resulting estimation method, called the RE-EM tree, to pricing in online transactions, showing that the RE-EM tree is less sensitive to parametric assumptions and provides improved predictive power compared to linear models with random effects and regression trees without random effects. We also apply it to a smaller data set examining accident fatalities, and show that the RE-EM tree strongly outperforms a tree without random effects while performing comparably to a linear model with random effects. We also perform extensive simulation experiments to show that the estimator improves predictive performance relative to regression trees without random effects and is comparable or superior to using linear models with random effects in more general situations.
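The alternating estimation idea behind the RE-EM tree can be sketched as follows. This is an illustrative reimplementation under simplifying assumptions (random intercepts only, known variance components), not the authors' code; the function name and defaults are ours:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def reem_tree(X, y, groups, n_iter=20, sigma2=1.0, tau2=1.0, **tree_kwargs):
    """Fit an RE-EM-style regression tree with a random intercept per group.

    Alternates between (1) fitting a tree to the response with the current
    random intercepts subtracted and (2) updating the random intercepts as
    shrunken group means of the tree residuals.  The residual variance
    sigma2 and intercept variance tau2 are held fixed here; the full
    method also estimates them (e.g. via a linear mixed model).
    """
    groups = np.asarray(groups)
    y = np.asarray(y, dtype=float)
    b = {g: 0.0 for g in np.unique(groups)}
    tree = DecisionTreeRegressor(**tree_kwargs)
    for _ in range(n_iter):
        # 1) remove the current random effects and refit the tree
        y_adj = y - np.array([b[g] for g in groups])
        tree.fit(X, y_adj)
        resid = y - tree.predict(X)
        # 2) BLUP-style shrinkage update of each group's intercept
        for g in b:
            r = resid[groups == g]
            b[g] = r.sum() / (len(r) + sigma2 / tau2)
    return tree, b
```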
11.
12.
M. Helena Gonçalves M. Salomé Cabral Eduardo Escrich 《Computational statistics & data analysis》2007,51(12):6511-6520
In many cancer studies and clinical research, repeated observations of response variables are taken over time on each individual in one or more treatment groups. In such cases the repeated observations of each vector response are likely to be correlated, and the autocorrelation structure of the repeated data plays a significant role in the estimation of regression parameters. A random intercept model for count data is developed using exact maximum-likelihood estimation via numerical integration. A simulation study is performed to compare the proposed methodology with the traditional generalized linear mixed model (GLMM) approach and with the GLMM when the penalized quasi-likelihood method is used to perform maximum-likelihood estimation. The methodology is illustrated by analyzing data sets containing longitudinal measurements of the number of tumors in a carcinogenesis experiment studying the influence of lipids on the development of cancer.
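In generic notation, the marginal likelihood contribution that such a random-intercept count model maximizes is (stated here for the Poisson case, which is an assumption on our part)

$$
L_i(\beta, \sigma) \;=\; \int_{-\infty}^{\infty} \prod_{j=1}^{n_i}
\frac{\lambda_{ij}^{\,y_{ij}}\, e^{-\lambda_{ij}}}{y_{ij}!}\;
\phi(b_i; 0, \sigma^2)\, db_i ,
\qquad
\lambda_{ij} = \exp\!\big(x_{ij}^\top \beta + b_i\big),
$$

and the one-dimensional integral can be evaluated essentially exactly by Gauss–Hermite quadrature, which is what distinguishes this route from the penalized quasi-likelihood approximation used in many GLMM fits.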
13.
Authenticating streaming data is a very important research area due to its wide range of applications. Previous technologies mainly focused on authenticating data packets at the IP layer and ensuring the robustness of the verification. These schemes usually incur large communications overhead, which is not desirable in applications with limited bandwidth. In this paper, we propose a novel fragile watermarking algorithm which verifies the integrity of streaming data at the application layer. The data are divided into groups based on synchronization points, so each group can be synchronized and any modification made to one group affects at most two groups. A unique watermark is embedded directly into each group to save communications bandwidth. The embedded watermark can detect as well as locate any modifications made to a data stream. To ensure the completeness of the data stream, watermarks are chained across groups so that no matter how much data is deleted, the deletions can be correctly detected. Security analysis and experimental results show that the proposed scheme can efficiently detect and locate modifications and ensure the completeness of data streams.
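The group-and-chain construction can be illustrated with a keyed-hash sketch. This is hypothetical and simplified: the actual scheme embeds the watermark inside the data values themselves rather than carrying separate MACs, and the function names here are ours:

```python
import hmac
import hashlib

def group_watermarks(groups, key, prev_digest=b""):
    """Compute a chained fragile watermark for each group of stream items.

    Each group's MAC covers its own items plus the previous group's MAC,
    so deleting or reordering whole groups breaks the chain, while a
    modification inside one group invalidates only that group's mark.
    """
    marks = []
    for items in groups:
        payload = b"".join(str(x).encode() for x in items) + prev_digest
        digest = hmac.new(key, payload, hashlib.sha256).digest()
        marks.append(digest)
        prev_digest = digest
    return marks

# Verification recomputes the chain over the received groups and compares
# each recomputed mark with the embedded one to detect and locate changes.
```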
14.
The paper considers the use of a method for the identification and verification of significant patterns to find clear, measurable differences between groups of countries in how the dynamics of their macroeconomic indicators are related. The analysis is conducted using panel data that include annual values of a number of economic indicators over specified time intervals. An approach based on permutation tests is used to take into account the effect of multiple testing. A technology that combines correlation analysis with the detection of significant patterns makes it possible to reveal statistically significant differences between groups of countries with the two types of institutional matrices identified by sociologists.
15.
An improved MT-NT algorithm for gross error detection and rectification of process measurement data
This paper presents an improved MT-NT algorithm for the detection and rectification of gross errors in process measurement data. The improved algorithm adopts a sequential detection-and-rectification strategy, which effectively resolves the rank-deficiency problem of the coefficient matrix that arises during gross error detection, reduces the computational load, and improves the availability and completeness of the information. The flowchart and steps of the algorithm are given, and process measurement data rectification software was developed using an object-oriented approach and the C++ language. Case studies show that the algorithm effectively detects gross errors in measurement data while avoiding the rank-deficiency problem of the coefficient matrix during computation, demonstrating its practical usefulness.
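For context, the classical measurement test on which MT-NT-type methods build can be stated in standard data-reconciliation notation (this is a generic formulation, not taken from the paper): for linear balance constraints $Ax = 0$, measurements $y$ with covariance $\Sigma$, the reconciliation adjustments and the per-measurement test statistic are

$$
a \;=\; \Sigma A^{\top}\big(A \Sigma A^{\top}\big)^{-1} A\, y,
\qquad
z_i \;=\; \frac{|a_i|}{\sqrt{\big[\Sigma A^{\top}(A \Sigma A^{\top})^{-1} A \Sigma\big]_{ii}}} ,
$$

and measurements with $z_i$ above a critical value are flagged as containing gross errors. Sequential detection and rectification then treats one suspect at a time, which is where the rank deficiency of the reduced constraint matrix addressed by the improved algorithm can arise.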
16.
Romain Neugebauer Mark J. van der Laan 《Computational statistics & data analysis》2006,51(3):1676-1697
In a companion paper, [Neugebauer, R., van der Laan, M.J., 2006b. Causal effects in longitudinal studies: definition and maximum likelihood estimation. Comput. Stat. Data. Anal., this issue, doi:10.1016/j.csda.2006.06.013], we provided an overview of causal effect definition with marginal structural models (MSMs) in longitudinal studies. A parametric MSM (PMSM) and a non-parametric MSM (NPMSM) approach were described for the representation of causal effects in pooled or stratified analyses of treatment effects on time-dependent outcomes. Maximum likelihood estimation, also referred to as G-computation estimation, was detailed for these causal effects. In this paper, we develop new algorithms for the implementation of the G-computation estimators of both NPMSM and PMSM causal effects. Current algorithms rely on Monte Carlo simulation of all possible treatment-specific outcomes, also referred to as counterfactuals or potential outcomes. This task becomes computationally impracticable (a) in studies with a continuous treatment, and/or (b) in longitudinal studies with long follow-up with or without time-dependent outcomes. The proposed algorithms address this important computing limitation inherent to G-computation estimation in most longitudinal studies. Finally, practical considerations about the proposed algorithms lead to a further generalization of the definition of NPMSM causal effects in order to allow more reliable applications of these methodologies to a broader range of real-life studies. Results are illustrated with two simulation studies.
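The estimand at the heart of G-computation is the longitudinal g-formula; in standard notation (not specific to this paper), the mean outcome under a fixed treatment regime $\bar a$ is

$$
E\big[Y_{\bar a}\big] \;=\; \sum_{\bar l}\,
E\big[Y \mid \bar A = \bar a,\ \bar L = \bar l\big]\;
\prod_{t} P\big(L_t = l_t \mid \bar L_{t-1} = \bar l_{t-1},\ \bar A_{t-1} = \bar a_{t-1}\big),
$$

where $\bar L$ collects the time-dependent covariate history; evaluating this sum, or simulating counterfactual trajectories from the fitted factors for every candidate regime, is the step whose computational cost the proposed algorithms aim to reduce.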
17.
Ziqi Chen Ning-Zhong Shi Wei Gao Man-Lai Tang 《Computational statistics & data analysis》2011,55(12):3344-3354
Semiparametric methods for longitudinal data with dependence within subjects have recently received considerable attention. Existing approaches that focus on modeling the mean structure require a correct specification of the covariance structure, as misspecified covariance structures may lead to inefficient or biased mean parameter estimates. Besides, computation and estimation problems arise when the repeated measurements are taken at irregular and possibly subject-specific time points, the dimension of the covariance matrix is large, and positive definiteness of the covariance matrix is required. In this article, motivated by the modified Cholesky decomposition, we propose a profile kernel approach based on semiparametric partially linear regression models that models the mean and covariance structures simultaneously. We also study the large-sample properties of the parameter estimates. The proposed method is evaluated through simulation and applied to a real dataset. Both theoretical and empirical results indicate that properly taking into account the within-subject correlation among the responses using our method can substantially improve efficiency.
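The modified Cholesky decomposition referred to here has the standard form (generic statement, with our notation): for a subject's covariance matrix $\Sigma$,

$$
T \Sigma T^{\top} = D,
\qquad
y_{ij} - \mu_{ij} \;=\; \sum_{k < j} \phi_{jk}\,(y_{ik} - \mu_{ik}) + \varepsilon_{ij},
\quad \operatorname{Var}(\varepsilon_{ij}) = d_j ,
$$

where $T$ is unit lower triangular with entries $-\phi_{jk}$ below the diagonal and $D = \mathrm{diag}(d_1, d_2, \dots)$. The generalized autoregressive parameters $\phi_{jk}$ and the log innovation variances $\log d_j$ are unconstrained, so any model for them yields a positive definite $\Sigma$, which is what makes this parameterization attractive for joint mean-covariance modeling.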
18.
We develop several kernel methods for classification of longitudinal data and apply them to detect cognitive decline in the elderly. We first develop mixed-effects models, a type of hierarchical empirical Bayes generative model, for the time series. After demonstrating their utility in likelihood ratio classifiers (and the improvement over standard regression models for such classifiers), we develop novel Fisher kernels based on mixtures of mixed-effects models and use them in support vector machine classifiers. The hierarchical generative model allows us to handle variations in sequence length and sampling interval gracefully. We also give nonparametric kernels not based on generative models, but rather on the reproducing kernel Hilbert space. We apply the methods to detecting cognitive decline from longitudinal clinical data on motor and neuropsychological tests. The likelihood ratio classifiers based on the neuropsychological tests perform better than classifiers based on the motor behavior tests. Discriminant classifiers perform better than likelihood ratio classifiers for the motor behavior tests.
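The Fisher-kernel construction referred to here has the standard form (generic definition, not the paper's specific kernel): with $\hat\theta$ the fitted parameters of the generative model (here, a mixture of mixed-effects models),

$$
U_x \;=\; \nabla_{\theta} \log p\big(x \mid \hat{\theta}\big),
\qquad
K(x, x') \;=\; U_x^{\top}\, \mathcal{I}^{-1}\, U_{x'} ,
$$

where $\mathcal{I}$ is the Fisher information matrix (often replaced by the identity in practice); the resulting kernel $K$ is then plugged into a support vector machine, letting a discriminative classifier exploit the generative model's handling of unequal sequence lengths and sampling intervals.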
19.
20.
Objective: To tackle the extraction of adverse drug reaction events in electronic health records. The challenge lies in inferring a robust prediction model from highly unbalanced data. According to our manually annotated corpus, only 6% of the drug-disease entity pairs trigger a positive adverse drug reaction event, and this low ratio makes machine learning difficult. Method: We present a hybrid system utilising a self-developed morpho-syntactic and semantic analyser for medical texts in Spanish. It performs named entity recognition of drugs and diseases and adverse drug reaction event extraction. The event extraction stage operates using rule-based and machine learning techniques. Results: We assess both the base classifiers, namely a knowledge-based model and an inferred classifier, and also the resulting hybrid system. Moreover, for the machine learning approach, an analysis of each particular bio-cause triggering the adverse drug reaction is carried out. Conclusions: One of the contributions of the machine learning based system is its ability to deal with both intra-sentence and inter-sentence events in a highly skewed classification environment. Moreover, the knowledge-based and the inferred model are complementary in terms of precision and recall: while the former provides high precision and low recall, the latter is the other way around. As a result, an appropriate hybrid approach is able to benefit from both approaches and also improve on them, which is the underlying motivation for selecting the hybrid approach. In addition, this is the first system dealing with real electronic health records in Spanish.