Similar Documents
20 similar documents found (search time: 31 ms)
1.
Metabolite profiling in biomarker discovery, enzyme substrate assignment, drug activity/specificity determination, and basic metabolic research requires new data preprocessing approaches to correlate specific metabolites to their biological origin. Here we introduce an LC/MS-based data analysis approach, XCMS, which incorporates novel nonlinear retention time alignment, matched filtration, peak detection, and peak matching. Without using internal standards, the method dynamically identifies hundreds of endogenous metabolites for use as standards, calculating a nonlinear retention time correction profile for each sample. Following retention time correction, the relative metabolite ion intensities are directly compared to identify changes in specific endogenous metabolites, such as potential biomarkers. The software is demonstrated using data sets from a previously reported enzyme knockout study and a large-scale study of plasma samples. XCMS is freely available under an open-source license at http://metlin.scripps.edu/download/.
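The nonlinear retention-time correction step described above can be illustrated with a short sketch. The code below is not the XCMS implementation (XCMS itself is distributed from the URL above); it is a minimal Python illustration, using hypothetical landmark peaks, of the general idea: fit a smooth curve to the retention-time deviations of peaks shared across samples and subtract it from all retention times in that sample.

# Minimal sketch of nonlinear retention-time correction (not the XCMS code).
# Assumes "landmark" peaks have already been matched across samples.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Hypothetical landmark retention times (s): median across samples vs. one sample.
median_rt = np.linspace(60, 1200, 40)
sample_rt = median_rt + 5 * np.sin(median_rt / 300) + rng.normal(0, 0.5, median_rt.size)

# Fit a smooth deviation profile (sample - median) as a function of sample RT.
deviation = sample_rt - median_rt
profile = UnivariateSpline(sample_rt, deviation, k=3, s=len(sample_rt))

# Correct every detected peak in the sample by subtracting the predicted deviation.
all_peak_rt = rng.uniform(60, 1200, 500)
corrected_rt = all_peak_rt - profile(all_peak_rt)

print("max residual deviation on landmarks:",
      np.abs((sample_rt - profile(sample_rt)) - median_rt).max())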

2.
1H NMR spectroscopy potentially provides a robust approach for high-throughput metabolic screening of biofluids such as urine and plasma, but sample handling and preparation need careful optimization to ensure that spectra accurately report biological status or disease state. We have investigated the effects of storage temperature and time on the 1H NMR spectral profiles of human urine from two participants, collected three times a day on four different days. These were analyzed using modern chemometric methods. Analytical and preparation variation (tested between -40 °C and room temperature) and time of storage (to 24 h) were found to be much less influential than biological variation in sample classification. Statistical total correlation spectroscopy and discriminant function methods were used to identify the specific metabolites that were hypervariable due to preparation and biology. Significant intraindividual variation in metabolite profiles was observed even for urine collected on the same day and after at least 6 h of fasting. The effect of long-term storage at different temperatures was also investigated, showing that urine is stable if frozen for at least 3 months and that storage at room temperature for long periods (1-3 months) results in a metabolic profile explained by bacterial activity. Presampling (e.g., previous day) intake of food and medicine can also strongly influence the urinary metabolic profiles, indicating that collection of detailed participant historical metadata is important for interpretation of metabolic phenotypes and for avoiding false biomarker discovery.
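As a rough illustration of the statistical total correlation spectroscopy (STOCSY) step mentioned above, the sketch below (synthetic spectra, not the authors' data or code) correlates the intensity of one "driver" spectral variable with every other variable across a set of spectra; peaks belonging to the same molecule co-vary across samples and therefore show high correlation.

# Illustrative STOCSY-style correlation analysis on synthetic spectra.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_points = 60, 1000
ppm = np.linspace(0.5, 9.5, n_points)

# Synthetic data: one metabolite contributes two peaks (indices 200 and 650)
# whose intensities co-vary across samples; the rest is noise.
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n_samples)
spectra = rng.normal(0, 0.05, (n_samples, n_points))
for idx in (200, 650):
    spectra[:, idx] += conc

driver = 200  # index of the peak chosen as the STOCSY driver
# Pearson correlation of the driver intensity with every spectral variable.
x = spectra[:, driver]
x_c = x - x.mean()
s_c = spectra - spectra.mean(axis=0)
corr = (s_c * x_c[:, None]).sum(axis=0) / (
    np.sqrt((s_c ** 2).sum(axis=0)) * np.sqrt((x_c ** 2).sum()))

print("variables most correlated with the driver:", np.argsort(-corr)[:3])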

3.
The only widely used and accepted method for long-term cell preservation is storage below -130 °C. The biosciences will make increasing use of preservation and place new demands on it. Currently, cells are frozen in volumes greater than 1 ml, but the new cell and implantation therapies (particularly those using stem cells) will require accurately defined freezing and storage conditions for each single cell. Broadly based, routine freezing of biological samples allows the advantage of retrospective analysis and the possibility of saving genetic rights. For such applications, one billion is a modest estimation for the number of samples. Current cryotechniques cannot handle so many samples in an efficient and economic way, and the need for new cryotechnology is evident. The interdisciplinary approach presented here should lead to a new sample storage and operating strategy that fulfils the needs mentioned above. Fundamental principles of this new kind of smart sample storage are: (i) miniaturisation; (ii) modularisation; (iii) information-sample integration, i.e. freezing memory chips with samples; and (iv) physical and logical access to samples and information without thawing the samples. In contrast to current sample systems, the prototyped family of intelligent cryosubstrates allows the recovery of single wells (parts) of the substrate without thawing the rest of the sample. The development of intelligent cryosubstrates is linked to developments in high-throughput freezing, high packing density storage and minimisation of cytotoxic protective agents.

4.
Zhang N, Doucette A, Li L. Analytical Chemistry 2001, 73(13): 2968-2975.
Sodium dodecyl sulfate (SDS) is widely used in protein sample workup. However, many mass spectrometric methods cannot tolerate the presence of this strong surfactant in a protein sample. We present a practical and robust technique based on a two-layer matrix/sample deposition method for the analysis of protein and peptide samples containing SDS by matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS). The two-layer method involves the deposition of a mixture of sample and matrix on top of a thin layer of matrix crystals. It was found that for SDS-containing samples, the intensity of the MALDI signals can be affected by the conditions of sample preparation: on-probe washing, choice of matrix, deposition method, solvent system, and protein-to-SDS ratio. However, we found that, under appropriate conditions, the two-layer method gave reliable MALDI signals for samples with levels of SDS up to approximately 1%. The applications of this method are demonstrated for MALDI analysis of hydrophobic membrane proteins as well as bacterial extracts. We envision that this two-layer method, capable of handling impure samples including those containing SDS, will play an important role in protein molecular weight analysis as well as in proteome identification by MALDI-MS and MS/MS.

5.
System reliability depends on inherent mechanical and structural aging factors as well as on operational and environmental conditions, which could enhance (or smoothen) such factors. In practice, the involved dependences may burden the modeling of the reliability behavior over time, in which traditional stochastic modeling approaches may likely fail. Empirical prediction methods, such as support vector machines (SVMs), become a valid alternative whenever reliable time series data are available. However, the prediction performance of SVMs depends on the setting of a number of parameters that influence the effectiveness of the training stage during which the SVMs are constructed based on the available data set. The problem of choosing the most suitable values for the SVM parameters can be framed in terms of an optimization problem aimed at minimizing a prediction error. In this work, this problem is solved by particle swarm optimization (PSO), a probabilistic approach based on an analogy with the collective motion of biological organisms. SVM in liaison with PSO is then applied to tackle reliability prediction problems based on time series data of engineered components. Comparisons of the obtained results with those given by other time series techniques indicate that the PSO + SVM model is able to provide reliability predictions with comparable or greater accuracy.
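A minimal sketch of the PSO + SVM idea follows. It is not the authors' implementation: it trains scikit-learn's SVR on a hypothetical degradation time series and uses a bare-bones particle swarm to pick (C, gamma, epsilon) by minimizing validation error.

# Bare-bones PSO tuning of SVR hyperparameters on a toy reliability time series.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Hypothetical reliability data: sliding-window regression on a noisy decay curve.
t = np.arange(200)
series = np.exp(-t / 150.0) + rng.normal(0, 0.01, t.size)
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

def val_error(params):
    """Validation MSE of an SVR with log10-scaled (C, gamma, epsilon)."""
    C, gamma, eps = 10.0 ** params
    model = SVR(C=C, gamma=gamma, epsilon=eps).fit(X_tr, y_tr)
    return np.mean((model.predict(X_va) - y_va) ** 2)

# Particle swarm over log10(C), log10(gamma), log10(epsilon).
lo, hi = np.array([-1, -3, -4]), np.array([3, 1, -1])
n_particles, n_iter = 15, 20
pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([val_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([val_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best log10(C, gamma, epsilon):", gbest, "validation MSE:", pbest_f.min())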

6.
Biomarkers provide clinicians with an important tool for disease assessment. Many different biomarkers have been discovered, but few of them suffice as stand-alone indicators for disease presence or prognosis. Because no single biomarker can be relied upon for accurate disease detection, there has been a substantial push for new multianalyte screening methods. Furthermore, there is a need to push assays toward a point-of-care technology to reduce the time between clinical analysis and medical intervention and to minimize artifacts created during sample storage. There are currently, however, few inexpensive multianalyte methods for disease detection that can function in a point-of-care setting. A new approach which bridges the gap between traditional immunoassays and high-density microarrays by utilizing microfluidics, immunoassays, and micellar electrokinetic chromatography (MEKC) is discussed here. This chemistry, the cleavable tag immunoassay (CTI), is a low- to medium-density heterogeneous immunoassay designed to detect 1-20 analytes simultaneously. Although similar to traditional sandwich immunoassays, this approach is unique because the signal is not imaged on the surface; instead, a fluorescent tag is chemically cleaved from the antibody and analyzed by microchip MEKC. In this report, the CTI chemistry is used for the detection of four cardiac biomarkers elevated in acute myocardial infarction. Limit of detection (LOD) and dynamic range are reported for all biomarkers, with LODs on the order of low nanograms per milliliter to low picograms per milliliter. Most importantly, the dynamic range for each of the biomarkers spans the boundary between normal and elevated levels. Finally, elevated marker levels were measured in spiked human serum samples.
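The limit-of-detection and dynamic-range figures quoted above are typically read off a sigmoidal calibration curve. The sketch below is not taken from the paper; it fits a hypothetical four-parameter logistic (4PL) calibration with SciPy and estimates an LOD as the concentration at which the fitted response first exceeds the blank mean plus three standard deviations.

# Hedged sketch: 4PL immunoassay calibration and a blank+3SD limit of detection.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrator concentrations (ng/mL) and fluorescence responses.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([102, 110, 150, 290, 700, 1450, 2100, 2350], dtype=float)

popt, _ = curve_fit(four_pl, conc, resp, p0=[100, 1.0, 1.0, 2400],
                    bounds=([0, 0.1, 1e-3, 0], [1e4, 10, 1e3, 1e5]))

# Blank replicates define the detection threshold (mean + 3 SD of the blank).
blank = np.array([98, 101, 104, 99, 103], dtype=float)
threshold = blank.mean() + 3 * blank.std(ddof=1)

# Numerically invert the fitted curve: first concentration whose predicted
# response exceeds the blank-derived threshold.
grid = np.logspace(-3, 2, 2000)
above = four_pl(grid, *popt) >= threshold
lod = grid[above][0] if above.any() else float("nan")
print(f"estimated LOD: {lod:.3f} ng/mL")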

7.
We report a methodology for the rapid determination of biomarkers in saliva. The method is based on direct coupling of a headspace sampler with a mass spectrometer. The saliva samples are subjected to the headspace generation process, and the volatiles generated are introduced directly into the mass spectrometer, thereby obtaining a fingerprint of the sample analyzed. The main advantage of the proposed methodology is that no prior chromatographic separation and no sample manipulation are required. The following model compounds were studied to check the possibilities of the methodology: methyl tert-butyl ether and styrene as biomarkers of exposure, and dimethyl disulfide, limonene, and 2-ethyl-1-hexanol as biomarkers of disease. The method was applied to the determination of biomarkers in 28 saliva samples: 24 of them were from healthy volunteers, and the others were from patients with different types of illness (including different types of cancer). Additionally, a separative analysis by GC/MS was performed for confirmatory purposes, and both methods provided similar results.

8.
Although NMR spectroscopic techniques coupled with multivariate statistics can yield much useful information for classifying biological samples based on metabolic profiles, biomarker identification remains a time-consuming and complex procedure involving separation methods, two-dimensional NMR, and other spectroscopic tools. We present a new approach to aid complex biomixture analysis that combines diffusion-ordered (DO) NMR spectroscopy with statistical total correlation spectroscopy (STOCSY) and demonstrate its application in the characterization of urinary biomarkers and enhanced information recovery from plasma NMR spectra. This method relies on calculation and display of the covariance of signal intensities from the various nuclei on the same molecule across a series of spectra collected under different pulsed field gradient conditions that differentially attenuate the signal intensities according to translational molecular diffusion rates. We term this statistical diffusion-ordered spectroscopy (S-DOSY). We have also developed a new visualization tool in which the apparent diffusion coefficients from DO spectra are projected onto a 1D NMR spectrum (diffusion-ordered projection spectroscopy, DOPY). Both methods, either alone or in combination, have the potential for general applications to any complex mixture analysis where the sample contains compounds with a range of diffusion coefficients.
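The diffusion-ordered projection described above rests on the standard pulsed-field-gradient attenuation relationship I(b) = I(0)·exp(-D·b). The sketch below (synthetic data, not the authors' software) estimates an apparent diffusion coefficient for every spectral point by a log-linear fit across the gradient series (the per-peak D values a DOPY-style projection would display) and also forms the covariance matrix of intensities across the series.

# Illustrative per-point apparent diffusion coefficients from a gradient series.
import numpy as np

rng = np.random.default_rng(3)
n_points = 400
b_values = np.linspace(0.05, 1.0, 8)  # arbitrary gradient-dependent b factors

# Synthetic mixture: a small molecule (fast diffusion) and a protein (slow).
true_D = np.where(np.arange(n_points) < 200, 5.0, 0.5)
base = rng.uniform(0.5, 1.5, n_points)
spectra = base * np.exp(-np.outer(b_values, true_D))
spectra += rng.normal(0, 1e-3, spectra.shape)

# Log-linear least-squares fit per spectral point: ln I = ln I0 - D * b.
logI = np.log(np.clip(spectra, 1e-12, None))
A = np.column_stack([np.ones_like(b_values), -b_values])
coef, *_ = np.linalg.lstsq(A, logI, rcond=None)
apparent_D = coef[1]          # one apparent D per spectral point (DOPY-like trace)

# Covariance of intensities across the gradient series (S-DOSY-like matrix).
cov = np.cov(spectra, rowvar=False)
print("mean fitted D, fast region:", apparent_D[:200].mean(),
      "slow region:", apparent_D[200:].mean())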

9.
This paper evaluates various sample preparation methods for multicapillary gel electrophoresis-based glycan analysis to support electrokinetic injection. First, the removal of excess derivatization reagent is discussed. Although the Sephadex G10-filled multiscreen 96-well filter plate and Sephadex G10-filled pipet tips enabled increased analysis sensitivity, polyamide DPA-6S pipet tips worked particularly well. In the latter case, an automated liquid-handling system was used to increase purification throughput, necessary to feed the multicapillary electrophoresis unit. Problems associated with the high glucose content of biological samples such as normal human plasma were solved by applying ultrafiltration. Finally, a volatile buffer system was developed for exoglycosidase-based carbohydrate analysis.

10.
It is shown how various exact nonparametric inferences based on an ordinary right or progressively Type-II right censored sample can be generalized to situations where two independent samples are combined. We derive the relevant formulas for the combined ordered samples to construct confidence intervals for a given quantile, prediction intervals, and tolerance intervals. The results are valid for every continuous distribution function. The key results are the derivations of the marginal distribution functions in the combined ordered samples. In the case of ordinary Type-II right censored order statistics, it is shown that the combined ordered sample is no longer distributed as order statistics. Instead, the distribution in the combined ordered sample is closely related to progressively Type-II censored order statistics.
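For context, the standard single-sample result (not the paper's combined-sample derivation) gives the marginal distribution function of the r-th order statistic X_(r) from n independent observations with common continuous distribution function F:

\[
P\bigl(X_{(r)} \le x\bigr) = \sum_{j=r}^{n} \binom{n}{j}\, F(x)^{j}\,\bigl(1 - F(x)\bigr)^{n-j}, \qquad r = 1, \dots, n .
\]

The paper derives the analogous marginal distribution functions for the combined ordered sample formed from two independent, possibly censored, samples; as the abstract notes, under ordinary Type-II right censoring the combined ordered sample is no longer distributed as order statistics, so it does not reduce to this simple binomial form.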

11.
Logistic regression is an important class of prediction methods in engineering applications such as data mining, cost prediction, and risk prediction. Most current logistic regression methods are designed from an optimization criterion; such methods suffer from tedious parameter tuning, poor model interpretability, and estimators without confidence intervals. This paper studies the modeling and inference of group-sparse logistic regression from a Bayesian probabilistic perspective. Specifically, a Bayesian probabilistic model for group-sparse logistic regression is first formulated using a Gaussian variance-mixture representation; an efficient inference algorithm is then designed via variational Bayes. Experimental results on simulated data show that the proposed method achieves good predictive performance.
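To make the group-sparsity idea concrete, the sketch below gives an optimization-based stand-in, not the paper's variational Bayes inference: a plain proximal-gradient group-lasso logistic regression on synthetic data, showing how a group penalty zeroes out whole blocks of coefficients. All data and parameter values are hypothetical.

# Hedged sketch: group-lasso logistic regression by proximal gradient descent
# (an optimization-based stand-in, not the paper's variational Bayes inference).
import numpy as np

rng = np.random.default_rng(4)
n, p, group_size = 300, 20, 5
groups = [np.arange(i, i + group_size) for i in range(0, p, group_size)]

# Synthetic data: only the first group of features is truly predictive.
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:group_size] = rng.normal(size=group_size)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lam, step, w = 0.15, 0.5, np.zeros(p)
for _ in range(1000):
    grad = X.T @ (sigmoid(X @ w) - y) / n       # gradient of the mean logistic loss
    w = w - step * grad
    for g in groups:                            # group soft-thresholding (prox step)
        norm_g = np.linalg.norm(w[g])
        w[g] = 0.0 if norm_g <= lam * step else (1 - lam * step / norm_g) * w[g]

group_norms = [np.linalg.norm(w[g]) for g in groups]
print("group L2 norms (the penalty should suppress the non-predictive groups):",
      group_norms)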

12.
Spectral cytopathology (SCP) is a novel approach for disease diagnosis that utilizes infrared spectroscopy to interrogate the biochemical components of cellular samples and multivariate statistical methods, such as principal component analysis, to analyze and diagnose spectra. SCP has taken vast strides in its application for disease diagnosis over the past decade; however, fixation-induced changes and sample handling methods are still not systematically understood. Conversely, fixation and staining methods in conventional cytopathology, typically involving protocols to maintain the morphology of cells, have been documented and widely accepted for nearly a century. For SCP, fixation procedures must preserve the biochemical composition of samples so that spectral changes significant to disease diagnosis are not masked. We report efforts to study the effects of fixation protocols commonly used in traditional cytopathology and SCP, including fixed and unfixed methods applied to exfoliated oral (buccal) mucosa cells. Data suggest that the length of time in fixative and the duration of sample storage via desiccation contribute only minor spectral changes, with spectra remaining nearly superimposable. These findings illustrate that changes induced by fixation are negligible in comparison to changes induced by disease.

13.
Efficient enrichment of specific glycoproteins from complex biological samples is of great importance for the discovery of disease biomarkers in biological systems. Recently, phenylboronic acid-based functional materials have been widely used for enrichment of glycoproteins. However, such enrichment has mainly been carried out under alkaline conditions, which differ from the neutral physiological conditions in which glycoproteins exist and may cause unpredictable degradation. In this study, on-demand neutral enrichment of glycoproteins from crude biological samples is accomplished by utilizing the reversible interaction between the cis-diols of glycoproteins and benzoboroxole-functionalized magnetic composite microspheres (Fe3O4/PAA-AOPB). The Fe3O4/PAA-AOPB composite microspheres are deliberately designed and constructed with a high-magnetic-response magnetic supraparticle (MSP) core and a crosslinked poly(acrylic acid) (PAA) shell anchoring abundant benzoboroxole functional groups on the surface. These nanocomposites possess many merits, such as a large enrichment capacity (93.9 mg/g, protein/beads), low non-specific adsorption, a quick enrichment process (10 min), fast magnetic separation (20 s), and high recovery efficiency. Furthermore, the as-prepared Fe3O4/PAA-AOPB microspheres display high selectivity to glycoproteins even in E. coli lysate or fetal bovine serum, showing great potential for the identification of low-abundance glycoproteins as biomarkers in real complex biological systems for clinical diagnosis.

14.
High-throughput experimentation and screening methods are changing workflows and creating new possibilities in biochemistry, organometallic chemistry, and catalysis. However, many high-throughput systems rely on off-line chromatography methods that shift the bottleneck to the analysis stage. On-line or at-line spectroscopic analysis is an attractive alternative. It is fast, noninvasive, and nondestructive and requires no sample handling. The disadvantage is that spectroscopic calibration is time-consuming and complex. Ideally, the calibration model should give reliable predictions while keeping the number of calibration samples to a minimum. In this paper, we employ the net analyte signal approach to build a calibration model for Fourier transform near-infrared measurements, using a minimum number of calibration samples based on blank samples. This approach fits very well to high-throughput setups. With this approach, we can reduce the number of calibration samples to the number of chemical components in the system. Thus, the question is no longer how many but which type of calibration samples one should include in the model to obtain reliable predictions. Various calibration models are tested using Monte Carlo simulations, and the results are compared with experimental data for palladium-catalyzed Heck cross-coupling.
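The net analyte signal (NAS) used above can be written in a few lines of linear algebra: the NAS of the analyte of interest is the part of its pure spectrum orthogonal to the subspace spanned by the spectra of all other components. The sketch below uses hypothetical Gaussian band spectra, not the paper's FT-NIR calibration data.

# Hedged sketch of a net-analyte-signal (NAS) style calculation.
import numpy as np

rng = np.random.default_rng(5)
n_wavelengths = 300

# Hypothetical pure-component spectra: analyte of interest plus interferents.
def gaussian_band(center, width):
    wl = np.arange(n_wavelengths)
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

s_analyte = gaussian_band(120, 15) + 0.4 * gaussian_band(200, 10)
S_other = np.column_stack([gaussian_band(c, 20) for c in (100, 180, 240)])

# Project out the interferent subspace: NAS = (I - S @ pinv(S)) @ s.
P = np.eye(n_wavelengths) - S_other @ np.linalg.pinv(S_other)
nas = P @ s_analyte

# A mixture's analyte contribution can then be quantified against the NAS.
mixture = 2.0 * s_analyte + S_other @ np.array([1.0, 0.5, 3.0])
mixture += rng.normal(0, 1e-3, n_wavelengths)
concentration_estimate = (nas @ mixture) / (nas @ s_analyte)
print("estimated analyte level (true value 2.0):", concentration_estimate)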

15.
Recently, conventional representation-based classification (RBC) methods have demonstrated promising performance in image recognition. However, conventional RBCs use only a single kind of deviation between the test sample and the linear combination of the training samples of each class to perform classification. In many cases, a single kind of deviation corresponding to each class cannot effectively reflect the difference between the test sample and the reconstructed sample of each class. Moreover, in practical applications, limited training samples are not able to reflect the possible changes of the image sufficiently. In this paper, we propose a novel scheme to tackle the above-mentioned problems. Specifically, we first use the original training samples to generate corresponding mirror samples. Thus, the original sample set and its mirror counterpart are treated as two separate training groups. Secondly, we perform collaborative representation classification on these two groups, from which each class yields two kinds of deviations. Finally, we fuse the two kinds of deviations of each class and their correlation coefficient to classify the test sample. The correlation coefficient is defined for the two kinds of deviations of each class. Experimental results on four databases show that the proposed scheme can improve the recognition rate in image-based recognition.
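A simplified sketch of the scheme follows (synthetic "images"; the fusion rule here is a plain equal-weight sum, whereas the paper also folds in a correlation coefficient defined on the two kinds of deviations). Collaborative representation is solved as a ridge regression over all training samples, and class-wise reconstruction residuals from the original and mirrored training sets are fused.

# Simplified sketch of collaborative representation with mirror samples.
import numpy as np

rng = np.random.default_rng(6)
n_classes, per_class, h, w = 3, 10, 8, 8

# Synthetic "images": each class has its own template plus noise.
templates = rng.normal(size=(n_classes, h, w))
train = np.array([templates[c] + 0.3 * rng.normal(size=(h, w))
                  for c in range(n_classes) for _ in range(per_class)])
labels = np.repeat(np.arange(n_classes), per_class)
test = templates[1] + 0.3 * rng.normal(size=(h, w))    # ground truth: class 1

def class_residuals(train_imgs, test_img, lam=0.1):
    """Collaborative representation (ridge) residual of the test sample per class."""
    D = train_imgs.reshape(len(train_imgs), -1).T        # dictionary, columns = samples
    y = test_img.reshape(-1)
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return np.array([np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                     for c in range(n_classes)])

res_orig = class_residuals(train, test)
res_mirror = class_residuals(train[:, :, ::-1], test)   # horizontally flipped set

# Fuse the two kinds of deviations (equal weights here) and pick the smallest.
fused = 0.5 * res_orig + 0.5 * res_mirror
print("predicted class:", int(np.argmin(fused)))         # expected: 1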

16.
In order to improve the efficiency of moisture meter calibrations, we studied the effect of ambient humidity, sample handling, packing and transportation on timber wood (spruce) moisture determination. It was shown experimentally that dry timber samples (12 × 12 × 2.5 cm) reach equilibrium within 30–40 days even when they are moistened at a high relative air humidity (80 %). On the other hand, the major mass loss of moist samples placed at normal laboratory conditions was found to occur during the first few days, with the first 5 days being critical. The effects of sample handling, packing and transportation were studied by means of an interlaboratory comparison between CMI, CETIAT, INRIM, NIS and KRISS. The obtained results show that samples with a moisture content of less than 7 % tend to absorb a small amount of water, whereas samples with a moisture content larger than 15 % tend to desorb a small amount of water during handling and transport, even when vacuum packing and short handling times are used.

17.
Despite its intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
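A minimal sketch of the kind of nonlinear SVM classification described above, using scikit-learn on synthetic stand-in spectra rather than the authors' LIBS measurements:

# Hedged sketch: RBF-kernel SVM classification of spectra with cross-validation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_class, n_channels = 40, 200

# Synthetic spectra for three "formulations", separated by small peak shifts.
def make_class(center):
    wl = np.arange(n_channels)
    peak = np.exp(-0.5 * ((wl - center) / 4.0) ** 2)
    return peak + 0.05 * rng.normal(size=(n_per_class, n_channels))

X = np.vstack([make_class(c) for c in (60, 63, 66)])
y = np.repeat([0, 1, 2], n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())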

18.
LC-MS-based proteomics requires methods with high peak capacity and a high degree of automation, integrated with data-handling tools able to cope with the massive amounts of data produced and to compare them quantitatively. This paper describes an off-line two-dimensional (2D) LC-MS method and its integration with software tools for data preprocessing and multivariate statistical analysis. The 2D LC-MS method was optimized in order to minimize peptide loss prior to sample injection and during the collection step after the first LC dimension, thus minimizing errors from off-column sample handling. The second dimension was run in fully automated mode, injecting onto a nanoscale LC-MS system a series of more than 100 samples, representing fractions collected in the first dimension (8 fractions/sample). As a model study, the method was applied to finding biomarkers for the anti-inflammatory properties of zilpaterol, which are coupled to the beta2-adrenergic receptor. Secreted proteomes from U937 macrophages exposed to lipopolysaccharide in the presence or absence of propranolol or zilpaterol were analysed. Multivariate statistical analysis of 2D LC-MS data, based on principal component analysis, and subsequent targeted LC-MS/MS identification of peptides of interest demonstrated the applicability of the approach.

19.
Handoff processes during civil infrastructure operations are transitions between sequential tasks. Typical handoffs constantly involve cognitive and communication activities among operations personnel, as well as traveling activities. Large civil infrastructures, such as nuclear power plants (NPPs), provide critical services to modern cities but require regular or unexpected shutdowns (i.e., outages) for maintenance. Handoffs during such an outage contain interwoven workflows and communication activities that pose challenges to the cognitive and communication skills of handoff participants and constantly result in delays. Traveling time and changing field conditions bring additional challenges to effective coordination among multiple groups of people. Historical NPP records studied in this research indicate that even meticulous planning, which begins six months before each outage, can hardly guarantee sufficient back-up plans for handling various unexpected events. Consequently, delays frequently occur in NPP outages and bring significant socioeconomic losses. A synthesis of previous studies on the delay analysis of accelerated maintenance schedules revealed the importance and challenges of handoff modeling. However, existing schedule representation methods can hardly represent the interwoven communication, cognitive, traveling, and working processes of multiple participants collaborating on completing scheduled tasks. Moreover, the lack of formal models that capture how cognitive, waiting, traveling, and communication issues affect outage workflows forces managers to rely on personal experience in diagnosing delays and coordinating the multiple teams involved in outages. This study aims to establish formal models through agent-based simulation to support the analytical assessment of outage schedules with full consideration of the cognitive and communication factors involved in handoffs within NPP outage workflows. Simulation results indicate that the proposed handoff modeling can help predict the impact of cognitive and communication issues on delays propagating throughout outage schedules. Moreover, various activities are fully considered, including traveling between workspaces and waiting. Such delay prediction capability paves the path toward predictive and resilient outage control of NPPs.
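A toy discrete-event sketch of outage handoffs is shown below. It is illustrative only, not the authors' agent-based model; it assumes the simpy library, and all task, travel, and communication durations are hypothetical.

# Toy discrete-event sketch of handoff delays in an outage schedule (uses simpy).
import random
import simpy

random.seed(8)
N_TASKS = 20             # sequential outage tasks, each ending with a handoff
WORK_HOURS = (4, 8)      # nominal task duration range
TRAVEL_HOURS = (0.2, 1.0)
COMM_HOURS = (0.1, 0.5)
MISCOMM_PROB = 0.15      # chance a handoff briefing fails and must be repeated

def outage(env, log):
    for task in range(N_TASKS):
        yield env.timeout(random.uniform(*WORK_HOURS))     # perform the task
        yield env.timeout(random.uniform(*TRAVEL_HOURS))   # travel to the handoff point
        yield env.timeout(random.uniform(*COMM_HOURS))     # handoff briefing
        while random.random() < MISCOMM_PROB:              # repeat if miscommunicated
            yield env.timeout(random.uniform(*COMM_HOURS))
        log.append(env.now)

durations = []
for run in range(200):                   # Monte Carlo over many simulated outages
    env = simpy.Environment()
    log = []
    env.process(outage(env, log))
    env.run()
    durations.append(log[-1])

nominal = N_TASKS * sum(WORK_HOURS) / 2
print(f"mean simulated outage length: {sum(durations)/len(durations):.1f} h "
      f"vs. nominal work content {nominal:.1f} h")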

20.
To differentiate heparin samples with varying amounts of dermatan sulfate (DS) impurities and oversulfated chondroitin sulfate (OSCS) contaminants, proton NMR spectral data for heparin sodium active pharmaceutical ingredient samples from different manufacturers were analyzed using multivariate chemometric techniques. A total of 168 samples were divided into three groups: (a) Heparin, [DS] ≤ 1.0% and [OSCS] = 0%; (b) DS, [DS] > 1.0% and [OSCS] = 0%; (c) OSCS, [OSCS] > 0% with any content of DS. The chemometric models were constructed and validated using two well-established methods: soft independent modeling of class analogy (SIMCA) and unequal class modeling (UNEQ). While SIMCA modeling was conducted using the entire set of variables extracted from the NMR spectral data, UNEQ modeling was combined with variable reduction using stepwise linear discriminant analysis to comply with the requirement that the number of samples per class exceed the number of variables in the model by at least 3-fold. Comparison of the results from these two modeling approaches revealed that UNEQ had greater sensitivity (fewer false negatives) while SIMCA had greater specificity (fewer false positives). For Heparin, DS, and OSCS, respectively, the sensitivity was 78% (56/72), 74% (37/50), and 85% (39/46) from SIMCA modeling and 88% (63/72), 90% (45/50), and 91% (42/46) from UNEQ modeling. Importantly, the specificity of both the SIMCA and UNEQ models was 100% (46/46) for Heparin with respect to OSCS; no OSCS-containing sample was misclassified as Heparin. The specificity of the SIMCA model (45/50, or 90%) was superior to that of the UNEQ model (27/50, or 54%) for Heparin with respect to DS samples. However, the overall prediction ability of the UNEQ model (85%) was notably better than that of the SIMCA model (76%) for the Heparin vs DS vs OSCS classes. The models were challenged with blends of heparin spiked with nonsulfated, partially sulfated, or fully oversulfated chondroitin sulfate A, dermatan sulfate, or heparan sulfate at the 1.0, 5.0, and 10.0 wt % levels. The results from the present study indicate that the combination of 1H NMR spectral data and class modeling techniques (viz., SIMCA and UNEQ) represents a promising strategy for assessing the quality of commercial heparin samples with respect to impurities and contaminants. The methodologies show utility for applications beyond heparin to other complex products.
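A stripped-down illustration of SIMCA-style class modeling follows; it is not the validated models of the study. It uses synthetic spectra and assigns a sample to the class whose PCA model reconstructs it with the smallest residual, without the formal acceptance limits used in practice.

# Hedged sketch of SIMCA-style class modeling: one PCA model per class,
# assignment by smallest reconstruction residual (no formal critical limits).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
n_train, n_vars = 60, 150

def make_spectra(shift, n):
    wl = np.arange(n_vars)
    base = np.exp(-0.5 * ((wl - 70 - shift) / 10.0) ** 2)
    return base + 0.05 * rng.normal(size=(n, n_vars))

classes = {"Heparin": 0.0, "DS": 6.0, "OSCS": 12.0}   # hypothetical spectral shifts
models = {name: PCA(n_components=3).fit(make_spectra(shift, n_train))
          for name, shift in classes.items()}

def residual(pca, x):
    """Q-like residual: distance between x and its PCA reconstruction."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return np.linalg.norm(x - recon.ravel())

test = make_spectra(classes["DS"], 1)[0]
scores = {name: residual(pca, test) for name, pca in models.items()}
print("residuals per class model:", scores)
print("assigned class:", min(scores, key=scores.get))   # expected: DS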
