Similar Documents
20 similar documents found (search time: 31 ms)
1.
Gerssen A, van den Top HJ, van Egmond HP. Analytical Chemistry 2012, 84(1):478-80; discussion 481-3
This recent paper by Otero and co-workers presents some data from analysis of okadaic acid group toxins by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using different instruments, operating parameters, and solvent conditions. They question the suitability of this tool for quantitative analysis. This paper reveals a lack of understanding of critical factors for the successful use of LC-MS methodology in general, as well as some specific proficiency issues with the work reported on the three toxins. We show that there are problems with the conduct and reporting of the experiments, including possible injector carry-over and a lack of quality assurance/quality control (QA/QC) controls. Therefore, the specific conclusions they draw from their data are considered invalid.

2.
The output of LC-MS metabolomics experiments consists of mass-peak intensities identified through a peak-picking/alignment procedure. Besides imperfections in biological samples and instrumentation, data accuracy is highly dependent on the applied algorithms and their parameters. Consequently, quality control (QC) is essential for further data analysis. Here, we present a QC approach that is based on discrepancies between replicate samples. First, the quantile normalization of per-sample log-signal distributions is applied to each group of biologically homogeneous samples. Next, the overall quality of each replicate group is characterized by the Z-transformed correlation coefficients between samples. This general QC allows a tuning of the procedure's parameters that minimizes the inter-replicate discrepancies in the generated output. Subsequently, an in-depth QC measure detects local neighborhoods on a template of aligned chromatograms that are enriched by divergences between intensity profiles of replicate samples. These neighborhoods are determined through a segmentation algorithm. The retention time (RT)-m/z positions of the neighborhoods with local divergences are indicative of incorrect alignment of chromatographic features, technical problems in the chromatograms, or a true biological discrepancy between replicates for particular metabolites. We expect this method to aid in the accurate analysis of metabolomics data and in the development of new peak-picking/alignment procedures.
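The two general QC steps described above (per-group quantile normalization of log signals, then Fisher Z-transformed inter-replicate correlations) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the function names and the simple tie handling are assumptions:

```python
import numpy as np

def quantile_normalize(log_signals):
    """Quantile-normalize per-sample log-signal distributions.

    log_signals: (n_features, n_samples) array of log intensities.
    Each sample's sorted values are replaced by the mean of the
    sorted values across samples (ties handled naively here).
    """
    order = np.argsort(log_signals, axis=0)
    ranks = np.argsort(order, axis=0)          # rank of each value in its column
    mean_sorted = np.sort(log_signals, axis=0).mean(axis=1)
    return mean_sorted[ranks]

def replicate_quality(log_signals):
    """Overall quality of a replicate group: mean Fisher z-transformed
    Pearson correlation over all pairs of replicate samples."""
    n = log_signals.shape[1]
    zs = []
    for i in range(n):
        for j in range(i + 1, n):
            r = np.corrcoef(log_signals[:, i], log_signals[:, j])[0, 1]
            zs.append(np.arctanh(np.clip(r, -0.999999, 0.999999)))
    return float(np.mean(zs))
```

A higher mean z indicates tighter replicate agreement, so sweeping the peak-picking parameters and maximizing this score mirrors the tuning step the abstract describes.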

3.
LC-MS-based proteomics requires methods with high peak capacity and a high degree of automation, integrated with data-handling tools able to cope with the massive data produced and able to quantitatively compare them. This paper describes an off-line two-dimensional (2D) LC-MS method and its integration with software tools for data preprocessing and multivariate statistical analysis. The 2D LC-MS method was optimized in order to minimize peptide loss prior to sample injection and during the collection step after the first LC dimension, thus minimizing errors from off-column sample handling. The second dimension was run in fully automated mode, injecting onto a nanoscale LC-MS system a series of more than 100 samples, representing fractions collected in the first dimension (8 fractions/sample). As a model study, the method was applied to finding biomarkers for the anti-inflammatory properties of zilpaterol, which are coupled to the beta2-adrenergic receptor. Secreted proteomes from U937 macrophages exposed to lipopolysaccharide in the presence or absence of propranolol or zilpaterol were analysed. Multivariate statistical analysis of 2D LC-MS data, based on principal component analysis, and subsequent targeted LC-MS/MS identification of peptides of interest demonstrated the applicability of the approach.
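The principal component analysis step used on the 2D LC-MS feature matrix can be illustrated with a minimal SVD-based sketch. The matrix layout (samples as rows, peak intensities as columns) and the function name are assumptions for illustration only:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD of the mean-centered data matrix.

    X: (n_samples, n_features) matrix of peptide-peak intensities.
    Returns (scores, explained_variance_ratio) for the leading components.
    """
    Xc = X - X.mean(axis=0)                       # mean-center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    var = s**2 / (X.shape[0] - 1)                 # variance along each component
    return scores, var[:n_components] / var.sum()
```

Plotting the scores of treated versus untreated samples on the first two components is the usual way such an analysis reveals candidate biomarker directions.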

4.
An approach using semantic metrics to provide insight into software quality early in the design phase of software development by automatically analysing natural language (NL) design specifications for object-oriented systems is presented. Semantic metrics are based on the meaning of software within the problem domain. In this paper, we extend semantic metrics to analyse design specifications. Since semantic metrics can now be calculated from early in design through software maintenance, they provide a consistent and seamless type of metric that can be collected through the entire lifecycle. We discuss our semMet system, an NL-based program comprehension tool we have expanded to calculate semantic metrics from design specifications. To validate semantic metrics from design specifications and to illustrate their seamless nature across the software lifecycle, we compare semantic metrics from different phases of the lifecycle, and we also compare them to syntactically oriented metrics calculated from the source code. Results indicate semantic metrics calculated from design specifications can give insight into the quality of the source code based on that design. Also, these results illustrate that semantic metrics provide a consistent and seamless type of metric that can be collected through the entire lifecycle.

5.
It is widely accepted that more widespread use of object‐oriented techniques can only come about when there are techniques and tool systems that provide design support beyond visualizing code. Distinct software metrics are considered as being able to support the design by indicating critical components with respect to various quality factors such as maintainability and reliability. Unfortunately, many object‐oriented metrics were defined and applied to classroom projects, but no evidence was given that the metrics are useful and applicable—both from an experience viewpoint and from a tools viewpoint—for industrial object‐oriented development. Distinct complexity metrics have been developed and integrated in a Smalltalk development support system called SmallMetric. Thus we achieved a basis for software analysis (metrics) and development support (critique) of Smalltalk systems. The main concepts of the environment including the underlying metrics are explained, its use and operation are discussed and some results of the implementation and its application to several industrial projects are given with examples. Copyright © 1999 John Wiley & Sons, Ltd.

6.
The utility of packed-column supercritical, subcritical, and enhanced fluidity liquid chromatographies (pcSFC) for high-throughput applications has increased during the past few years. In contrast to traditional reversed-phase liquid chromatography, the addition of a volatile component to the mobile phase, such as CO2, produces a lower mobile-phase viscosity. This allows the use of higher flow rates which can translate into faster analysis times. In addition, the resulting mobile phase is considerably more volatile than the aqueous-based mobile phases that are typically used with LC-MS, allowing the entire effluent to be directed into the MS interface. High-throughput bioanalytical quantitation using pcSFC-MS/MS for pharmacokinetics applications is demonstrated in this report using dextromethorphan as a model compound. Plasma samples were prepared by automated liquid/liquid extraction in the 96-well format prior to pcSFC-MS/MS analysis. Three days of validation data are provided along with study sample data from a patient dosed with commercially available Vicks 44. Using pcSFC and MS/MS, dextromethorphan was quantified in 96-well plates at a rate of approximately 10 min/plate with average intraday accuracy of 9% or better. Daily relative standard deviations (RSDs) were less than 10% for the 2.21 and 14.8 ng/mL quality control (QC) samples, while the RSDs were less than 15% at the 0.554 ng/mL QC level.
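The RSD and accuracy figures quoted for the QC levels are straightforward to compute from replicate measurements. A small sketch of the arithmetic, with a hypothetical helper name (`qc_summary` is not from the paper):

```python
import numpy as np

def qc_summary(measured, nominal):
    """Relative standard deviation (%) and mean relative bias (%) for
    replicate QC measurements at one nominal concentration (ng/mL)."""
    measured = np.asarray(measured, dtype=float)
    rsd = 100.0 * measured.std(ddof=1) / measured.mean()   # precision
    bias = 100.0 * (measured.mean() - nominal) / nominal   # accuracy
    return rsd, bias
```

Acceptance criteria such as "RSD < 15% at the low QC" are then a direct comparison against the returned precision value.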

7.
Sodium dodecyl sulfate (SDS) is one of the most popular laboratory reagents used for biological sample extraction; however, the presence of this reagent in samples challenges LC-MS-based proteomics analyses because it can interfere with reversed-phase LC separations and electrospray ionization. This study reports a simple SDS-assisted proteomics sample preparation method facilitated by a novel peptide-level SDS removal step. In an initial demonstration, SDS was effectively (>99.9%) removed from peptide samples through ion substitution-mediated DS(-) precipitation using potassium chloride (KCl), and excellent peptide recovery (>95%) was observed for <20 μg of peptides. Further experiments demonstrated the compatibility of this protocol with LC-MS/MS analyses. The resulting proteome coverage obtained for both mammalian tissues and bacterial samples was comparable to or better than that obtained for the same sample types prepared using standard proteomics preparation methods and analyzed using LC-MS/MS. These results suggest the SDS-assisted protocol is a practical, simple, and broadly applicable proteomics sample processing method, which can be particularly useful when dealing with samples difficult to solubilize by other methods.

8.
We present a new proteomics analysis pipeline focused on maximizing the dynamic range of detected molecules in liquid chromatography-mass spectrometry (LC-MS) data and accurately quantifying low-abundance peaks to identify those with biological relevance. Although there has been much work to improve the quality of data derived from LC-MS instruments, the goal of this study was to extend the dynamic range of analyzed compounds by making full use of the information available within each data set and across multiple related chromatograms in an experiment. Our aim was to distinguish low-abundance signal peaks from noise by noting their coherent behavior across multiple data sets, and central to this is the need to delay the culling of noise peaks until the final peak-matching stage of the pipeline, when peaks from a single sample appear in the context of all others. The application of thresholds that might discard signal peaks early is thereby avoided, hence the name TAPP: threshold-avoiding proteomics pipeline. TAPP focuses on quantitative low-level processing of raw LC-MS data and includes novel preprocessing, peak detection, time alignment, and cluster-based matching. We demonstrate the performance of TAPP on biologically relevant sample data consisting of porcine cerebrospinal fluid spiked over a wide range of concentrations with horse heart cytochrome c.
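TAPP's central idea, keeping low-intensity peaks and letting their coherence across runs separate signal from noise, can be sketched with a coarse cross-run matcher. The grid binning below is a deliberate simplification of the paper's cluster-based matching, and all names are illustrative assumptions:

```python
from collections import defaultdict

def coherent_peaks(runs, mz_tol=0.01, rt_tol=0.5, min_runs=3):
    """Keep peaks (mz, rt, intensity) that recur in >= min_runs runs,
    instead of discarding low-intensity peaks per-run with a threshold.

    runs: list of per-run peak lists. Peaks are binned on a coarse
    (mz, rt) grid; a bin occupied in enough runs is treated as signal.
    """
    bins = defaultdict(set)        # (mz_bin, rt_bin) -> set of run ids
    members = defaultdict(list)    # same key -> the matched peaks
    for run_id, peaks in enumerate(runs):
        for mz, rt, inten in peaks:
            key = (round(mz / mz_tol), round(rt / rt_tol))
            bins[key].add(run_id)
            members[key].append((run_id, mz, rt, inten))
    return [members[k] for k, ids in bins.items() if len(ids) >= min_runs]
```

A single-run noise spike occupies a bin only once and is culled at this final matching stage, while a faint but reproducible peak survives.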

9.
Liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) has become a standard technique for analyzing complex peptide mixtures to determine composition and relative abundance. Several high-throughput proteomics techniques attempt to combine complementary results from multiple LC-MS and LC-MS/MS analyses to provide more comprehensive and accurate results. To effectively collate and use results from these techniques, variations in mass and elution time measurements between related analyses need to be corrected using algorithms designed to align the various types of data: LC-MS/MS versus LC-MS/MS, LC-MS versus LC-MS/MS, and LC-MS versus LC-MS. Described herein are new algorithms referred to collectively as liquid chromatography-based mass spectrometric warping and alignment of retention times of peptides (LCMSWARP), which use a dynamic elution time warping approach similar to traditional algorithms that correct for variations in LC elution times using piecewise linear functions. LCMSWARP is compared to the equivalent approach based upon linear transformation of elution times. LCMSWARP additionally corrects for temporal drift in mass measurement accuracies. We also describe the alignment of LC-MS results and demonstrate their application to the alignment of analyses from different chromatographic systems, showing the suitability of the present approach for more complex transformations.
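The piecewise linear warping that LCMSWARP builds on has a very compact core once matched anchor peptides are in hand. A minimal sketch (the anchor-selection step, which is the hard part, is assumed done elsewhere):

```python
import numpy as np

def warp_rt(rt, anchors_obs, anchors_ref):
    """Piecewise-linear elution-time warping: map observed retention
    times onto a reference time scale through matched anchor peptides.

    anchors_obs / anchors_ref: increasing arrays of matched RTs in the
    observed and reference runs. Note np.interp clamps times outside
    the anchor range to the end anchors rather than extrapolating.
    """
    return np.interp(rt, anchors_obs, anchors_ref)
```

Each segment between consecutive anchors gets its own linear stretch, which is exactly what distinguishes piecewise warping from a single global linear transformation of elution times.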

10.
Proteomics has grown significantly with the aid of new technologies that consistently are becoming more streamlined. While processing of proteins from a whole cell lysate is typically done in a bottom-up fashion utilizing MS/MS of peptides from enzymatically digested proteins, top-down proteomics is becoming a viable alternative that until recently has been limited largely to offline analysis by tandem mass spectrometry. Here we describe a method for high-resolution tandem mass spectrometry of intact proteins on a chromatographic time scale. In a single liquid chromatography-tandem mass spectrometry (LC-MS/MS) run, we have identified 22 yeast proteins with molecular weights from 14 to 35 kDa. Using anion exchange chromatography to fractionate a whole cell lysate before online LC-MS/MS, we have detected 231 metabolically labeled (14N/15N) protein pairs from Saccharomyces cerevisiae. Thirty-nine additional proteins were identified and characterized from LC-MS/MS of selected anion exchange fractions. Automated localization of multiple acetylations on histone H4 was also accomplished on an LC time scale from a complex protein mixture. To our knowledge, this is the first demonstration of top-down proteomics (i.e., many identifications) on linear ion trap Fourier transform (LTQ FT) systems using high-resolution MS/MS data obtained on a chromatographic time scale.

11.
Ligand binding assays (LBAs) are widely used for therapeutic monoclonal antibody (mAb) quantification in biological samples. Major limitations are long method development times, reagent procurement, and matrix effects. LC-MS/MS methods using signature peptides are emerging as an alternative approach, which typically use a stable isotope labeled signature peptide as the internal standard (IS). However, a new IS has to be generated for every candidate, and the IS may not correct for variations at all processing steps. We have developed a general LC-MS/MS method approach employing a uniformly heavy-isotope labeled common whole mAb IS and a common immunocapture for sample processing. The method was streamlined with automation for consistency and throughput. Method qualification of four IgG2 and four IgG1 mAbs showed sensitivity of 0.1 μg/mL and linearity of 0.1-15 μg/mL. Quality control (QC) data of these eight mAbs were accurate and precise. The QC performance of the whole molecule labeled IS was better than those of synthetic labeled IS peptides tested. The pharmacokinetic results of two mAbs (an IgG2 and an IgG1 candidate) dosed in rats were comparable to those of LBA. The general LC-MS/MS method approach overcomes the limitations of current methods to reduce time and resources required for preclinical studies.

12.
We present a software tool for visualizing data obtained from analyzing complex peptide mixtures by liquid chromatography (LC) electrospray ionization (ESI) mass spectrometry (MS). The data are represented as a two-dimensional density plot. For experiments employing collision-induced dissociation (CID), links are embedded in the image to the CID spectra and the corresponding peptide sequences that are represented by the respective feature. The image provides an intuitive method to evaluate sample quality and the performance of an LC-ESI-MS system and can be used to optimize experimental conditions. Local patterns of the image can also be used to identify chemical contaminants and specific peptide features. Therefore, this software tool may have broad application in MS-based proteomics.

13.
The evident importance of metabolic profiling for biomarker discovery and hypothesis generation has led to interest in incorporating this technique into large-scale studies, e.g., clinical and molecular phenotyping studies. Nevertheless, these lengthy studies mandate the use of analytical methods with proven reproducibility. An integrated experimental plan for LC-MS profiling of urine, involving sample sequence design and postacquisition correction routines, has been developed. This plan is based on the optimization of the frequency of analyzing identical quality control (QC) specimen injections and using the QC intensities of each metabolite feature to construct a correction trace for all the samples. The QC-based methods were tested against other current correction practices, such as total intensity normalization. The evaluation was based on the reproducibility obtained from technical replicates of 46 samples and showed the feature-based signal correction (FBSC) methods to be superior to other methods, resulting in ~1000 and 600 metabolite features with coefficient of variation (CV) < 15% within and between two blocks, respectively. Additionally, the required frequency of QC sample injection was investigated and the best signal correction results were achieved with at least one QC injection every 2 h of urine sample injections (n = 10). Higher rates of QC injections (1 QC/h) resulted in slightly better correction but at the expense of longer total analysis time.
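The idea of a QC-derived correction trace per metabolite feature can be sketched simply: interpolate each feature's QC intensities over injection order, divide every injection by the trace, and compare CVs before and after. This is a minimal sketch of an FBSC-style correction, not the paper's exact algorithm, and the function names are assumptions:

```python
import numpy as np

def fbsc_correct(intensities, injection_order, is_qc):
    """Drift correction in the spirit of feature-based signal correction:
    for each feature, a correction trace is interpolated from the QC
    injections and every intensity is divided by it.

    intensities: (n_features, n_injections); is_qc: boolean mask over
    injections. np.interp clamps outside the first/last QC injection.
    """
    qc_idx = injection_order[is_qc]
    corrected = np.empty_like(intensities, dtype=float)
    for i, row in enumerate(intensities):
        trace = np.interp(injection_order, qc_idx, row[is_qc])
        ref = row[is_qc].mean()
        corrected[i] = row / trace * ref   # rescale to the QC mean level
    return corrected

def cv_percent(x):
    """Coefficient of variation in percent."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)
```

On a feature with pure instrumental drift, the corrected CV collapses toward zero, which is the property the reported "CV < 15%" feature counts are measuring.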

14.
For automated production of tandem mass spectrometric data for proteins and peptides >3 kDa at >50 000 resolution, a dual online-offline approach is presented here that improves upon standard liquid chromatography-tandem mass spectrometry (LC-MS/MS) strategies. An integrated hardware and software infrastructure analyzes online LC-MS data and intelligently determines which targets to interrogate offline using a posteriori knowledge such as prior observation, identification, and degree of characterization. This platform represents a way to implement accurate mass inclusion and exclusion lists in the context of a proteome project, automating collection of high-resolution MS/MS data that cannot currently be acquired on a chromatographic time scale at equivalent spectral quality. For intact proteins from an acid extract of human nuclei fractionated by reversed-phase liquid chromatography (RPLC), the automated offline system generated 57 successful identifications of protein forms arising from 30 distinct genes, a substantial improvement over online LC-MS/MS using the same 12 T LTQ FT Ultra instrument. Analysis of human nuclei subjected to a shotgun Lys-C digest using the same RPLC/automated offline sampling identified 147 unique peptides containing 29 co- and post-translational modifications. Expectation values ranged from 10^-5 to 10^-99, allowing routine multiplexed identifications.

15.
Many manufacturing processes have various factors that affect the quality of the products, and the analysis and optimisation of these factors are critical activities for engineers. Although much research has been done on statistical methods to investigate the effects of these factors on quality metrics, these statistical methods are not always applied in real-world situations because of problems involving data integrity, lack of control/measurement, or technical/administrative constraints. On the other hand, conventional heuristic methods for the selection of critical quality factors are mostly devoid of metrics that can be examined objectively. This study, therefore, implements the analytic hierarchy process (AHP) for the quantitative prioritisation of the control factors involved in a flat end milling manufacturing process. In order to validate the metrics synthesised from the experience of skilled workers, the decision making is followed by a multivariate analysis of variance based on the general linear model (GLM). The results show that AHP is able to provide fairly reliable metrics about the contribution of process parameters, and the group-wise judgment of qualified experts can improve the consistency of prioritisation.

16.
Modern manufacturing processes characterized by short series, complex part geometry and high refinement values often demonstrate conditions when traditional quality control (QC) methods do not work properly. This paper presents a prototype system for real-time QC where the developed methods and applications are integrated and post-process quality control is applied only as a complement and for reference measurements. All activities are supervised and fed with information from a developed active data acquisition system. The proposed concept contributes to bridging the gap between traditional post-process control and real-time QC of machining processes.

17.
We have developed novel scoring schemes for the identification of (phospho)peptides (PeptideScore) and for pinpointing phosphorylation sites (PhosphoSiteScore) using MS/MS data. These scoring schemes have been developed for the in-depth analysis of individual phosphoproteins, not for large-scale phosphoproteomic-type data. The scoring schemes are implemented into the new software tool Phosm, which provides a concise and comprehensive presentation of the results. For development and evaluation of these schemes, we have analyzed approximately 500 phosphopeptide MS/MS spectra, most of them nontryptic peptides. The novel scoring schemes turned out to be very powerful, even with CID MS/MS spectra of very low quality. Many phosphopeptides and phosphorylation sites that remained unassigned in our LC-MS/MS data sets with Mascot could be identified with Phosm. Especially the number of identified multiply phosphorylated peptides could be significantly increased. The applied scoring parameters are described, and the scoring for several selected examples of phosphopeptides is discussed in detail. Furthermore, a new and simple nomenclature for all types of phosphorylated fragment ions is introduced in this publication.

18.
The systematic monitoring of image quality and radiation dose is an ultimate solution to ensuring the continuously high quality of mammography examination. At present several protocols exist around the world, and different test objects are used for quality control (QC) of the physical and technical aspects of screen-film mammography. This situation may lead to differences in the image quality and radiation dose reported. This article reviews the global QC perspective for the physical and technical aspects of screen-film mammography with regard to image quality and radiation dose. It points out issues that must be resolved in terms of radiation dose and that also affect the comparison.

19.
Liu G, Ji QC, Arnold ME. Analytical Chemistry 2010, 82(23):9671-9677
Matrix ion suppression/enhancement is a well-observed and discussed phenomenon in electrospray ionization mass spectrometry. Nonuniform matrix ion suppression/enhancement across different types of samples in an analytical run is widely believed to be well compensated for by using a stable isotope-labeled internal standard (SIL-IS) in bioanalysis using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Therefore, the risk of nonuniform matrix ion suppression/enhancement is usually deemed low when an SIL-IS is used. Here, we have identified, evaluated, and proposed solutions to control bioanalytical risks from nonuniform matrix ion suppression/enhancement even with an SIL-IS through a case study using omeprazole. Two lots of human blank urine were tested, and ion enhancement of about 500% for omeprazole was observed in one lot but not in the other. When a quadratic regression model had to be used, the assay failed the industry acceptance criteria due to unacceptable positive bias for the middle and high quality control (QC) samples. The failure was attributed to different extents of matrix ion enhancement between the standards (STDs) and QCs, which resulted in the misaligned results from the regression model. It was concluded that, for the same amount of drug, nonuniform ion enhancement for different types of samples (STD or QC) resulted in different ion intensities, therefore leading to different response behaviors (linear or nonlinear) at the mass spectrometer detector. A simplified mathematical model was used to evaluate the risk when unmatched response models occurred for different types of samples. A diagnostic factor Q (Q = X_ULOQ × (-A/B)) was proposed to monitor the risks, where X_ULOQ is the upper limit of quantitation of the assay, A is the quadratic slope of the curve, and B is the linear slope of the curve. The potential maximum errors were estimated on the basis of the mathematical model for different scenarios, and Q values were given to control the risks under these conditions for bioanalysis using LC-MS/MS.
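Given the definitions above (A the quadratic slope, B the linear slope, X_ULOQ the top calibration level), the diagnostic factor Q is a one-line computation once a quadratic curve has been fitted. A minimal sketch under those assumptions; the function name is illustrative:

```python
import numpy as np

def diagnostic_q(x, y):
    """Fit a quadratic calibration curve y = A*x**2 + B*x + C and return
    Q = X_ULOQ * (-A / B), with X_ULOQ taken as the highest calibration
    level. |Q| near 0 means the curve is essentially linear over the
    range; a larger |Q| flags curvature that can bias QC results."""
    A, B, C = np.polyfit(x, y, 2)   # coefficients, highest degree first
    return float(max(x) * (-A / B))
```

Intuitively, Q compares the quadratic term's contribution at the top of the range to the linear term, which is why it serves as a risk monitor for unmatched linear/nonlinear response behavior between STD and QC samples.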

20.
Liao Z, Wan Y, Thomas SN, Yang AJ. Analytical Chemistry 2012, 84(10):4535-4543
Accurate protein identification and quantitation are critical when interpreting the biological relevance of large-scale shotgun proteomics data sets. Although significant technical advances in peptide and protein identification have been made, accurate quantitation of high-throughput data sets remains a key challenge in mass spectrometry data analysis and is a labor intensive process for many proteomics laboratories. Here, we report a new SILAC-based proteomics quantitation software tool, named IsoQuant, which is used to process high mass accuracy mass spectrometry data. IsoQuant offers a convenient quantitation framework to calculate peptide/protein relative abundance ratios. At the same time, it also includes a visualization platform that permits users to validate the quality of SILAC peptide and protein ratios. The program is written in the C# programming language under the Microsoft .NET framework version 4.0 and has been tested to be compatible with both 32-bit and 64-bit Windows 7. It is freely available to noncommercial users at http://www.proteomeumb.org/MZw.html.
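The peptide-to-protein ratio rollup at the heart of SILAC quantitation is conceptually simple. A minimal sketch of one common convention (median of per-peptide heavy/light ratios, reported alongside its log2); this illustrates the general technique, not IsoQuant's specific algorithm:

```python
import numpy as np

def protein_ratio(peptide_heavy, peptide_light):
    """Protein-level SILAC relative abundance: the median of per-peptide
    heavy/light intensity ratios (median chosen for robustness to
    outlier peptides), plus its log2 for symmetric up/down comparison."""
    ratios = np.asarray(peptide_heavy, float) / np.asarray(peptide_light, float)
    med = float(np.median(ratios))
    return med, float(np.log2(med))
```

Visual validation tools of the kind the abstract describes typically plot the spread of these per-peptide ratios so users can spot proteins whose rollup is driven by a single aberrant peptide.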


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号