Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
At present, most quantitative proteomics investigations are focused on the analysis of protein expression differences between two or more sample specimens. With each analysis a static snapshot of a cellular state is captured with regard to protein expression. However, any information on protein turnover cannot be obtained using classic methodologies. Protein turnover, the result of protein synthesis and degradation, represents a dynamic process, which is of equal importance to understanding physiological processes. Methods employing isotopic tracers have been developed to measure protein turnover. However, applying these methods to live animals is often complicated by the fact that an assessment of precursor pool relative isotope abundance is required. Also, data analysis becomes difficult in the case of low label incorporation, which results in a complex convolution of labeled and unlabeled peptide mass spectrometry signals. Here we present a protein turnover analysis method that circumvents this problem using a 15N-labeled diet as an isotopic tracer. Mice were fed the labeled diet for limited time periods, and the resulting partially labeled proteins were digested and subjected to tandem mass spectrometry. For the interpretation of the mass spectrometry data, we have developed the ProTurnyzer software, which allows the determination of protein fractional synthesis rates without the need for precursor relative isotope abundance information. We present results validating ProTurnyzer with Escherichia coli protein data and apply the method to mouse brain and plasma proteomes for automated turnover studies.
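The fractional-synthesis idea in this abstract can be illustrated with a minimal sketch of the standard first-order turnover model. The function names and the reduction of labeled/unlabeled signals to two summed intensities are illustrative assumptions, not ProTurnyzer's actual algorithm, which works on full isotopomer distributions:

```python
import math

def fractional_synthesis(i_labeled, i_unlabeled):
    """Fraction of the protein pool newly synthesized during labeling,
    taken as the labeled share of the total MS signal."""
    return i_labeled / (i_labeled + i_unlabeled)

def turnover_rate(f_synth, t_days):
    """First-order synthesis rate constant k (per day), assuming the
    simple kinetic model f = 1 - exp(-k * t)."""
    return -math.log(1.0 - f_synth) / t_days
```

Under this model, a protein whose labeled form accounts for half the signal after one week has a half-life of exactly seven days.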

2.
In shotgun proteomics, tandem mass spectrometry is used to identify peptides derived from proteins. After the peptides are detected, proteins are reassembled via a reference database of protein or gene information. Redundancy and homology between protein records in databases make it challenging to assign peptides to proteins that may or may not be in an experimental sample. Here, a probability model is introduced for determining the likelihood that peptides are correctly assigned to proteins. This model derives consistent probability estimates for assembled proteins. The probability scores make it easier to confidently identify proteins in complex samples and to accurately estimate false-positive rates. The algorithm based on this model is robust in creating protein complements from peptides from bovine protein standards, yeast, Ustilago maydis cell lysates, and Arabidopsis thaliana leaves. It also eliminates the side effects of redundancy and homology from the reference databases by employing a new concept of peptide grouping and by coherently distinguishing distinct peptides from unique records and shared peptides from homologous proteins. The software that runs the algorithm, called PANORAMICS, provides a tool to help analyze the data based on a researcher's knowledge about the sample. The software operates efficiently and quickly compared to other software platforms.
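The peptide-grouping concept mentioned in the abstract can be sketched as follows. This is a simplified illustration of separating protein-unique from shared peptides; the function and its data layout are assumptions, not the PANORAMICS implementation:

```python
from collections import defaultdict

def group_peptides(pep_to_prots):
    """Split peptides into protein-unique and shared groups.

    pep_to_prots maps each peptide sequence to the set of database
    proteins it matches. Shared peptides are keyed by the frozenset of
    proteins they could belong to, so homologous records are handled
    as a group rather than double-counted.
    """
    unique = defaultdict(set)
    shared = defaultdict(set)
    for pep, prots in pep_to_prots.items():
        if len(prots) == 1:
            unique[next(iter(prots))].add(pep)
        else:
            shared[frozenset(prots)].add(pep)
    return dict(unique), dict(shared)
```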

3.
Quantitative mass spectrometry using stable isotope-labeled tagging reagents such as isotope-coded affinity tags has emerged as a powerful tool for identification and relative quantitation of proteins in current proteomic studies. Here we describe an integrated approach using both automated two-dimensional liquid chromatography/mass spectrometry (2D-LC/MS) and a novel class of chemically modified resins, termed acid-labile isotope-coded extractants (ALICE), for quantitative mass spectrometric analysis of protein mixtures. ALICE contains a thiol-reactive group that is used to capture all cysteine (Cys)-containing peptides from peptide mixtures, an acid-labile linker, and a nonbiological polymer. The acid-labile linker is synthesized in both heavy and light isotope-coded forms and therefore enables the direct relative quantitation of peptides/proteins through mass spectrometric analysis. To test the ALICE method for quantitative protein analysis, two model protein mixtures were fully reduced, alkylated, and digested in solution separately, and then Cys-containing peptides were covalently captured by either light or heavy ALICE. The reacted light and heavy ALICE were mixed and washed extensively under rigorous conditions, and the Cys-containing peptides were retrieved by mild acid-catalyzed elution. Finally, the eluted peptides were directly subjected to automated 2D-LC/MS for protein identification and LC/MS for accurate relative quantitation. Our initial study showed that quantitation of protein mixtures using ALICE was accurate. In addition, isolation of Cys-containing peptides by the ALICE method was robust and specific and thus yielded very low background in mass spectrometric studies. Overall, the use of ALICE provides improved dynamic range and sensitivity for quantitative mass spectrometric analysis of peptide or protein mixtures.

4.
Online liquid chromatography-mass spectrometric (LC-MS) analysis of intact proteins (i.e., top-down proteomics) is a growing area of research in the mass spectrometry community. A major advantage of top-down MS characterization of proteins is that information about the intact protein is retained, in contrast to the vastly more common bottom-up approach, which uses protease-generated peptides to search genomic databases for protein identification. Concurrent with the emergence of top-down MS characterization of proteins has been the development and implementation of the stable isotope labeling by amino acids in cell culture (SILAC) method for relative quantification of proteins by LC-MS. Herein we describe the qualitative and quantitative top-down characterization of proteins derived from SILAC-labeled Aspergillus flavus using nanoflow reversed-phase liquid chromatography directly coupled to a linear ion trap Fourier transform ion cyclotron resonance mass spectrometer (nLC-LTQ-FTICR-MS). A. flavus is a toxic filamentous fungus that significantly impacts the agricultural economy and human health. SILAC labeling improved the confidence of protein identification, and we observed 1318 unique protein masses corresponding to 659 SILAC pairs, of which 22 were confidently identified. However, we have observed some limiting issues with regard to protein quantification using top-down MS/MS analyses of SILAC-labeled proteins. The role of SILAC labeling in the presence of competing endogenously produced amino acid residues and its impact on quantification of intact species are discussed in detail.

5.
For automated production of tandem mass spectrometric data for proteins and peptides >3 kDa at >50 000 resolution, a dual online-offline approach is presented here that improves upon standard liquid chromatography-tandem mass spectrometry (LC-MS/MS) strategies. An integrated hardware and software infrastructure analyzes online LC-MS data and intelligently determines which targets to interrogate offline using a posteriori knowledge such as prior observation, identification, and degree of characterization. This platform represents a way to implement accurate mass inclusion and exclusion lists in the context of a proteome project, automating collection of high-resolution MS/MS data that cannot currently be acquired on a chromatographic time scale at equivalent spectral quality. For intact proteins from an acid extract of human nuclei fractionated by reversed-phase liquid chromatography (RPLC), the automated offline system generated 57 successful identifications of protein forms arising from 30 distinct genes, a substantial improvement over online LC-MS/MS using the same 12 T LTQ FT Ultra instrument. Analysis of human nuclei subjected to a shotgun Lys-C digest using the same RPLC/automated offline sampling identified 147 unique peptides containing 29 co- and post-translational modifications. Expectation values ranged from 10^-5 to 10^-99, allowing routine multiplexed identifications.

6.
Liao Z, Wan Y, Thomas SN, Yang AJ. Analytical Chemistry 2012, 84(10): 4535-4543
Accurate protein identification and quantitation are critical when interpreting the biological relevance of large-scale shotgun proteomics data sets. Although significant technical advances in peptide and protein identification have been made, accurate quantitation of high-throughput data sets remains a key challenge in mass spectrometry data analysis and is a labor-intensive process for many proteomics laboratories. Here, we report a new SILAC-based proteomics quantitation software tool, named IsoQuant, which is used to process high mass accuracy mass spectrometry data. IsoQuant offers a convenient quantitation framework to calculate peptide/protein relative abundance ratios. At the same time, it also includes a visualization platform that permits users to validate the quality of SILAC peptide and protein ratios. The program is written in the C# programming language under the Microsoft .NET framework version 4.0 and has been tested to be compatible with both 32-bit and 64-bit Windows 7. It is freely available to noncommercial users at http://www.proteomeumb.org/MZw.html.

7.
A new method for proteolytic stable isotope labeling is introduced to provide quantitative and concurrent comparisons between individual proteins from two entire proteome pools or their subfractions. Two 18O atoms are incorporated universally into the carboxyl termini of all tryptic peptides during the proteolytic cleavage of all proteins in the first pool. Proteins in the second pool are cleaved analogously with the carboxyl termini of the resulting peptides containing two 16O atoms (i.e., no labeling). The two peptide mixtures are pooled for fractionation and separation, and the masses and isotope ratios of each peptide pair (differing by 4 Da) are measured by high-resolution mass spectrometry. Short sequences and/or accurate mass measurements combined with proteomics software tools allow the peptides to be related to the precursor proteins from which they are derived. Relative signal intensities of paired peptides quantify the expression levels of their precursor proteins from proteome pools to be compared, using an equation described in the paper. Observation of individual (unpaired) peptides is mainly interpreted as differential modification or sequence variation for the protein from the respective proteome pool. The method is evaluated here in a comparison of virion proteins for two serotypes (Ad5 and Ad2) of adenovirus, taking advantage of information already available about protein sequences and concentrations. In general, proteolytic 18O labeling enables a shotgun approach for proteomic studies with quantitation capability and is proposed as a useful tool for comparative proteomic studies of very complex protein mixtures.
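The pairing step described above (finding peptide signals separated by the two-18O mass shift) can be sketched in a few lines. This is an illustrative toy implementation, not the authors' software; the quadratic scan over a flat mass list and the tolerance value are assumptions:

```python
# Mass shift from substituting two 16O atoms with 18O
# (monoisotopic masses: 18O = 17.999161 Da, 16O = 15.994915 Da)
DELTA_2x18O = 2 * (17.999161 - 15.994915)  # ~4.0085 Da

def find_isotope_pairs(masses, tol=0.01):
    """Return (light, heavy) mass pairs separated by the ~4 Da
    signature of doubly 18O-labeled tryptic peptides."""
    pairs = []
    for light in masses:
        for heavy in masses:
            if abs((heavy - light) - DELTA_2x18O) <= tol:
                pairs.append((light, heavy))
    return pairs
```

Unpaired masses fall through this filter, mirroring the abstract's interpretation of singletons as modified or variant peptides.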

8.
Zhang Z, Zhang A, Xiao G. Analytical Chemistry 2012, 84(11): 4942-4949
Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust the variations in the extent of back exchange from run to run. Second, a designed unique peptide (PPPI) with slow intrinsic HDX rate is employed as another internal standard to reflect the possible differences in protein intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model to simulate the deuterium labeling and back exchange process. The HDX model is implemented in the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set, from ion detection and peptide identification to final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most probable protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody induced by increasing concentrations of guanidine.
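For contrast with the model-based correction described above, the classic end-point back-exchange correction can be written in one line. This is the textbook correction using a fully deuterated control, not the paper's internal-standard approach; names and the example numbers are illustrative:

```python
def corrected_uptake(m_t, m_0, m_100, n_amides):
    """Back-exchange-corrected deuterium uptake for a peptide.

    m_t     centroid mass at exchange time t
    m_0     centroid mass of the undeuterated control
    m_100   centroid mass of the fully deuterated control
    n_amides  number of exchangeable backbone amide hydrogens

    Scales the measured mass shift by the fraction of label the fully
    deuterated control retains, compensating for back exchange.
    """
    return n_amides * (m_t - m_0) / (m_100 - m_0)
```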

9.
10.
Quantitative shotgun proteomic analyses are facilitated using chemical tags such as ICAT and metabolic labeling strategies with stable isotopes. The rapid high-throughput production of quantitative "shotgun" proteomic data necessitates the development of software to automatically convert mass spectrometry-derived data of peptides into relative protein abundances. We describe a computer program called RelEx, which uses a least-squares regression for the calculation of the peptide ion current ratios from the mass spectrometry-derived ion chromatograms. RelEx is tolerant of poor signal-to-noise data and can automatically discard nonusable chromatograms and outlier ratios. We apply a simple correction for systematic errors that improves the accuracy of the quantitative measurement by 32 ± 4%. Our automated approach was validated using labeled mixtures composed of known molar ratios and demonstrated in a real sample by measuring the effect of osmotic stress on protein expression in Saccharomyces cerevisiae.
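The least-squares ratio calculation at the core of this style of quantitation can be sketched as follows. This is a minimal illustration assuming background-subtracted, co-eluting chromatograms; it is not the RelEx code, which adds outlier and chromatogram filtering:

```python
def ls_ratio(light_chrom, heavy_chrom):
    """Heavy/light abundance ratio as the least-squares slope, through
    the origin, of paired ion-chromatogram intensities.

    Fitting every point of the chromatogram, rather than dividing two
    peak areas, damps the influence of noisy low-intensity scans.
    """
    num = sum(l * h for l, h in zip(light_chrom, heavy_chrom))
    den = sum(l * l for l in light_chrom)
    return num / den
```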

11.
To facilitate structural analysis of proteins and protein-protein interactions, we developed Pro-CrossLink, a suite of software tools consisting of three programs (Figure 1): DetectShift, IdentifyXLink, and AssignXLink. DetectShift was developed to detect ions of cross-linked peptide pairs in a mixture of 18O-labeled peptides obtained from protein proteolytic digests. The selected candidate ions of cross-linked peptide pairs subsequently undergo tandem mass spectrometric (MS/MS) analysis for sequence determination. Based on the masses of candidate ions as well as y- and b-type ions in the tandem mass spectra, IdentifyXLink assigns the candidate ions to cross-linked peptide pairs. For an identified cross-linked peptide pair, AssignXLink generates an extensive fragment ion list, including a-, b-, c-type, x-, y-, z-type, internal, and immonium ions with associated common losses of H2O, NH3, CO, and CO2, and facilitates a precise location of the cross-linked residues. Pro-CrossLink is automated, highly configurable by the user, and applicable to many studies that map low-resolution protein structures and molecular interfaces in protein complexes.

12.
Proteomics experiments based on Selected Reaction Monitoring (SRM, also referred to as Multiple Reaction Monitoring or MRM) are being used to target large numbers of protein candidates in complex mixtures. At present, instrument parameters are often optimized for each peptide, a time- and resource-intensive process. Large SRM experiments are greatly facilitated by having the ability to predict MS instrument parameters that work well with the broad diversity of peptides they target. For this reason, we investigated the impact of using simple linear equations to predict the collision energy (CE) on peptide signal intensity and compared it with the empirical optimization of the CE for each peptide and transition individually. With optimized linear equations, empirical per-transition optimization was found to yield an average gain of only 7.8% in total peak area over the predicted CE values. We also found that existing commonly used linear equations fall short of their potential and should be recalculated for each charge state and when introducing new instrument platforms. We provide a fully automated pipeline for calculating these equations and individually optimizing the CE of each transition on SRM instruments from Agilent, Applied Biosystems, Thermo-Scientific, and Waters in the open source Skyline software tool (http://proteome.gs.washington.edu/software/skyline).
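A charge-state-specific linear CE equation of the kind discussed can be sketched as follows. The slope/intercept values below are hypothetical placeholders, not coefficients from the paper; as the abstract stresses, real coefficients are instrument-specific and should be refit per charge state:

```python
# Hypothetical (slope, intercept) per precursor charge state.
# Real values must be calibrated for the specific instrument.
CE_COEFFS = {2: (0.034, 3.3), 3: (0.044, 2.8)}

def predict_ce(precursor_mz, charge):
    """Predict collision energy (eV) from a per-charge linear equation,
    falling back to the 2+ coefficients for unlisted charge states."""
    slope, intercept = CE_COEFFS.get(charge, CE_COEFFS[2])
    return slope * precursor_mz + intercept
```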

13.
The substantial time and effort devoted to software maintenance can be reduced by providing software engineers with software tools that automate tedious, error-prone tasks. However, despite the prevalence of tools such as IDEs, which automatically provide program information and automated support to the developer, there is considerable room for improvement in existing software tools. The authors' previous work has demonstrated that using natural language information embedded in a program can significantly improve the effectiveness of various software maintenance tools. In particular, precise verb information from source code analysis is useful in improving tools for comprehension, maintenance and evolution of object-oriented code, by aiding in the discovery of scattered, action-oriented concerns. However, the precision of the extraction analysis can greatly affect the utility of the natural language information. The approach to automatically extracting precise natural language clues from source code in the form of verb-direct object (DO) pairs is described. The extraction process, the set of extraction rules and an empirical evaluation of the effectiveness of the automatic verb-DO pair extractor for Java source code are described.
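A naive verb-DO split of a camelCase Java method name can be sketched as follows. This is a deliberately simplistic illustration of the concept; the extractor the abstract describes applies far more precise rules:

```python
import re

def verb_do_pair(method_name):
    """Naive verb-direct-object split of a camelCase Java method name:
    treat the first word as the verb and the remainder as the DO."""
    words = re.findall(r"[a-z]+|[A-Z][a-z]*|[0-9]+", method_name)
    if not words:
        return None
    verb = words[0].lower()
    direct_object = " ".join(w.lower() for w in words[1:])
    return verb, direct_object or None
```

On `openFile` this yields the pair ("open", "file"); real extraction must also handle names whose first word is not a verb, which is where precision rules matter.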

14.
In order to facilitate the extraction of quantitative data from live cell image sets, automated image analysis methods are needed. This paper presents an introduction to the general principle of overlap-based cell tracking software developed by the National Institute of Standards and Technology (NIST). This cell tracker has the ability to track cells across a set of time-lapse images acquired at high rates, based on the amount of overlap between cellular regions in consecutive frames. It is designed to be highly flexible, requires little user parameterization, and has a fast execution time.
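The overlap-matching principle can be sketched as follows. This is an illustrative toy tracker, not the NIST implementation; it greedily assigns each region in one frame to the region it overlaps most in the next, which already captures why high frame rates make the approach work:

```python
from collections import Counter

def overlap_tracks(frame_a, frame_b):
    """Match labeled cell regions between two consecutive frames.

    frame_a, frame_b: 2-D lists of integer region labels (0 = background).
    Returns {label_in_a: label_in_b} using the largest pixel overlap.
    """
    overlap = {}
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            if a and b:  # both pixels belong to some cell region
                overlap.setdefault(a, Counter())[b] += 1
    return {a: counts.most_common(1)[0][0] for a, counts in overlap.items()}
```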

15.
We have developed a complete system for the isotopic labeling, fractionation, and automated quantification of differentially expressed peptides that significantly facilitates candidate biomarker discovery. We describe a new stable mass tagging reagent pair, 12C6- and 13C6-phenyl isocyanate (PIC), that offers significant advantages over currently available tags. Peptides are labeled predominantly at their amino termini and exhibit elution profiles that are independent of label isotope. Importantly, PIC-labeled peptides have unique neutral-mass losses upon CID fragmentation that enable charge state and label isotope identification and, thereby, decouple the sequence identification from the quantification of candidate biomarkers. To exploit these properties, we have coupled peptide fractionation protocols with a Thermo LTQ-XL LC-MS2 data acquisition strategy and a suite of automated spectrum analysis software that identifies quantitative differences between labeled samples. This approach, dubbed the PICquant platform, is independent of protein sequence identification and excludes unlabeled peptides that otherwise confound biomarker discovery. Application of the PICquant platform to a set of complex clinical samples showed that the system allows rapid identification of peptides that are differentially expressed between control and patient groups.

16.
In proteomics, effective methods are needed for identifying the relatively limited subset of proteins displaying significant changes in abundance between two samples. One way to accomplish this task is to target for identification by MS/MS only the "interesting" proteins based on the abundance ratio of isotopically labeled pairs of peptides. We have developed the software and hardware tools for online LC-FTICR MS/MS studies in which a set of initially unidentified peptides from a proteome analysis can be selected for identification based on their distinctive changes in abundance following a "perturbation". We report here the validation of this method using a mixture of standard proteins combined in different ratios after isotopic labeling. We also demonstrate the application of this method to the identification of Shewanella oneidensis peptides/proteins exhibiting differential abundance in suboxic versus aerobic cell cultures.

17.
Here we describe a new quadrupole Fourier transform ion cyclotron resonance hybrid mass spectrometer equipped with an intermediate-pressure MALDI ion source and demonstrate its suitability for "bottom-up" proteomics. The integration of a high-speed MALDI sample stage, a quadrupole analyzer, and a FT-ICR mass spectrometer together with a novel software user interface allows this instrument to perform high-throughput proteomics experiments. A set of linearly encoded stages allows sub-second positioning of any location on a microtiter-sized target with up to 1536 samples with micrometer precision in the source focus of the ion optics. Such precise control enables internal calibration for high mass accuracy MS and MS/MS spectra using separate calibrant and analyte regions on the target plate, avoiding ion suppression effects that would result from the spiking of calibrants into the sample. An elongated open cylindrical analyzer cell with trap plates allows trapping of ions from 1000 to 5000 m/z without notable mass discrimination. The instrument is highly sensitive, detecting less than 50 amol of angiotensin II and neurotensin in a microLC MALDI MS run under standard experimental conditions. The automated tandem MS of a reversed-phase separated bovine serum albumin digest demonstrated a successful identification for 27 peptides covering 45% of the sequence. An automated tandem MS experiment of a reversed-phase separated yeast cytosolic protein digest resulted in 226 identified peptides corresponding to 111 different proteins from 799 MS/MS attempts. The benefits of accurate mass measurements for data validation for such experiments are discussed.

18.
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image, and a data table as output containing measurements for all gene spots, would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced iterative correction procedures can overcome inherent challenges in image processing. Herein is introduced an integrated software system with a Java-based interface on the client side that allows for decentralized access and furthermore enables the scientist to instantly employ the most updated software version at any given time. This software tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.

19.
A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.

20.
Materials & Design 2005, 26(6): 517-533
The use of high speed milling (HSM) for the production of moulds and dies is becoming more widespread. Critical aspects of the technology include cutting tools, machinability data, and cutter path generation. Much published information exists on cutting tools and related data (cutting speeds, feed rates, depths of cut, etc.). However, relatively little information has been published on the optimisation of cutter paths for this application. Most research has focused on cutter path generation with the main aim of reducing production time, while work on cutter path evaluation and optimisation with respect to tool wear, tool life, surface integrity and relevant workpiece machinability characteristics is scant. Therefore, detailed knowledge of cutter path evaluation for high speed rough and finish milling is essential in order to improve productivity and surface quality. The paper details techniques used to reduce machining times and improve workpiece surface roughness/accuracy when using HSM on hardened mould and die materials. Optimisation routines are considered for the roughing and finishing of cavities. The effects of machining parameters, notably feed rate adaptation techniques and cutting tools, are presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号