Similar Documents
20 similar documents found.
1.
Heart failure (HF) remains a severe disease with a poor prognosis. HF biomarkers may include demographic features, cardiac imaging, or genetic polymorphisms, but the term is most commonly applied to circulating serum or plasma analytes. Biomarkers may have at least three clinical uses in the context of HF: diagnosis, risk stratification, and guidance in the selection of therapy. Proteomic studies on HF biomarkers can be designed as case/control studies using clinical endpoints; alternatively, left ventricular remodeling can be used as a surrogate endpoint. The type of sample (tissue, cells, serum, or plasma) used for proteomic analysis is a key factor in biomarker research. Since the final aim is the discovery of circulating biomarkers, and since plasma and serum samples are easily accessible, proteomic analysis is frequently applied to blood samples. However, standardization of sampling and access to low-abundance proteins remain problematic. Although proteomics plays a major role in the discovery phase of biomarkers, validation in independent populations using more specific methods is necessary. Knowledge of new HF biomarkers may allow more personalized medicine in the future.

2.
Modern drug discovery and genomic analysis depend on rapid analysis of large numbers of samples in parallel. Applying microfluidic devices in this field requires low-cost devices that can be mass-produced. In close collaboration, Greiner Bio-One and Forschungszentrum Karlsruhe have developed a single-use plastic microfluidic capillary electrophoresis (CE) array in the standardized microplate footprint. Feasibility studies have shown that hot embossing with a mechanically micromachined molding tool is the appropriate technology for low-cost mass fabrication. Subsequent sealing of the microchannels allows sub-microliter sample volumes in 96-channel multiplexed microstructures.

3.
This paper presents a microelectromechanical systems (MEMS) differential thermal biosensor integrated with microfluidics for metabolite measurements in either flow-injection or flow-through mode. The MEMS device consists of two identical freestanding polymer diaphragms, resistive heaters, and a thermopile between the diaphragms. Integrated with polymer-based microfluidic measurement chambers, the device allows sensitive measurement of small volumes of liquid samples. Enzymes specific to a metabolic analyte system are immobilized on microbeads packed in the chambers. When a sample solution containing the analyte is introduced to the device, the heat released from the enzymatic reactions of the analyte is detected by the thermopile. The device has been tested with glucose solutions at physiologically relevant concentrations. In flow-injection mode, the device demonstrates a sensitivity of approximately 2.1 μV/mM and a resolution of about 0.025 mM. In flow-through mode with a perfusion flow rate of 0.5 mL/h, the sensitivity and resolution of the device are determined to be approximately 0.24 μV/mM and 0.4 mM, respectively. These results illustrate that the device, when integrated with subcutaneous sampling methods, can potentially allow for continuous monitoring of glucose and other metabolites.
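As a quick illustration of the reported figures, the following minimal sketch (assuming a perfectly linear thermopile response; the variable names and the example reading are hypothetical, not from the paper) converts a thermopile voltage into a glucose concentration estimate:

```python
# Minimal sketch: convert a thermopile voltage to a glucose concentration
# estimate, assuming the linear sensitivities reported in the abstract.
# All names, the example reading, and the linearity assumption are illustrative.

SENS_FLOW_INJECTION_UV_PER_MM = 2.1   # μV per mM (flow-injection mode)
SENS_FLOW_THROUGH_UV_PER_MM = 0.24    # μV per mM (flow-through mode, 0.5 mL/h)

def glucose_mM(thermopile_uV: float, sensitivity_uV_per_mM: float) -> float:
    """Estimate glucose concentration from the thermopile signal."""
    return thermopile_uV / sensitivity_uV_per_mM

if __name__ == "__main__":
    reading_uV = 10.5  # hypothetical reading
    print(f"flow-injection: {glucose_mM(reading_uV, SENS_FLOW_INJECTION_UV_PER_MM):.2f} mM")
    print(f"flow-through:   {glucose_mM(reading_uV, SENS_FLOW_THROUGH_UV_PER_MM):.2f} mM")
```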

4.
Particle-based sampling and meshing of surfaces in multimaterial volumes
Methods that faithfully and robustly capture the geometry of complex material interfaces in labeled volume data are important for generating realistic and accurate visualizations and simulations of real-world objects. The generation of such multimaterial models from measured data poses two unique challenges: first, the surfaces must be well sampled with regular, efficient tessellations that are consistent across material boundaries; and second, the resulting meshes must respect the nonmanifold geometry of the multimaterial interfaces. This paper proposes a strategy for sampling and meshing multimaterial volumes using dynamic particle systems, including a novel, differentiable representation of the material junctions that allows the particle system to explicitly sample corners, edges, and surfaces of material intersections. The distributions of particles are controlled by fundamental sampling constraints, allowing Delaunay-based meshing algorithms to reliably extract watertight meshes of consistently high quality.

5.
One of the first steps in drug discovery involves identification of novel compounds that interfere with therapeutically relevant biological processes.

Identification of ‘lead’ compounds in all therapeutic areas included in a drug discovery program requires labor-intensive evaluation of numerous samples in a battery of therapy-targeted biological assays. To accelerate the identification of ‘lead’ compounds, the Janssen Research Foundation (JRF) previously developed an automated high-throughput screening (HTS) platform based on the unattended operation of a custom Zymark tracked robot system. Enzymatic and cellular assays were automated with this system, which was adapted to the handling of microtiter plates; microtiter plate technology is the basis of our screening. All compounds within our chemical library are stored and distributed in Micronic tube racks or microtiter plates for screening. An efficient, in-house-developed, mainframe-based laboratory information management system supported all screening activities. Our experience at JRF has shown that the preparation of test compounds and serial dilutions has been a rate-limiting step in the overall screening process. In order to increase compound throughput, it was necessary both to optimize the robotized assays and to automate the compound supply processes. In HTS applications, one of the primary requirements is highly accurate and precise pipetting of microliter volumes of samples into microplates. The SciClone™ is an automated liquid-handling workstation capable of both 96- and 384-channel high-precision pipetting. For high-throughput applications, the SciClone™ instrumentation is able to pipette a variety of liquid solutions with a high degree of accuracy and precision between microplates (inter-plate variability) and tip-to-tip within a single plate (intra-plate variability). The focus of this presentation is to review the liquid-handling performance of the SciClone™ system as a multipurpose instrument for pipetting aqueous or organic solutions and virus suspensions into 96- and 384-well microplates. The capabilities of the system and the resulting benefits for our screening activities will be described.
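The accuracy and precision figures discussed above are conventionally summarized as coefficients of variation. The short sketch below (with fabricated dispense readings, not data from the presentation) shows how intra-plate (tip-to-tip) and inter-plate CV% and accuracy bias could be computed:

```python
import numpy as np

# Minimal sketch: summarize pipetting precision as coefficients of variation.
# The readings below are fabricated placeholders (e.g., gravimetric or
# absorbance measurements per well), not data from the presentation.
rng = np.random.default_rng(0)
plates = rng.normal(loc=10.0, scale=0.15, size=(4, 96))  # 4 plates x 96 tips, 10 µL target

def cv_percent(x: np.ndarray) -> float:
    return 100.0 * x.std(ddof=1) / x.mean()

intra_plate_cv = [cv_percent(plate) for plate in plates]   # tip-to-tip, per plate
inter_plate_cv = cv_percent(plates.mean(axis=1))           # between plate means
accuracy_pct = 100.0 * (plates.mean() - 10.0) / 10.0       # relative bias vs. 10 µL target

print("intra-plate CV%:", [f"{cv:.2f}" for cv in intra_plate_cv])
print(f"inter-plate CV%: {inter_plate_cv:.2f}")
print(f"accuracy (bias) %: {accuracy_pct:+.2f}")
```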


6.
Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in the recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions depend crucially on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that alternative algorithms targeting the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on a limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed.
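To make the point concrete, the following sketch (an illustrative check of my own, not the paper's experimental setup) generates two networks with essentially the same degree sequence and compares other connectivity metrics, which can differ noticeably:

```python
import networkx as nx

# Sketch: networks matched on one property (here, the degree sequence) can still
# differ on other connectivity metrics. Illustrative only; not the paper's setup.
base = nx.barabasi_albert_graph(500, 3, seed=1)
degree_sequence = [d for _, d in base.degree()]

# Rewire to a random network with (approximately) the same degree sequence via
# the configuration model; collapsing parallel edges and self-loops perturbs
# the degrees slightly.
rewired = nx.Graph(nx.configuration_model(degree_sequence, seed=2))
rewired.remove_edges_from(nx.selfloop_edges(rewired))

for name, g in [("original", base), ("same-degree rewired", rewired)]:
    print(name,
          "clustering=%.4f" % nx.average_clustering(g),
          "assortativity=%.4f" % nx.degree_assortativity_coefficient(g))
```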

7.
Narrow corridors cause severe problems in sampling-based approaches because the probability of generating samples inside a narrow corridor is small. Obstacle-based sampling and bridge-test sampling techniques rely on generating fresh samples at every iteration. Memory of the forbidden configuration space can lead to the discovery of new narrow corridors or to the generation of additional key samples inside the same narrow corridor. Hence, in this paper, it is proposed to additionally solve the problem of generating a roadmap in the forbidden configuration space, called the dual roadmap. The dual roadmap so generated has collision-prone vertices and stores the structure of the forbidden configuration space. To reduce memory and computation time, only the boundary of the forbidden configuration space, which is more informative, is stored. The dual roadmap so constructed is used to generate valid samples in the middle of narrow corridors. The construction of the additional roadmap takes very little time and memory and is largely a by-product of obstacle-based sampling that is normally thrown away. Experimental results show that the proposed sampling method is very effective at finding narrow corridors compared with popular sampling methodologies in the literature.
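For background, the bridge test mentioned above locates corridor samples by checking whether the midpoint between two nearby colliding samples is collision-free. A minimal 2-D sketch of that idea (with a hypothetical in_collision predicate; this is not the paper's dual-roadmap algorithm) is:

```python
import random

# Minimal 2-D sketch of bridge-test-style sampling: generate colliding samples
# and test whether midpoints of nearby colliding pairs land in free space,
# i.e. inside a narrow corridor. Hypothetical world; not the dual-roadmap method.
def in_collision(p):
    x, y = p
    # Two obstacle slabs separated by a narrow horizontal corridor at y ≈ 0.5.
    return not (0.48 <= y <= 0.52) and 0.2 <= x <= 0.8

def bridge_test_samples(n_tries=20000, bridge_len=0.1, seed=0):
    rng = random.Random(seed)
    corridor_samples = []
    for _ in range(n_tries):
        a = (rng.random(), rng.random())
        if not in_collision(a):
            continue
        # Second endpoint of the "bridge", a short random offset away.
        b = (a[0] + rng.uniform(-bridge_len, bridge_len),
             a[1] + rng.uniform(-bridge_len, bridge_len))
        if not in_collision(b):
            continue
        mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        if not in_collision(mid):          # both ends blocked, middle free
            corridor_samples.append(mid)
    return corridor_samples

print(len(bridge_test_samples()), "samples generated inside the corridor")
```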

8.
The problem of designing optimal blood sampling protocols for kinetic experiments in pharmacology, physiology, and medicine is briefly described, followed by a presentation of several interesting results based on sequentially optimized studies we have performed in more than 75 laboratory animals. Experiences with different algorithms and design software are also presented. The overall approach appears to be highly efficacious, from the standpoints of both laboratory economics and resulting model accuracy. Optimal sampling schedules (OSS) have a number of distinct time points equal to the number of unknown parameters for a popular class of models. Replication rather than distribution of samples provides maximum accuracy when additional sampling is feasible, and specific replicates can be used to weight some parameter accuracies more than others, even when a D-optimality criterion is used. Our sequential experiment scheme often converged in one step, and the resulting optimal sampling schedules were reasonably robust, allowing for biological variation among the animals studied.
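As a toy illustration of D-optimal schedule design (not the authors' software), consider a one-compartment model y(t) = A*exp(-k*t): choose as many sampling times as there are unknown parameters by maximizing the determinant of the Fisher information matrix over a candidate grid. All numerical values below are assumptions.

```python
from itertools import combinations
import numpy as np

# Toy D-optimal design sketch for y(t) = A * exp(-k * t) with nominal A, k.
# Pick as many time points as unknown parameters (here 2) by maximizing
# det(J^T J), where J holds the sensitivities dy/dA and dy/dk.
A, k = 100.0, 0.5                        # nominal parameter values (assumed)
candidate_times = np.linspace(0.25, 12.0, 48)

def fisher_det(times):
    t = np.asarray(times)
    dy_dA = np.exp(-k * t)
    dy_dk = -A * t * np.exp(-k * t)
    J = np.column_stack([dy_dA, dy_dk])  # sensitivity (Jacobian) matrix
    return np.linalg.det(J.T @ J)

best = max(combinations(candidate_times, 2), key=fisher_det)
print("D-optimal sampling times:", [round(t, 2) for t in best])
```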

9.
Proteomics is increasingly being applied to the human plasma proteome to identify biomarkers of disease for use in non-invasive assays. 2-D DIGE, which simultaneously analyses thousands of protein spots quantitatively while maintaining protein isoform information, is one technique adopted. Sufficient numbers of samples must be analysed to achieve statistical power; however, few reported studies have analysed inherent variability in the plasma proteome by 2-D DIGE to allow power calculations. This study analysed plasma from 60 healthy volunteers by 2-D DIGE. Two samples were taken, 7 days apart, allowing estimation of the sensitivity of detecting differences in spot intensity between two groups using either a longitudinal (paired) or non-paired design. Parameters for differences were a two-fold normalised volume change, α of 0.05, and power of 0.8. Using groups of 20 samples, alterations in 1742 spots could be detected with longitudinal sampling, and in 1206 between non-paired groups. Interbatch gel variability was small relative to the detection parameters, indicating robustness and reproducibility of 2-D DIGE for analysing large sample sets. In summary, 20 samples can allow detection of a large number of proteomic alterations by 2-D DIGE in human plasma; the sensitivity of detecting differences was greatly improved by longitudinal sampling, and the technology was robust across batches.
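To put the design parameters in context, the sketch below (not the study's actual calculation; the assumed spot-level standard deviation is illustrative) frames the paired versus unpaired comparison as a standard sample-size calculation for detecting a two-fold change at α = 0.05 and power 0.8:

```python
from math import ceil
from statsmodels.stats.power import TTestPower, TTestIndPower

# Sketch: sample size needed to detect a two-fold spot-volume change at
# alpha = 0.05 and power = 0.8, for paired vs. unpaired designs.
# The assumed spread of log2-normalized spot volumes is illustrative only;
# for the paired case the SD of within-subject differences would normally be
# used, but the same value is assumed here for simplicity.
delta_log2 = 1.0                       # two-fold change on a log2 scale
sd_log2 = 0.8                          # assumed SD of log2 spot volumes
effect_size = delta_log2 / sd_log2

n_paired = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
n_unpaired = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)

print(f"paired (longitudinal) design: ~{ceil(n_paired)} subjects")
print(f"unpaired design:              ~{ceil(n_unpaired)} subjects per group")
```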

10.
In recent years, global proteomics approaches have been widely used to characterize a number of tissue proteomes, including plasma and liver; however, the elevated complexity of these samples, in combination with the high abundance of some specific proteins, makes the study of the least abundant proteins difficult. This review focuses on different strategies that have been developed to extend coverage of these two proteomes, such as the analysis of sub-cellular proteomes. In this regard, two special kinds of extracellular vesicles, exosomes and plasma membrane shedding vesicles, are emerging as excellent biological sources both for extending the liver and plasma proteomes and for the discovery of non-invasive liver-specific disease biomarkers.

11.
Genomic analysis and drug discovery depend increasingly on rapid, accurate analysis of large sets of samples and extensive compound collections at relatively low cost. By capitalizing on advances in microfabrication, genomics, combinatorial chemistry, and assay technologies, new analytical systems are expected to provide order-of-magnitude increases in analysis throughput along with comparable decreases in per-sample analysis costs. ACLARA's single-use, plastic LabCard™ systems, which transport fluids between reservoirs and through interconnected microchannels using electrokinetic mechanisms, are intended to address these analytical needs. These devices take advantage of recent developments in microfluidic and microfabrication technologies, permitting their application to DNA sequencing, genotyping and DNA fragment analysis, pharmaceutical candidate screening, and the preparation of biological samples for analysis. In a parallel effort, ACLARA has developed a new class of reporter molecules that are particularly well suited to capillary electrophoretic analysis. These electrophoretic mobility tags, called eTag™ reporters, can be used to uniquely label multiplexed sets of oligonucleotide recognition probes or proteins, thereby permitting traditionally homogeneous biochemical reporter assays to be multiplexed for CE analysis. Biochemical multiplexing is key to achieving new thresholds in analytical throughput while maintaining economically viable formats in many application areas. ACLARA's microfluidic, lab-on-a-chip concept promises to revolutionize chemical analysis, much as miniaturization revolutionized computing, making tools continually smaller, more integrated, less expensive, and higher performing.

12.
Many bottlenecks in drug discovery have been addressed with the advent of new assay and instrument technologies. However, storing and processing chemical compounds for screening remains a challenge for many drug discovery laboratories. Although automated storage and retrieval systems are commercially available for medium to large collections of chemical samples, these samples are usually stored at a central site and are not readily accessible to satellite research labs. Drug discovery relies on the rapid testing of new chemical compounds in relevant biological assays. Therefore, newly synthesized compounds must be readily available in various formats to biologists performing screening assays. Until recently, our compounds were distributed in screw-cap vials to assayists, who would then manually transfer and dilute each sample in an “assay-ready” compound plate for screening. The vials would then be managed by the individuals in an ad hoc manner. To relieve the assayist from searching for compounds and preparing their own assay-ready compound plates, a customized compound storage system with an ordering software application was implemented at our research facility to eliminate these bottlenecks. The system stores and retrieves compounds in 1-mL mini-tubes or microtiter plates, facilitates compound searching by identifier or structure, orders compounds at varying concentrations in specified wells on 96- or 384-well plates, requests the addition of controls (vehicle or reference compounds), etc. The orders are automatically processed and delivered to the assayist the following day for screening. An overview of our system will demonstrate that we minimize compound waste and ensure compound integrity and availability.

13.
Liquid handling plays a pivotal role in life science laboratories. In experiments such as gene sequencing, protein crystallization, antibody testing, and drug screening, liquid biosamples frequently must be transferred between containers of varying sizes and/or dispensed onto substrates of varying types. The sample volumes are usually small, at the micro- or nanoliter level, and the number of transferred samples can be huge when investigating large numbers of combinatorial conditions. Under these conditions, liquid handling by hand is tedious, time-consuming, and impractical. Consequently, there is a strong demand for automated liquid-handling methods such as sensor-integrated robotic systems. In this article, we survey the current state of the art in automatic liquid handling, including technologies developed by both industry and research institutions. We focus on methods for dealing with small volumes at high throughput and point out challenges for future advancements.

14.
An important phase in the application of the General System Problem Solving framework (GSPS) is the choice of the sampling frequency of relevant variables. The optimal sampling frequency is generally a compromise between capturing all dynamic changes of a signal and avoiding redundant samples. This problem is not completely solved for crisp metric variables, and very little is known about sampling qualitative and fuzzy variables. In this paper, one possible approach based on data compression is suggested for determining the optimal sampling frequency of ordinal and nominal variables. Under specific conditions, the proposed approach can also be extended to fuzzy variables.
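One way such a compression-based criterion could be operationalized is sketched below. This is a rough interpretation under my own assumptions, not the paper's method: subsample a nominal-valued signal at increasing periods, use compressibility as a redundancy indicator, and check reconstruction error under a zero-order hold.

```python
import zlib
import numpy as np

# Rough sketch (my own operationalization, not the paper's method): choose a
# sampling period for a nominal-valued signal by trading off redundancy,
# measured via compressibility, against reconstruction error under a
# zero-order hold. The synthetic signal below is illustrative.
rng = np.random.default_rng(0)
states = rng.integers(0, 5, size=200)                     # nominal symbols 0..4
runs = rng.integers(15, 25, size=200)                     # each symbol persists ~20 steps
signal = np.repeat(states, runs)

for period in (1, 2, 5, 10, 20, 40):
    sub = signal[::period]
    raw = bytes(sub.astype(np.uint8))
    redundancy = 1 - len(zlib.compress(raw)) / len(raw)   # high => oversampled
    recon = np.repeat(sub, period)[: len(signal)]         # zero-order-hold reconstruction
    mismatch = float(np.mean(recon != signal))
    print(f"period={period:3d}  redundancy={redundancy:5.2f}  hold error={mismatch:5.3f}")
```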

15.
Within the context of early drug discovery, a new pharmacophore-based tool to score and align small molecules (Pharao) is described. The tool is built on the idea of modeling pharmacophoric features as Gaussian 3D volumes instead of the more common point or sphere representations. The smooth nature of these continuous functions has a beneficial effect on the optimization problem that arises during alignment. The usefulness of Pharao is illustrated by means of three examples: a virtual screening of trypsin-binding ligands, a virtual screening of phosphodiesterase 5-binding ligands, and an investigation of the biological relevance of an unsupervised clustering of small ligands based on Pharao.
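For intuition, the overlap volume of two spherical 3-D Gaussian features has a closed form, which is what makes such volume representations convenient for gradient-based alignment. The sketch below shows the generic Gaussian-overlap calculation (standard math, not Pharao's actual code or parameterization):

```python
import numpy as np

# Sketch: overlap volume of two spherical 3-D Gaussian features
#   g_i(r) = w_i * exp(-a_i * |r - R_i|^2)
# has the closed form below (Gaussian product theorem). Generic math for
# intuition; not Pharao's actual parameterization.
def gaussian_overlap(w1, a1, c1, w2, a2, c2):
    d2 = np.sum((np.asarray(c1, float) - np.asarray(c2, float)) ** 2)
    return (w1 * w2
            * np.exp(-a1 * a2 * d2 / (a1 + a2))
            * (np.pi / (a1 + a2)) ** 1.5)

# Overlap decays smoothly with distance, giving well-behaved gradients
# for alignment optimization.
for dist in (0.0, 1.0, 2.0, 4.0):
    v = gaussian_overlap(1.0, 0.5, (0, 0, 0), 1.0, 0.5, (dist, 0, 0))
    print(f"center distance {dist:.1f} -> overlap {v:.4f}")
```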

16.
Computer Networks, 2008, 52(14): 2677–2689
This work explores the use of statistical techniques, namely stratified sampling and cluster analysis, as powerful tools for deriving traffic properties at the flow level. Our results show that adequate selection of samples leads to significant improvements, allowing further important statistical analysis. Although stratified sampling is a well-known technique, the way we classify the data prior to sampling is innovative and deserves special attention. We evaluate two partitioning clustering methods, namely clustering large applications (CLARA) and K-means, and validate their outcomes by using them as thresholds for stratified sampling. We show that, by using flow sizes to divide the population, we can obtain accurate estimates for both flow sizes and durations. The presented sampling and clustering classification techniques achieve data reduction levels higher than those of existing methods, on the order of 0.1%, while maintaining good accuracy for estimates of the sum, mean, and variance of both flow durations and sizes.
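A minimal sketch of the general idea, clustering flows by size to define strata and then estimating population statistics from a small stratified sample, is shown below with synthetic flow records and K-means (illustrative only; not the paper's CLARA-based pipeline):

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: use flow sizes to define strata (via K-means), then estimate the mean
# flow size and duration from a small stratified sample. Synthetic data;
# illustrative of the idea, not the paper's CLARA-based pipeline.
rng = np.random.default_rng(0)
n = 100_000
sizes = rng.lognormal(mean=8, sigma=2, size=n)            # bytes, heavy-tailed
durations = 0.002 * sizes + rng.exponential(1.0, size=n)  # seconds, size-correlated

strata = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    np.log(sizes).reshape(-1, 1))

sample_fraction = 0.001                                   # ~0.1% of the flows
est_mean_size = est_mean_dur = 0.0
for h in range(4):
    idx = np.flatnonzero(strata == h)
    take = max(1, int(sample_fraction * len(idx)))
    pick = rng.choice(idx, size=take, replace=False)
    weight = len(idx) / n                                 # stratum weight N_h / N
    est_mean_size += weight * sizes[pick].mean()
    est_mean_dur += weight * durations[pick].mean()

print(f"estimated mean size {est_mean_size:,.0f} B (true {sizes.mean():,.0f} B)")
print(f"estimated mean duration {est_mean_dur:.2f} s (true {durations.mean():.2f} s)")
```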

17.
The discovery of new biomarkers will be an essential step in enhancing our ability to better diagnose and treat human disease. The proteomics research community has recently increased its use of human blood (plasma/serum) as a sample source for these discoveries. However, while blood collection is fairly non-invasive and the specimen is readily available, blood is not easily analyzed by liquid chromatography/mass spectrometry (LC/MS) because of its complexity. Therefore, sample preparation is a crucial step prior to the analysis of blood. This sample preparation must also be standardized in order to gain the most information from these valuable samples and to ensure reproducibility. We have designed a semi-automated and highly parallel procedure for the preparation of human plasma samples. Our process takes the samples through eight successive steps before analysis by LC/MS: (1) receipt, (2) reformatting, (3) filtration, (4) depletion, (5) concentration determination and normalization, (6) digestion, (7) extraction, and (8) randomization, triplication, and lyophilization. These steps utilize a number of different liquid handlers and liquid chromatography (LC) systems. This process enhances our ability to discover new biomarkers from human plasma.

18.
In biometric and biomedical applications, a special transport mechanism must be designed for the micro total analysis system (μTAS) to move samples and reagents through the microchannels that connect the unit procedure components in the system. An important issue for this miniaturization and integration is the microfluid management technique, i.e., microfluid transportation, metering, and mixing. In view of this, an optimal fuzzy sliding-mode controller (OFSMC) based on the 8051 microprocessor is designed, and a complete microfluidically manipulated biochip system is implemented in this study, with a pneumatic pumping actuator, two feedback-signal photodiodes, and flowmeters for better microfluidic management. This new technique successfully improved the efficiency of the biochemical reaction by increasing effective collisions with the probe molecules as the target molecules flow back and forth. The new technique was used in DNA extraction. When the number of Escherichia coli cells was 2×10²–10⁴ in 25 μl of whole blood, the extraction efficiency of immobilized beads with the solution flowing back and forth was 600-fold higher than that of free beads.
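For background on the control strategy named above, a generic discrete-time sliding-mode control loop looks roughly like the sketch below. The plant dynamics and gains are assumed for illustration; this is not the paper's OFSMC design or its fuzzy tuning:

```python
import numpy as np

# Generic sliding-mode control sketch for a simple second-order plant
#   x_ddot = -c * x_dot + b * u
# tracking a reference position. Textbook-style; gains and plant are assumed,
# not the paper's OFSMC design.
b, c, dt = 1.0, 0.5, 0.001
lam, K, phi = 5.0, 2.0, 0.05           # sliding-surface slope, gain, boundary layer

def sat(s):                            # smoothed sign() to reduce chattering
    return np.clip(s / phi, -1.0, 1.0)

x, x_dot, ref = 0.0, 0.0, 1.0
for _ in range(5000):
    e, e_dot = ref - x, -x_dot
    s = e_dot + lam * e                # sliding surface s = 0
    u = K * sat(s)                     # switching control toward the surface
    x_ddot = -c * x_dot + b * u
    x_dot += x_ddot * dt
    x += x_dot * dt

print(f"position after 5 s: {x:.3f} (reference {ref})")
```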

19.
Functional magnetic resonance imaging (fMRI) has become a popular technique for studies of human brain activity. Typically, fMRI is performed with >3-mm sampling, so that the imaging data can be regarded as two-dimensional samples that average through the 1.5-4-mm thickness of cerebral cortex. The increasing use of higher spatial resolutions, <1.5-mm sampling, complicates the analysis of fMRI, as one must now consider activity variations within the depth of the brain tissue. We present a set of surface-based methods to exploit the use of high-resolution fMRI for depth analysis. These methods utilize white-matter segmentations coupled with deformable-surface algorithms to create a smooth surface representation at the gray-white interface and pial membrane. These surfaces provide vertex positions and normals for depth calculations, enabling averaging schemes that can increase contrast-to-noise ratio, as well as permitting the direct analysis of depth profiles of functional activity in the human brain.
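To illustrate the depth-profile idea, the sketch below (with made-up surface vertex coordinates and a synthetic volume; not the paper's pipeline) samples a functional volume at fractional cortical depths along the segment between corresponding white-matter and pial vertices:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch: sample a functional volume at fractional cortical depths along the
# segment between corresponding white-matter and pial surface vertices.
# Synthetic volume and made-up vertex coordinates; not the paper's pipeline.
rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 64))           # stand-in fMRI volume (voxel space)

white_vertices = np.array([[30.0, 30.0, 30.0], [31.0, 40.0, 22.0]])  # gray-white interface
pial_vertices = white_vertices + np.array([[0.0, 0.0, 3.0], [0.0, 0.0, 2.5]])  # pial membrane

depth_fractions = np.linspace(0.0, 1.0, 6)       # 0 = white surface, 1 = pial surface
for v_white, v_pial in zip(white_vertices, pial_vertices):
    # Points along the depth axis, shape (3, n_depths), in voxel coordinates.
    pts = v_white[:, None] * (1 - depth_fractions) + v_pial[:, None] * depth_fractions
    profile = map_coordinates(volume, pts, order=1)   # trilinear interpolation
    print(np.round(profile, 3))
```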

20.
In recent years, the deep web has become extremely popular. Like any other data source, data mining on the deep web can produce important insights or summaries of results. However, data mining on the deep web is challenging because the databases cannot be accessed directly, and therefore, data mining must be performed by sampling the datasets. The samples, in turn, can only be obtained by querying deep web databases with specific inputs. In this paper, we target two related data mining problems, association mining and differential rule mining. These are proposed to extract high-level summaries of the differences in data provided by different deep web data sources in the same domain. We develop stratified sampling methods to perform these mining tasks on a deep web source. Our contributions include a novel greedy stratification approach, which recursively processes the query space of a deep web data source and considers both the estimation error and the sampling costs. We have also developed an optimized sample allocation method that integrates estimation error and sampling costs. Our experimental results show that our algorithms effectively and consistently reduce sampling costs, compared with a stratified sampling method that only considers estimation error. In addition, compared with simple random sampling, our algorithm has higher sampling accuracy and lower sampling costs.
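For a flavor of how estimation error and sampling cost can be integrated in a stratified design, the classical cost-aware optimal allocation, n_h ∝ N_h·S_h/√c_h, is sketched below with made-up strata. This is the textbook (Cochran-style) formula, not the paper's greedy stratification algorithm:

```python
import numpy as np

# Sketch: cost-aware optimal (Neyman-type) allocation for stratified sampling,
#   n_h ∝ N_h * S_h / sqrt(c_h),
# balancing estimation error (within-stratum std dev S_h) against per-query
# cost c_h under a total budget. Textbook formula with made-up strata; not the
# paper's greedy stratification algorithm.
N_h = np.array([50_000, 30_000, 20_000])   # stratum sizes (query-space partitions)
S_h = np.array([5.0, 20.0, 80.0])          # within-stratum standard deviations
c_h = np.array([1.0, 2.0, 8.0])            # per-sample query cost in each stratum
budget = 4_000                             # total sampling cost allowed

n_h = budget * (N_h * S_h / np.sqrt(c_h)) / np.sum(N_h * S_h * np.sqrt(c_h))

print("samples per stratum:", np.round(n_h).astype(int))
print("total cost used:", float(np.sum(n_h * c_h)))   # equals the budget
```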
