Serum peptide profiling by MS is an emerging approach for disease diagnosis and biomarker discovery. A magnetic bead-based method for off-line serum peptide capture coupled to MALDI-TOF-MS has recently been introduced; however, its reagents are not available to the general scientific community. Here, we developed a protocol for serum peptide capture using novel magnetic C18 beads and automated the procedure on a high-throughput magnetic particle processor. We investigated bead equilibration, peptide binding and peptide elution conditions, and evaluated the method in terms of peak counts and reproducibility of ion intensities in control serum. Overall, the DynaBead-RPC18-based serum sample processing protocol reported here is reproducible, robust and allows for the detection of ~200 peptides at m/z 800–4000 in serum that was allowed to clot for 1 h. The average intra-experiment %CVs of normalized ion intensities for crude serum and for 0.5% TFA/0.15% n-octyl glucoside-treated serum were 12% (range 2–38%) and 10% (3–21%), respectively, and the inter-experiment %CVs were 24% (10–53%) and 31% (10–59%). Importantly, this method can be used for serum peptide profiling by anyone in possession of a MALDI-TOF instrument. In conjunction with the KingFisher® 96, the whole serum peptide capture procedure is high-throughput (~20 min per isolation of 96 samples in parallel), thereby facilitating large-scale disease profiling studies.
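The reproducibility figures above are coefficients of variation. As an illustrative sketch (not code from the paper; peak values are invented), the %CV of a peptide peak's normalized intensity across replicate isolations is simply the sample standard deviation expressed as a percentage of the mean:

```python
# Illustrative sketch: %CV of normalized ion intensities across replicates.
# The replicate values below are hypothetical, for demonstration only.
import statistics

def percent_cv(values):
    """%CV = (sample standard deviation / mean) * 100."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean * 100.0

# Normalized intensities of one peptide peak across four replicate isolations
replicate_intensities = [0.92, 1.05, 0.98, 1.01]
cv = percent_cv(replicate_intensities)
print(f"intra-experiment %CV: {cv:.1f}%")
```

A peak with an intra-experiment %CV near the reported 10–12% average would be considered typical under this protocol.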
International Journal on Software Tools for Technology Transfer - Simulation-based analyses are becoming increasingly vital for the development of cyber-physical systems. Co-simulation is one such...
In this article a novel approach to visual tracking called the harmony filter is presented. It is based on the Harmony Search algorithm, a derivative-free meta-heuristic optimisation algorithm inspired by the way musicians improvise new harmonies. The harmony filter models the target as a colour histogram and searches for the best estimated target location using the Bhattacharyya coefficient as a fitness metric. Experimental results show that the harmony filter can robustly track an arbitrary target in challenging conditions. We compare the speed and accuracy of the harmony filter with other popular tracking algorithms, including the particle filter and the unscented Kalman filter; the harmony filter proves faster and more accurate than both.
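The fitness metric named here, the Bhattacharyya coefficient, measures the overlap between two normalized histograms; a minimal sketch (not the authors' implementation) over a toy three-bin colour histogram:

```python
# Illustrative sketch: the Bhattacharyya coefficient between two normalized
# colour histograms, the fitness metric the harmony filter maximises when
# scoring candidate target locations. Histogram values are invented.
import math

def bhattacharyya(p, q):
    """Overlap of two discrete distributions; 1.0 means identical."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

target_hist    = [0.5, 0.3, 0.2]   # model histogram (sums to 1)
candidate_hist = [0.4, 0.4, 0.2]   # histogram at a candidate location
score = bhattacharyya(target_hist, candidate_hist)
print(f"Bhattacharyya coefficient: {score:.3f}")
```

Candidate locations whose histograms better match the target model score closer to 1.0, which is what the Harmony Search improvisation loop optimises.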
We introduce parallel symbolic algorithms for bisimulation minimisation, to combat the combinatorial state space explosion along three different paths. Bisimulation minimisation reduces a transition system to the smallest system with equivalent behaviour. We consider strong and branching bisimilarity for interactive Markov chains, which combine labelled transition systems and continuous-time Markov chains. Large state spaces can be represented concisely by symbolic techniques, based on binary decision diagrams. We present specialised BDD operations to compute the maximal bisimulation using signature-based partition refinement. We also study the symbolic representation of the quotient system and suggest an encoding based on representative states, rather than block numbers. Our implementation extends the parallel, shared-memory BDD library Sylvan, to obtain a significant speedup on multi-core machines. We propose the use of partial signatures and of disjunctively partitioned transition relations to increase the parallelisation opportunities. Our new parallel data structure for block assignments also increases scalability. We provide SigrefMC, a versatile tool that can be customised for bisimulation minimisation in various contexts. In particular, it supports models generated by the high-performance model checker LTSmin, providing access to specifications in multiple formalisms, including process algebra. The extensive experimental evaluation is based on various benchmarks from the literature. We demonstrate a speedup of up to 95\(\times \) for computing the maximal bisimulation on one processor. In addition, we find parallel speedups on a 48-core machine of another 17\(\times \) for partition refinement and 24\(\times \) for quotient computation. Our new encoding of the reduced state space leads to smaller BDD representations, with up to a 5162-fold reduction.
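Signature-based partition refinement, the core algorithm named above, can be illustrated in a toy explicit-state form (the paper's version is symbolic, over BDDs, and parallel; this sketch only shows the refinement idea). Each state's signature is the set of (label, successor-block) pairs, and states are regrouped by signature until a fixed point:

```python
# Toy, explicit-state sketch of signature-based partition refinement for
# strong bisimulation on a labelled transition system. The real algorithm
# in the paper operates symbolically on BDDs; states/labels here are invented.
def signature_refinement(states, transitions):
    """transitions: dict state -> set of (label, successor)."""
    block = {s: 0 for s in states}          # start with a single block
    while True:
        # signature of s: which blocks it can reach, and under which labels
        sigs = {s: frozenset((a, block[t]) for a, t in transitions[s])
                for s in states}
        # assign a fresh block number to each distinct signature
        numbering, new_block = {}, {}
        for s in states:
            new_block[s] = numbering.setdefault(sigs[s], len(numbering))
        if new_block == block:
            return block                    # stable: maximal bisimulation
        block = new_block

states = ["s0", "s1", "s2", "s3"]
transitions = {
    "s0": {("a", "s1")},
    "s1": {("b", "s3")},
    "s2": {("a", "s1")},                    # behaves exactly like s0
    "s3": set(),
}
blocks = signature_refinement(states, transitions)
print(blocks)   # s0 and s2 end up in the same block
```

The quotient system then needs one representative per block, which is where the paper's representative-state encoding comes in.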
Decision diagrams, such as binary decision diagrams, multi-terminal binary decision diagrams and multi-valued decision diagrams, play an important role in various fields. They are especially useful to represent the characteristic function of sets of states and transitions in symbolic model checking. Most implementations of decision diagrams do not parallelize the decision diagram operations. As performance gains in the current era now mostly come from parallel processing, an ongoing challenge is to develop data structures and algorithms for modern multi-core architectures. The decision diagram package Sylvan provides a contribution by implementing parallelized decision diagram operations, thus allowing sequential algorithms that use decision diagrams to exploit the power of multi-core machines. This paper discusses the design and implementation of Sylvan, especially an improvement to the lock-free unique table that uses bit arrays, the concurrent operation cache and the implementation of parallel garbage collection. We extend Sylvan with multi-terminal binary decision diagrams for integers, real numbers and rational numbers. This extension also allows for custom MTBDD leaves and operations, and we provide an example implementation of GMP rational numbers. Furthermore, we show how the provided framework can be integrated in existing tools to provide out-of-the-box parallel BDD algorithms, as well as support for the parallelization of higher-level algorithms. As a case study, we parallelize on-the-fly symbolic reachability in the model checking toolset LTSmin. We experimentally demonstrate that the parallelization of symbolic model checking for explicit-state modeling languages, as supported by LTSmin, scales well. We also show that improvements in the design of the unique table result in faster execution of on-the-fly symbolic reachability.
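The unique table and operation cache mentioned above are the two central tables of any BDD package. A minimal single-threaded sketch of both (Sylvan's real versions are lock-free and concurrent; this toy shows only the functional core, with invented node representation):

```python
# Toy single-threaded sketch of a BDD package's two core tables: a unique
# table (hash consing, so each node exists exactly once) and an operation
# cache (memoising apply). Not Sylvan's API; for illustration only.
TRUE, FALSE = "T", "F"
unique_table = {}   # (var, low, high) -> node
op_cache = {}       # ("and", u, v) -> node

def mk(var, low, high):
    if low == high:                 # redundant test: skip the node
        return low
    return unique_table.setdefault((var, low, high), (var, low, high))

def bdd_and(u, v):
    if u == FALSE or v == FALSE: return FALSE
    if u == TRUE: return v
    if v == TRUE: return u
    key = ("and", u, v)
    if key in op_cache: return op_cache[key]
    var = min(u[0], v[0])           # recurse on the topmost variable
    u0, u1 = (u[1], u[2]) if u[0] == var else (u, u)
    v0, v1 = (v[1], v[2]) if v[0] == var else (v, v)
    result = mk(var, bdd_and(u0, v0), bdd_and(u1, v1))
    op_cache[key] = result
    return result

x = mk(1, FALSE, TRUE)   # BDD for variable x1
y = mk(2, FALSE, TRUE)   # BDD for variable x2
xy = bdd_and(x, y)
print(xy)                # node testing x1, whose high branch tests x2
```

Parallelizing exactly these two tables, so that many `bdd_and`-style recursions can run as tasks on different cores, is the contribution the paper describes.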
In this article, we show how scheduling problems can be modelled in untimed process algebra by using special tick actions. A minimal-cost trace leading to a particular action is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find shortest schedules. Instantiating this scheme to μCRL, we profit from a richer specification language than timed model checkers usually offer, as well as from efficient distributed state space generators. We propose a variant of breadth-first search that visits all states between consecutive tick steps before moving to the next time slice, and we experimented with both a sequential and a distributed implementation of this algorithm. In addition, we experimented with beam search, which visits only parts of the search space, to find near-optimal solutions. Our approach is applied to find optimal schedules for test batches of a realistic clinical chemical analyser, which performs several kinds of tests on patient samples.
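The search variant described above, exhausting every state reachable by non-tick actions before following any tick, is equivalent to 0-1 breadth-first search: tick edges cost one time unit and all other edges are free. A hedged sketch on an invented toy scheduling problem (the paper's models are μCRL specifications, not Python):

```python
# Illustrative sketch of the tick-minimising search: non-tick successors go
# to the front of the deque, tick successors to the back, so each time slice
# is fully explored before the next tick. The toy model below is invented.
from collections import deque

def min_ticks(initial, successors, is_goal):
    """successors(state) -> iterable of (action, next_state)."""
    dist = {initial: 0}                       # fewest ticks seen so far
    dq = deque([initial])
    while dq:
        s = dq.popleft()
        if is_goal(s):
            return dist[s]
        for action, t in successors(s):
            cost = 1 if action == "tick" else 0
            if t not in dist or dist[s] + cost < dist[t]:
                dist[t] = dist[s] + cost
                (dq.append if cost else dq.appendleft)(t)
    return None                               # goal unreachable

# Toy model: 2 identical machines; 'assign' starts a waiting job (no time
# passes), 'tick' lets one time unit elapse and finishes all running jobs.
def successors(state):
    waiting, running = state
    succs = []
    if waiting > 0 and running < 2:
        succs.append(("assign", (waiting - 1, running + 1)))
    if running > 0:
        succs.append(("tick", (waiting, 0)))
    return succs

print(min_ticks((3, 0), successors, lambda s: s == (0, 0)))   # → 2
```

Three unit jobs on two machines need two ticks, which the search finds without any notion of real time in the model, exactly the point of the tick-action encoding.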
The integration of FPGA-based accelerators into a complete heterogeneous system is a challenging task faced by many researchers and engineers, especially now that FPGAs enjoy increasing popularity as implementation platforms for efficient, application-specific accelerators in domains such as signal processing, machine learning and intelligent storage. To lighten the burden of system integration for developers of accelerators, the open-source TaPaSCo framework presented in this work provides an automated toolflow for the construction of heterogeneous many-core architectures from custom processing elements, and a simple, uniform programming interface to utilize spatially distributed, parallel computation on FPGAs. TaPaSCo aims to increase the scalability and portability of FPGA designs through automated design space exploration, greatly simplifying the scaling of hardware designs and facilitating iterative growth and portability across FPGA devices and families. This work describes TaPaSCo with its primary design abstractions and shows how TaPaSCo addresses portability and extensibility of FPGA hardware designs for systems-on-chip. A study of successful projects using TaPaSCo shows its versatility and can serve as inspiration and reference for future users, with more details on the usage of TaPaSCo presented in an in-depth case study and a short overview of the workflow.
Dyslipidemia has been documented worldwide among human immunodeficiency virus (HIV)-infected individuals, and these changes are reminiscent of the metabolic syndrome (MetS). In South Africa, which has the highest number of HIV infections worldwide, HIV-1 subtype C is prevalent, while HIV-1 subtype B (genetically different from C) prevails in Europe and the United States. We aimed to evaluate whether HIV infection (subtype C) is associated with dyslipidemia, inflammation and the occurrence of the MetS in Africans. Three hundred newly diagnosed HIV-infected participants were compared to 300 uninfected controls matched for age, gender, body mass index and locality. MetS was defined according to the Adult Treatment Panel III (ATP III) and International Diabetes Federation (IDF) criteria. The HIV-infected group showed lower high density lipoprotein cholesterol (1.23 vs. 1.70 mmol/L) and low density lipoprotein cholesterol (2.60 vs. 2.80 mmol/L), and higher triglyceride (1.29 vs. 1.15 mmol/L), C-reactive protein (3.31 vs. 2.13 mg/L) and interleukin 6 (4.70 vs. 3.72 pg/L) levels, compared to the uninfected group. No difference in the prevalence of the MetS was seen between the two groups (ATP III, 15.2 vs. 11.5%; IDF, 21.1 vs. 22.6%). This study shows that HIV-1 subtype C is associated with dyslipidemia, but not with a higher prevalence of MetS, in never antiretroviral-treated HIV-infected Africans.
Abstract Transport and tramming operations on South African mines are an area of considerable accident risk. In the context of surface mining, 74 percent of such accidents were associated with ore transfer by haul truck and service vehicle operations. However, the extent to which haul road design and operation activities impact on the overall safety of transport operations in mining was previously unclear, as was the status of road design activities for the various types of mining encountered. This paper presents some findings from the Safety in Mines Research Advisory Committee research project OTH308, which examined the role of haul road design in transportation accidents. The objective of the research was to determine whether a relationship existed between haul road design, construction and maintenance practices and accidents. In the case of surface mines, this objective was addressed through an assessment of transportation accidents and incidents, together with an evaluation of formal haul road design activities and associated safety-critical defects and accident potentials for the various classes of surface mines studied. It was concluded that, whilst the overall contribution to transportation accidents from inadequate road design alone was small, low-tonnage surface mining operations exhibited higher accident frequency rates than the industry average. Furthermore, there was clear evidence that there was no formal recognition of road design and management in transportation management, especially in the case of smaller surface mining and quarrying operations. To improve awareness of the role of good design in reducing transportation accidents, a mine haul road safety audit system was developed. This audit system is recommended as a means of reducing transportation accidents through the structured recognition and assessment of haulage hazards and the application of optimally safe designs for mine haul roads.