Similar Documents
10 similar documents retrieved (search time: 62 ms)
1.
Microfluidic systems are increasingly popular for rapid, low-cost enzyme assays and other biochemical analyses. In this study, reduced-order models (ROMs) were developed for the optimization of enzymatic assays performed in a microchip. The model enzyme assay used β-galactosidase (β-Gal), which catalyzes the conversion of resorufin β-d-galactopyranoside (RBG) to a fluorescent product, as previously reported by Hadd et al. (Anal Chem 69(17):3407–3412, 1997). The assay was implemented in a microfluidic device as a continuous-flow system controlled electrokinetically, with fluorescence detection. The results from the ROM agreed well with both computational fluid dynamics (CFD) simulations and experimental values. While the CFD model allowed assessment of local transport phenomena, the ROM approach significantly reduced CPU time. The operational parameters of the assay were optimized using the validated ROM to significantly reduce reagent consumption and total biochip assay time. After optimization, the analysis time would be reduced from 20 to 5.25 min, which would also result in a 50% reduction in reagent consumption.
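The ROM idea can be illustrated with a minimal sketch: the chip's reaction channel is collapsed to a single well-mixed compartment governed by Michaelis–Menten kinetics, so parameter sweeps cost almost nothing compared with a full CFD run. All kinetic parameter values below are invented for illustration and are not the values from Hadd et al.:

```python
# Minimal reduced-order model sketch (illustrative parameters, not from the paper).
def simulate_rom(s0, e0, kcat, km, t_end, dt=0.01):
    """Euler integration of product formation for initial substrate s0 (same units throughout)."""
    s = s0
    for _ in range(int(t_end / dt)):
        rate = kcat * e0 * s / (km + s)   # Michaelis-Menten reaction rate
        s = max(s - rate * dt, 0.0)       # substrate depletion
    return s0 - s                          # product formed

# Shortening the assay (20 min -> 5.25 min) trades product signal for reagent savings:
p_long = simulate_rom(s0=10.0, e0=0.1, kcat=5.0, km=2.0, t_end=20.0)
p_short = simulate_rom(s0=10.0, e0=0.1, kcat=5.0, km=2.0, t_end=5.25)
```

Because such a model evaluates in milliseconds, it can be embedded in an optimization loop over flow rates, concentrations, and assay time, which is the role the validated ROM plays in the study.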

2.
Verification and optimization of a PLC control schedule
We report on the use of model checking techniques for both the verification of a process control program and the derivation of optimal control schedules. Most of this work was carried out as part of a case study for the EU VHS project (Verification of Hybrid Systems), in which the program for a Programmable Logic Controller (PLC) of an experimental chemical plant had to be designed and verified. The original intention of our approach was to see how much could be achieved using the standard model checking environment of SPIN/Promela. As the symbolic calculations of real-time model checkers can be quite expensive, it is interesting to exploit the efficiency of established non-real-time model checkers like SPIN in those cases where promising work-arounds exist. In our case we handled the relevant real-time properties of the PLC controller using a time-abstraction technique; for the scheduling we implemented in Promela a so-called variable time advance procedure. To compare and interpret the results we carried out the same case study with the real-time model checker Uppaal, enhanced with facilities for cost-guided state space exploration. Both approaches proved sufficiently powerful to verify the design of the controller and/or derive (time-)optimal schedules within reasonable time and space requirements. Published online: 2 October 2002. The work reported here was carried out while the second and third authors were employed by the Computer Science Department of the University of Nijmegen, Netherlands. The second author was supported by an NWO postdoc grant, the third author by an NWO PhD grant, and both were supported by the EU LTR project VHS (Project No. 26270).
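The "variable time advance" trick can be sketched outside of Promela: instead of advancing a model's clock one tick at a time, jump it directly to the timestamp of the next scheduled event, which collapses long idle stretches into a single transition. The single-resource job list below is invented for illustration:

```python
import heapq

# Illustrative variable-time-advance scheduler: the clock jumps to the next
# event instead of ticking, as a naive timed model would.
def run_schedule(jobs):
    """jobs: list of (release_time, duration) on one resource.
    Returns (finish_times in processing order, makespan)."""
    events = list(jobs)
    heapq.heapify(events)                 # process jobs in release-time order
    clock, finish = 0.0, []
    while events:
        release, duration = heapq.heappop(events)
        clock = max(clock, release)       # variable advance: skip idle time in one step
        clock += duration                 # occupy the resource for the job's duration
        finish.append(clock)
    return finish, clock

finish, makespan = run_schedule([(0, 3), (10, 2), (4, 1)])
```

A model checker exploring such a transition system only visits states at event boundaries, which is why a non-real-time checker like SPIN can still derive time-optimal schedules.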

3.
In this paper, we present a method called MODEEP (Motion-based Object DEtection and Estimation of Pose) to detect independently moving objects (IMOs) in forward-looking infrared (FLIR) image sequences taken from an airborne, moving platform. Ego-motion effects are removed through a robust multi-scale affine image registration process. Thereafter, areas with residual motion indicate potential object activity. These areas are detected, refined and selected using a Bayesian classifier. The resulting regions are clustered into pairs such that each pair represents one object's front and rear end. Using motion and scene knowledge, we estimate object pose and establish a region of interest (ROI) for each pair. Edge elements within each ROI are used to segment the convex cover containing the IMO. We show detailed results on real, complex, cluttered and noisy sequences. Moreover, we outline the integration of our fast and robust system into a comprehensive automatic target recognition (ATR) and action classification system.
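The ego-motion removal step can be sketched in a simplified form: fit one global affine transform to background point correspondences, then flag points whose residual motion exceeds a threshold as candidate IMOs. The correspondences, transform, and threshold below are synthetic and only illustrate the principle (the paper's registration is multi-scale and robust, not plain least squares):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map src -> dst; src, dst are Nx2 arrays.
    Returns a 3x2 parameter matrix (linear part stacked over the offset)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params

def residual_motion(src, dst, params):
    X = np.hstack([src, np.ones((len(src), 1))])
    return np.linalg.norm(dst - X @ params, axis=1)  # per-point residual magnitude

rng = np.random.default_rng(0)
bg = rng.uniform(0, 100, (30, 2))                   # synthetic background points
A = np.array([[1.01, 0.02], [-0.02, 1.01]])         # camera-induced (ego) motion
dst = bg @ A.T + np.array([2.0, -1.0])
dst[0] += np.array([8.0, 8.0])                      # one independently moving point
params = fit_affine(bg, dst)
res = residual_motion(bg, dst, params)
imo = np.flatnonzero(res > 3.0)                     # assumed residual threshold
```

In MODEEP the analogous residual-motion areas are then refined with a Bayesian classifier rather than a fixed threshold.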

4.
This paper describes the design and implementation of a machine vision system, CATALOG, for detection and classification of some important internal defects in hardwood logs via analysis of computed axial tomography (CT or CAT) images. Defect identification and classification in CATALOG consists of two phases. The first phase comprises the segmentation of a single CT image slice, which results in the extraction of 2D defect-like regions from the slice. The second phase comprises the correlation of the 2D defect-like regions across CT image slices in order to establish 3D support. The segmentation algorithm for a single CT image is a complex form of multiple-value thresholding that exploits both the prior knowledge of the wood structure within the log and the gray-level characteristics of the image. The algorithm for extraction of 2D defect-like regions in a single CT image first locates the pith of the log cross section, groups the pixels in the segmented image on the basis of their connectivity, and classifies each 2D region as either defect-like or defect-free using shape, orientation and morphological features. Each 2D defect-like region is classified as a defect or non-defect via correlation with corresponding 2D defect-like regions in neighboring CT image slices. The 2D defect-like regions with adequate 3D support are labeled as true defects. The current version of CATALOG is capable of 3D reconstruction and rendering of the log and its internal defects from the individual CT image slices. CATALOG can also simulate and render key machining operations, such as sawing and veneering, on the 3D reconstructions of the logs. The current version of CATALOG is intended as a decision aid for sawyers and machinists in lumber mills and also as an interactive training tool for novice sawyers and machinists. Received: 1 August 1997 / Accepted: 25 August 1999
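The first phase can be sketched as two steps on a toy slice: classify each pixel's gray value into bands (multi-value thresholding), then group defect-like pixels by connectivity into candidate regions. The density bands and the tiny image below are invented; real CT gray levels depend on scanner and species, and CATALOG's thresholding additionally uses wood-structure priors:

```python
# Toy sketch of CATALOG's first phase (invented thresholds and image).
WOOD_RANGE = (40, 80)   # assumed gray-level band for sound wood; denser = defect-like

def classify(pixel):
    lo, hi = WOOD_RANGE
    return "air" if pixel < lo else ("wood" if pixel <= hi else "defect-like")

def defect_regions(img):
    """img: list of rows of gray values. Returns 4-connected groups of defect-like pixels."""
    h, w = len(img), len(img[0])
    mask = {(r, c) for r in range(h) for c in range(w)
            if classify(img[r][c]) == "defect-like"}
    regions, seen = [], set()
    for start in sorted(mask):
        if start in seen:
            continue
        stack, region = [start], set()
        while stack:                       # flood fill one connected component
            r, c = stack.pop()
            if (r, c) in seen or (r, c) not in mask:
                continue
            seen.add((r, c)); region.add((r, c))
            stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        regions.append(region)
    return regions

slice_img = [
    [10, 50, 50, 10],
    [50, 90, 95, 50],
    [50, 50, 50, 92],
]
regions = defect_regions(slice_img)   # two candidate regions in this toy slice
```

The second phase would then keep only regions that overlap corresponding regions in neighboring slices (3D support).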

5.
There is a well-recognized need for low-cost biodetection technologies for resource-poor settings with minimal medical infrastructure. Lab-on-a-chip (LOC) technology has the ability to perform biological assays in such settings. The aim of this work is to develop a low-cost, high-throughput detection system for the analysis of 96 samples simultaneously outside the laboratory setting. To achieve this aim, several biosensing elements were combined: a syringe-operated ELISA lab-on-a-chip (ELISA-LOC), which integrates a fluid delivery system into a miniature 96-well plate; a simplified non-enzymatic reporter and detection approach using a gold nanoparticle–antibody conjugate as the secondary antibody and silver enhancement of the visual signal; and carbon nanotubes (CNT) to increase primary antibody immobilization and improve assay sensitivity. Combined, these elements obviate the need for an ELISA washer, electrical power for operation, and a sophisticated detector. We demonstrate the use of the device for detection of Staphylococcal enterotoxin B, a major foodborne toxin, using three modes of detection: visual inspection, a CCD camera, and a document scanner. With visual detection or a document scanner, the limit of detection (LOD) was 0.5 ng/ml; with precise quantitation of the signal by densitometry using a CCD camera, the LOD was 0.1 ng/ml. The observed sensitivity is in the same range as laboratory-based ELISA testing. The point-of-care device can analyze 96 samples simultaneously, permitting high-throughput diagnostics in the field and in resource-poor areas without ready access to laboratory facilities or electricity.
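One common way such an LOD is read off densitometry data is to take the lowest standard whose mean signal exceeds the blank mean by three standard deviations. The abstract does not state the paper's exact criterion, and the replicate readings below are invented purely to illustrate the convention:

```python
import statistics

# Invented blank and standard replicates (arbitrary densitometry units).
blank = [0.051, 0.049, 0.050, 0.052]
standards = {              # SEB concentration (ng/ml) -> replicate signals
    0.05: [0.052, 0.054, 0.051],
    0.1:  [0.080, 0.078, 0.083],
    0.5:  [0.160, 0.158, 0.163],
}

# Conventional cutoff: blank mean + 3 * blank standard deviation.
cutoff = statistics.mean(blank) + 3 * statistics.stdev(blank)

def lod(standards, cutoff):
    """Lowest standard concentration whose mean signal clears the cutoff."""
    for conc in sorted(standards):
        if statistics.mean(standards[conc]) > cutoff:
            return conc
    return None

detected_lod = lod(standards, cutoff)
```

With these made-up numbers the densitometric LOD lands at 0.1 ng/ml; a coarser readout (visual or scanner) with more signal noise would push the cutoff, and hence the LOD, higher.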

6.
It is important to model and predict residents' behaviors in an emergency in order to establish good evacuation schemes during disasters. This research presents modeling and simulation of residents' behaviors in a nuclear disaster, focusing on residents' decision-making processes: information acquisition, situation assessment, and action selection. We selected qualitative causal relations between residents' behaviors and the attributes of information, humans, and situations from 57 reviews of 12 past disaster cases. We then constructed a conceptual model of residents' behaviors in a conventional stimulus–organism–response (S–O–R) model of human information processing. We adopted probabilistic reasoning (a Bayesian belief network) to simulate the situation assessment of a resident in a nuclear disaster. We carried out a simulation using the announcement log of the JCO criticality accident and confirmed that the model could reproduce the tendencies in residents' behaviors observed in the actual disaster and can reflect various features of the conceptual model.
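The situation-assessment step can be sketched as a two-node Bayesian update: a binary "serious accident" belief conditioned on whether an official announcement has been heard. The probabilities below are illustrative and are not taken from the JCO study:

```python
# Two-node Bayesian-network sketch of a resident's situation assessment.
# All probabilities are invented for illustration.
P_SERIOUS = 0.1                    # prior belief that the accident is serious
P_ANNOUNCE_GIVEN_SERIOUS = 0.9     # announcement likely if serious
P_ANNOUNCE_GIVEN_NOT = 0.2         # routine / precautionary announcement otherwise

def posterior_serious(announcement_heard):
    """Bayes update of the belief after observing the announcement node."""
    if announcement_heard:
        num = P_ANNOUNCE_GIVEN_SERIOUS * P_SERIOUS
        den = num + P_ANNOUNCE_GIVEN_NOT * (1 - P_SERIOUS)
    else:
        num = (1 - P_ANNOUNCE_GIVEN_SERIOUS) * P_SERIOUS
        den = num + (1 - P_ANNOUNCE_GIVEN_NOT) * (1 - P_SERIOUS)
    return num / den

belief_after = posterior_serious(True)     # belief rises after an announcement
belief_without = posterior_serious(False)  # belief falls below the prior
```

In the full model, such posteriors over the situation feed the action-selection stage of the S–O–R pipeline (e.g. deciding to evacuate or shelter).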

7.
Location is one of the most important elements of context in ubiquitous computing. In this paper we describe a location model, a spatial-aware communication model, and an implementation of the models that exploit location for processing and communicating context. The location model presented describes a location tree, which contains human-readable semantic and geometric information about an organisation, and a structure to describe the current location of an object or a context. The proposed system is designed to work not only on more powerful devices like handhelds, but also on small computer systems that are embedded into everyday artefacts (making them digital artefacts). Model and design decisions were made on the basis of experiences from three prototype setups with several applications, which we built from 1998 to 2002. While running these prototypes we collected experiences from designers, implementers and users and formulated them as guidelines in this paper. All the prototype applications make heavy use of location information for providing their functionality. We found that location is not only useful as information for the application but also important for communicating context. In this paper we introduce the concept of spatial-aware communication, where data is communicated based on the relative location of digital artefacts rather than on their identity. Correspondence to: Michael Beigl, Telecooperation Office (TecO), University of Karlsruhe, Vincenz-Prießnitz-Str. 1, D-76131 Karlsruhe, Germany. Email: michael@teco.edu
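A location tree and spatial-aware addressing can be sketched as follows: nodes carry human-readable semantic names, artefacts attach to nodes, and a message is delivered to "everything below this node" rather than to named recipients. The building layout and artefact names are invented for illustration:

```python
# Sketch of a location tree with spatial-aware delivery (invented layout).
class LocationNode:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children, self.objects = [], []
        if parent:
            parent.children.append(self)

    def path(self):
        """Human-readable semantic path from the root to this node."""
        node, parts = self, []
        while node:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

    def all_objects(self):
        """All artefacts located at or below this node."""
        found = list(self.objects)
        for child in self.children:
            found += child.all_objects()
        return found

building = LocationNode("TecO")
floor1 = LocationNode("floor-1", building)
room101 = LocationNode("room-101", floor1)
room102 = LocationNode("room-102", floor1)
room101.objects.append("sensor-badge-7")
room102.objects.append("smart-doorplate")

# Spatial-aware send: address by location, not identity.
recipients = floor1.all_objects()
```

Addressing by subtree is what lets tiny embedded artefacts participate: they only need to know where they are, not who else exists.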

8.
In many applications, especially from the business domain, the requirements specification mainly deals with use cases and class models. Unfortunately, these models are based on different modelling techniques and aim at different levels of abstraction, which induces serious consistency and completeness problems. To overcome these deficiencies, we refine activity graphs to meet the needs for a suitable modelling element for use case behaviour. The refinement in particular supports the proper coupling of use cases via activity graphs and the class model. The granularity and semantics of our approach allow for a seamless, traceable transition of use cases to the class model and for the verification of the class model against the use case model. The validation of the use case model and parts of the class model is supported as well. Experience from several applications has shown that the investment in specification, validation and verification not only pays off during system and acceptance testing but also significantly improves the quality of the final product.
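One flavor of such a verification of the class model against the use case model can be sketched as a simple traceability check: every action named in a use case's activity graph must map to an operation of some class. The use case and class model below are invented, and the real approach works on full activity-graph semantics, not a flat action list:

```python
# Toy consistency check between a use case's actions and a class model.
class_model = {                    # invented class model: class -> operations
    "Order": {"create", "add_item", "confirm"},
    "Invoice": {"issue"},
}

use_case_actions = [               # invented actions from a "place order" activity graph
    ("Order", "create"),
    ("Order", "add_item"),
    ("Invoice", "issue"),
    ("Invoice", "cancel"),         # deliberately missing from the class model
]

def unmatched_actions(actions, model):
    """Actions that have no corresponding operation in the class model."""
    return [(cls, op) for cls, op in actions if op not in model.get(cls, set())]

missing = unmatched_actions(use_case_actions, class_model)
```

Each unmatched action is exactly the kind of completeness defect the seamless, traceable transition is meant to surface before system testing.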

9.
10.
In this study, a new framework of vision-based estimation is developed using data fusion schemes to obtain previewed road curvatures and vehicular motion states from the scene viewed by an in-vehicle camera. The previewed curvatures are necessary for the guidance of an automatically steered vehicle, and the desired vehicular motion variables, including lateral deviation, heading angle, yaw rate, and sideslip angle, are also required for proper control of the vehicle's lateral motion via steering. In this framework, physical relationships of previewed curvatures among consecutive images, motion variables in terms of image features searched at various levels in the image plane, and dynamic correlations among vehicular motion variables are derived as bases of data fusion to enhance estimation accuracy. The vision-based measurement errors are analyzed to determine the fusion gains based on a Kalman filter, such that measurements from the image plane and predictions from physical models can be properly integrated to obtain reliable estimates. Off-line experiments using real road scenes were performed to verify the whole framework for image sensing.
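The fusion step can be illustrated in one dimension: a model prediction of, say, lateral deviation is blended with a noisy vision measurement using a gain computed from their variances, which is the core of the Kalman-filter weighting described above. The variance values below are illustrative, not from the paper:

```python
# One-dimensional Kalman update: blend model prediction with vision measurement.
def kalman_update(x_pred, p_pred, z, r):
    """x_pred/p_pred: predicted state and its variance; z/r: measurement and its variance."""
    k = p_pred / (p_pred + r)        # fusion (Kalman) gain: trusts the less noisy source
    x = x_pred + k * (z - x_pred)    # corrected estimate
    p = (1 - k) * p_pred             # reduced uncertainty after fusion
    return x, p, k

# Vehicle-model prediction of lateral deviation (m) vs. image-plane measurement:
x, p, k = kalman_update(x_pred=0.30, p_pred=0.04, z=0.50, r=0.01)
```

Here the vision measurement is the more reliable source (smaller variance), so the gain pulls the estimate strongly toward it; analyzing the vision errors to set r is exactly how the framework determines its fusion gains.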


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号