Similar Literature
Found 20 similar records (search time: 721 ms)
1.
Abstract: Although data mining and knowledge discovery techniques have recently been used to diagnose human disease, little research has been conducted on disease diagnostic modelling using human gene information. Furthermore, to our knowledge, no study has reported diagnosis models using single nucleotide polymorphism (SNP) information. A disease diagnosis model built with data mining techniques and SNP information should prove promising from a practical perspective as more information on human genes becomes available. Data mining and knowledge discovery techniques can be put to practical use in detecting human disease, since haplotype analysis using high-density SNP markers has attracted great attention for evaluating human genes related to various diseases. This paper explores how data mining and knowledge discovery can be applied to medical informatics using human gene information. As an example, we applied case-based reasoning to a cancer detection problem using human gene information and SNP analysis, because case-based reasoning has been applied in medicine relatively less often than other data mining techniques. We propose a modified case-based reasoning method, appropriate for associated categorical variables, for detecting gastric cancer.
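The abstract does not detail the modified case-based reasoning method. A minimal sketch of how case-based reasoning can handle categorical SNP genotypes, using a simple matching coefficient and k-nearest-case voting (both are assumptions for illustration, not the paper's actual method), might look like:

```python
from collections import Counter

def similarity(case_a, case_b):
    """Simple matching coefficient for categorical SNP genotype vectors."""
    matches = sum(1 for a, b in zip(case_a, case_b) if a == b)
    return matches / len(case_a)

def cbr_classify(query, case_base, k=3):
    """Classify a query genotype by majority vote of its k most similar cases."""
    ranked = sorted(case_base, key=lambda c: similarity(query, c[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy case base: (genotype vector, diagnosis) -- illustrative data only
cases = [
    (("AA", "AG", "GG"), "cancer"),
    (("AA", "AG", "AG"), "cancer"),
    (("GG", "GG", "AA"), "normal"),
    (("GG", "AG", "AA"), "normal"),
]
print(cbr_classify(("AA", "GG", "GG"), cases))  # → cancer
```

A real variant for SNP data would typically replace the plain matching coefficient with a weighted similarity that reflects the association strength of each locus.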

2.
Microelectronic chip-based systems are available for a wide variety of applications. Many of these systems rely on non-integrated optical detection schemes to collect data from the chips. A magnetoresistive detection format, however, can be completely integrated. This paper presents some basic concepts for optimizing micron-sized magnetoresistive sensors for single nucleotide polymorphism (SNP) analysis and DNA diagnostics. Magnetoresistive sensors are nano-fabricated thin-film resistors whose resistance changes as a function of magnetic field. The magnetic DNA assay replaces the external optical reader apparatus with an integrated magnetoresistive sensor at each “pixel” of the array. The external light source can be replaced by an integrated magnetic field generation strap or by a simple external coil. Magnetoresistive pixel sizes could presently be ~3 microns on a side, decreasing to ~100 nm with technological improvements. It is shown that, taking reasonable values for critical parameters, a signal-to-noise ratio of 10,000:1 is achievable using 10 nm paramagnetic beads as the assay label. As an early demonstration of the feasibility of this system, data have been collected using NVE's (non-optimized) magnetoresistive sensors to easily detect single micron-sized magnetic beads. NVE is presently working on one-million-bit arrays of magnetoresistive sensors that are being fabricated into magnetoresistive random access memory (MRAM) chips. These arrays share many of the features required for the magnetoresistive DNA assay, including sub-micron bit size and single-bit addressability.

3.
This review assesses the quality of the data acquired over a 13-week period from a High-Content Analysis screening project that used 297 unique cell lines. This article also evaluates the proficiency of a “tipless” (i.e., one that does not use disposable tips) full-automation design used for this project, which prioritizes intralab system mobility and system configuration mutability. The request to assay a large number of cell lines with poorly characterized growth rates led us to devise a proprietary MDS PharmaServices, Inc. algorithm to select the proper cell plating density for each cell line. The performance metrics include coefficients of variation (CVs) of controls for the cell plating data and data set mean CVs for assessing replicate propinquity (i.e., how close the replicates are to each other). The performance of the automation system and our algorithm for this project produced data of superior quality.

4.
This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security, and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data, on the order of approximately 5 terabytes, without any critical incidents of system breakdown or data loss. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies.
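The abstract mentions de-identification of identifying data behind a multi-layered role/right system but gives no implementation details. A minimal, hypothetical sketch of keyed pseudonymization and record splitting (the field names, key handling, and HMAC scheme are all assumptions for illustration, not the CDM's actual design) could look like:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-vault-managed-key"  # assumption: key held outside the research layer

def pseudonymize(participant_id: str) -> str:
    """One-way keyed pseudonym so identifying IDs never reach the research layer."""
    digest = hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def split_record(record: dict, identifying=("name", "participant_id")):
    """Separate identifying fields from the medical payload (multi-layer access)."""
    identity = {k: record[k] for k in identifying}
    payload = {k: v for k, v in record.items() if k not in identifying}
    payload["pseudonym"] = pseudonymize(record["participant_id"])
    return identity, payload

identity, payload = split_record(
    {"participant_id": "P-0001", "name": "Jane Doe", "systolic_bp": 128})
print(payload)  # research layer sees only the pseudonym and measurements
```

In a production system of this kind, the identity table and the key would live in a separately access-controlled trust center, with re-identification possible only through that layer.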

5.
To take into account the complex genomic distribution of SNP variations when identifying chromosomal regions with significant SNP effects, a single nucleotide polymorphism (SNP) association scan statistic was developed. To address the computational needs of genome-wide association (GWA) studies, a fast Java application was developed and implemented that combines single-locus SNP tests with a scan statistic for identifying chromosomal regions containing significant clusters of significant SNP effects. To illustrate this application, SNP associations were analyzed in a pharmacogenomic study of the blood-pressure-lowering effect of thiazide diuretics (N=195) using the Affymetrix Human Mapping 100K Set. 55,335 tagSNPs (pair-wise linkage disequilibrium R2<0.5) were selected to reduce the frequency correlation between SNPs. A typical workstation can complete the whole-genome scan, including 10,000 permutation tests, within 3 h. The most significant regions are located on chromosomes 3, 6, 13, and 16, two of which contain candidate genes that may be involved in the underlying drug-response mechanism. The computational performance of ChromoScan-GWA and its scalability were tested with up to 1,000,000 SNPs and up to 4,000 subjects. Using 10,000 permutations, the computation time grew linearly in these datasets. This scan statistic application provides a robust statistical and computational foundation for identifying genomic regions associated with disease and provides a method to compare GWA results even across different platforms.
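The paper's scan statistic is implemented as a Java application; as an illustration only, a simplified sliding-window scan with a permutation-based threshold (a sketch under assumed simplifications, not ChromoScan-GWA's actual algorithm) can be written as:

```python
import random

def scan_counts(flags, window):
    """Count of significant SNPs in each sliding window of fixed width."""
    return [sum(flags[i:i + window]) for i in range(len(flags) - window + 1)]

def permutation_threshold(flags, window, n_perm=1000, alpha=0.05, seed=0):
    """Genome-wide threshold: the (1 - alpha) quantile of the maximum windowed
    count when the positions of significant SNPs are shuffled."""
    rng = random.Random(seed)
    shuffled = list(flags)
    maxima = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        maxima.append(max(scan_counts(shuffled, window)))
    maxima.sort()
    return maxima[min(int((1 - alpha) * n_perm), n_perm - 1)]

# Toy data: 1 = single-locus test significant at the nominal level
flags = [0] * 40 + [1, 1, 0, 1, 1] + [0] * 40
flags[5] = flags[70] = 1          # scattered isolated hits
window = 5
observed = max(scan_counts(flags, window))
print(observed, permutation_threshold(flags, window))
```

A windowed count that exceeds the permutation threshold marks a candidate region; the real application additionally handles physical SNP positions and variable window definitions.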

6.
Design of a Data Acquisition Device for an MIMU Positioning and Orientation System   (Cited by: 12; self-citations: 0; citations by others: 12)
This paper describes a data acquisition device for a micro inertial measurement unit (MIMU) positioning and orientation system. The device consists of a data acquisition module, a data processing module, and a data download module. The acquisition module samples the MIMU output signals using a 16-bit high-precision AD676 converter and a floating-point amplifier. The processing module adopts a master-slave, tightly coupled dual-processor architecture, with a digital signal processor (DSP) serving as the data processor and a single-chip microcontroller as the I/O interface processor. The download module adds a communication-monitoring processor to improve the reliability of data download transfers. The software design for data acquisition is also described. The system is structurally simple, meets the many performance requirements that the positioning and orientation system places on data acquisition and processing, and offers a new approach to miniaturizing strapdown positioning and orientation systems.

7.
This investigation focuses upon the information utilization process; i.e., what it means for a decision-maker to utilize an information system. A definition of utilization is adopted stating that an information system is utilized if its output is included in the Human Information Processing system of a decision-maker. The definition is further refined by segmenting utilization into two distinct subsystems, a Human Information Processing system and a Data Selection system. As an initial investigation of the utilization process, we specifically studied the relationship of the Data Selection system to the Human Information Processing system. The primary aim was to determine whether factors internal to a decision-maker may affect the data selection process. “Cognitive style” was chosen as representative of internal factors. Measures of data selection and cognitive style were created, with particular emphasis placed on the development of an instrument to measure cognitive style. An experiment was designed to investigate the effect of this factor on data selection. The results of this experiment indicate a strong relationship between cognitive style and data selection.

8.
Liquid-handling platforms often do not provide a mechanism for collecting the weight data needed for instrument qualification and sample transfer confirmation. This paper discusses the development, implementation, and application of a system that facilitates the liquid-handling confirmation required for Good Laboratory Practice (GLP) compliance and provides an avenue to track the amount of sample transferred for extraction.

The Balance Data Collector (BDC) system was designed as a flexible, generic balance tool to be used with Tecan Gemini© and Packard WinPrep® software. The BDC system provides a user interface for balance configuration, a pointer to a file for storing weight data, and an external interface through command-line arguments. BDC is currently used for instrument qualification and sample collection in bioanalytical applications. Instrument qualification includes refining instrument liquid classes and verifying pipetting accuracy and precision. For bioanalytical applications using 96-well plates, BDC collects the individual aliquot weights of samples transferred during an assay.

The BDC system gives the user the capability to control a balance via liquid-handling programming platforms such as Tecan Gemini© and Packard WinPrep®. Integration of liquid-handling platforms and BDC reduces the time the scientist must spend recording weight data needed for GLP compliance and can be used to increase the accuracy of calculated sample concentrations.

9.
The aim of the paper is to give a formal compositional semantics for spiking neural P systems (SNP systems) by following the Structural Operational Semantics (SOS) approach. A process algebra is introduced whose terms represent SNP systems. The algebra is equipped with a semantics, given as a labelled transition system. This semantics allows notions of behavioural equivalences over SNP systems to be studied. Some known equivalences are considered and their definition based on the given semantics is provided. Such equivalences are proved to be congruences.

10.
With the development of high-throughput single nucleotide polymorphism (SNP) detection technologies, laboratories worldwide have accumulated large volumes of SNP data, but no comprehensive, integrated SNP database currently exists. Applications of SNPs in disease-gene discovery, forensic identification, and personalized medicine have attracted great attention, making an integrated human SNP database necessary. The integration requires large-scale validation of SNP loci, which was completed efficiently on a cluster system using EasyCluster, a self-developed, robust, user-friendly bioinformatics cluster-computing toolkit.

11.
A database management system dedicated to plant-layout data, intended to be useful to the practicing industrial engineer, is presented. The data items in the database for this microcomputer software represent only a minimal subset of the data one would want for a proper industrial facility design; however, they were felt to be a good starting point for a microcomputer database system that would aid a layout engineer. The Knowledgeman software (Micro Data Base Systems, Inc.) was used for this project; it consists of a relational database and an electronic spreadsheet. Two plant layout examples are presented to demonstrate the use of the software.

12.
The network data acquisition system is a large system that captures massive network data in real time for the monitoring and analysis of network security events; each user accesses it through a client system. Such a client system was developed using Visual C++ as the development environment. The client builds and packages monitoring conditions and submits them to the network data acquisition system, then receives, parses, and stores the monitoring data for the user to process, thereby enabling the monitoring and analysis of network security events.

13.
14.

15.
王辉, 冯志勇, 陈炬, 陈世展. 《计算机应用》 (Journal of Computer Applications), 2010, 30(8): 2170–2172
Building on the service network model, a semantic-relation-based organizational structure for large-scale Web services, a service network system platform was constructed. It provides descriptions of Web services and of the relations between them, making automated processing of Web services possible. A dynamically optimizable service network kernel was designed, implementing data storage, network optimization, and relation mining for the service network; a full set of core tools, including system creation and maintenance tools, service growth tools, and visualization tools, was designed and implemented; and basic methods and workflows for constructing service relations are given, refining the network structure. Finally, several applications built on the service network demonstrate the system's effectiveness and broad applicability.

16.
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a high spatial resolution, multispectral imager with along-track stereo capabilities scheduled for launch on the first NASA spacecraft of the Earth Observing System (Terra) in 1999. Data will be obtained in 14 spectral bands covering the visible through the thermal infrared wavelength region. A number of standard data products will be available to requesters through an on-line archival and processing system. Particular, user-specified data acquisitions will be possible through a Data Acquisition Request system.

17.
Data protection aspects of a large, integrated Hospital Information System are described. A special feature in this case is the interaction between the operating system BOS and a large number of application subsystems. (Both the system and the application software have been developed in-house by BAZIS.) Integration aspects are shown to contribute to data protection. The main topic of the present article is access control; however, several measures regarding the accuracy, consistency, and availability of the information (data integrity) are also mentioned.

18.
闫萍, 张祥德, 焦明海. 《计算机工程》 (Computer Engineering), 2005, 31(22): 228–230
This paper explains the working principle of Oracle8 job queues and describes how jobs are scheduled in an Oracle8 database. To address the difficulty of managing database job scheduling caused by the large volume and heterogeneous sources of data in a mobile-communication network management system, together with the limited SNP (job queue) process resources in Oracle, the characteristics of the data collected by the network management system are analyzed and a design is proposed in which multiple stored procedures run within a single SNP process. This makes job scheduling in the database more systematic and orderly and effectively improves the operating efficiency of the system.

19.
As aerospace technology develops, plug-and-play, low-cost, miniaturized satellites are becoming a trend. China's onboard data management systems use a 1553B bus to connect the computers and subsystems that need to exchange data, making those computers and subsystems remote terminals on the 1553B bus. To design a plug-and-play 1553B remote terminal, the electronic data sheet (EDS) and the simple 1553B terminal in the Loongson 1F were studied; the design and use of electronic data sheets were analyzed, and design methods for application EDSs and 1553B-communication EDSs were proposed. The simple 1553B terminal's access control over the telemetry and telecommand interfaces in the Loongson 1F was analyzed, and an SPA (Space Plug-and-play Avionics) interface was added to the Loongson 1F; used together with the simple 1553B terminal, it gives the Loongson 1F plug-and-play capability when serving as a terminal device on the 1553B bus. This shows that a 1553B remote terminal can be made plug-and-play, and it brings this new design concept into the design of onboard data systems.

20.
李敏, 倪少权, 邱小平, 黄强. 《计算机应用》 (Journal of Computer Applications), 2015, 35(5): 1267–1272
To address the low real-time performance of heterogeneous big data processing in Internet of Things (IoT) environments, this paper explores data processing and persistence based on the Hadoop framework and proposes HDS, a context-based Hadoop big data processing system model. HDS uses the Hadoop framework for parallel data processing and persistence and abstracts the heterogeneous data of an IoT environment into "contexts" as its processing objects; definitions of "context distance" and "context neighborhood system (CNS)" are given. To mitigate Hadoop's limited real-time performance, HDS adds a "context queue (CQ)" as auxiliary storage; exploiting the spatio-temporal properties of contexts, the context neighborhood system of each user request is built to regroup tasks. Taking a refined-oil delivery vehicle scheduling problem as an example, MapReduce parallel experiments verify and analyze the data processing and real-time performance of HDS. The results show that in an IoT environment HDS clearly outperforms a traditional single-point processing model (SDS) on big data workloads: with 10 servers in the experimental environment, its computational performance exceeds that of SDS by more than 200 times. They also confirm that the CQ auxiliary storage effectively improves real-time performance, by more than 270 times in the same 10-server environment.
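The "context queue" (CQ) auxiliary storage is described only at the model level. A minimal in-memory sketch of the idea, serving recently seen contexts directly instead of going through the batch-processed (Hadoop-persisted) layer, could look like the following; the class and field names are invented for illustration and are not HDS's actual implementation:

```python
from collections import deque

class ContextQueue:
    """Auxiliary in-memory store (CQ): answer requests for recent contexts
    immediately; misses fall back to the batch-processed persistence layer."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.queue = deque()   # (context_id, payload), oldest first
        self.index = {}        # context_id -> payload, for O(1) lookups

    def put(self, context_id, payload):
        if context_id in self.index:
            # refresh an existing context: move it to the newest position
            self.queue.remove((context_id, self.index[context_id]))
        elif len(self.queue) >= self.capacity:
            old_id, _ = self.queue.popleft()   # evict the oldest context
            del self.index[old_id]
        self.queue.append((context_id, payload))
        self.index[context_id] = payload

    def get(self, context_id):
        # None signals a miss: the caller would query the persisted layer
        return self.index.get(context_id)

cq = ContextQueue(capacity=2)
cq.put("truck-7@08:00", {"lat": 39.9, "lon": 116.4})
cq.put("truck-9@08:00", {"lat": 31.2, "lon": 121.5})
print(cq.get("truck-7@08:00"))  # hit: served without a MapReduce round-trip
```

The eviction order here is plain FIFO by insertion time; a closer match to the paper's model would prioritize contexts by their spatio-temporal neighborhood relations rather than by age alone.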
