Similar Documents
20 similar documents found.
1.
Building the IMPROVE data library (Cited: 1; self: 0; others: 1)
Well-characterized and comprehensive test data sets are essential in the development and evaluation of biosignal interpretation methods for intensive-care patient monitoring. The IMPROVE (IMPROVing control of patient status in critical carE) data library is an annotated data library that contains practically all the monitored and other clinical data from critically ill patients at high risk of oxygen-transport-related problems. The data, collected at the Intensive Care Unit of the Kuopio University Hospital, is supported by continuous patient-state assessments carried out by a bedside physician and includes 59 patient records, each having a typical duration of 24 hours. In this article, we describe the technical setup of the system, including what signals and parameters were collected and how the annotations were done. We also discuss the lessons learned from the data collection process.

2.
IEEE Potentials, 1999, 18(4): 17-20
The pace of innovation and product evolution in magnetic recording data storage is arguably higher than it has ever been. For example, the current record in high-density recording is 23.8 Gbit/in2, demonstrated by Seagate Technology. Data is stored by creating a pattern of magnetization in the media using a recording head. Basically, the head is a split ring-shaped core of easily magnetized material wrapped by a few turns of wire. When current flows in the wire, it induces a magnetic flux in the core and a field across the recording gap. Reversing the current's direction changes the direction of the magnetic field. Because the field lines spread out as they bridge the gap, they magnetize the media in a small zone near the gap. Since data is stored as binary digits, a pattern of current reversals can be coded to represent them. For example, “1” might be represented by a magnetization reversal and “0” by the absence of a reversal. Data is read by sensing the fields that arise from the magnetization transition zones in the media. These fields are caused by a concentration of magnetic poles at the ends of each magnetized region. Since they point in the opposite direction to the media's magnetization, they are called “demagnetizing fields.” They extend beyond the media's surface and can thus induce responses in a read head brought close to the media.
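The reversal coding described above can be sketched as a toy NRZI-style encoder/decoder (a minimal illustration, not an actual drive implementation; magnetization direction is represented as ±1 and the function names are hypothetical):

```python
def encode_reversals(bits, start=1):
    """Map a bit sequence to magnetization states:
    '1' -> a magnetization reversal, '0' -> no reversal."""
    state, states = start, []
    for b in bits:
        if b == 1:
            state = -state  # a '1' flips the magnetization direction
        states.append(state)
    return states

def decode_reversals(states, start=1):
    """Recover bits by sensing where the magnetization changes sign,
    mimicking a read head detecting transition zones."""
    bits, prev = [], start
    for s in states:
        bits.append(1 if s != prev else 0)
        prev = s
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert decode_reversals(encode_reversals(data)) == data
```

The round trip works because every transition, and only a transition, marks a stored “1”.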

3.
Collecting EEG signals in the IMPROVE data library (Cited: 2; self: 0; others: 2)
One of the key issues for the IMPROVE (IMPROVing control of patient status in critical carE) project was to define and build a data library (DL) of annotated data acquired in the intensive-care unit (ICU), with particular reference to problems of mismatch between oxygen utilisation and supply. An additional aim of the IMPROVE study was to test the feasibility and clinical value of including limited monitoring of high-quality long-term EEG signals with the main DL in a restricted number of patients. Such an EEG DL would form a useful basis for testing the applicability and validity of different signal processing and interpretation methods in ICU monitoring, and would also demonstrate the degree to which useful information could be obtained by fusing systemic and cerebral variables. In this article, we describe the setup for collection of the EEG DL, the tools developed to facilitate visual analysis of the EEG together with simultaneous data from other non-EEG variables, data concerning quality control, and some preliminary observations from detailed visual assessment of EEG patterns in relation to other ICU events.

4.
5.
The article deals only with simulation models that have stochastic, or random, input. Classical statistical methods for independent observations assume that each observation carries the maximum information and therefore compute the smallest confidence interval. Since stationary simulation output data carry less information, a confidence interval obtained by applying classical statistical computations to autocorrelated observations would be too small. This would lead one to conclude that the parameter estimate is much more precise than is actually the case. To get around this problem, several methods have been suggested in the output-data-analysis literature. Two of the most widely accepted are: 1) the method of independent replications; and 2) the method of batch means. Both methods try to avoid autocorrelation by breaking the data into “independent” segments. The sample means of these segments are considered i.i.d. and are used to calculate confidence intervals. In the first method, several independent runs are executed. In the second, a long simulation run is executed and divided into several “nearly uncorrelated” batches. The article specifically examines the Java Simulation (JSIM) Web-based environment, which has evolved to incorporate component-based technology. If component-based technology succeeds, the long-hoped-for gains in software-development productivity may finally be realized.
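The method of batch means described above can be sketched as follows (a minimal illustration, not the JSIM implementation; the batch count and the normal quantile z = 1.96 are arbitrary choices for the sketch):

```python
import math
import statistics

def batch_means_ci(data, n_batches, z=1.96):
    """Method of batch means: split an autocorrelated output series into
    contiguous batches, treat the batch means as nearly independent, and
    build a normal-theory confidence interval for the long-run mean."""
    m = len(data) // n_batches  # observations per batch
    means = [statistics.mean(data[i * m:(i + 1) * m])
             for i in range(n_batches)]
    grand = statistics.mean(means)
    se = statistics.stdev(means) / math.sqrt(n_batches)
    return grand - z * se, grand + z * se

lo, hi = batch_means_ci(list(range(100)), 4)
assert lo < 49.5 < hi  # interval covers the sample mean
```

With only a handful of batches, a Student-t quantile would be more appropriate than a fixed z; the normal quantile is used here purely to keep the sketch short.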

6.
Biological research is becoming increasingly database driven, motivated, in part, by the advent of large-scale functional genomics and proteomics experiments such as those comprehensively measuring gene expression. These provide a wealth of information on each of the thousands of proteins encoded by a genome. Consequently, a challenge in bioinformatics is integrating databases to connect this disparate information, as well as performing large-scale studies to collectively analyze many different data sets. This approach represents a paradigm shift away from traditional single-gene biology, and it often involves statistical analyses focusing on the occurrence of particular features (e.g., folds, functions, interactions, pseudogenes, or localization) in a large population of proteins. Moreover, machine learning techniques can be applied explicitly to discover trends and patterns in the underlying data. In this article, we give several examples of these techniques in a genomic context: clustering methods to organize microarray expression data, support vector machines to predict protein function, Bayesian networks to predict subcellular localization, and decision trees to optimize target selection for high-throughput proteomics.
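As a toy illustration of the clustering step mentioned above (not the authors' pipeline), a minimal k-means over expression profiles, where each point is one gene's expression vector:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group expression profiles (lists of numbers)
    into k clusters by repeatedly assigning each point to its nearest
    centroid and recomputing the centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # squared Euclidean distance to each centroid
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:  # recompute centroid as the coordinate-wise mean
                centroids[i] = [sum(vals) / len(c) for vals in zip(*c)]
    return clusters

profiles = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
clusters = kmeans(profiles, 2)
assert sorted(len(c) for c in clusters) == [2, 2]
```

Real microarray analyses would normalize the profiles and use a distance suited to expression data (e.g., correlation distance); this sketch only shows the mechanics.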

7.
In high-resolution cardiac mapping, signals are simultaneously recorded from hundreds of electrodes in contact with the myocardium and then analyzed to reveal the underlying activation pattern. Activation mapping has a long history in both experimental and clinical cardiac electrophysiology and has also been used to study other organ systems. Much of the current emphasis in mapping technology is on data analysis (ways to extract useful information from the voluminous data stream) rather than on the acquisition of the data. Hence, in this article, the authors review the traditional method for analyzing and interpreting mapping data, isochronal mapping, and then report on additional techniques that have recently emerged. The authors focus on techniques applicable to quantifying the dynamics of complex tachyarrhythmias such as ventricular fibrillation (VF) and atrial fibrillation (AF).

8.
The embedded Wi-Fi module HLK-RM04 often loses small amounts of data during high-rate transmission, which causes serious problems for applications that use the module to transmit sampled signals and switch states. This article therefore proposes a new mixed-coding transmission method for sampled signals and switch states over the Wi-Fi module. When framing the data, the method exploits the fact that the effective bit width of a sample is smaller than the bit width of the transmitted bytes: the digital switch states and an identification code for each sample value are distributed into the redundant bits of the sample-transmission bytes. The receiver can then decode the redundant bits of each frame to discard incomplete samples caused by transmission loss, and identify the switch states through simple statistics and threshold decisions. Experiments show that, compared with the traditional fixed-position framing method, the proposed mixed coding avoids, without increasing the transmitted data volume, the serious problems that small data losses in the Wi-Fi module cause for sampled-signal transmission, such as incomplete sample values and misaligned sample combinations, while also avoiding misidentification of the switch states.
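A minimal sketch of the redundant-bit idea, assuming 12-bit samples carried in 16-bit words; the field layout here (3-bit sequence ID plus one switch bit in the high nibble) is illustrative, not the paper's actual frame format:

```python
def pack_sample(sample12, seq3, switch_bit):
    """Pack a 12-bit sample, a 3-bit per-sample sequence ID, and one
    switch-state bit into two bytes; the side data rides in the four
    otherwise-redundant high bits."""
    assert 0 <= sample12 < 4096 and 0 <= seq3 < 8 and switch_bit in (0, 1)
    word = (seq3 << 13) | (switch_bit << 12) | sample12
    return bytes([word >> 8, word & 0xFF])

def unpack_sample(two_bytes):
    """Recover (sample, sequence ID, switch bit) from two bytes."""
    word = (two_bytes[0] << 8) | two_bytes[1]
    return word & 0x0FFF, (word >> 13) & 0x7, (word >> 12) & 0x1

b = pack_sample(0xABC, 5, 1)
assert unpack_sample(b) == (0xABC, 5, 1)
```

A receiver can check that the sequence IDs of consecutive words increase as expected and discard any sample whose bytes were lost or misaligned, which is the role the identification codes play in the abstract.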

9.
Present-day DNA sequencing techniques have evolved considerably from their early beginnings. A modern sequencing project is essentially an assembly-line environment and is therefore improved and accelerated by the degree to which slow and error-prone manual steps can be replaced by reliable and accurate automatic ones. For hardware, this typically means expanding the use of robotics, for example, to execute the multitude of micro-volume fluid transfers that occur for each of the samples processed in a project. Likewise, automated software replaces manual processing and analysis steps for samples wherever possible. In this article, we focus on one particular aspect of software: the automated handling of raw DNA data. Specifically, we discuss a number of critical software algorithms and components and how they have been woven into a framework for largely hands-off processing of Human Genome Project data at the Genome Sequencing Center. These data represent about 25% of the total public human sequencing project.

10.
In a personal communications services (PCS) network, large-capacity databases are needed to provide terminal mobility, personal mobility, and service mobility, and fast access to subscriber data is used to control the intelligent network (IN) so that real-time services can be provided. This article mainly introduces PCS mobility concepts and subscriber database management.

11.
As data collection and storage capabilities have grown enormously, large-scale data mining and analysis has become increasingly important. Analyzing and mining large-scale data, however, is not easy. To analyze such data more efficiently, many new algorithms and data structures have been introduced into data mining. For association analysis, this paper proposes an algorithm called advanced frequent pattern mining (AFPM). It improves the performance of association analysis using a pre-frequent pattern tree (PFP-tree) and provides the corresponding algorithms for association analysis based on this data structure. Extensive experiments show that this new data structure outperforms the frequent pattern growth (FP-growth) algorithm on association analysis problems.
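The AFPM algorithm itself is not specified in the abstract; as a baseline illustration of the underlying task, frequent-itemset mining with a minimum support count, here is a brute-force sketch (real miners such as FP-growth avoid enumerating every candidate):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Brute-force frequent-itemset mining: count every candidate
    itemset up to max_size items and keep those whose support count
    meets the threshold."""
    result = {}
    for k in range(1, max_size + 1):
        counts = Counter()
        for t in transactions:
            for combo in combinations(sorted(set(t)), k):
                counts[combo] += 1
        for itemset, c in counts.items():
            if c >= min_support:
                result[itemset] = c
    return result

txns = [['a', 'b', 'c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c']]
fs = frequent_itemsets(txns, min_support=3)
assert fs[('a', 'b')] == 3          # {a, b} occurs in 3 transactions
assert ('a', 'b', 'c') not in fs    # {a, b, c} occurs only twice
```

Tree-based methods like FP-growth (and, per the abstract, the PFP-tree) get the same answer while sharing prefixes between transactions, which is where their performance advantage comes from.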

12.
Chip mounters and surface mount device (SMD) inspection systems use image processing techniques for the placement of SMDs onto printed circuit boards (PCB) and the inspection of SMDs. Such techniques require the component configuration data which define the shape of SMDs; however, the creation of this data is currently not automated. The goal of this paper is to offer a system that generates component configuration data automatically by processing images of SMDs. There are several target components, such as IC, BGA (ball grid array), chips, connectors, etc., for which data can be generated. In this paper we will focus on generation of data for IC components. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 165(4): 76–83, 2008; Published online in Wiley InterScience ( www.interscience.wiley.com ). DOI 10.1002/eej.20686

13.
Driven by advances in deep learning, visual non-destructive testing (NDT) faces great opportunities in data processing. Obtaining sufficiently large labeled data sets, however, is a major challenge. Augmenting NDT image data sets helps improve the defect-detection capability of deep learning. Therefore, by studying the characteristics of NDT image data and applying cycle-consistent generative adversarial networks (CycleGANs), the existing data were effectively augmented, improving the deep conv…

14.

Purpose  

Today’s available chemical shift imaging (CSI) analysis tools are based on Fourier transform of the entire data set prior to interactive display. This strategy is associated with limitations, particularly when arbitrary voxel positions within a 3D spatial volume are needed by the user. In this work, we propose and demonstrate a processing-resource-efficient alternative strategy for both interactive and automated CSI data processing up to three spatial dimensions.

15.
16.
An agent-based data transmission system for relay protection (Cited: 3; self: 2; others: 3)
To improve the real-time performance, effectiveness, and flexibility of relay-protection data transmission over the substation automation system's local area network, this article applies software agent technology and proposes an agent system for relay-protection data transmission on the substation LAN. The system has a multi-agent architecture comprising three sub-agents. It can actively probe and evaluate network transmission conditions, analyze the data, and flexibly adjust the transmission strategy accordingly. Simulation results for an example system show that it can effectively resolve the various data-flow conflicts in substation relay-protection information transmission and guarantee reliable, real-time delivery of the information.

17.
Application of Blackfin DSPs in high-speed data acquisition (Cited: 3; self: 0; others: 3)
Zhang Jie (张洁). 电子测量技术 (Electronic Measurement Technology), 2007, 30(2): 133-135
Traditional digital acquisition typically pairs an MCU with a FIFO, a scheme unsuited to high data volumes. In contrast, this article presents a high-speed data-acquisition scheme based on Blackfin-series DSPs. The system's hardware is simple in structure yet powerful, and fully meets high-volume, high-real-time requirements. On the software side, Analog Devices' emulation system makes software development straightforward. Experimental results show that this system reconstructs the input analog signal well, whereas the traditional system loses many details and destroys the integrity of the signal.

18.
A returnable test method for the short-time-delay operation of universal circuit breakers is proposed. An adjustable impedance and an electromechanical switching device (such as a contactor) are connected on the primary side of a multi-magnetic-circuit transformer, and a programmable controller controls the closing and opening times of the switch, so that after a short energization the secondary-side current returns from the ten-kiloampere level to the kiloampere level.

19.
A low-cost multi-channel acquisition system (Cited: 1; self: 1; others: 0)
In mass-production industries that require full inspection of every product, multi-channel test equipment offers faster inspection and better cost-performance than single-channel equipment. This article presents a way to implement a low-cost multi-channel acquisition system using an MCS-51 as the system processor. By coordinating multiple chips, each controlling multiplexer-based sampling, the system builds an M×N-channel acquisition system from M acquisition boards and N-way multiplexer circuits, achieving the lowest possible cost with as many channels as possible. The article mainly analyzes the implementation of single-board multiplexer sampling and multi-chip communication.
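The M boards × N multiplexer channels arrangement implies a simple global channel indexing, sketched below (illustrative helper names, not the paper's firmware):

```python
def scan_order(m, n):
    """Enumerate all M*N global channel indices board by board, as a
    master MCU might poll M acquisition boards, each stepping its
    N-way analog multiplexer in turn."""
    for board in range(m):
        for mux_ch in range(n):
            yield board * n + mux_ch

def from_global(channel, n):
    """Map a global channel index back to (board, multiplexer channel)."""
    return divmod(channel, n)

assert list(scan_order(2, 3)) == [0, 1, 2, 3, 4, 5]
assert from_global(4, 3) == (1, 1)  # global channel 4 = board 1, mux 1
```

The same mapping lets the host attribute each sample in the merged data stream to the physical test position it came from.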

20.
Access and management of heterogeneous sensing data in the Sensor Observation Service (Cited: 2; self: 0; others: 2)
The Sensor Observation Service (SOS) is one of the core services of the Sensor Web Enablement (SWE) framework; it manages observation data from multiple sensor sources and provides data services to end users. The Open Geospatial Consortium (OGC) has specified how observation data are inserted into and retrieved from an SOS, but the SOS itself cannot adaptively handle heterogeneous sensors with different access methods and data transmission protocols. To manage multi-source sensor observation data through an SOS, the sensors must first be connected, their observation data parsed, and the data converted into the format required by the SOS specification. This article designs a technical scheme for connecting heterogeneous sensor observation data to an SOS, describing its implementation in terms of sensor registration, access-network selection, data reception and parsing, insert-document generation, and data insertion. Taking the monitoring data of a wearable heart-rate sensor as an example, the management of heart-rate monitoring data in an SOS was implemented, showing that the scheme is feasible and laying a foundation for connecting heterogeneous sensors to an SOS.
