Similar Documents
20 similar documents found (search time: 15 ms)
1.
To comparatively evaluate the diagnostic performance of two examination methods, the ultrasound B-mode ratio and MR-IP-OP, for fat deposition in the submandibular glands of diabetic patients, 174 volunteers were enrolled (89 diabetic patients and 85 healthy volunteers), and submandibular gland fat content was assessed using the ultrasound B-mode ratio technique and the 3D MR-IP-OP method. Independent-samples t-tests were used to compare between-group differences in the B-mode ratio and the MR-IP-OP fat fraction (FF). Receiver operating characteristic (ROC) curve analysis was used to assess the diagnostic performance of the two techniques for submandibular fat deposition, and the Spearman correlation coefficient was used to evaluate the correlation between the two sets of parameters. The B-mode ratio of the diabetic group (1.337±0.128) was significantly higher than that of the normal group (0.917±0.138), and the MR-IP-OP FF value of the diabetic group (21.88±7.86%) was significantly higher than that of the normal group (8.87±3.09%). ROC analysis yielded areas under the curve (AUC) of 0.860 for the B-mode ratio and 0.909 for 3D MR-IP-OP, and the results of the two methods correlated well (r=0.502, p<0.001). The ultrasound B-mode ratio therefore offers good diagnostic performance for submandibular fatty degeneration, can substitute for the MR-IP-OP examination, and has promising prospects for clinical application.
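The comparison above rests on two standard statistics: the area under the ROC curve and a rank correlation. As a minimal sketch (not the study's code), the AUC can be computed directly from its probabilistic interpretation; the group values below are invented for illustration:

```python
import numpy as np

def auc_from_scores(pos, neg):
    # AUC equals the probability that a randomly chosen positive case
    # scores higher than a randomly chosen negative case (ties count 0.5).
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical B-mode ratio values for diseased and healthy groups.
diabetic = [1.31, 1.42, 1.25, 1.38]
healthy = [0.90, 1.02, 0.88, 0.95]
auc = auc_from_scores(diabetic, healthy)
```

An AUC of 0.5 means chance-level discrimination; the study's 0.860 and 0.909 indicate good separation.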

2.
Detection and delineation of P and T waves in 12-lead electrocardiograms   (total citations: 2, self-citations: 1, cited by others: 1)
Abstract: This paper presents an efficient method for the detection and delineation of P and T waves in 12-lead electrocardiograms (ECGs) using a support vector machine (SVM). Digital filtering techniques are used to remove power line interference and baseline wander. An SVM is used as a classifier for the detection and delineation of P and T waves. The performance of the algorithm is validated using original simultaneously recorded 12-lead ECG recordings from the standard CSE (Common Standards for Quantitative Electrocardiography) ECG multi-lead measurement library. A detection rate of 95.43% is achieved for P waves and 96.89% for T waves. Delineation performance is validated by calculating the mean and standard deviation of the differences between automatic annotations and the manual annotations of the referee cardiologists. The proposed method not only detects all kinds of morphologies of QRS complexes, P and T waves but also delineates them accurately. The onsets and offsets of the detected P and T waves are found to be within the tolerance limits given in the CSE library.
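The preprocessing step described, removal of baseline wander before classification, can be approximated with a moving-average high-pass filter. This is a hedged stand-in for the paper's unspecified digital filters, and the 0.6 s window length is an assumption:

```python
import numpy as np

def remove_baseline_wander(ecg, fs, win_s=0.6):
    # Estimate the slow baseline with a moving average over win_s seconds
    # and subtract it, leaving the higher-frequency P/QRS/T activity.
    win = int(fs * win_s) | 1          # force an odd window length
    kernel = np.ones(win) / win
    baseline = np.convolve(ecg, kernel, mode="same")
    return ecg - baseline
```

The filtered signal, cut into windows, would then be fed to the SVM classifier for P and T wave detection.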

3.
A novel algorithm is proposed for the segmentation of the lumen and bifurcation boundaries of the carotid artery in B-mode ultrasound images. It uses the image contrast characteristics of the lumen and bifurcation of the carotid artery in relation to other tissues and structures for their identification. The relevant ultrasound data regarding the artery presented in the input image is identified using morphologic operators and processed by an anisotropic diffusion filter for speckle noise removal. The information obtained is then used to define two initial contours, one corresponding to the lumen and the other one regarding the bifurcation boundaries, for the application of the Chan-Vese level set segmentation model. A set of longitudinal ultrasound B-mode grayscale images of the common carotid artery was acquired using a GE Healthcare Vivid-e ultrasound system. The results reveal that the new algorithm is effective and robust, and that its main advantage relies on the automatic identification of the carotid lumen, which overcomes the known limitations of the traditional algorithms.  相似文献   

4.
This paper presents an algorithm for predicting the frontal spatial fidelity and surround spatial fidelity of multichannel audio, two attributes of the subjective parameter called basic audio quality. A number of features chosen to represent spectral and spatial changes were extracted from a set of recordings and used in a regression model as independent variables for the prediction of spatial fidelities. The calibration of the model was done by ridge regression using a database of scores obtained from a series of formal listening tests. The statistically significant features based on interaural cross correlation and spectral features found from an initial model were employed to build a simplified model, and these selected features were validated. The results obtained from the validation experiment were highly correlated with the listening test scores and had a low standard error comparable to that encountered in typical listening tests. The applicability of the developed algorithm is limited to predicting the basic audio quality of low-pass filtered and down-mixed recordings (as obtained in listening tests based on a multistimulus test paradigm with reference and two anchors: a 3.5-kHz low-pass filtered signal and a mono signal).
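The calibration step named above, ridge regression of listening-test scores on extracted features, has a closed form. A minimal sketch; the feature matrix and regularization strength here are placeholders, not those of the study:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y.
    # lam > 0 shrinks the coefficients and stabilizes collinear features.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def ridge_predict(X, w):
    # Predicted fidelity scores for a feature matrix X.
    return X @ w
```

With features such as interaural cross correlation and spectral differences as columns of X, the fitted weight vector predicts the spatial-fidelity scores.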

5.
《Pattern recognition letters》2003,24(4-5):677-691
Speckle appears in all conventional medical B-mode ultrasonic images and can be an undesirable property since it may mask small but diagnostically significant features. In this paper, an adaptive filtering algorithm is proposed for speckle reduction. It selects a filtering region size using an appropriately estimated homogeneity value for region growth. Homogeneous regions are processed with an arithmetic mean filter. Edge pixels are filtered using a nonlinear median filter. The performance of the proposed technique is compared to two other methods: the adaptive weighted median filter and the homogeneous region growing mean filter. Results of processed images show that the method proposed reduces speckle noise and preserves edge details effectively.
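The decision rule in the abstract, mean filtering in homogeneous regions and median filtering at edges, can be sketched per pixel using the local coefficient of variation as a homogeneity test. The window size and threshold below are assumptions, and the paper's region-growing step is simplified away:

```python
import numpy as np

def adaptive_speckle_filter(img, win=5, edge_thresh=0.15):
    # If a neighbourhood's coefficient of variation is low it is treated
    # as homogeneous (arithmetic mean); otherwise as an edge (median).
    pad = win // 2
    padded = np.pad(np.asarray(img, float), pad, mode="reflect")
    out = np.empty(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            m = patch.mean()
            cv = patch.std() / m if m > 0 else 0.0
            out[i, j] = m if cv < edge_thresh else np.median(patch)
    return out
```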

6.
BACKGROUND: Assessing the physical demands of the heterogeneous jobs in hospitals requires appropriate and validated assessment methodologies. METHODS: As part of an integrated assessment, we adapted Rapid Entire Body Assessment (REBA), using it in a work sampling mode facilitated by a hand-held personal digital assistant, expanding it with selected items from the UC Computer Use Checklist, and developed a scoring algorithm for ergonomics risk factors for the upper (UB) and lower body (LB). RESULTS: The inter-rater reliability kappa was 0.54 for UB and 0.66 for LB. The scoring algorithm demonstrated significant variation (ANOVA p<0.05) by occupation in anticipated directions (administrators ranked lowest; support staff ranked highest on both scores). A supplemental self-assessment measure of spinal loading correlated with high strain LB scores (r=0.30; p<0.001). CONCLUSION: We developed and validated a scoring algorithm incorporating a revised REBA schema adding computer use items, appropriate for ergonomics assessment across a range of hospital jobs.
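The inter-rater reliability reported above is Cohen's kappa, which corrects raw agreement for chance. A self-contained sketch with invented ratings:

```python
def cohens_kappa(a, b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    # and p_e the agreement expected from each rater's label frequencies.
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters scoring the same eight observations.
rater1 = [0, 0, 1, 1, 2, 2, 0, 1]
rater2 = [0, 1, 1, 1, 2, 2, 0, 0]
kappa = cohens_kappa(rater1, rater2)
```

Values in the range of the study's 0.54 (upper body) and 0.66 (lower body) are conventionally read as moderate to substantial agreement.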

7.
Detection of sleep apnea is one of the major tasks in sleep studies. Several methods, analyzing the various features of bio-signals, have been applied for automatic detection of sleep apnea, but it is still required to detect apneic events efficiently and robustly from a single nasal airflow signal under varying situations. This study introduces a new algorithm that analyzes the nasal airflow (NAF) for the detection of obstructive apneic events. It is based on mean magnitude of the second derivatives (MMSD) of NAF, which can detect respiration strength robustly under offset or baseline drift. Normal breathing epochs are extracted automatically by examining the stability of SaO2 and NAF regularity for each subject. The standard MMSD and period of NAF, which are regarded as the values at the normal respiration level, are determined from the normal breathing epochs. In this study, 24 polysomnography (PSG) recordings diagnosed as obstructive sleep apnea (OSA) syndrome were analyzed. By analyzing the mean performance of the algorithm in a training set consisting of three PSG recordings, apnea threshold is determined to be 13% of the normal MMSD of NAF. NAF signal was divided into 1-s segments for analysis. Each segment is compared with the apnea threshold and classified into apnea events if the segment is included in a group of apnea segments and the group satisfies the time limitation. The suggested algorithm was applied to a test set consisting of the other 21 PSG recordings. Performance of the algorithm was evaluated by comparing the results with the sleep specialist's manual scoring on the same record. The overall agreement rate between the two was 92.0% (kappa=0.78). Considering its simplicity and lower computational load, the suggested algorithm is found to be robust and useful. It is expected to assist sleep specialists to read PSG more quickly and will be useful for ambulatory monitoring of apneas using airflow signals.
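The core quantity of this algorithm, the mean magnitude of the second derivative (MMSD), is simple to compute, and its robustness to offset and linear baseline drift follows because both vanish under double differencing. A sketch of the 1-s segment classifier under the paper's 13% threshold; the grouping and duration rules are omitted:

```python
import numpy as np

def mmsd(naf):
    # Mean magnitude of the second difference of the airflow signal.
    return np.mean(np.abs(np.diff(naf, n=2)))

def classify_segments(naf, fs, normal_mmsd, thresh_ratio=0.13):
    # Flag each 1-s segment whose MMSD falls below 13% of the
    # normal-breathing MMSD as a candidate apnea segment.
    n = len(naf) // fs
    return [mmsd(naf[k * fs:(k + 1) * fs]) < thresh_ratio * normal_mmsd
            for k in range(n)]
```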

8.
罗党  张曼曼 《控制与决策》2020,35(6):1476-1482
基于灰色B型关联分析的基本思想,针对现有面板数据灰色关联模型中对象排列顺序变化引起的关联序不一致的问题,以及因未充分考虑对象维度序列关于同一时刻不同对象下均值的变化率导致关联结果失真的问题,从时间维度和对象维度两个方面构建基于面板数据的灰色B型关联模型.在时间维度上,通过引进各指标间的总体位移差、一阶斜率差及二阶斜率差得到横向关联度;在对象维度上,采用各点与同一时刻不同对象下均值之比来刻画纵向关联度,并对两者求加权平均,进而构建出基于面板数据的灰色B型关联模型.讨论模型的规范性、一致保序性等性质.对比分析表明,模型简单有效且不受对象排列顺序的影响.以豫北平原5个市的干旱灾害风险指数为特征指标序列,理清干旱灾害风险指数与其12个影响因素的关联关系,为旱灾风险评估与调控提供理论支持.  相似文献   

9.
The next two decades will see dramatic changes in the health needs of the world's populations, with chronic diseases as the leading causes of disability, according to recent World Health Organization reports. Increases in the senior population living confined to the domestic area are also expected, producing a steep increase in the need for long-term monitoring and home care services. Independently of the particular features and specific architectures, long-term monitoring systems usually produce a large amount of data to be analyzed and inspected by the practitioners, and in particular by the cardiologists dealing with ECG recordings analysis. This problem is well known and also affects traditional Holter-based practice. In this paper we present a program for discovering patterns in ECG recordings, to be considered as a medical decision-making support. Computational methods are based on a QRS detector especially designed for noisy applications, followed by a parameters space reduction operated by the KL transform modified on a "user-fit" basis. Events characterization is based on a recently introduced clustering method called KHM (K-harmonic means). The most representative beat families and the corresponding prototypes (physiological and pathological) are then presented to the user through appropriate graphics to facilitate an easy and fast interpretation. We tested the QRS detection algorithm using the MIT-BIH arrhythmia database. Our method produced 565 false positive beats and 379 false negative beats, and a total detection failure of 0.85% considering all 109,809 annotated beats in the database. While a clinical experimentation of our program is under way, we used the VALE database to perform a preliminary evaluation of the methods used for data exploration (PCA, KHM). Considering the entire database, we succeeded in identifying pathological clusters in 97% of the cases.
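The parameter-space reduction mentioned, the KL transform, is equivalent to principal component analysis: each detected beat is projected onto the leading eigenvectors of the feature covariance matrix. A minimal sketch; the paper's "user-fit" modification is not reproduced:

```python
import numpy as np

def kl_transform(beats, n_components):
    # Project mean-centred beat features onto the eigenvectors of the
    # covariance matrix with the largest eigenvalues (Karhunen-Loeve/PCA).
    X = np.asarray(beats, float)
    X = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    return X @ vecs[:, order]
```

The reduced vectors would then be grouped with K-harmonic means into beat families.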

10.
It is useful to have a disaggregated population database at uniform grid units in disaster situations. This study presents a method for settlement location probability and population density estimations at a 90 m resolution for northern Iraq using the Shuttle Radar Topographic Mission (SRTM) digital terrain model and Landsat Enhanced Thematic Mapper satellite imagery. A spatial model each for calculating the probability of settlement location and for estimating population density is described. A randomly selected subset of field data (equivalent to 50%) is first analysed for statistical links between settlement location probability and population density, and various biophysical features which are extracted from Landsat or SRTM data. The model is calibrated using this subset. Settlement location probability is attributed to the distance from roads and water bodies and land cover. Population density can be estimated based upon land cover and topographic features. The Landsat data are processed using a segmentation and subsequent feature-based classification approach, making this method robust to seasonal variations in imagery and therefore applicable to a time series of images regardless of acquisition date. The second half of the field data is used to validate the model. Results show a reasonable estimate of population numbers (r = 0.205, p<0.001) for both rural and urban settlements. Although there is a strong overall correlation between the results of this and the LandScan model (r = 0.464, p<0.001), this method performs better than the 1 km resolution LandScan grid for settlements with fewer than 1000 people, but is less accurate for estimating population numbers in urban areas (LandScan rural r = 0.181, p<0.001; LandScan urban r = 0.303, p<0.001). However, the correlation with true urban population numbers is superior to that of LandScan when the 90 m grid values are summed using a filter corresponding to the LandScan spatial resolution (r = 0.318, p<0.001).

11.
A recursive algorithm for estimating the constant but unknown parameters of a controlled ARMA process is presented. The algorithm is a recursive version of an off-line algorithm using three stages of standard least-squares. In the first stage the parameters of a controlled AR model of degree p are estimated. The residuals used in this stage are employed in the second stage to estimate the parameters of a controlled ARMA process. The first two stages constitute a recursive version of Durbin's algorithm. The model obtained in the second stage is used to filter the input, output and residuals and these filtered variables are used in the final stage to obtain improved estimates of the controlled ARMA process. It is shown that the estimate is (globally) p-consistent, i.e. that the estimate converges a.s. as the number of data tends to infinity, to a vector which, in turn, converges to the true parameter vector as the degree p of the AR model tends to infinity.
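Each least-squares stage of the three-stage scheme can be run recursively with the standard recursive least-squares (RLS) update. This sketch shows one generic update step only, not the full controlled-ARMA machinery:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive least-squares step: theta is the current parameter
    # estimate, P the scaled inverse information matrix, phi the regressor
    # vector, y the new measurement, lam an optional forgetting factor.
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P
```

Stacking lagged inputs, outputs, and stage-one residuals into phi would give the stage-two controlled-ARMA estimates.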

12.
《Ergonomics》2012,55(10):1035-1041
The current popularity of backpack-type load carriage systems (LCS) by students has precipitated a prevalence of postural abnormalities and pain. This study compared subjective perceptual comfort in standard and vertically loaded LCSs. Sixteen females ages 18–23 years rated their personal LCSs for perceived shoulder, neck, and lower back comfort and for overall comfort, each day for two weeks using 100 mm visual analogue scales (VAS). Each scale contained polar extremities of ‘very comfortable’ to ‘very uncomfortable’ and a vertical mark placed on the 100 mm line by the participants indicated their perception of comfort. Following two weeks, participants were given LCSs that distributed the weight vertically and were asked to rate the system in the same way for an additional two-week period. Statistical analysis revealed significant differences in shoulder (p=0.015), neck (p=0.005), and lower back (p=0.036) comfort and overall comfort (p=0.001) between the participants' personal LCSs and the experimental LCS. In conclusion, vertical load placement may redistribute the load in a manner that reduces symptoms of selected anatomical discomfort.

13.
We consider inference in a general data-driven object-based model of multichannel audio data, assumed generated as a possibly underdetermined convolutive mixture of source signals. We work in the short-time Fourier transform (STFT) domain, where convolution is routinely approximated as linear instantaneous mixing in each frequency band. Each source STFT is given a model inspired from nonnegative matrix factorization (NMF) with the Itakura–Saito divergence, which underlies a statistical model of superimposed Gaussian components. We address estimation of the mixing and source parameters using two methods. The first one consists of maximizing the exact joint likelihood of the multichannel data using an expectation-maximization (EM) algorithm. The second method consists of maximizing the sum of individual likelihoods of all channels using a multiplicative update algorithm inspired from NMF methodology. Our decomposition algorithms are applied to stereo audio source separation in various settings, covering blind and supervised separation, music and speech sources, synthetic instantaneous and convolutive mixtures, as well as professionally produced music recordings. Our EM method produces competitive results with respect to state-of-the-art as illustrated on two tasks from the international Signal Separation Evaluation Campaign (SiSEC 2008).
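The second estimation method mentioned, multiplicative updates for NMF under the Itakura-Saito divergence, follows the standard IS-NMF update rules. This sketch fits a nonnegative power spectrogram V with the product W @ H and is illustrative only (no multichannel mixing model):

```python
import numpy as np

def is_nmf(V, K, n_iter=500, eps=1e-12):
    # Multiplicative updates for min_{W,H>=0} D_IS(V || W @ H), where
    # D_IS is the Itakura-Saito divergence, well suited to power spectra.
    F, N = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        V_hat = W @ H + eps
        W *= ((V / V_hat**2) @ H.T) / ((1.0 / V_hat) @ H.T)
        V_hat = W @ H + eps
        H *= (W.T @ (V / V_hat**2)) / (W.T @ (1.0 / V_hat))
    return W, H
```

Each latent component k corresponds to a spectral template (a column of W) gated over time by the matching row of H.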

14.
《Ergonomics》2012,55(10):1276-1286
The effect of different handle angles on work distribution during hand cycling was determined. Able-bodied subjects performed hand cycling at 20% of maximum power level (mean (SD) power level: 90.0 (25.8) W) at a cadence of 70 rpm using handle angles of ±30°, ±15° and 0°. The handle angle had a significant effect on work during the pull down (p < 0.001) and lift up (p = 0.005) sector, whereby the highest work was performed with handle angles of +30° and -15°, respectively. The cycle sector had a significant effect on work (p < 0.001) and significantly (p = 0.002) higher work was performed in the pull down sector (25% higher than mean work over one cycle) as compared to the lift up sector (30% lower than mean work over one cycle). Therefore, a fixed handle angle of +30° is suggested to be optimal for power generation. The results of this study help to optimise the handbike–user interface. A more pronated handle angle compared to the one conventionally used was found to improve the performance of hand cycling and thereby the mobility of disabled people.

15.
The purpose of this work was to determine the feasibility and efficacy of retrospective registration of MR and CT images of the liver. The open-source ITK Insight Software package developed by the National Library of Medicine (USA) contains a multi-resolution, voxel-similarity-based registration algorithm which we selected as our baseline registration method. For comparison we implemented a multi-scale surface fitting technique based on the head-and-hat algorithm. Registration accuracy was assessed using the mean displacement of automatically selected point landmarks. The ITK voxel-similarity-based registration algorithm performed better than the surface-based approach with mean misregistration in the range of 7.7-8.4 mm for CT-CT registration, 8.2 mm for MR-MR registration, and 14.0-18.9 mm for MR-CT registration compared to mean misregistration from the surface-based technique in the range of 9.6-11.1 mm for CT-CT registration, 9.2-12.4 mm for MR-MR registration, and 15.2-19.0 mm for MR-CT registration.
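The accuracy measure used throughout, mean displacement of point landmarks after registration, is straightforward to compute. A sketch in which the landmarks and the (identity) transform are invented for illustration:

```python
import numpy as np

def mean_landmark_error(moving_pts, fixed_pts, transform):
    # Apply the estimated transform to each landmark in the moving image
    # and average the Euclidean distances to the fixed-image landmarks.
    mapped = np.array([transform(p) for p in moving_pts])
    return float(np.mean(np.linalg.norm(mapped - np.asarray(fixed_pts),
                                        axis=1)))

# Identity transform as a trivial example (units are millimetres).
moving = [[0.0, 0.0], [1.0, 1.0]]
fixed = [[3.0, 4.0], [1.0, 1.0]]
err = mean_landmark_error(moving, fixed, lambda p: p)
```

Here the first landmark misses by 5 mm and the second by 0 mm, so the mean error is 2.5 mm.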

16.
A new stochastic computational method was developed to estimate the endogenous glucose production, the meal-related glucose appearance rate (Ra,meal), and the glucose disposal (Rd) during the meal tolerance test. A prior probability distribution was adopted which assumes smooth glucose fluxes with individualized smoothness level within the context of a Bayes hierarchical model. The new method was contrasted with the maximum likelihood method using data collected in 18 subjects with type 2 diabetes who ingested a mixed meal containing [U-13C]glucose. Primed [6,6-2H2]glucose was infused in a manner that mimicked the expected endogenous glucose production. The mean endogenous glucose production, Ra,meal, and Rd calculated by the new method and the maximum likelihood method were nearly identical. However, the maximum likelihood method gave constant, nonphysiological postprandial endogenous glucose production in two subjects, whilst the new method gave plausible estimates of endogenous glucose production in all subjects. Additionally, the two methods were compared using a simulated triple-tracer experiment in 12 virtual subjects. The accuracy of the estimates of the endogenous glucose production and Ra,meal profiles was similar [root mean square error (RMSE) 1.0±0.3 vs. 1.4±0.7 μmol/kg/min for endogenous glucose production and 2.6±1.0 vs. 2.9±0.9 μmol/kg/min for Ra,meal; new method vs. maximum likelihood method; P=NS, paired t-test]. The accuracy of Rd estimates was significantly increased by the new method (RMSE 5.3±1.9 vs. 4.2±1.3; new method vs. ML method; P<0.01, paired t-test). We conclude that the new method increases plausibility of the endogenous glucose production and improves accuracy of glucose disposal compared to the maximum likelihood method.

17.
黄志标 姚宇 《计算机应用》(Journal of Computer Applications), 2017, 37(2): 569-573
Segmentation of B-mode cardiac ultrasound images is an important step before computing cardiac function parameters. To address the low resolution of ultrasound images, which degrades segmentation accuracy, and the large training sets required by model-based segmentation algorithms, an image segmentation algorithm based on pixel clustering is proposed that incorporates prior knowledge of B-mode cardiac ultrasound images. First, the image is processed with anisotropic diffusion; then the pixels are clustered with one-dimensional K-means; finally, based on the clustering result and the prior knowledge, each pixel value is replaced with the pixel value of the best cluster centre. Theoretical analysis shows that the algorithm maximizes the peak signal-to-noise ratio (PSNR) of the image. Experimental results show that the proposed algorithm is more accurate than Otsu's method and others, improving PSNR by 11.5% over Otsu's method; it can segment even a single image, adapts to ultrasound images of arbitrary shape, and supports more accurate computation of various cardiac function parameters.
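The clustering-and-replace step this abstract describes can be sketched with a plain one-dimensional K-means on intensity values, after which each pixel takes the value of its cluster centre. The anisotropic diffusion preprocessing and the prior-knowledge rules are omitted here:

```python
import numpy as np

def kmeans_quantize(pixels, k, n_iter=20):
    # 1-D K-means on intensities; each pixel is then replaced by the
    # centre of its cluster, producing a piecewise-constant image.
    pixels = np.asarray(pixels, float)
    centers = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]),
                           axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return centers[labels]
```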

18.
Coronary artery disease (CAD) causes the most deaths among all types of heart disorders. Early detection of CAD can save many human lives. Therefore, we have developed a new technique capable of detecting CAD using Heart Rate Variability (HRV) signals. These HRV signals are decomposed into sub-band signals using the Flexible Analytic Wavelet Transform (FAWT). Then, two nonlinear parameters, namely the K-Nearest Neighbour (K-NN) entropy estimator and Fuzzy Entropy (FzEn), are extracted from the decomposed sub-band signals. Ranking methods, namely Wilcoxon, entropy, Receiver Operating Characteristic (ROC) and the Bhattacharya space algorithm, are implemented to optimize the performance of the designed system. The proposed methodology showed the best performance with the entropy ranking technique. The Least Squares-Support Vector Machine (LS-SVM) with Morlet wavelet and Radial Basis Function (RBF) kernels obtained the highest classification accuracy of 100% for the diagnosis of CAD. The developed algorithm can be used to design an expert system for the automatic diagnosis of CAD using Heart Rate (HR) signals. Our system can be used in hospitals, polyclinics and community screening to aid cardiologists in their regular diagnosis.

19.
An iterative algorithm based on a general regularization scheme for nonlinear ill-posed problems in Hilbert scales (method A) is applied to the magnetocardiographic inverse problem imaging the surface myocardial activation time map. This approach is compared to an algorithm using an optimization routine for nonlinear ill-posed problems based on Tikhonov's approach of second order (method B). Method A showed good computational performance and the scheme for determining the proper regularization parameter lambda was found to be easier than in case of method B. The formulation is applied to magnetocardiographic recordings from a patient suffering from idiopathic ventricular tachycardia in which a sinus rhythm sequence was followed by a ventricular extrasystolic beat.

20.
Gait recognition based on an embedded hidden Markov model   (total citations: 1, self-citations: 0, cited by others: 1)
To extract gait features more effectively from multi-frame gait sequences, a gait recognition algorithm based on an embedded hidden Markov model is proposed. First, the human silhouette is extracted by background subtraction, the gait period is computed by analysing the autocorrelation of the silhouette-width vector, and the averaged gait energy image is obtained. Next, the two-dimensional discrete cosine transform is applied to obtain the spatial feature information of the averaged gait energy image, and the observation blocks of the energy image are converted into observation vectors to perform recognition. Finally, the algorithm is validated with a nearest-neighbour classifier on two different databases; the experimental results show that the algorithm achieves good recognition performance.
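The feature-extraction step described, a two-dimensional DCT of the averaged gait energy image (GEI) followed by nearest-neighbour matching, can be sketched with a numpy-only orthonormal DCT-II. The embedded-HMM observation modelling is not reproduced, and the block size is an assumption:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def gei_features(gei, block=4):
    # Keep the low-frequency top-left block of the 2-D DCT as the feature.
    D = dct_matrix(gei.shape[0])
    E = dct_matrix(gei.shape[1])
    return (D @ gei @ E.T)[:block, :block].ravel()

def nearest_neighbor(query, gallery):
    # Return the index of the closest gallery feature vector.
    return int(np.argmin([np.linalg.norm(query - g) for g in gallery]))
```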
