Similar Documents
20 similar documents found (search time: 15 ms)
1.
Two rapid estimation algorithms for constructing cerebral blood flow (CBF) and oxygen utilization (CMRO) images with dynamic positron emission tomography (PET) are presented. These algorithms are based on the linear least squares (LLS) and generalized linear least squares (GLLS) methodologies. Using the conventional two-compartment model and multiple tracer studies, we derived a linear relationship between brain tissue activity and arterial blood activity, time-integrated arterial blood activity and time-integrated brain tissue activity. The LLS technique is computationally efficient as no iterative regression is required, while GLLS refines the estimates obtained from LLS. A comparative study using non-linear least squares regression (NLS) revealed excellent correlation with the new algorithms at the various noise levels expected in clinical applications. A sensitivity analysis examined the reliability and identifiability of the parameter estimates. In view of these results, LLS and GLLS provide rapid and reliable estimates of CBF and CMRO when applied to dynamic PET data, and are particularly suitable for pixel-by-pixel construction of high-resolution, highly accurate PET functional images.
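A minimal sketch of the LLS idea described above, using a one-tissue compartment model for simplicity (the paper uses a two-compartment model): integrating dCt/dt = K1·Cp − k2·Ct gives Ct(t) = K1·∫Cp − k2·∫Ct, which is linear in the unknowns, so they can be estimated in one shot without iterative regression. The model, rate constants, and sampling grid are illustrative assumptions.

```python
import numpy as np

def simulate(K1, k2, t, Cp):
    """Forward-simulate tissue activity by Euler integration."""
    Ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])
    return Ct

def lls_fit(t, Cp, Ct):
    """Estimate (K1, k2) in one least-squares solve from cumulative integrals."""
    int_Cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * Cp[:-1])))
    int_Ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * Ct[:-1])))
    A = np.column_stack([int_Cp, -int_Ct])
    (K1, k2), *_ = np.linalg.lstsq(A, Ct, rcond=None)
    return K1, k2

t = np.linspace(0, 60, 6001)
Cp = np.exp(-0.1 * t) * t            # toy arterial input function
Ct = simulate(0.5, 0.2, t, Cp)       # ground truth K1=0.5, k2=0.2
K1, k2 = lls_fit(t, Cp, Ct)
print(round(K1, 3), round(k2, 3))    # -> 0.5 0.2
```

Because the design matrix involves only integrals of measured curves, the whole fit is a single linear solve per pixel, which is what makes pixel-by-pixel parametric imaging fast.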

2.
This article introduces new low-cost algorithms for the adaptive estimation and tracking of principal and minor components. The proposed algorithms are based on the well-known OPAST method, which is adapted and extended to achieve the desired MCA or PCA (minor or principal component analysis). For the PCA case, we propose efficient solutions using Givens rotations to estimate the principal components from the weight matrix given by the OPAST method. These solutions are then extended to the MCA case by using a transformed data covariance matrix, in such a way that the desired minor components are obtained from the PCA of the new (transformed) matrix. Finally, as a byproduct of our PCA algorithm, we propose a fast adaptive algorithm for data whitening that is shown to outperform the recently proposed RLS-based whitening method.
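A sketch of the covariance-transform idea the MCA extension relies on: the minor components of C are the principal components of a transformed matrix C' = c·I − C for any c ≥ λ_max(C). A batch eigendecomposition stands in here for the adaptive OPAST recursion, which is beyond a short sketch; the toy data and the choice c = trace(C) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data with clearly separated variances along each axis
X = rng.standard_normal((1000, 5)) * np.array([3.0, 2.0, 1.5, 1.0, 0.2])
C = X.T @ X / len(X)

c = np.trace(C)                    # cheap upper bound on lambda_max(C)
C_t = c * np.eye(5) - C            # transformed covariance

w = np.linalg.eigh(C_t)[1][:, -1]  # principal eigenvector of C'
v = np.linalg.eigh(C)[1][:, 0]     # minor eigenvector of C
print(round(abs(w @ v), 6))        # -> 1.0 (same direction, up to sign)
```

Any PCA tracker run on the transformed covariance therefore tracks the minor subspace of the original data.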

3.
To overcome shortcomings of existing algorithms, such as imprecise edge localization and excessive manual intervention, a new region-growing-based segmentation algorithm for regions of interest (ROI) in medical images is proposed. The algorithm first performs coarse edge detection with an improved Canny operator, then grows regions using grey-level and texture information to obtain the segmented image. To guide the growth, the grey levels and texture of pixels in the ROI are analysed, yielding a growing criterion that combines point-vector operations with grey-level tests. Experimental results show that the method can segment complex or malformed regions in medical images with good robustness and practicality.
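A minimal region-growing sketch using a grey-level criterion only; the paper's criterion additionally combines texture and a point-vector test, which are omitted here. The toy image, seed, and tolerance are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow from seed, accepting 4-neighbours whose grey level is
    within tol of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return mask

img = np.zeros((8, 8)); img[2:6, 2:6] = 100.0    # bright square ROI
mask = region_grow(img, (3, 3), tol=10.0)
print(mask.sum())                                 # -> 16
```

Comparing against the running region mean rather than the seed value is what lets the region adapt as it grows, at the cost of some order dependence.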

4.
Research on 3D scanned point cloud data processing in vision sensor applications (total citations: 2, self-citations: 0)
This paper analyses the problems present in point cloud data acquired by a portable laser vision scanning system and, for each problem, examines the key steps and algorithms of the data processing pipeline. A combination of manual removal and automatic system judgment effectively eliminates noise from the scanned data. A data-reduction algorithm then subsamples the scanned point cloud, trimming unnecessary points as far as possible without distorting the scanned surface features. The processed data not only improves the accuracy of model reconstruction but also reduces its complexity.
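A sketch of one common data-reduction step of the kind described: voxel-grid subsampling keeps one representative point (the centroid) per voxel, thinning dense scan data while preserving surface shape. The voxel size and toy cloud are assumptions; the paper does not specify this particular reduction algorithm.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Replace all points in each (voxel x voxel x voxel) cell by
    their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 1, size=(10000, 3))   # toy dense scan
thin = voxel_downsample(cloud, voxel=0.25)   # 4 x 4 x 4 grid
print(len(thin))                             # -> 64
```

The voxel size directly trades reduction ratio against surface fidelity: smaller voxels keep more detail, larger ones shrink the data harder.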

5.
Minor component analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix of input signals. Convergence is essential if MCA algorithms are to be used in practical applications. Traditionally, the convergence of MCA algorithms is analyzed indirectly via their corresponding deterministic continuous time (DCT) systems. However, the DCT method requires the learning rate to approach zero, which is not reasonable in many applications due to round-off limitations and tracking requirements. This paper studies the convergence of the deterministic discrete time (DDT) system associated with the OJAn MCA learning algorithm. Unlike the DCT method, the DDT method does not require the learning rate to approach zero. Some important convergence results are obtained for the OJAn MCA learning algorithm via the DDT method, and simulations are carried out to illustrate the theoretical results.
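A DDT-style sketch of a normalized Oja minor-component iteration with a fixed, non-vanishing learning rate, applied directly to the covariance matrix (i.e., the deterministic discrete-time system rather than the stochastic online rule). The toy covariance, start vector, and step size are assumptions chosen to satisfy the usual step-size conditions; this is an illustration of the DDT setting, not the paper's exact analysis.

```python
import numpy as np

C = np.diag([3.0, 2.0, 1.0])      # minor component is e3 = [0, 0, 1]
w = np.array([0.6, 0.6, 0.5])     # arbitrary start, roughly unit norm
eta = 0.05                        # fixed learning rate (never -> 0)
for _ in range(2000):
    q = (w @ C @ w) / (w @ w)     # Rayleigh quotient
    w = w - eta * (C @ w - q * w) # descend toward the minor direction

w_hat = w / np.linalg.norm(w)
print(np.round(np.abs(w_hat), 3))  # aligns with [0, 0, 1]
```

Each step multiplies the i-th eigen-component by 1 − η(λ_i − q), so components with eigenvalues above the Rayleigh quotient shrink relative to those below it, and the iterate converges in direction to the minor eigenvector while its norm stays bounded.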

6.
程琳琳  陈昭炯 《计算机工程》2010,36(16):195-197
To address the shortcomings of image non-photorealistic rendering (NPR) algorithms and region-of-interest (ROI) extraction algorithms, an ROI-based image NPR algorithm is proposed. It uses the Mean-Shift algorithm to achieve the NPR effect, exploits the image's ROI to overcome the poor time efficiency of traditional algorithms, and applies colour transfer to automatically add and alter colour, improving the visual effect. Experimental results show that the algorithm produces good artistic effects and effectively improves efficiency.

7.
The use of the functional PET information from PET-CT scans to improve liver segmentation from low-contrast CT data is yet to be fully explored. In this paper, we fully utilize PET information to tackle challenging liver segmentation issues, including (1) the separation and removal of the surrounding muscles from the liver region of interest (ROI), (2) better localization and mapping of the probabilistic atlas onto the low-contrast CT for a more accurate tissue classification, and (3) an improved initial estimation of the liver ROI to speed up the convergence of the expectation-maximization (EM) algorithm for the Gaussian mixture model under the guidance of a probabilistic atlas. The primary liver extraction from the PET volume provides a simple mechanism to avoid the complicated feature-extraction pre-processing used in existing liver CT segmentation methods. It also guides the probabilistic atlas to better conform to the CT liver region, helping to overcome the challenge posed by liver shape variability. The proposed method was evaluated against manual segmentation by experienced radiologists. Experimental results on 35 clinical PET-CT studies demonstrate that the method is accurate and robust for automated normal liver segmentation.
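A minimal 1-D sketch of the EM step whose convergence the paper accelerates: fit a two-component Gaussian mixture by alternating responsibilities (E-step) and parameter updates (M-step). A good initial estimate, here the rough means, is what the PET-derived liver ROI supplies in the paper; the toy data and all parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500),   # "background" intensities
                    rng.normal(5, 1, 500)])  # "liver" intensities

mu = np.array([1.0, 4.0])                    # rough initial estimate
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each sample
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted parameter updates
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / len(x)

print(np.round(np.sort(mu), 1))              # means recovered near 0 and 5
```

EM converges only locally, which is why a good initial liver ROI (and hence good initial means) both speeds it up and steers it to the right local optimum.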

8.
Extracting brain tissue from brain MRI images is an important preprocessing step in imaging analysis. This paper proposes a brain-tissue extraction method based on a region of interest and a hybrid level-set method. The method first uses BET to obtain an ROI containing the brain, then evolves an improved hybrid level set within that ROI to find the true brain boundary. The improved hybrid level set adopts a nonlinear speed function that effectively prevents boundary leakage. All MRI data were obtained from the Internet Brain Segmentation Repository (IBSR). On 18 IBSR MRI data sets, the results are close to the manual gold-standard extractions and achieve the best scores on several evaluation metrics, indicating that the method extracts brain tissue with good accuracy and stability.

9.
Neural network algorithms for principal component analysis (PCA) and minor component analysis (MCA) are important in signal processing. A unified (dual-purpose) algorithm is capable of both PCA and MCA, and is thus valuable for reducing the complexity and cost of hardware implementations. A coupled algorithm can mitigate the speed-stability problem that exists in most non-coupled algorithms. Although unified and coupled algorithms have these advantages over single-purpose and non-coupled algorithms, respectively, only a few of them have been proposed and, to the best of the authors' knowledge, no algorithm that is both unified and coupled has been reported. In this paper, based on a novel information criterion, we propose two self-stabilizing algorithms that are both unified and coupled. Their derivation is easier than with traditional methods because no inverse Hessian matrix needs to be calculated. Experimental results show that the proposed algorithms outperform existing coupled and unified algorithms.

10.
刘亚东  李翠华 《微机发展》2008,18(3):200-202
A detection algorithm for vehicles ahead, based on multi-scale edges and the local-entropy principle, is proposed. Using visual features such as vehicle edges and texture, the algorithm derives three image scales (far, middle and near distance) from the camera parameters, analyses the edges of each image with an improved edge-detection algorithm to obtain vehicle regions of interest (ROI), and finally applies the local-entropy principle to reject false detections. Tests on the same frame sequences show that, compared with the traditional algorithm, the proposed algorithm raises the detection rate and reduces the number of false detections; it works for both stationary and moving vehicles and performs well on vehicles at middle to far distances.
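A sketch of the local-entropy check used to reject false vehicle ROIs: a window over real structure (a textured car rear) has a higher grey-level entropy than a flat road patch, so low-entropy candidates can be discarded. The window size and 16-bin histogram are assumptions.

```python
import numpy as np

def local_entropy(win, bins=16):
    """Shannon entropy (bits) of the grey-level histogram of a window."""
    hist, _ = np.histogram(win, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((32, 32), 120.0)                  # uniform road surface
textured = rng.uniform(0, 256, size=(32, 32))    # busy, edge-rich patch

print(local_entropy(flat), local_entropy(textured))  # low vs high entropy
```

Thresholding this score separates structureless candidate windows from genuinely textured ones.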

11.
In high-intensity focused ultrasound (HIFU) therapy, the image-segmentation step is time-consuming and depends on the clinician's accurate judgment of the lesion region. A new automatic segmentation method tailored to HIFU therapy is proposed: the target point and target-region size defined in the pre-treatment plan delimit a region of interest (ROI) on the post-treatment B-mode ultrasound image; Otsu (maximum between-class variance) threshold analysis is applied to the image within the ROI; and the resulting threshold then drives a threshold-based level-set segmentation of the post-treatment image. Experimental results show that, compared with region-threshold segmentation, the method achieves a sensitivity of 98.67% and an accuracy of 93.52%.
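A minimal sketch of the Otsu step: exhaustively pick the grey level that maximises the between-class variance of the ROI histogram, which then seeds the level-set stage. The 8-bit toy image with two intensity populations is an assumption.

```python
import numpy as np

def otsu(img):
    """Return the threshold maximising between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * levels[:t]).sum() / w0
        mu1 = (p[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000),    # background speckle
                      rng.normal(180, 10, 5000)])  # bright lesion region
t = otsu(np.clip(img, 0, 255))
print(t)                          # lands between the two intensity modes
```

Restricting the histogram to the planned ROI, as the paper does, keeps the two-class assumption behind Otsu valid even when the full image contains many tissue types.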

12.
In automated bone-age assessment, reliably locating and extracting the phalangeal and carpal ROIs is a difficult and key open problem. Building on the shape information of the finger and wrist bones, this paper removes the image background by fitting it with bivariate cubic linear regression, locates the key points of the carpal and phalangeal ROIs with a k-cosine-based method, and finally extracts both ROIs. Validation on more than 60 clinical bone-age X-ray images shows an extraction accuracy above 93%. The method is insensitive to changes in the background grey level and in image position and orientation, making it highly robust and directly applicable to subsequent automated bone-age assessment research.
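A sketch of the k-cosine measure used to locate key points on a contour: at point i, take the vectors to the points k steps away on either side; the cosine of the angle between them is maximal at sharp corners (near 0 for a right angle) and minimal on straight edges (−1). The toy square contour and k are assumptions.

```python
import numpy as np

def k_cosine(contour, k):
    """k-cosine at every point of a closed contour (n x 2 array)."""
    n = len(contour)
    cos = np.empty(n)
    for i in range(n):
        a = contour[(i - k) % n] - contour[i]
        b = contour[(i + k) % n] - contour[i]
        cos[i] = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos

# square contour, 5 samples per side, corners at indices 0, 5, 10, 15
side = np.arange(5)
contour = np.array(
    [(x, 0) for x in side] + [(5, y) for y in side] +
    [(5 - x, 5) for x in side] + [(0, 5 - y) for y in side], float)

cos = k_cosine(contour, k=2)
corners = np.sort(np.argsort(cos)[-4:])   # four highest-cosine points
print(corners)                            # -> [ 0  5 10 15]
```

Because the measure depends only on local contour geometry, it is unaffected by image translation and rotation, consistent with the robustness the abstract claims.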

13.
This paper decomposes the traditional manual examination workflow and analyses the system functions, relational data-model design, test-paper assembly algorithm, and online examination and scoring algorithms of a computer-based online examination system, establishing the corresponding algorithms and laying a theoretical foundation for the software design and implementation of such systems.

14.
A new method for automatic initial estimation of the position of the middle of the left ventricular (LV) myocardial wall (the LV myocardial midwall) in emission tomograms is presented. This method eliminates the manual interaction still required by other, more accurate LV delineation algorithms, which consists of indicating the LV long axis and/or the LV extremities. A well-known algorithm from the neural-network literature, Kohonen's self-organizing map, was adapted to use general shapes and to behave well for data with large background noise.

15.
In this paper, we propose an algorithm for lossy adaptive encoding of digital three-dimensional (3D) images based on singular value decomposition (SVD). This encoding allows us to design algorithms for progressive transmission and reconstruction of the 3D image, for one or several selected regions of interest (ROI), avoiding redundancy in data transmission. The main characteristic of the proposed algorithms is that the ROIs can be selected during the transmission process: it is not necessary to re-encode the image to transmit the data corresponding to a newly selected ROI. An example with a CT data set of 93 parallel slices, to which we added an implanted tumor (the ROI in this example), and a comparison with JPEG2000 are given.
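A sketch of the SVD-based progressive idea for a single slice: transmitting singular triplets in order of decreasing singular value lets the receiver refine its reconstruction monotonically, and further triplets can later be spent on an ROI only. The toy slice and the rank schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 64 x 64 "slice" with rapidly decaying spectrum
slice_ = rng.standard_normal((64, 64)) @ rng.standard_normal((64, 64))
U, s, Vt = np.linalg.svd(slice_, full_matrices=False)

errors = []
for k in (4, 16, 64):                      # progressively more triplets
    approx = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k reconstruction
    errors.append(np.linalg.norm(slice_ - approx))
print(errors)                              # strictly decreasing, last ~0
```

Each increment reuses the triplets already received, which is exactly why no re-encoding is needed when the client requests more quality or a new ROI.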

16.
Purpose: In intelligent bone-age assessment, accurately extracting the region of interest (ROI) of each reference bone in the hand and wrist is key to precise assessment. Methods based on conventional deep learning miss or misidentify individual reference bones, lowering the average extraction accuracy. Leveraging the strong localization and recognition ability of object detection, this paper proposes an automatic reference-bone matching and correction method aimed at accurately extracting all hand and wrist ROIs. Method: Exploiting the regularity and correlation of shape, position and other features across reference bones, a large set of hand/wrist atlases covering both sexes and all age groups was collected as reference-bone matching samples, and the ROIs are then extracted in several stages: 1) an object-detection algorithm first extracts candidate ROIs for all reference bones, discarding regions whose confidence falls below a threshold; 2) a key-point matching model built from the large sample set automatically matches and fills in the discarded regions, ensuring the completeness of ROI extraction; 3) a multi-scale sliding window combined with an ROI classification model applies sliding correction to the filled-in ROI positions, further improving extraction accuracy. Results: Experiments show that the combined detection-and-correction method outperforms the great majority of existing methods; the matching-correction stage raises the average accuracy by about 1.42% over the object-detection results alone, and when combined with Faster R-CNN (region-co...

17.
In traditional feature engineering, extracting features from relational entities is entirely manual, tedious, time-consuming and error-prone; the Deep Feature Synthesis algorithm can synthesize large numbers of features for structured data, automating feature engineering over relational entities. To address the severe redundancy of the synthesized features and the difficulty of filtering them, an attribute-filtering algorithm combining Kullback-Leibler (KL) divergence and Hellinger distance is proposed. By joining entities to the labels through their mappings, the importance of each attribute in an entity is measured; attributes are filtered in multiple passes, and attributes of low importance are excluded from Deep Feature Synthesis, yielding an optimized synthesis result. Experiments with several machine-learning algorithms on three public data sets of different types show that the improved method markedly reduces run time and synthesized data size while effectively improving the quality of the synthesized features and the final prediction accuracy.
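A sketch of one way an attribute-importance score can combine the two measures named in the abstract: compare an attribute's distribution across the two label classes using symmetrised KL divergence blended with Hellinger distance, then keep high-scoring attributes. The blend weight, binning, and toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def class_histograms(values, labels, bins=10):
    """Per-class normalised histograms over a common range."""
    lo, hi = values.min(), values.max()
    p, _ = np.histogram(values[labels == 0], bins=bins, range=(lo, hi))
    q, _ = np.histogram(values[labels == 1], bins=bins, range=(lo, hi))
    eps = 1e-9                      # avoid log(0) in the KL term
    return p / p.sum() + eps, q / q.sum() + eps

def divergence_score(values, labels, alpha=0.5):
    """Blend of symmetrised KL divergence and Hellinger distance."""
    p, q = class_histograms(values, labels)
    kl = 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return alpha * kl + (1 - alpha) * hellinger

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 2000)
informative = labels + rng.normal(0, 0.3, 2000)   # tracks the label
noise = rng.normal(0, 1, 2000)                     # ignores the label
print(divergence_score(informative, labels) >
      divergence_score(noise, labels))             # -> True
```

Attributes whose class-conditional distributions barely differ score near zero and can be dropped before synthesis, shrinking the combinatorial explosion of derived features.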

18.
A novel compression algorithm suited to medical images, AR-EWC (embedded wavelet coding of arbitrary ROI), is proposed. The algorithm guarantees a smooth transition between ROI boundaries and the background, supports lossless coding of arbitrarily shaped ROIs, generates an embedded bitstream with progressively refined quality from lossy to lossless, and allows independent random access to each ROI. Tailored to the characteristics of medical images, it applies fully lossless compression to the regions carrying important clinical diagnostic information and high-ratio lossy compression to the background, meeting the high quality demands of medical imaging while greatly improving the overall compression ratio compared with conventional lossless schemes. Experiments on a clinical head MR image data set show that, while guaranteeing lossless compression of the ROI, the algorithm achieves compression ratios comparable to classic lossy compression algorithms.

19.
Principal component analysis (PCA) and minor component analysis (MCA) are similar but have different dynamical performances. Unexpectedly, a sequential extraction algorithm for MCA proposed by Luo and Unbehauen [11] does not work for MCA, although it works for PCA. We propose a different sequential-addition algorithm that does work for MCA. We also show a conversion mechanism by which any PCA algorithm can be converted into a dynamically equivalent MCA algorithm, and vice versa.

20.
Penalized-distance volumetric skeleton algorithm (total citations: 13, self-citations: 0)
Introduces a refined general definition of a skeleton that is based on a penalized distance function and that cannot create any of the degenerate cases of the earlier CEASAR (Center-line Extraction Algorithm - Smooth, Accurate and Robust) and TEASAR (Tree-structure Extraction Algorithm for Skeletons - Accurate and Robust) algorithms. Additionally, we provide an algorithm that finds the skeleton accurately and rapidly. Our solution is fully automatic, freeing the user from manual data pre-processing. We present accurate skeletons computed on a number of test data sets. The algorithm is very efficient, as demonstrated by the running times, which were all below seven minutes.
