Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The primary objective of this study was to develop a computer-aided method for the quantification of three-dimensional (3-D) cartilage changes over time in knees with osteoarthritis (OA). We introduced a local coordinate system (LCS) for the femoral and tibial cartilage boundaries that provides a standardized representation of cartilage geometry, thickness, and volume. The LCS can be registered in different data sets from the same patient so that results can be directly compared. Cartilage boundaries are segmented from 3-D magnetic resonance (MR) slices with a semi-automated method and transformed into offset-maps, defined by the LCS. Volumes and thickness are computed from these offset-maps. Further anatomical labeling allows focal volumes to be evaluated in predefined subregions. The accuracy of the automated behavior of the method was assessed, without any human intervention, using realistic, synthetic 3-D MR images of a human knee. The error in thickness evaluation is lower than 0.12 mm for the tibia and femur. Cartilage volumes in anatomical subregions show a coefficient of variation ranging from 0.11% to 0.32%. This method improves noninvasive 3-D analysis of cartilage thickness and volume and is well suited for in vivo follow-up clinical studies of OA knees.

2.
Curvilinear synthetic aperture radar (CLSAR) forms the curvilinear synthetic aperture required for three-dimensional (3-D) imaging from a single curved trajectory of the radar platform. Because the data collected by CLSAR are sparse in the 3-D frequency space, images obtained by simple nonparametric methods are nearly unusable, so useful 3-D target images must be obtained with parametric methods. This paper proposes a new 3-D target imaging algorithm for CLSAR. The algorithm exploits the weak coupling between the range-direction and cross-range-direction parameters in the received data to decouple the high-dimensional optimization problem into low-dimensional ones, estimates the corresponding parameters sequentially, and finally refines the parameters with an iterative procedure. Simulation results show that the new algorithm is an effective 3-D target imaging algorithm for CLSAR.

3.
To address 3-D deep engraving of hard materials, a new algorithm for laser engraving true 3-D shapes in solid materials is proposed, based on a thorough analysis of occlusion relations in 3-D image space. First, a bounding-box algorithm maps the 3-D model into a large spatial box; rays parallel to the z axis are then cast against the triangular facets to obtain a complete set of envelope points for the STL (stereolithography) model file. The points are normalized into small box cells, and the corresponding occluded cell columns are marked, yielding the STL solid engraving data. The algorithm has been applied in a laser 3-D engraving system: for an STL file with about 130,000 facets and dimensions 268 mm × 422 mm × 253 mm, it takes only 2 s to 5 s. The results show that the algorithm is practical and efficient.
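The ray-casting step described above (rays parallel to the z axis intersected with STL triangles) relies on a standard ray/triangle intersection test. A minimal sketch using the Möller–Trumbore algorithm in Python with NumPy (the function name and the toy triangle are illustrative, not from the paper):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                      # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv                       # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv               # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv                      # distance along the ray
    return t if t > eps else None

# z-axis ray against a triangle lying in the z = 2 plane
v0 = np.array([-1.0, -1.0, 2.0])
v1 = np.array([2.0, -1.0, 2.0])
v2 = np.array([-1.0, 2.0, 2.0])
t = ray_triangle(np.zeros(3), np.array([0.0, 0.0, 1.0]), v0, v1, v2)
```

Casting one such ray per (x, y) cell column and recording the nearest hit per column gives the envelope points the algorithm normalizes into its box cells.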

4.
Otsu thresholding algorithm based on 3-D histogram reconstruction and dimension reduction   Cited: 3 (self-citations: 0; by others: 3)
申铉京, 龙建武, 陈海鹏, 魏巍. Acta Electronica Sinica (电子学报), 2011, 39(5): 1108-1114
To address the poor noise robustness caused by region misclassification in the 3-D Otsu thresholding algorithm, an Otsu thresholding algorithm based on 3-D histogram reconstruction and dimension reduction is proposed. Building on a detailed analysis of the distribution of noise points in the 3-D histogram, the method first reconstructs the 3-D histogram to suppress noise interference; it then changes the partitioning of the 3-D histogram from an octant split to a binary split, reducing the threshold search space from three dimensions to one and thereby cutting processing time and storage. …
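After the dimension reduction, the search reduces to the classical 1-D Otsu criterion, which picks the threshold maximizing between-class variance. A minimal NumPy sketch of that 1-D criterion (illustrative only; the paper's contribution is the 3-D histogram reconstruction and the reduction itself):

```python
import numpy as np

def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance for a 1-D histogram."""
    p = hist.astype(float) / hist.sum()       # gray-level probabilities
    omega = np.cumsum(p)                      # class-0 probability up to t
    mu = np.cumsum(p * np.arange(len(p)))     # class-0 first moment up to t
    mu_t = mu[-1]                             # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf                # guard against empty classes
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom   # between-class variance
    return int(np.argmax(sigma_b2))

# bimodal toy histogram over 16 gray levels: peaks around levels 2 and 12
hist = np.zeros(16)
hist[1:4] = [10, 30, 10]
hist[11:14] = [10, 30, 10]
t = otsu_threshold(hist)
```

The criterion is evaluated once per candidate threshold, so the 1-D search is O(L) for L gray levels, versus O(L³) for a naive 3-D search.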

5.
Exploiting Motion Correlations in 3-D Articulated Human Motion Tracking   Cited: 1 (self-citations: 0; by others: 1)
In 3-D articulated human motion tracking, the curse of dimensionality renders commonly used particle-filter-based approaches inefficient. Noisy image measurements and imperfect feature extraction also call for a strong motion prior. We propose to learn the correlation between right-side and left-side human motion using partial least squares (PLS) regression. The correlation effectively constrains the sampling of the proposal distribution to portions of the parameter space that correspond to plausible human motions. The learned correlation is then used as a motion prior in designing a Rao–Blackwellized particle filter algorithm, RBPF-PLS, which estimates only one group of state variables using the Monte Carlo method, leaving the other group to be computed exactly through an analytical filter that utilizes the learned motion correlation. We quantitatively assessed the accuracy of the proposed algorithm on the challenging HumanEva-I/II datasets. Comparisons with both the annealed particle filter and the standard particle filter show that the proposed method achieves lower estimation error on challenging real-world 3-D human motion data. In particular, the experiments demonstrate that the learned motion correlation model generalizes well to motions outside the training set and is insensitive to the choice of training subjects, suggesting the wide applicability of the method.

6.
Adaptive fuzzy segmentation of magnetic resonance images   Cited: 34 (self-citations: 0; by others: 34)
An algorithm is presented for the fuzzy segmentation of two-dimensional (2-D) and three-dimensional (3-D) multispectral magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities, also known as shading artifacts. The algorithm is an extension of the 2-D adaptive fuzzy C-means algorithm (2-D AFCM) presented in previous work by the authors. This algorithm models the intensity inhomogeneities as a gain field that causes image intensities to smoothly and slowly vary through the image space. It iteratively adapts to the intensity inhomogeneities and is completely automated. In this paper, we fully generalize 2-D AFCM to three-dimensional (3-D) multispectral images. Because of the potential size of 3-D image data, we also describe a new faster multigrid-based algorithm for its implementation. We show, using simulated MR data, that 3-D AFCM yields lower error rates than both the standard fuzzy C-means (FCM) algorithm and two other competing methods, when segmenting corrupted images. Its efficacy is further demonstrated using real 3-D scalar and multispectral MR brain images.
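For reference, the standard (non-adaptive) fuzzy C-means iteration that AFCM extends alternates a membership update and a centroid update. A toy 1-D NumPy sketch (function and parameter names are illustrative; the paper's AFCM additionally estimates a smoothly varying gain field, which is omitted here):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy C-means on 1-D data: alternate membership/centroid updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                            # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m                               # fuzzified memberships
        centers = um @ X / um.sum(axis=1)         # weighted centroids
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12   # sample-center distances
        u = d ** (-2.0 / (m - 1))                 # standard FCM membership formula
        u /= u.sum(axis=0)
    return centers, u

X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])      # two well-separated clusters
centers, u = fcm(X)
```

AFCM replaces the distance term with one that divides intensities by the estimated gain field, so the clustering adapts to shading artifacts.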

7.
We consider the problem of approximating a set of arbitrary, discrete-time, Gaussian random processes by a single, representative wavelet-based, Gaussian process. We measure the similarity between the original processes and the wavelet-based process with the Bhattacharyya (1943) coefficient. By manipulating the Bhattacharyya coefficient, we reduce the task of defining the representative process to finding an optimal unitary matrix of wavelet-based eigenvectors, an associated diagonal matrix of eigenvalues, and a mean vector. The matching algorithm we derive maximizes the nonadditive Bhattacharyya coefficient in three steps: a migration algorithm that determines the best basis by searching through a wavelet packet tree for the optimal unitary matrix of wavelet-based eigenvectors; and two separate fixed-point algorithms that derive an appropriate set of eigenvalues and a mean vector. We illustrate the method with two different classes of processes: first-order Markov and bandlimited. The technique is also applied to the problem of robust terrain classification in polarimetric SAR images.
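The Bhattacharyya coefficient between two multivariate Gaussians has a closed form, BC = exp(-D_B), where D_B combines a Mahalanobis-like mean term and a covariance term. A NumPy sketch of that closed form (illustrative only; the paper's contribution is the wavelet-domain optimization built on top of it):

```python
import numpy as np

def bhattacharyya_coeff(mu1, S1, mu2, S2):
    """Bhattacharyya coefficient between N(mu1, S1) and N(mu2, S2)."""
    S = 0.5 * (S1 + S2)                          # averaged covariance
    dmu = mu1 - mu2
    db = (dmu @ np.linalg.solve(S, dmu)) / 8.0 \
         + 0.5 * np.log(np.linalg.det(S)
                        / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.exp(-db)                           # 1 for identical, -> 0 for distant

mu = np.zeros(2)
S = np.eye(2)
same = bhattacharyya_coeff(mu, S, mu, S)         # identical densities
far = bhattacharyya_coeff(mu, S, mu + 10.0, S)   # widely separated means
```

The coefficient lies in (0, 1], which is what makes it a convenient similarity score to maximize over the wavelet-based eigenvector basis.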

8.
In this paper, we propose a new wavelet-based reconstruction method suited to three-dimensional (3-D) cone-beam (CB) tomography. It is derived from the Feldkamp algorithm and is valid under the same geometrical conditions. The derivation is done in the framework of nonseparable wavelets and ideally requires radial wavelets. The proposed inversion formula yields a filtered backprojection algorithm, but the filtering step is implemented using quincunx wavelet filters. The proposed algorithm reconstructs, slice by slice, both the wavelet and approximation coefficients of the 3-D image directly from the CB projection data. The validity of this multiresolution approach is demonstrated on simulations of both mathematical phantoms and 3-D rotational angiography clinical data. The same quality is achieved as with the standard Feldkamp algorithm, but in addition the multiresolution decomposition allows image processing techniques to be applied directly in the wavelet domain during the inversion process. As an example, a fast low-resolution reconstruction of the 3-D arterial vessels with the progressive addition of details in a region of interest is demonstrated. Other promising applications are the improvement of image quality by denoising techniques and the reduction of computing time using the spatial localization of wavelets.

9.
Presented is a novel compressive sensing (CS) based indoor positioning approach, which uses signal strength differentials (SSDs) as location fingerprints (LFs). Using a kernel-based transformation basis, the 2-D target location is represented as an unknown sparse location vector in the discrete spatial domain. Only a small number of noisy online SSD measurements is then needed for exact recovery of the sparse location vector by solving an ℓ1-minimization program. To apply CS theory effectively for high-precision indoor positioning, we further introduce data pre-processing algorithms in the LF space. First, to mitigate the influence of large measurement noise on recovery accuracy, an LF-space denoising algorithm is designed to discriminate the unequal localization contribution of each SSD measurement in every LF; its basic idea is to transform the original LF space into a robust, decorrelated LF space. Moreover, to lower the high computational complexity of the CS recovery algorithm, several LF-space filtering algorithms are exploited to remove a certain percentage of useless LFs from the radio map according to real-time RSS observations. The performance of these denoising and filtering algorithms is investigated and compared in real-world WLAN experiments. Both experimental results and simulations demonstrate that the proposed algorithms achieve remarkable improvements in the positioning performance of CS-based localization.
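Sparse recovery of this kind is often approximated with greedy solvers rather than a full ℓ1 program; orthogonal matching pursuit (OMP) is a common stand-in. A NumPy sketch on synthetic data (OMP is an illustrative substitute here; the paper solves an ℓ1-minimization program):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse vector from y = Ax."""
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))          # most correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ x_s                  # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))                        # 20 measurements, 40 atoms
A /= np.linalg.norm(A, axis=0)                           # unit-norm columns
x_true = np.zeros(40)
x_true[[5, 17]] = [1.0, -2.0]                            # 2-sparse location vector
y = A @ x_true                                           # noiseless measurements
x_hat = omp(A, y, 2)
```

In the positioning setting, each column of A corresponds to one candidate grid location and the recovered support indicates where the target is.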

10.
Eye movement artifacts occurring during 3-D optical coherence tomography (OCT) scanning are a well-recognized problem that may adversely affect image analysis and interpretation. A particle filtering algorithm is presented in this paper to correct motion in a 3-D dataset by treating eye movement as a target tracking problem in a dynamic system. The proposed particle filtering algorithm is an independent 3-D alignment approach that does not rely on any reference image. The 3-D OCT data are considered a dynamic system, with the location of each A-scan represented by the state space. A particle set is used to approximate the probability density of the state in the dynamic system, and the state is updated frame by frame to detect A-scan movement. The proposed method was applied to simulated data for objective evaluation and to experimental data for subjective evaluation. The sensitivity and specificity of x-movement detection were 98.85% and 99.43%, respectively, in the simulated data. For the experimental data (74 3-D OCT images), all images were improved after z-alignment, and 81.1% were improved after x-alignment. The proposed algorithm is an efficient way to align 3-D OCT volume data and correct eye movement without using references.
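The generic predict/weight/resample loop underlying such tracking can be sketched with a bootstrap particle filter on a 1-D random-walk state (a toy model; the paper's state space is the per-A-scan location within the OCT volume, not this scalar drift):

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(obs, n_particles=500, proc_std=1.0, meas_std=2.0):
    """Bootstrap particle filter tracking a 1-D position from noisy observations."""
    particles = rng.normal(obs[0], meas_std, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    track = []
    for z in obs:
        particles += rng.normal(0.0, proc_std, n_particles)          # predict
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)  # weight by likelihood
        weights /= weights.sum()
        track.append(np.sum(weights * particles))                    # posterior mean
        idx = rng.choice(n_particles, n_particles, p=weights)        # resample
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(track)

true_pos = np.linspace(0.0, 10.0, 30)            # slow drift, e.g. gradual eye motion
obs = true_pos + rng.normal(0.0, 2.0, 30)        # noisy per-frame measurements
est = particle_filter(obs)
```

The filtered track is smoother than the raw observations, which is what allows per-frame movement to be detected and corrected without a reference image.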

11.
Reconstruction of 3-D horizons from 3-D seismic datasets   Cited: 2 (self-citations: 0; by others: 2)
We propose a method for extracting automatically and simultaneously the quasi-horizontal surfaces in three-dimensional (3-D) seismic data. The proposed algorithm identifies connected sets of points which form surfaces in 3-D space. To improve reliability, this algorithm takes into consideration the relative positions of all horizons, and uses globally self-consistent connectivity criteria which respect the temporal order of horizon creation. The first stage of the algorithm consists of the preliminary estimation of the local direction of each horizon at each point of the 3-D space. The second stage consists of smoothing the signal along the detected layer structure to reduce noise. The last stage consists of the simultaneous building of all 3-D horizons. The output of the processing is a set of 3-D horizons represented by a series of triangulated surfaces.

12.
3-D radar imaging using range migration techniques   Cited: 8 (self-citations: 0; by others: 8)
An imaging system with three-dimensional (3-D) capability can be implemented using a stepped-frequency radar that synthesizes a two-dimensional (2-D) planar aperture. A 3-D image can be formed by coherently integrating the backscatter data over the measured frequency band and the two spatial coordinates of the 2-D synthetic aperture. This paper presents a near-field 3-D synthetic aperture radar (SAR) imaging algorithm. The algorithm is an extension of the 2-D range migration algorithm (RMA), and the presented formulation is justified using the method of stationary phase (MSP). Implementation aspects including the sampling criteria, resolutions, and computational complexity are assessed. The high computational efficiency and accurate image reconstruction of the algorithm are demonstrated both with numerical simulations and with measurements from an outdoor linear SAR system.
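For the frequency axis alone, the coherent integration amounts to an inverse DFT of the stepped-frequency data, which produces the down-range profile. A toy NumPy sketch (radar parameters and target placement are illustrative; the full RMA additionally processes the two aperture axes):

```python
import numpy as np

# Stepped-frequency data: N steps of df starting at f0; an IFFT over frequency
# yields the down-range profile with bin size c / (2B), where B = N * df.
c = 3e8
N, df, f0 = 64, 5e6, 10e9
freqs = f0 + np.arange(N) * df
dr = c / (2 * N * df)                         # range-bin size c/(2B)
targets = [(12 * dr, 1.0), (32 * dr, 0.5)]    # (range in m, reflectivity), on bin centers
# round-trip phase for each target: exp(-j * 4*pi * f * r / c)
data = sum(a * np.exp(-1j * 4 * np.pi * freqs * r / c) for r, a in targets)
profile = np.abs(np.fft.ifft(data))           # peaks at bins 12 and 32
```

Because the toy targets sit exactly on bin centers, the two peaks appear without leakage and their heights equal the reflectivities.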

13.
The radial derivative of the three-dimensional (3-D) radon transform of an object is an important intermediate result in many analytically exact cone-beam reconstruction algorithms. The authors briefly review Grangeat's (1991) approach for calculating radon derivative data from cone-beam projections and then present a new, efficient method for 3-D radon inversion, i.e., reconstruction of the image from the radial derivative of the 3-D radon transform, called direct Fourier inversion (DFI). The method is based directly on the 3-D Fourier slice theorem. From the 3-D radon derivative data, which are assumed to be sampled on a spherical grid, the 3-D Fourier transform of the object is calculated by performing fast Fourier transforms (FFTs) along radial lines in radon space. An interpolation is then performed from the spherical to a Cartesian grid using a 3-D gridding step in the frequency domain. Finally, this 3-D Fourier transform is transformed back to the spatial domain via a 3-D inverse FFT. The algorithm is computationally efficient, with complexity on the order of N^3 log N. The authors have performed reconstructions of simulated 3-D radon derivative data assuming sampling conditions and image quality requirements similar to those in medical computed tomography (CT).

14.
For V-BLAST detection, the maximum-likelihood (ML) algorithm has optimal performance but also the highest computational complexity; the classical ordered successive interference cancellation (OSIC) algorithm has lower complexity but poor numerical stability and a large performance gap to ML. Trading off detection performance against computational complexity, this paper proposes a grouped maximum-likelihood (Group ML, GML) detection algorithm for 4×4 V-BLAST systems: while maintaining good detection performance, it reduces complexity by splitting the four-dimensional ML detector into two two-dimensional ML detectors. A simplified maximum-likelihood (Simplified ML, SML) detection algorithm is also proposed, which further reduces complexity by shrinking the search space of each two-dimensional ML detector from two dimensions to one, and is proved to have the same performance as ML. Simulations show that at a symbol error rate of 10^-3 the GML algorithm gains about 7 dB over OSIC. Analysis shows that the complexity of GML is significantly lower than that of ML-OSIC under high-order modulation, making it easy to implement in hardware.
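For scale: plain exhaustive ML detection for a 4×4 system searches |constellation|⁴ candidates (256 for QPSK), which is exactly the cost the GML grouping is designed to avoid. A brute-force NumPy sketch of exhaustive ML (illustrative; this is the baseline, not the proposed GML/SML):

```python
import numpy as np
from itertools import product

def ml_detect(H, y, constellation):
    """Exhaustive ML detection: minimize ||y - H s||^2 over all transmit vectors."""
    best, best_err = None, np.inf
    for cand in product(constellation, repeat=H.shape[1]):
        s = np.array(cand)
        err = np.linalg.norm(y - H @ s) ** 2
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s_true = qpsk[[0, 3, 1, 2]]                       # transmitted QPSK vector
noise = 0.01 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
y = H @ s_true + noise                            # received vector, light noise
s_hat = ml_detect(H, y, qpsk)
```

Splitting the search into two 2-D groups, as GML does, cuts the candidate count from |C|⁴ to roughly 2·|C|², at some cost in optimality.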

15.
A fully three-dimensional (3-D) implementation of the maximum a posteriori (MAP) method for single photon emission computed tomography (SPECT) is demonstrated. The 3-D reconstruction exhibits a major increase in resolution when compared to the generation of a series of separate 2-D slice reconstructions. As has been noted, the iterative EM algorithm for 2-D reconstruction is computationally intensive, and the 3-D algorithm is far more so. To accommodate this complexity, previous work in the 2-D arena is extended, and an implementation of the 3-D algorithm on a class of massively parallel processors is demonstrated. Using a 16000- (4000-) processor MasPar/DECmpp-Sx machine, the algorithm executes at 2.5 (7.8) s per EM iteration for the entire 64×64×64 cube of 96 planar measurements obtained from the Siemens Orbiter rotating camera operating in high-resolution mode.
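The EM iteration for emission tomography is a multiplicative update that preserves nonnegativity: x ← x · Aᵀ(y / Ax) / Aᵀ1. A toy NumPy sketch with a random system matrix (illustrative; the paper parallelizes this same update over a 64×64×64 volume, and MAP adds a prior term omitted here):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """ML-EM reconstruction: multiplicative update, iterates stay nonnegative."""
    x = np.ones(A.shape[1])                   # uniform positive initial image
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        x *= (A.T @ (y / proj)) / sens        # backproject measurement ratios
    return x

rng = np.random.default_rng(0)
A = rng.random((30, 10))                      # toy system matrix (detectors x voxels)
x_true = rng.random(10) + 0.5                 # positive "activity" image
y = A @ x_true                                # noiseless projections
x_hat = mlem(A, y)
```

Each iteration costs one forward and one back projection, which is why the cost of the projector dominates and parallel hardware pays off.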

16.
Exact and approximate rebinning algorithms for 3-D PET data   Cited: 9 (self-citations: 0; by others: 9)
This paper presents two new rebinning algorithms for the reconstruction of three-dimensional (3-D) positron emission tomography (PET) data. A rebinning algorithm is one that first sorts the 3-D data into an ordinary two-dimensional (2-D) data set containing one sinogram for each transaxial slice to be reconstructed; the 3-D image is then recovered by applying to each slice a 2-D reconstruction method such as filtered-backprojection. This approach allows a significant speedup of 3-D reconstruction, which is particularly useful for applications involving dynamic acquisitions or whole-body imaging. The first new algorithm is obtained by discretizing an exact analytical inversion formula. The second algorithm, called the Fourier rebinning algorithm (FORE), is approximate but allows an efficient implementation based on taking 2-D Fourier transforms of the data. This second algorithm was implemented and applied to data acquired with the new generation of PET systems and also to simulated data for a scanner with an 18° axial aperture. The reconstructed images were compared to those obtained with the 3-D reprojection algorithm (3DRP), which is the standard "exact" 3-D filtered-backprojection method. Results demonstrate that FORE provides a reliable alternative to 3DRP, while at the same time achieving an order of magnitude reduction in processing time.

17.
We have developed a new integrated approach for quantitative computed tomography of the knee in order to quantify bone mineral density (BMD) and subchondral bone structure. The present framework consists of image acquisition and reconstruction, 3-D segmentation, determination of anatomic coordinate systems, and reproducible positioning of analysis volumes of interest (VOI). Novel segmentation algorithms were developed to identify growth plates of the tibia and femur and the joint space with high reproducibility. Five different VOIs with varying distance to the articular surface are defined in the epiphysis. Each VOI is further subdivided into a medial and a lateral part. In each VOI, BMD is determined. In addition, a texture analysis is performed on a high-resolution computed tomography (CT) reconstruction of the same CT scan in order to quantify subchondral bone structure. Local and global homogeneity, as well as local and global anisotropy were measured in all VOIs. Overall short-term precision of the technique was evaluated using double measurements of 20 osteoarthritic cadaveric human knees. Precision errors for volume were about 2-3% in the femur and 3-5% in the tibia. Precision errors for BMD were about 1-2% lower. Homogeneity parameters showed precision errors up to about 2% and anisotropy parameters up to about 4%.

18.
李晶, 张顺生, 常俊飞. Journal of Signal Processing (信号处理), 2012, 28(5): 737-743
Bistatic synthetic aperture radar (SAR), with its separated transmitter and receiver, has broad application prospects, but conventional frequency-domain algorithms face the double-square-root range history problem, and data acquisition is constrained by the Nyquist rate, producing large data volumes. Compressive sensing (CS) theory, proposed in recent years, shows that under certain conditions an unknown sparse signal can be reconstructed with high probability from very few samples. Combining CS theory with the bistatic SAR model, this paper proposes a CS-based 2-D high-resolution bistatic SAR imaging algorithm. The algorithm takes randomly downsampled 2-D echo data as the measurements, builds the range measurement matrix from the transmitted signal and the azimuth measurement matrix from the azimuth Doppler phase factors, and reconstructs the target dimension by dimension with a CS recovery algorithm. Simulation results and performance analysis show that the algorithm reconstructs the original target well even under severe undersampling and has a degree of robustness and immunity to noise. Compared with conventional bistatic SAR imaging algorithms, it achieves higher resolution and sharper peaks, with lower peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR), while requiring a lower sampling rate and less data.

19.
A three-dimensional (3-D) space vector algorithm for multilevel converters, compensating harmonics and zero sequence in three-phase four-wire systems with neutral, is presented. The computational cost of the proposed method is low, constant, and independent of the number of levels of the converter. Conventional two-dimensional (2-D) space vector algorithms are particular cases of the proposed generalized modulation algorithm. In general, the presented algorithm is useful in systems with or without neutral, with unbalanced loads and triple harmonics, and for generating 3-D control vectors.
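The 3-D formulation works in αβγ coordinates, where the third (γ) axis carries the zero-sequence component that conventional 2-D space vector modulation discards. A NumPy sketch of the power-invariant abc→αβγ transform (illustrative; the paper's contribution is the modulation algorithm built on these coordinates):

```python
import numpy as np

# Power-invariant abc -> alpha-beta-gamma (Clarke-type) transform.
# The third row projects onto the zero-sequence (gamma) axis.
T = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
    [1 / np.sqrt(2), 1 / np.sqrt(2), 1 / np.sqrt(2)],
])

balanced = np.array([1.0, -0.5, -0.5])       # phase quantities summing to zero
unbalanced = balanced + 0.3                  # common-mode offset on all phases
g_bal = (T @ balanced)[2]                    # gamma component: zero when balanced
g_unb = (T @ unbalanced)[2]                  # nonzero gamma reveals zero sequence
```

Because T is orthonormal, the inverse transform is simply its transpose, which keeps the per-sample cost of the modulation constant regardless of the converter's level count.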

20.
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work was to fuse images obtained from a tracked endoscope to surfaces derived from three-dimensional (3-D) preoperative magnetic resonance or computed tomography (CT) data, for assistance in surgical planning, training and guidance. We extracted polygonal surfaces from preoperative CT images of a standard brain phantom and digitized endoscopic video images from a tracked neuro-endoscope. The optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom (physical space) and CT images (preoperative image space) was accomplished using fiducial markers that could be identified both on the phantom and within the images. The endoscopic images were corrected for radial lens distortion and then mapped onto the extracted surfaces via a two-dimensional (2-D) to 3-D mapping algorithm. The optical tracker has an accuracy of about 0.3 mm at its centroid, which allows the endoscope tip to be localized to within 1.0 mm. The mapping operation allows multiple endoscopic images to be "painted" onto the 3-D brain surfaces, as they are acquired, in the correct anatomical position. This allows panoramic and stereoscopic visualization, as well as navigation of the 3-D surface, painted with multiple endoscopic views, from arbitrary perspectives.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号