Similar Articles
20 similar articles found (search time: 31 ms)
1.
Quantitative trabecular bone density (TBD) measurements in the appendicular skeleton, using computed tomography, are described. Pixel-value frequency histograms are generated to determine the spatial coordinates of the centers of the mass distributions of both bones in the image field. A line joining these centers is used as an axis for subsequent enhancement of horizontal image features, and for radial fan searches, to separate the bone images in the analysis field. One bone image is then deleted, and the outer contour of the remaining bone is determined using cross-correlation and mathematical-morphology operators. This contour is then used as a template to calculate average linear attenuation coefficients (LACs) over 1-pixel-wide annuli for the TBD calculation. Data from 33 subjects (1251 images) have been processed using both automatic and manual analysis methods. Both methods give the same numerical values for TBD, but the automatic method has slightly better precision. The analysis method is general and adaptable to other imaging situations.
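The center-of-mass step can be illustrated with a minimal sketch: an intensity-weighted centroid computed from pixel values. The toy image, blob placement, and function name below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Toy 2-D "CT slice": two bright blobs stand in for the two bones.
img = np.zeros((8, 8))
img[1:3, 1:3] = 100.0   # bone A
img[5:7, 5:7] = 100.0   # bone B

def center_of_mass(image):
    """Intensity-weighted centroid (row, col) of an image region."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return (rows * image).sum() / total, (cols * image).sum() / total

# Centroid of each bone, computed over its own half of the field;
# the line joining the two centroids would serve as the search axis.
r_a, c_a = center_of_mass(img[:4, :4])
r_b, c_b = center_of_mass(img[4:, 4:])
print(float(r_a), float(c_a))  # 1.5 1.5
```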

2.
A basic method to calibrate imagery from synthetic aperture radar (SAR) systems is presented. SAR images are calibrated by monitoring all the terms of the radar equation. This procedure includes the use of both external (calibrated reference reflectors) and internal (system-generated calibration signals) sources to monitor the total SAR system transfer function. To illustrate the implementation of the procedure, two calibrated SAR images (X-band, 3.2-cm wavelength) are presented, along with the radar cross-section measurements of specific scenes within each image. The sources of error within the SAR image calibration procedure are identified.

3.
Long bone panoramas from fluoroscopic X-ray images (cited by 6)
This paper presents a new method for creating a single panoramic image of a long bone from several individual fluoroscopic X-ray images. Panoramic images are useful preoperatively for diagnosis, and intraoperatively for long bone fragment alignment, for making anatomical measurements, and for documenting surgical outcomes. Our method composes individual overlapping images into an undistorted panoramic view that is the equivalent of a single X-ray image with a wide field of view. The correlations between the images are established from the graduations of a radiolucent ruler imaged alongside the long bone. Unlike existing methods, ours uses readily available hardware, requires a simple image acquisition protocol with minimal user input, and works with existing fluoroscopic C-arm units without modifications. It is robust and accurate, producing panoramas whose quality and spatial resolution are comparable to those of the individual images. The method has been successfully tested on in vitro and clinical cases.

4.
Automatic tumor segmentation using knowledge-based techniques (cited by 11)
A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRIs) of the human brain is presented. The MRIs consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with the cluster centers for each class, is provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. The system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and with supervised K-nearest-neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
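The abstract does not specify which unsupervised clustering algorithm performs the initial segmentation; a minimal sketch of one common choice, k-means on scalar intensities, is shown below (function name, toy data, and parameters are invented for illustration):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny k-means on scalar intensities; returns sorted cluster centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

# Two well-separated intensity populations, e.g. tissue vs. background.
vals = np.array([10.0, 11.0, 9.0, 200.0, 201.0, 199.0])
centers = kmeans_1d(vals)
print(float(centers[0]), float(centers[1]))  # 10.0 200.0
```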

5.
Automated analysis of nerve-cell images using active contour models (cited by 2)
The number of nerve fibers (axons) in a nerve, together with axon size and shape, can be important neuroanatomical features for understanding different aspects of nerves in the brain. However, the number of axons in a nerve is typically on the order of tens of thousands, and a study of a particular aspect of the nerve often involves many nerves. Potentially meaningful studies are often made impractical by the sheer numbers involved when measurements must be performed manually. A method that automates the analysis of axons from electron-micrographic images is presented. It begins with a rough identification of all axon centers using an elliptical Hough transform procedure. The boundary of each axon is then extracted using an active contour model, or snake, in which physical properties of the axons and the image data drive an optimization scheme that guides each snake to converge to an axon boundary for accurate sheath measurement. Because false axon detections remain common due to poor image quality and the presence of other, irrelevant cell features, a conflict-resolution scheme is developed to eliminate false axons and further improve detection performance. The method has been tested on a number of nerve images, and its results are presented.
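The Hough-transform voting idea behind the center-identification step can be sketched in simplified form. The paper uses an elliptical Hough transform; the sketch below substitutes a circular one at a fixed, known radius, and all names and test data are illustrative assumptions:

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape):
    """Vote for circle centers at a fixed, known radius; the accumulator
    maximum is the most supported center."""
    acc = np.zeros(shape)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        # Each edge point votes for all centers at distance `radius`.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic edge points on a circle of radius 5 centred at (10, 10).
ts = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = [(10 + 5 * np.sin(t), 10 + 5 * np.cos(t)) for t in ts]
cy, cx = hough_circle_centers(pts, 5, (20, 20))
print(int(cy), int(cx))  # centre estimate, expected near (10, 10)
```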

6.
A medical image fusion method based on the multiwavelet transform (cited by 4)
那彦  杨万海  张强 《信号处理》2004,20(6):642-645
This paper discusses the fusion of CT and NMR medical images. Because bone tissue is clearly visible only in CT, while soft tissue is clearly visible only in NMR, neither a CT nor an NMR image alone can clearly display both bone and soft tissue. Based on an analysis of the imaging mechanisms of CT and NMR, a fusion method based on the multiwavelet transform is proposed, which effectively combines CT and NMR images. The resulting fused image clearly displays both bone and soft-tissue information simultaneously.
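Wavelet-domain fusion can be sketched compactly. The paper uses a multiwavelet transform; the sketch below substitutes a single-level scalar Haar wavelet and assumes a common fusion rule (average the approximation band, keep the larger-magnitude detail coefficient), so it illustrates the idea rather than the paper's method:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: (LL, LH, HL, HH) sub-bands."""
    a = (x[0::2] + x[1::2]) / 2.0   # row averages
    d = (x[0::2] - x[1::2]) / 2.0   # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img_ct, img_mr):
    """Average the approximation band; keep the larger-magnitude
    detail coefficient from either image (max-abs rule)."""
    b1, b2 = haar2(img_ct), haar2(img_mr)
    fused = [(b1[0] + b2[0]) / 2.0] + [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
                                       for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2(*fused)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
print(np.allclose(ihaar2(*haar2(img)), img))  # True: perfect reconstruction
print(np.allclose(fuse(img, img), img))       # True: fusing an image with itself is identity
```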

7.
This paper proposes an automated procedure, based on fuzzy logic, for segmenting a magnetic resonance (MR) image of a human brain. An MR volumetric image composed of many slice images consists of several parts: gray matter, white matter, cerebrospinal fluid, and others. Generally, the histogram shapes of MR volumetric images differ from person to person. Fuzzy information granulation of the histograms yields a series of histogram peaks. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram obtained from the MR images. After these thresholds are evaluated by a procedure called region growing, the whole brain can be identified. A segmentation experiment was performed on 50 human brain MR volumes. A statistical analysis showed that the automatically segmented volumes were similar to the volumes manually segmented by a physician. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum, and the brain stem. Fuzzy if-then rules can represent information on the anatomical locations and segmentation boundaries as well as intensities. Evaluating the inferred result with the region-growing method then leads to the decomposition of the whole brain. We applied this method to 44 MR volumes. The decomposed portions were statistically compared with those manually decomposed by a physician. Consequently, our method can identify the whole brain, the left and right cerebral hemispheres, the cerebellum, and the brain stem with high accuracy, and can therefore provide the three-dimensional shapes of these regions.
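The peak-finding step can be sketched as locating local maxima of the intensity histogram; thresholds would then be placed between adjacent peaks. The function, bin count, and toy data below are illustrative assumptions, not the paper's fuzzy-granulation procedure:

```python
import numpy as np

def histogram_peaks(values, bins=16):
    """Bin centers of the local maxima of an intensity histogram
    (zero-padded so peaks at the histogram edges also count)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    h = np.concatenate(([0], hist, [0]))
    return [centers[i] for i in range(bins)
            if h[i + 1] > h[i] and h[i + 1] >= h[i + 2]]

# Bimodal toy data: two tissue classes produce two histogram peaks,
# and a segmentation threshold can be placed between them.
vals = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
peaks = histogram_peaks(vals)
print(len(peaks))  # 2
```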

8.
A comprehensive methodology for image segmentation is presented. Tools for differential and intensity contouring, and outline optimization are discussed, as well as the methods for automating such procedures. After segmentation, regional volumes and image intensity distributions can be determined. The methodology is applied to nuclear magnetic resonance images of the brain. Examples of the results of volumetric calculations for the cerebral cortex, white matter, cerebellum, ventricular system, and caudate nucleus are presented. An image intensity distribution is demonstrated for the cerebral cortex.

9.
Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on non-task-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [non-prewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE-optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance also was improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1), which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.
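The matched-filter idea behind the model observer can be sketched in its simplest form. This is the plain non-prewhitening (NPW) statistic, with the eye filter omitted, and the signal, noise level, and variable names are illustrative assumptions:

```python
import numpy as np

def npw_statistic(image, template):
    """Non-prewhitening matched filter: inner product of the image
    with the known signal template."""
    return float(np.sum(image * template))

rng = np.random.default_rng(0)
signal = np.zeros((16, 16))
signal[6:10, 6:10] = 1.0                      # known, fixed signal (SKE case)
noise = rng.normal(0.0, 0.1, size=(16, 16))   # background noise field

lam_present = npw_statistic(noise + signal, signal)
lam_absent = npw_statistic(noise, signal)
print(lam_present > lam_absent)  # True: the signal-present image always scores higher here
```

Because the same noise field appears in both cases, the score difference is exactly the signal energy, so the comparison is deterministic.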

10.
Three-dimensional elastic matching of volumes (cited by 3)
Registering volumes that have been deformed with respect to each other involves recovery of the deformation. A 3-D elastic matching algorithm has been developed to use surface information for registering volumes. Surface extraction is performed in two steps: extraction of contours in 2-D image planes using active contours, and forming triangular patch surface models from the stack of 2-D contours. One volume is modeled as being deformed with respect to another goal volume. Correspondences between surfaces in the two image volumes are used to warp the deformed volume towards its goal. This process of contour extraction, surface formation and matching, and warping is repeated a number of times, with decreasing image volume stiffness. As the iterations continue the stretched volume is refined towards its goal volume. Registration examples of deformed volumes are presented.

11.
12.
In this paper, a method for the automatic segmentation of the brain in magnetic resonance images is presented and validated. The proposed method involves two steps: 1) the creation of an initial model, and 2) the deformation of this model to fit the exact contours of the brain in the images. A new method to create the initial model has been developed and compared to a more traditional approach in which initial models are created by means of brain atlases. A comprehensive validation of the complete segmentation method has been conducted on a series of three-dimensional T1-weighted magnetization-prepared rapid gradient echo image volumes acquired both from control volunteers and from patients suffering from Cushing's disease. This validation study compares results obtained with the proposed method against contours drawn manually. Average differences between manual and automatic segmentation with the proposed model-creation method are 1.7% and 2.7% for the control volunteers and the Cushing's patients, respectively. These numbers are 1.8% and 5.6% when the atlas-based method is used.
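As a sketch of how such percentage differences are typically computed (the abstract does not give its exact formula, so this definition and the voxel counts are assumptions):

```python
def volume_difference_pct(auto_vox, manual_vox):
    """Unsigned volume difference as a percentage of the manual
    (reference) volume."""
    return abs(auto_vox - manual_vox) / manual_vox * 100.0

# Hypothetical voxel counts: automatic vs. manually drawn segmentation.
print(round(volume_difference_pct(1017, 1000), 4))  # 1.7
```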

13.
This work addresses dynamic texture representation and recognition via a convolutional multilayer architecture. The proposed method considers an image sequence as a concatenation of spatial images along the time axis as well as spatio-temporal images along both horizontal and vertical axes of an image sequence and uses multilayer convolutional operations to describe each plane. The filters used are learned via principal component analysis (PCA) on each of the three orthogonal planes of an image sequence. A particularly advantageous attribute of the technique is the unsupervised training procedure of the proposed network. An inter-database evaluation has been performed to investigate the generalisation capability of the proposed approach. Moreover, a multi-scale extension of the proposed architecture is presented to capture texture details at multiple resolutions. Through extensive evaluations on different databases, it is shown that the proposed PCA-based network on three orthogonal planes (PCANet-TOP) yields very discriminative features for dynamic texture classification.
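The PCA filter-learning step for one plane can be sketched as taking the leading principal components of vectorized patches. The patch size, filter count, and function name are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def pca_filters(patches, n_filters):
    """Learn filters as the leading principal components of vectorized,
    mean-removed patches (PCANet-style filter training)."""
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=0)
    w, v = np.linalg.eigh(X.T @ X / len(X))   # eigendecomposition of covariance
    order = np.argsort(w)[::-1]               # largest eigenvalues first
    return v[:, order[:n_filters]].T          # one flattened filter per row

rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 5, 5))        # toy 5x5 patches from one plane
filters = pca_filters(patches, 4)
print(filters.shape)  # (4, 25)
```

Each row would then be reshaped back to 5x5 and used as a convolution kernel for that plane.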

14.
A new method for shaded surface display of biological and medical images (cited by 3)
Shaded surface display is a useful aid in visualizing and analyzing three-dimensional biological and medical images. However, currently available algorithms have limitations, particularly when applied to clinically important image data requiring fast and flexible interactive analysis. Beyond the problem of computation time, there are also the cost of specialized hardware and the quality of shading to consider. A new algorithm has been designed for three-dimensional biological/medical images that attempts to overcome these limitations. This is accomplished by eliminating less important capabilities and optimizing the essential ones: speed and realistic shading. The algorithm has been successfully employed in planning reconstructive bone surgery, in the assessment of both congenital and acquired heart disease, and in studies of normal and pathological lung physiology. Examples illustrating the versatility and speed of the new algorithm are presented.

15.
Optical recognition of motor vehicle license plates (cited by 33)
A system for the recognition of car license plates is presented. The aim of the system is to automatically read the Italian license number of a car passing through a tollgate. A CCTV camera and a frame-grabber card are used to acquire a rear-view image of the vehicle. The recognition process consists of three main phases. First, a segmentation phase locates the license plate within the image. Then, a procedure based on feature projection estimates the image parameters needed to normalize the license-plate characters. Finally, the character recognizer extracts feature points and uses template-matching operators to obtain a solution that is robust under varying acquisition conditions. A test on more than three thousand real images, acquired under different weather and illumination conditions, yielded a recognition rate close to 91%.
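The template-matching phase can be sketched as picking the template with the highest normalized cross-correlation against a normalized character image. The toy 3x3 "glyphs" and function names are illustrative assumptions, not the system's actual templates:

```python
import numpy as np

def best_match(char_img, templates):
    """Return the label of the template with the highest
    normalized cross-correlation score."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return (a * b).sum() / denom if denom else 0.0
    scores = {label: ncc(char_img, t) for label, t in templates.items()}
    return max(scores, key=scores.get)

# Toy 3x3 glyphs standing in for character templates.
T = {"I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
     "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float)}
noisy_I = T["I"] + 0.1   # uniform brightness offset, removed by mean-centering
print(best_match(noisy_I, T))  # I
```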

16.
The paper deals with the integration of a powerful parallel-computer-based image analysis and visualization system for cardiology into a hospital information system. Additional services include remote access to the hospital Web server over the Internet. The visualization system provides dynamic three-dimensional representation of two types of medical images (e.g., magnetic resonance and nuclear medicine) as well as of two images in the same modality (e.g., basal versus stress images). A series of software tools for quantitative image analysis, developed to support the diagnosis of cardiac disease, is also available, including automated image segmentation, quantitative evaluation of left ventricular volumes and related indices over the cardiac cycle, myocardial mass, and myocardial perfusion indices. The system has been tested both at a specialized cardiology center and for remote consultation in the diagnosis of cardiac disease using anatomical and perfusion magnetic resonance images.

17.
Enhancement techniques are often used in image processing. In this paper, a quantitative measure of image quality, based on evaluating a coefficient of information content and the entropy, is suggested for assessing the effect of enhancement. The image is assumed to be a sample function of a homogeneous random field, and each pixel value is estimated from the 'past' pixel values. The difference between the estimated and actual pixel values is used as the criterion for defining the coefficient of information content. In addition, the entropy obtained from the co-occurrence matrix of the image is used as a quantitative measure of image quality. Measurements of image quality through evaluation of the coefficient of information content and the entropy have been carried out on test images, and the results of these measurements are presented.
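The co-occurrence-matrix entropy can be sketched directly. The abstract does not give the formula for the coefficient of information content, so only the entropy part is shown; the pixel-pair direction (horizontal neighbours) and toy images are assumptions:

```python
import numpy as np

def cooccurrence(img, levels):
    """Normalized grey-level co-occurrence matrix for horizontal neighbours."""
    C = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        C[i, j] += 1
    return C / C.sum()

def entropy_bits(P):
    """Shannon entropy of a normalized co-occurrence matrix, in bits."""
    p = P[P > 0]
    return float(-(p * np.log2(p)).sum()) + 0.0   # +0.0 normalizes -0.0

flat = np.zeros((4, 4), dtype=int)            # uniform image: no texture
stripes = np.arange(16).reshape(4, 4) % 2     # alternating vertical stripes
print(entropy_bits(cooccurrence(flat, 2)))    # 0.0
print(round(entropy_bits(cooccurrence(stripes, 2)), 3))  # 0.918
```

A flat image yields zero entropy (a single co-occurrence pair), while structured variation raises it.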

18.
Guide-wire tracking during endovascular interventions (cited by 2)
A method is presented to extract and track the position of a guide wire during endovascular interventions under X-ray fluoroscopy. The method can be used to improve guide-wire visualization in low-quality fluoroscopic images and to estimate the position of the guide wire in world coordinates. A two-step procedure is utilized to track the guide wire in subsequent frames. First, a rough estimate of the displacement is obtained using a template-matching procedure. Subsequently, the position of the guide wire is determined by fitting a spline to a feature image. The feature images that have been considered enhance line-like structures on: 1) the original images; 2) subtraction images; and 3) preprocessed images in which coherent structures are enhanced. In the optimization step, the influence of the scale at which the feature is calculated and the additional value of using directional information is investigated. The method is evaluated on 267 frames from ten clinical image sequences. Using the automatic method, the guide wire could be tracked in 96% of the frames, with a similar accuracy to three observers, although the position of the tip was estimated less accurately.

19.
The aim of this work is the three-dimensional (3-D) reconstruction of the left or right heart chamber from digital biplane angiograms. The approach used, binary reconstruction, exploits the density information of subtracted ventriculograms from two orthogonal views in addition to the ventricular contours. The ambiguity of the problem is largely reduced by incorporating a priori knowledge of human ventricles. A model-based reconstruction program is described that is applicable to routinely acquired biplane ventriculographic studies. Prior to reconstruction, several geometric and densitometric imaging errors are corrected. The identification of corresponding density profiles and anatomical landmarks is supported by a biplane image-pairing procedure that takes the movement of the gantry system into account. Absolute measurements are based on geometric isocenter calibration and a slice-wise density-calibration technique. The reconstructed ventricles allow 3-D visualization and regional wall-motion analysis independently of the gantry setting. The method has been applied to clinical angiograms and tested on left- and right-ventricular phantoms, yielding good shape conformity even with limited model information. The results indicate that the volumes of binary-reconstructed ventricles are less projection-dependent than volume data derived by purely contour-based methods. A limitation is that the heart chamber must not be superimposed by other dye-filled structures in both projections.

20.
A skeletonization algorithm for gray-scale patterns based on scouring simulation (cited by 3)
刘俊义  王润生 《电子学报》2001,29(9):1259-1262
By simulating the process of water flow scouring an image surface, this paper proposes an efficient skeletonization algorithm that operates directly on gray-scale images. The study shows that the algorithm produces skeletons that are connected, single-pixel wide, topologically consistent with the original image, located on the pattern centerline, and invariant under strictly monotonic gray-level transformations. Experiments on both binary and gray-scale images demonstrate the algorithm's efficiency and reliability.

