Similar Documents
20 similar documents found (search time: 15 ms)
1.
Multidimensional Systems and Signal Processing - Medical imaging has been an indispensable tool in modern medicine in recent decades. Various types of imaging systems provide structural and...

2.
Poisson or shot noise is a major degrading factor in low-light and infrared imaging. The authors show how images from a standard video camera can be artificially degraded to simulate the effect of Poisson noise. A specific algorithm is given, together with details of the computational cost.
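The degradation idea can be sketched in a few lines (a minimal sketch, not the authors' specific algorithm): treat each clean pixel value as the mean of a Poisson photon count, sample, and rescale. The `peak` parameter here is an assumption standing in for the camera's light level.

```python
import numpy as np

def add_poisson_noise(image, peak=30.0, rng=None):
    """Simulate low-light shot noise: treat each pixel (in [0, 1]) as the
    mean photon count of a Poisson process, then rescale. `peak` is the
    expected photon count at full intensity; lower values mean more noise."""
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.poisson(image * peak)   # photon counts per pixel
    return counts / peak                 # back to the original scale

clean = np.full((64, 64), 0.5)           # a flat mid-grey frame
noisy = add_poisson_noise(clean, peak=10.0, rng=np.random.default_rng(0))
# Poisson variance equals the mean, so the noisy frame keeps the mean
# level (~0.5) but gains variance roughly 0.5 / peak.
```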

3.
Multidimensional Systems and Signal Processing - Nowadays, medical images are captured through various imaging modalities for clinical diagnosis. It is more complicated to process the images...

4.
5.
To efficiently compress rasterized compound documents, an encoder must be content-adaptive. Content adaptivity may be achieved by employing a layered approach, in which a compound image is segmented into layers so that appropriate encoders can be used to compress these layers individually. A major factor in using standard encoders efficiently is matching the layers' characteristics to those of the encoders by using data-filling techniques to fill in the initially sparse layers. In this work we present a review of data-filling methods and also propose a sub-optimal non-linear projections scheme that efficiently matches the baseline JPEG coder in compressing background layers, leading to smaller files with better image quality.

6.
A new technique to reduce clinical magnetic resonance imaging (MRI) scan time by varying acquisition parameters and sharing k-space data between images is proposed. To improve data utilization, acquisition of multiple images of different contrast is combined into a single scan, with variable acquisition parameters including repetition time (TR), echo time (TE), and echo train length (ETL). This approach is thus referred to as a "combo acquisition." As a proof of concept, simulations of MRI experiments using spin echo (SE) and fast SE (FSE) sequences were performed based on the Bloch equations. Predicted scan time reductions of 25%-50% were achieved for 2-contrast and 3-contrast combo acquisitions. Artifacts caused by nonuniform k-space data weighting were suppressed through semi-empirical optimization of parameter variation schemes and the phase encoding order. Optimization was assessed by minimizing three quantitative criteria: energy of the "residue point spread function (PSF)," energy of "residue profiles" across sharp tissue boundaries, and energy of "residue images." In addition, results were further evaluated by quantitatively analyzing the preservation of contrast, the PSF, and the signal-to-noise ratio. Finally, conspicuity of lesions was investigated for combo acquisitions in comparison with standard scans. Implications and challenges for the practical use of combo acquisitions are discussed.
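The contrast trade-off that TR and TE control follows the textbook spin-echo signal approximation S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). The sketch below uses this closed form (not the paper's full Bloch-equation simulation) with illustrative tissue values:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Textbook spin-echo signal approximation:
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative tissue parameters (ms), white matter vs CSF:
wm  = dict(pd=0.7, t1=800.0,  t2=80.0)
csf = dict(pd=1.0, t1=4000.0, t2=2000.0)

t1w = {n: spin_echo_signal(tr=500.0,  te=15.0,  **t) for n, t in [("wm", wm), ("csf", csf)]}
t2w = {n: spin_echo_signal(tr=4000.0, te=100.0, **t) for n, t in [("wm", wm), ("csf", csf)]}
# Short TR / short TE gives T1 weighting: WM brighter than CSF.
# Long TR / long TE gives T2 weighting: CSF brighter than WM.
```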

7.
The proliferation of digital information in our society has enticed a lot of research into data-embedding techniques that add information to digital content, like images, audio, and video. In this paper, we investigate high-capacity lossless data-embedding methods that allow one to embed large amounts of data into digital images (or video) in such a way that the original image can be reconstructed from the watermarked image. We present two new techniques: one based on least-significant-bit prediction and Sweldens' lifting scheme, and another that is an improvement of Tian's technique of difference expansion. The new techniques are then compared with various existing embedding methods by examining capacity-distortion behavior and capacity control.
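Tian's difference expansion, which the second technique improves on, can be sketched for a single pixel pair as follows (a minimal version: the overflow/underflow checks that real schemes need to stay within [0, 255] are omitted):

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair by difference expansion (Tian):
    keep the integer average l, expand the difference h to 2*h + bit."""
    l = (x + y) // 2
    h2 = 2 * (x - y) + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 // 2  # floor division, also correct for negative differences
    return (l + (h + 1) // 2, l - h // 2), bit
```

The embedding is invertible because the integer average of the pair is unchanged and the original difference is just the expanded difference with its least significant bit stripped.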

8.
A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, a comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of signal-to-noise ratio and visual quality.

9.
Personalization of biophysical models is a key requirement for them to impact clinical practice. In this paper, we propose a method for personalizing electromechanical models of the heart from cine-MR images, based on the adjoint method. After estimation of the electrophysiological parameters, the cardiac motion is estimated with a proactive electromechanical model. Cardiac contractility is then estimated over two or three regions by minimizing the discrepancy between measured and simulated motion. An evaluation of the method on three patients with infarcted or dilated myocardium is provided.

10.
In this paper, a reversible data hiding in encrypted images (RDHEI) method is proposed that combines group classification encoding (GCC) with a set of sixteen image-based rearrangement ways (SIBRW) to achieve high-capacity data embedding in encrypted images. Each SIBRW way rearranges a higher bit-plane so as to bring its strongly correlated bits together. For each higher bit-plane, the way achieving the most concentrated aggregation is selected from the SIBRW, and GCC then compresses the rearranged bit-plane in a group-by-group manner. By making full use of the strong correlation between adjacent groups, GCC can compress not only several consecutive groups whose bits are all 1 (or all 0) but also a single such group, so that a large embedding space is provided. An encryption method comprising bit-level XOR encryption and scrambling operations enhances security. The experimental results show that the proposed scheme achieves a large embedding capacity and high security.
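The compression itself (GCC with SIBRW rearrangement) is specific to the paper, but the bit-level XOR-plus-scrambling encryption step is generic across RDHEI schemes and can be sketched as follows (key handling simplified to a single integer seed; a real scheme would use a proper cipher-based key stream):

```python
import numpy as np

def encrypt(image, key):
    """XOR every pixel with a key stream, then scramble pixel positions.
    This is only the generic RDHEI encryption step, not the paper's
    GCC/SIBRW compression."""
    rng = np.random.default_rng(key)
    stream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    xored = image ^ stream                   # bit-level XOR encryption
    perm = rng.permutation(xored.size)       # scrambling permutation
    return xored.ravel()[perm].reshape(image.shape)

def decrypt(cipher, key):
    """Regenerate the same key stream and permutation, then invert both."""
    rng = np.random.default_rng(key)
    stream = rng.integers(0, 256, size=cipher.shape, dtype=np.uint8)
    perm = rng.permutation(cipher.size)
    unscrambled = np.empty(cipher.size, dtype=np.uint8)
    unscrambled[perm] = cipher.ravel()       # undo the scrambling
    return unscrambled.reshape(cipher.shape) ^ stream
```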

11.
Automated analysis of nerve-cell images using active contour models
The number of nerve fibers (axons) in a nerve, together with axon size and shape, can all be important neuroanatomical features for understanding different aspects of nerves in the brain. However, the number of axons in a nerve is typically on the order of tens of thousands, and a study of a particular aspect of a nerve often involves many nerves. Potentially meaningful studies are often prohibited by the huge numbers involved when manual measurements have to be employed. A method that automates the analysis of axons from electron-micrographic images is presented. It begins with a rough identification of all axon centers by use of an elliptical Hough transform procedure. The boundary of each axon is then extracted using an active contour model, or snakes, approach, in which physical properties of the axons and the given image data are used in an optimization scheme to guide the snakes to converge to axon boundaries for accurate sheath measurement. However, false axon detection is still common due to poor image quality and the presence of other, irrelevant cell features, so a conflict-resolution scheme is developed to eliminate false axons and further improve detection performance. The developed method has been tested on a number of nerve images, and its results are presented.

12.
Reconstruction analysis of multi-focus fused images using quantitative criteria
Zhang Xinman, Han Jiuqiang. Journal on Communications (通信学报), 2005, 26(5):128-131
An adaptive block-search image fusion algorithm based on a contrast visual model is proposed. Several quantitative criteria, namely root mean square error, entropy, cross entropy, and mutual information, are used to analyze the fused image sequence and evaluate fusion performance, and the sharp reconstruction of two strictly registered multi-focus images of the same scene is studied in depth. Extensive image experiments show that, whether judged by subjective visual assessment or by objective evaluation criteria, the method achieves accurate reconstruction of the standard reference image from the multi-focus images, or an optimized fusion result.

13.
Image registration is a real challenge because physicians handle many images. Temporal registration is necessary in order to follow the various stages of a disease, whereas multimodal registration allows us to improve the identification of some lesions or to compare pieces of information gathered from different sources. This paper presents an algorithm for temporal and/or multimodal registration of retinal images based on point correspondence. As an example, the algorithm has been applied to the registration of fluorescein images (obtained after a fluorescein dye injection) with green images (the green filter of a color image). The vascular tree is first detected in each type of image, and bifurcation points are labeled with the surrounding vessel orientations. An angle-based invariant is then computed to give the probability that two points match. A Bayesian Hough transform is then used to sort the transformations by their respective likelihoods. A precise affine estimate is finally computed for the most likely transformations, and the best transformation is chosen for registration.
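The final step, a least-squares affine estimate from matched point pairs, can be sketched as follows (the vessel detection, angle invariants, and Bayesian Hough voting are not shown):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst,
    so that dst ~ src @ A.T + t. Needs >= 3 non-collinear matches."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = params[:2].T                               # 2x2 linear part
    t = params[2]                                  # translation vector
    return A, t
```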

14.
A new inverse microwave imaging algorithm is presented that can obtain quantitative dielectric maps of large biological bodies. Using a priori information obtained with a first-order algorithm, the final image is obtained by solving the direct problem and an ill-conditioned system of equations within an iterative procedure. The algorithm has been successfully tested with real data from an experimental scanner.

15.
The combined assessment of data obtained by positron emission tomography (PET) and gene-array techniques provides new capabilities for the interpretation of kinetic tracer studies. Correlative analysis of the data helps to detect dependencies of radiotracer kinetics on gene expression. Furthermore, gene expression may be predicted using regression functions if a significant correlation exists, which raises new aspects regarding the interpretation of dynamic PET examinations. The development of new radiopharmaceuticals requires knowledge of the enhanced expression of genes, especially genes controlling receptors and cell-surface proteins. The GenePET program facilitates an interactive approach, together with the use of key words, to identify possible targets for new radiopharmaceuticals.

16.
This paper proposes a reversible data hiding method based on image interpolation and the detection of smooth and complex regions in the cover images. A binary image that represents the locations of reference pixels is constructed according to the local image activity. In complex regions, more reference pixels are chosen and, thus, fewer pixels are used for embedding, which reduces the image degradation. In smooth regions, on the other hand, fewer reference pixels are chosen, which increases the embedding capacity without introducing significant distortion. Pixels are interpolated according to the constructed binary image, and the interpolation errors are then used to embed data through histogram shifting. The pixel values in the cover image are modified by at most one grey level to ensure that a high-quality stego image can be produced. The experimental results show that the proposed method provides better image quality and embedding capacity compared with prior works.
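Histogram shifting on a sequence of interpolation errors can be sketched as follows (a minimal version: errors equal to the histogram peak carry one bit each, errors above the peak are shifted up by one to open a gap; real schemes also handle overflow and transmit the peak as side information):

```python
import numpy as np

def hs_embed(errors, bits):
    """Embed bits into an error sequence by histogram shifting."""
    errors = np.asarray(errors)
    vals, counts = np.unique(errors, return_counts=True)
    peak = int(vals[np.argmax(counts)])      # most frequent error value
    out, it = errors.copy(), iter(bits)
    for i, e in enumerate(errors):
        if e > peak:
            out[i] = e + 1                   # shift to open a gap at peak+1
        elif e == peak:
            out[i] = e + next(it, 0)         # peak -> peak (0) or peak+1 (1)
    return out, peak

def hs_extract(marked, peak):
    """Recover the bits and restore the original error sequence."""
    bits, restored = [], marked.copy()
    for i, e in enumerate(marked):
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            restored[i] = peak
        elif e > peak + 1:
            restored[i] = e - 1
    return restored, bits
```

Note that every value moves by at most one unit, which is what keeps the stego image quality high.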

17.
A new approach for correcting the bias field in magnetic resonance (MR) images is proposed using the mathematical model of singularity function analysis (SFA), which represents a discrete signal or its spectrum as a weighted sum of singularity functions. Through this model, an MR image's low spatial-frequency components corrupted by a smoothly varying bias field are first removed and then reconstructed from its higher spatial-frequency components, which are not polluted by the bias field. The reconstructed image is then used to estimate the bias field for final image correction. The approach does not rely on the assumption that anatomical information in MR images occurs at higher spatial frequencies than the bias field. The performance of this approach is evaluated using both simulated and real clinical MR images.

18.
In this paper, we describe a new framework to extract visual attention regions in images using robust subspace estimation and analysis techniques. We use simple features like hue and intensity, endowed with scale adaptivity, to represent smooth and textured areas in an image. A polar transformation maps homogeneity in the features into a linear subspace that also encodes the spatial information of a region. A new subspace estimation algorithm based on Generalized Principal Component Analysis (GPCA) is proposed to estimate multiple linear subspaces. Robustness to outliers is achieved by a weighted least-squares estimate of the subspaces, in which weights calculated from the distribution of the K nearest neighbors are assigned to data points. Iterative refinement of the weights is proposed to handle estimation bias when the number of data points in each subspace differs greatly. A new region attention measure is defined to calculate the visual attention of each region by considering both feature contrast and the spatial geometric properties of the regions. Compared with existing visual attention detection methods, the proposed method directly measures global visual attention at the region level rather than the pixel level.

19.
For pre- and post-earthquake remote-sensing images, registration is a challenging task due to possible deformations of the objects to be registered. To overcome this problem, a registration method based on robust weighted kernel principal component analysis (RWKPCA) is proposed to precisely register variform objects. Firstly, the RWKPCA method is developed to capture the common robust kernel principal components (RKPCs) of the variform objects. Secondly, a registration approach is derived from the projection onto the RKPCs. Finally, two experiments are conducted on SAR image registration for the Wenchuan earthquake of May 12, 2008; the results show that the method is very effective in capturing structural patterns and generalizes well for registration.

20.
In this paper, a high-capacity data-hiding method is proposed for embedding a large amount of information into halftone images. The embedded watermark can be distributed over several error-diffused images with the proposed minimal-error bit-searching technique (MEBS). The method can also be generalized to a self-decoding mode with dot-diffused or color halftone images. In the experiments, embedding capacities from 33% up to 50% are achieved with good quality. Furthermore, the proposed MEBS method is extended to robust watermarking against the degradation caused by printing-and-scanning and several other kinds of distortion. Finally, a least-mean-squares-based halftoning is developed to produce an edge-enhanced halftone image; this technique also cooperates with MEBS for all the applications described above, including high-capacity data hiding with secret sharing or self-decoding mode, as well as robust watermarking. The results are much sharper than those of the error-diffusion or dot-diffusion methods.
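The error-diffusion halftoning these methods build on is classically Floyd-Steinberg, sketched below (the MEBS embedding itself is specific to the paper and not shown): each pixel is thresholded to black or white and the quantization error is pushed onto the unprocessed neighbours.

```python
import numpy as np

def floyd_steinberg(image):
    """Floyd-Steinberg error diffusion: binarize each pixel and diffuse
    the quantization error to the right and lower neighbours with the
    classic 7/16, 3/16, 5/16, 1/16 weights."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the diffused errors sum to the original quantization error, the halftone preserves the local mean grey level, which is also why a flat grey input yields roughly the right fraction of white dots.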
