Similar documents
 15 similar documents retrieved (search time: 15 ms)
1.
Image compression has become an indispensable tool alongside advancing medical data acquisition and telemedicine systems. Run-length encoding (RLE), one of the most effective and practical lossless compression techniques, is widely used in two-dimensional space with common scanning forms such as zigzag and linear. In this study, an algorithm that exploits the inherent simplicity of run-length coding is devised in a volumetric approach for three-dimensional (3D) binary medical data. Unlike the two-dimensional approach, which uses only intra-slice correlations, the proposed algorithm, named 3D-RLE, compresses binary volumetric data by also exploiting the inter-slice correlation between voxels. Furthermore, it is extended to several scanning forms, such as Hilbert and perimeter scans, to determine an optimal scanning procedure coherent with the morphology of the segmented organ in the data. The algorithm is evaluated on four datasets for a comprehensive assessment. Numerical simulation results demonstrate that the algorithm's performance is, on average, 1:30 better than that of state-of-the-art techniques.
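The core run-length idea the study builds on can be sketched in a few lines. The snippet below is a minimal illustration of volume-wide run-length coding with a plain linear scan, so runs may cross slice boundaries (the inter-slice correlation the abstract mentions); the function names are hypothetical, and this is not the authors' 3D-RLE implementation, which additionally supports Hilbert and perimeter scan orders.

```python
def rle_encode_volume(volume):
    """Run-length encode a 3D binary volume (a list of slices, each a
    list of rows) with a linear scan that crosses slice boundaries, so a
    run spanning consecutive slices is stored as a single (value, count)."""
    stream = [v for sl in volume for row in sl for v in row]
    if not stream:
        return []
    runs = []
    current, count = stream[0], 1
    for v in stream[1:]:
        if v == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = v, 1
    runs.append((current, count))
    return runs

def rle_decode_volume(runs, shape):
    """Rebuild the (depth, height, width) volume from the run list."""
    depth, height, width = shape
    stream = [v for v, n in runs for _ in range(n)]
    it = iter(stream)
    return [[[next(it) for _ in range(width)] for _ in range(height)]
            for _ in range(depth)]
```

A run of foreground voxels continuing from the last row of one slice into the next slice is coded once here, which is exactly the saving a slice-by-slice 2D scan cannot capture.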

2.
Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce huge amounts of volumetric images of various anatomical structures of the human body. There is a need for lossless compression of these images for storage and communication. The major constraint in medical imaging is that the sequence of operations performed for compression and decompression must not degrade the original image quality; the image must be compressed losslessly. In this article, we propose a lossless method for volumetric medical image compression and decompression using an adaptive block-based encoding technique. The algorithm is tested on different sets of CT color images using Matlab. Digital Imaging and Communications in Medicine (DICOM) images are compressed using the proposed algorithm and stored as DICOM-formatted images. The inverse of the adaptive block-based algorithm reconstructs the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of human color CT images and a comparative analysis of the proposed methodology against block-based compression and the JPEG2000 lossless image compression technique. The results show that the proposed methodology gives a better compression ratio than block-based coding and is computationally more efficient than JPEG2000 coding. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 227–234, 2013

3.
Transferring a medical image from one place to another, or storing it securely, has become a challenge. To address these problems, the medical image is encrypted and compressed before being sent or stored. In this paper, a new block pixel sort algorithm is proposed for compressing the encrypted medical image. The encrypted medical image serves as the input to the compression process, in which it is compressed by pixel block sort encoding (PBSE). The image is divided into four identical blocks arranged as a 2×2 grid. The minimum-occurrence pixel(s) in every block are found, and their positions are located using the verdict occurrence process. The pixel positions are then shortened by a shortening process. The features (symbols and shortened pixel positions) are extracted from each block and stored, and together these feature values constitute the compressed medical image. The complementary process, pixel block sort decoding (PBSD), involves nine steps to decompress the compressed encrypted image. The compressed information is recovered from the extracted features, the symbols are split, and the shortened positions are handled separately. The positions are retrieved by the rescheduling process, and the symbols and reconstructed positions of the minimum-occurrence pixels are taken block-wise. Each symbol is placed according to its position in each block: if the minimum-occurrence pixel is '0', the remaining places are automatically set to '1'; if it is '1', the remaining places are set to '0'. The blocks are then merged in 2×2 order, yielding the reconstructed encrypted medical image.
Experiments show that this compression method achieves a high compression ratio, minimal time, a small compressed size, and lossless reconstruction.
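The per-block "minority pixel" idea can be illustrated in a few lines. The sketch below (hypothetical function names, and omitting the paper's encryption, position-shortening, and nine-step decoding details) encodes a binary block as the value that occurs least plus its positions, with the remaining cells implied by the complement, as the abstract describes.

```python
def pbse_encode_block(block):
    """Encode one binary block (list of rows) as a tuple
    (minority_value, positions, height, width): store only the value
    that occurs least often and where it occurs; every other cell is
    implicitly the complementary value."""
    flat = [v for row in block for v in row]
    ones = [i for i, v in enumerate(flat) if v == 1]
    zeros = [i for i, v in enumerate(flat) if v == 0]
    h, w = len(block), len(block[0])
    if len(ones) <= len(zeros):
        return (1, ones, h, w)
    return (0, zeros, h, w)

def pbse_decode_block(code):
    """Fill the block with the complement, then place the minority
    pixels back at their recorded positions."""
    symbol, positions, h, w = code
    flat = [1 - symbol] * (h * w)
    for p in positions:
        flat[p] = symbol
    return [flat[r * w:(r + 1) * w] for r in range(h)]
```

When one value dominates a block, storing only the minority positions is much cheaper than storing every pixel, which is where the compression gain comes from.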

4.
The aim of image compression is to reduce the total data required to represent an image, which in turn decreases the demand for transmission bandwidth and storage space. In this work, we propose an image-fusion-based approach that further reduces the file size of a JPEG-compressed image. Before performing JPEG compression, we compute both an intensity image and a subsampled colour representation of the image undergoing compression. Then, as in JPEG compression, discrete cosine transformation, quantisation, and entropy coding are applied to these images, which are stored in a single image file container. In the decoder, the two images are reconstructed and fused to obtain the decoded image. Our experiments show that the proposed method meets the lower storage and bandwidth requirement by reducing the average bits per pixel of the encoded image below that of the JPEG-compressed image.
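The "intensity plus subsampled colour" split can be sketched with the standard JPEG (BT.601/JFIF) colour conversion and a 2× chroma subsample; this is only the representation step, not the authors' full DCT/quantisation/fusion pipeline, and the helper names are hypothetical.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion as used by JPEG/JFIF: Y carries the
    intensity, Cb/Cr carry the colour difference signals."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_2x(channel):
    """Average each 2x2 neighbourhood (4:2:0-style chroma subsampling),
    quartering the number of colour samples to store."""
    h, w = len(channel), len(channel[0])
    return [[(channel[i][j] + channel[i][j + 1] +
              channel[i + 1][j] + channel[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def upsample_2x(channel):
    """Nearest-neighbour upsampling back to full resolution, used on the
    decoder side before fusing with the intensity image."""
    out = []
    for row in channel:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out
```

Because the eye is less sensitive to colour detail than to luminance detail, the subsampled colour plane costs far fewer bits while the fused result stays visually close to the original.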

5.
Image compression reduces the number of bits required to represent an image, which lowers storage space and transmission cost. Compression techniques are widely used in many applications, especially in the medical field, where hospitals and medical organizations hold large collections of medical image sequences; compressing large images considerably reduces their memory footprint and their broadcast and transmission cost. Performance is assessed by compression ratio (CR), mean square error (MSE), bits per pixel (BPP), peak signal-to-noise ratio (PSNR), input and compressed image sizes, memory requirement, and computational time; the pixels and other image content should vary little during compression. This work compares different compression methods, namely Huffman, fractal, neural network back propagation (NNBP), and neural network radial basis function (NNRBF) coding, applied to medical images such as MR and CT images. Experimental results show that the NNRBF technique achieves a higher CR, BPP, and PSNR, with lower MSE, on CT and MR images than the Huffman, fractal, and NNBP techniques.
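Of the methods compared, Huffman coding is the simplest to illustrate: it assigns shorter bit strings to more frequent symbols. The sketch below builds such a prefix code with a min-heap; it is a generic textbook construction, not the specific coder evaluated in the article.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code from symbol frequencies: repeatedly merge the
    two least frequent subtrees, prefixing '0'/'1' to their codes, so
    frequent symbols end up with short codewords."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # (weight, tiebreaker, partial code table) triples on a min-heap
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, counter, merged))
        counter += 1
    return heap[0][2]

def huffman_encode(data, codes):
    """Concatenate the codeword of each symbol into one bit string."""
    return "".join(codes[s] for s in data)
```

For the string "aaabbc", the most frequent symbol 'a' gets a 1-bit code and the whole message fits in 9 bits instead of the 48 bits of plain 8-bit bytes.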

6.
Medical images are known for their huge volume, which is a real problem for archiving and transmission, notably in telemedicine applications. In this context, we present a new method for medical image compression that combines image-definition resizing with JPEG compression. We name this new protocol REPro.JPEG (reduction/expansion protocol combined with JPEG compression). First, the image is reduced and then compressed before archiving or transmission; finally, the user or receiver decompresses the image and enlarges it before display. The results show that, for bit rates below 0.42 bits per pixel, REPro.JPEG preserves image quality better than plain JPEG compression for dermatological medical images. Moreover, applying REPro.JPEG to these colour medical images is more efficient in the HSV colour space than in the RGB or YCbCr colour spaces.
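Quality comparisons of this kind are usually reported as PSNR at a matched bit rate. As a reference for how such a comparison is computed (the abstract does not state its metric, so this is an assumption), a minimal PSNR implementation:

```python
import math

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    (lists of rows). Higher is better; identical images give infinity."""
    n = 0
    sse = 0.0
    for row_o, row_d in zip(original, degraded):
        for o, d in zip(row_o, row_d):
            sse += (o - d) ** 2
            n += 1
    mse = sse / n
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

Comparing two codecs at the same bits-per-pixel budget, the one with the higher PSNR preserves the image better, which is the sense in which REPro.JPEG outperforms plain JPEG below 0.42 bpp.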

7.
During medical image formation, the image is often corrupted by various factors and degraded by noise that reduces its quality, seriously affecting clinical diagnosis. A new medical image enhancement method is proposed in this article. First, the original medical image is decomposed in the nonsubsampled contourlet transform (NSCT) domain into a low-frequency sub-band and several high-frequency sub-bands. Second, a linear transformation is applied to the coefficients of the low-frequency sub-band, and an adaptive thresholding method denoises the coefficients of the high-frequency sub-bands. All sub-bands are then reconstructed into the spatial domain using the inverse NSCT. Finally, unsharp masking enhances the details of the reconstructed image. Experimental results show that the proposed method is superior to other methods in image entropy, EME, and PSNR. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 199–205, 2015
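The final step, unsharp masking, adds back an amplified copy of the high-frequency residual (image minus a blurred copy). A minimal sketch, using a 3×3 box blur as a stand-in low-pass filter (the article does not specify its blur kernel):

```python
def box_blur(img):
    """3x3 box blur with edge replication: a simple low-pass stand-in."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    acc += img[ii][jj]
            out[i][j] = acc / 9.0
    return out

def unsharp_mask(img, amount=1.0):
    """sharpened = img + amount * (img - blur): the subtraction isolates
    edges and fine detail, which are then boosted by `amount`."""
    blur = box_blur(img)
    return [[img[i][j] + amount * (img[i][j] - blur[i][j])
             for j in range(len(img[0]))] for i in range(len(img))]
```

Flat regions are left untouched (image minus blur is zero there), while intensity edges are steepened, which is why the step enhances detail without brightening the whole image.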

8.
Standard X‐ray images acquired with the conventional screen‐film technique have a limited field of view and fail to visualize an entire long bone in a single image. To produce images of whole body parts, digitized films that each contain a portion of the body part are assembled by image stitching. This article presents a new medical image stitching method that uses minimum average correlation energy filters to identify and merge pairs of X‐ray medical images. The effectiveness of the proposed method is demonstrated in experiments involving two databases containing a total of 40 pairs of overlapping and nonoverlapping images. The experimental results are compared with those of the normalized cross correlation (NCC) method. The proposed method outperforms the NCC method in identifying both overlapping and nonoverlapping medical images. Its efficacy is further supported by its average execution time, which is about five times shorter than that of the NCC method. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 166–171, 2012
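The baseline the article compares against, normalized cross-correlation, scores how well two patches match independently of brightness and contrast. A minimal sketch of the NCC score for two flattened, equally sized patches (the stitching search around it is omitted):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equally sized flattened
    patches: +1 for a perfect (brightness/contrast-invariant) match,
    -1 for an inverted match, near 0 for unrelated patches."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    if da == 0 or db == 0:
        return 0.0                 # a constant patch carries no structure
    return num / (da * db)
```

Stitching with NCC slides one image over the other and keeps the offset with the highest score; evaluating this at every candidate offset is what makes NCC slow relative to correlation-filter approaches.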

9.
Contextual compression is an essential part of any medical image compression scheme, since it ensures that no diagnostic information is lost. Although many techniques for contextual image compression exist, there is still a need for an efficient, optimized technique that produces good-quality images at lower bit rates. This article presents an efficient contextual compression algorithm that uses wavelet and contourlet transforms to capture the fine details of the image, together with directional information, to achieve good quality at a high compression ratio (CR). The 2D discrete wavelet transform with the simplest Daubechies wavelet, db1 (the Haar wavelet), is used to obtain the sub-band coefficients. The approximation coefficients of the higher sub-bands then undergo a contourlet transform employing length-N ladder filters, capturing the directional information of the sub-bands at different scales and orientations. An optimized approach predicts the quantized and normalized sub-band coefficients, improving compression performance. The proposed algorithm was evaluated in terms of CR, peak signal-to-noise ratio, feature similarity (FSIM) index, structural similarity (SSIM) index, and universal quality (Q) index after reconstruction. The results demonstrate the efficiency of the proposed method over other compression techniques.
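The db1/Haar transform used for the sub-band decomposition reduces to pairwise averages and differences. A one-level 1D sketch (the article applies the 2D version, i.e. this transform along rows and then columns):

```python
import math

def haar_dwt_1d(signal):
    """One level of the db1/Haar DWT: scaled pairwise sums give the
    approximation (low-frequency) band, scaled pairwise differences the
    detail (high-frequency) band. Assumes even-length input."""
    s = math.sqrt(2.0)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Exact inverse: x = (a + d)/sqrt(2), y = (a - d)/sqrt(2)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out
```

Smooth regions produce near-zero detail coefficients, which is what makes the sub-bands compressible, while the approximation band is what the contourlet stage processes further.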

10.
Multi-modality medical image fusion (MMIF) procedures are widely used in clinical applications. MMIF can provide an image containing both anatomical and physiological data for specialists, which can improve diagnostic procedures. Various MMIF models have been proposed, yet there remains a need to improve on their efficiency. In this research, the authors propose a novel fusion model based on optimal thresholding combined with deep learning. An enhanced monarch butterfly optimization (EMBO) determines the optimal threshold of the fusion rules in the shearlet transform. The low- and high-frequency sub-bands are then fused on the basis of feature maps produced by the feature-extraction part of the deep learning method, with a restricted Boltzmann machine (RBM) carrying out the MMIF procedure. A benchmark dataset was used for training and testing; the experiments used a set of widely used, pre-registered, publicly available CT and MR images. The fused image is obtained from the fused low- and high-frequency bands. The simulation results show that the proposed model offers effective performance in terms of SD, edge quality (EQ), mutual information (MI), fusion factor (FF), entropy, correlation factor (CF), and spatial frequency (SF), with respective values of 97.78, 0.96, 5.71, 6.53, 7.43, 0.97, and 25.78, over the compared methods.

11.
Ground Motion Prediction Equations (GMPEs) are empirical relationships used to determine the peak ground response at a particular distance from an earthquake source. They express the peak ground response as a function of the earthquake source type, the distance from the source, the local site conditions where the data are recorded, and the depth and magnitude of the earthquake. In this article, a new prediction algorithm, Conic Multivariate Adaptive Regression Splines (CMARS), is applied to an available dataset to derive a new GMPE. CMARS is based on a special continuous optimization technique, conic quadratic programming. These convex optimization problems are well structured, resembling linear programs and hence permitting the use of interior point methods. The CMARS method is applied to the strong ground motion database of Turkey, and the results are compared with three other GMPEs. CMARS is found to be effective for ground motion prediction.
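To make the notion of a GMPE concrete, a common textbook functional form is ln(PGA) = c1 + c2·M + c3·ln(R + c4): amplitude grows with magnitude and attenuates with distance. The coefficients below are purely illustrative placeholders, not the fitted CMARS model or any published equation.

```python
import math

def gmpe_ln_pga(magnitude, distance_km, c=(-3.5, 1.2, -1.5, 10.0)):
    """Illustrative GMPE form (hypothetical coefficients):
    ln PGA = c1 + c2*M + c3*ln(R + c4), with c2 > 0 so larger magnitude
    raises the response and c3 < 0 so greater distance attenuates it."""
    c1, c2, c3, c4 = c
    return c1 + c2 * magnitude + c3 * math.log(distance_km + c4)
```

A regression method like CMARS replaces the fixed functional form with data-driven spline basis functions, but the fitted object plays the same role: ground response as a function of magnitude, distance, and site terms.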

12.
The main cause of failure of the acetabular component of a hip replacement is aseptic loosening. Preclinical test methods currently used to assess the stability of acetabular implants rely on crude simplifications: normally, either one component of motion or bone strains are measured. We developed a test method to measure implant 3D translations and rotations and bone strains using digital image correlation. Hemipelvises were aligned and potted to allow consistent testing. A force was applied in the direction of the peak load during level walking, in 100-cycle packages, each package 20% larger than the previous one. A digital image correlation system measured the cup-bone relative 3D displacements (permanent migrations and inducible micromotions) and the strain distribution in the periacetabular bone. To assess repeatability, the protocol was applied to six composite hemipelvises implanted with very stable cups; to assess the method's ability to detect mobilisation, six loose implants were tested. The method was repeatable: the interspecimen variability was 16 μm for the bone/cup relative translations and 0.04° for the rotations. It tracked even extremely loose implants (translations up to 4.5 mm; rotations up to 30°), and the measured strain distribution revealed the areas of highest strain. We have shown that it is possible to measure the 3D relative translations and rotations of an acetabular cup inside the pelvis while simultaneously measuring the full-field strain distribution on the bone surface. This will allow better preclinical testing of the stability of acetabular implants.

13.
Breathing motion moves internal organs and the target regions determined during radiation therapy planning. Accurate prediction of breathing motion is therefore of great interest, since treatment outside the target region could endanger sensitive tissue. In this study, a prediction algorithm based on adaptive support vector regression (aSVR) is proposed and compared with an adaptive neural network (ANN) algorithm in terms of prediction accuracy and training and prediction time. Respiration data from 87 patients treated with radiation therapy were acquired with an optical marker at 30 Hz. Five types of prediction filters, based on ANN or aSVR, were implemented, and their performance was compared for sliding-window sizes of 2.5 and 5.0 sec and prediction latencies of 100, 200, 300, 400, and 500 msec. The prediction algorithms were trained and tested using the root mean square error (RMSE) as the accuracy metric. The aSVR with an RBF kernel outperformed all other prediction filters, including the various ANN filters and the aSVR with a linear kernel, and a 2.5-sec sliding window significantly and independently improved overall accuracy. However, training and prediction times were significantly longer for the aSVR with an RBF kernel. The RBF-kernel aSVR filter is superior to the other filters in accuracy in all cases and shows clinically applicable training and prediction times, which may make it effective for predicting patient breathing motion and thus enhancing the efficacy of radiation therapy.
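The shape of the prediction task (sliding window in, latency-shifted sample out, scored by RMSE) can be shown with a much simpler stand-in than aSVR: a least-squares line fit over the window, extrapolated `latency_steps` ahead. This is a hypothetical baseline for illustration, not the article's algorithm.

```python
import math

def predict_linear(window, latency_steps):
    """Latency-compensating baseline: fit a line to the samples in the
    sliding window (ordinary least squares over sample index) and
    extrapolate latency_steps beyond the last sample. Assumes the
    window holds at least two samples."""
    n = len(window)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(window) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, window)) / denom
    intercept = my - slope * mx
    return intercept + slope * (n - 1 + latency_steps)

def rmse(actual, predicted):
    """Root mean square error, the accuracy metric used in the study."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))
```

At 30 Hz, a 500 ms latency corresponds to extrapolating 15 samples ahead, which is why prediction quality degrades as latency grows and why a kernel regressor such as aSVR can beat a straight-line extrapolation on curved respiratory traces.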

14.
This paper details an advanced method for continuous fatigue damage prediction of rubber-fibre composite structures. A novel multiaxial energy-based approach incorporating a mean stress correction is presented and used to predict the fatigue life of a commercial vehicle air spring. The variations of the elastic strain energy and the complementary energy are combined to form the energy damage parameter. A material parameter α is introduced to account for any observed mean stress effect, and the approach can also reproduce the well-known Smith-Watson-Topper criterion. Since the integration required to calculate the energies is simplified, the approach can be employed regardless of the complexity of the thermo-mechanical load history. Several numerical simulations and experimental tests were performed to obtain the required stress-strain tensors and the corresponding fatigue lives, respectively. In the simulations, the rubber material of the air spring was modelled as nonlinear elastic. The mean stress parameter α, which controls the influence of the mean stress on fatigue life, was adjusted with respect to the energy-life curves obtained experimentally. The predicted fatigue life and location of failure are in very good agreement with experimental observations.
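The Smith-Watson-Topper criterion that the approach can reproduce combines the maximum stress of a cycle with the strain amplitude; in its common energy-like form the damage parameter is P = sqrt(σ_max · ε_a · E). A minimal sketch of that classical criterion (not the paper's full multiaxial energy parameter):

```python
import math

def swt_parameter(sigma_max, strain_amplitude, youngs_modulus):
    """Smith-Watson-Topper damage parameter P = sqrt(sigma_max * eps_a * E).
    A tensile mean stress raises sigma_max for the same strain amplitude
    and hence the predicted damage; a cycle with sigma_max <= 0 (fully
    compressive) is treated as non-damaging and returns 0."""
    if sigma_max <= 0:
        return 0.0
    return math.sqrt(sigma_max * strain_amplitude * youngs_modulus)
```

Building the mean stress sensitivity into σ_max is exactly the behaviour the paper's α parameter generalises: α is tuned so the energy damage parameter matches experimentally observed mean stress effects rather than assuming the fixed SWT weighting.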

15.
Testing of filament-wound composites (FWC) with simplified methods has been studied to date in order to generalize the mechanical response of FWC structures. To ensure a good analogy between the meso- and macro-scales, it is necessary to design a representative specimen that includes the characteristic winding pattern. This research aims to characterize the strain field of the ±55° FWC pattern using flat specimens, by measuring the displacement field with digital image correlation. The experimental procedure involves tensile testing of epoxy/glass specimens with two unit cells aligned along the hoop and axial directions of the winding pattern. The strain values from digital image correlation are validated by comparison with strain gauge measurements and FEM simulations. The failure sequence and modes of the FWC flat unit cells show good agreement with those observed in FWC cylinders subjected to buckling in previous work [1].
