Similar Documents
20 similar documents found (search time: 31 ms)
1.
Numerous approaches to super‐resolution (SR) of sequentially observed low‐resolution (LR) images (image sequences) have been presented in the past two decades. However, neural network methods have been almost entirely overlooked for solving SR problems. This is because the SR problem has traditionally been regarded as the optimization of an ill‐posed, large set of linear equations. A neural network designed on this basis has a large number of neurons and therefore requires a long learning time, and the resulting cost function is overly complex. These defects limit the application of neural networks to SR problems. We argue that the SR problem should be understood as super‐resolving an imaging system through image sequence observation, rather than merely improving the image sequence itself. SR can then be regarded as a pattern mapping from LR to SR images, whose parameters can be learned from the imaging process of the image sequence. This article presents a neural network for SR based on learning from the imaging process of the image sequence. To speed up convergence, we employ vector mapping to train the neural network, where a mapping vector is composed of several neighboring subpixels. Such a well‐trained neural network has powerful generalization ability, so it can be used directly to estimate the SR image of other image sequences without retraining. Our simulations show the effectiveness of the proposed neural network. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 14, 8–15, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20001
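The LR-to-SR patch mapping described above can be sketched with a tiny network trained on mapping vectors of neighboring subpixels. Everything below (image, scale factor, patch size, one-hidden-layer architecture, learning rate) is an illustrative assumption, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(hr, scale=2, patch=3):
    """Build (LR patch vector -> HR subpixel block) training pairs from one image."""
    lr = hr.reshape(hr.shape[0] // scale, scale, hr.shape[1] // scale, scale).mean(axis=(1, 3))
    X, Y = [], []
    for i in range(lr.shape[0] - patch + 1):
        for j in range(lr.shape[1] - patch + 1):
            X.append(lr[i:i + patch, j:j + patch].ravel())       # neighboring subpixels
            ci, cj = (i + patch // 2) * scale, (j + patch // 2) * scale
            Y.append(hr[ci:ci + scale, cj:cj + scale].ravel())   # HR block to predict
    return np.array(X), np.array(Y)

class TinyMLP:
    """One hidden layer, trained by plain gradient descent on the mapping vectors."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hid)); self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.1, (n_hid, n_out)); self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)
        return self.H @ self.W2 + self.b2

    def step(self, X, Y, eta=0.05):
        P = self.forward(X)
        G = 2 * (P - Y) / len(X)               # gradient of batch-averaged squared error
        gW2 = self.H.T @ G
        gH = G @ self.W2.T * (1 - self.H ** 2)  # tanh derivative
        self.W2 -= eta * gW2; self.b2 -= eta * G.sum(0)
        self.W1 -= eta * X.T @ gH; self.b1 -= eta * gH.sum(0)
        return np.mean((P - Y) ** 2)

hr = rng.random((16, 16))
X, Y = make_patches(hr)
net = TinyMLP(X.shape[1], 16, Y.shape[1])
losses = [net.step(X, Y) for _ in range(200)]
```

Once trained, the same `net.forward` could be applied to patches from other LR sequences, which is the generalization property the abstract emphasizes.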

2.
Image restoration has received considerable attention. In many practical situations, unfortunately, the blur is unknown, and little information is available about the true image, which must therefore be identified directly from the corrupted image using partial or no information about the blurring process. In addition, noise is amplified during restoration, inducing severe ringing artifacts. This article proposes a novel technique for blind super‐resolution that alternates between deconvolution of the image and of the point spread function, based on an improved Poisson maximum a posteriori (MAP) super‐resolution algorithm. The improved algorithm incorporates the functional form of a Wiener filter into the Poisson MAP algorithm operating on the edge image, further reducing noise effects and speeding up restoration. Experimental results on 1‐D signals and 2‐D images corrupted by Gaussian point spread functions and additive noise show significant quality improvements over the technique based on the standard Poisson MAP algorithm. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 12, 239–246, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10032
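The classic Poisson MAP iteration that this work builds on is the Richardson-Lucy update. A minimal 1‐D sketch with a synthetic Gaussian PSF follows; it omits the authors' Wiener-filter and edge-image improvements, and the PSF width, signal, and iteration count are invented for the demo:

```python
import numpy as np

def gaussian_psf(n=15, sigma=2.0):
    x = np.arange(n) - n // 2
    h = np.exp(-x ** 2 / (2 * sigma ** 2))
    return h / h.sum()

def richardson_lucy(y, h, n_iter=50):
    """Classic Poisson MAP (Richardson-Lucy) deconvolution in 1-D:
    x_{k+1} = x_k * (h_mirror * (y / (h * x_k))), * = convolution."""
    x = np.full_like(y, y.mean())          # flat nonnegative starting guess
    hm = h[::-1]                           # mirrored PSF
    for _ in range(n_iter):
        est = np.convolve(x, h, mode="same")
        x = x * np.convolve(y / np.maximum(est, 1e-12), hm, mode="same")
    return x

# blur two spikes with a known Gaussian PSF, then deconvolve
true = np.zeros(64); true[20] = 1.0; true[40] = 0.6
h = gaussian_psf()
blurred = np.convolve(true, h, mode="same")
restored = richardson_lucy(blurred, h)
```

The multiplicative update keeps the estimate nonnegative, which is the property that makes the Poisson MAP family attractive for intensity images.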

3.
In this study, the inverse problem of reconstructing the in‐plane (2D) displacements of a monitored surface from a sequence of two‐dimensional digital images, severely ill‐posed in Hadamard's sense, is investigated in depth. A novel variational formulation is presented for the continuum 2D digital image correlation problem, and critical issues such as semi‐coercivity and solution multiplicity are discussed with functional analysis tools. In the framework of a Galerkin, finite element discretization of the displacement field, a robust implementation for 2D digital image correlation is outlined, aiming to attenuate the spurious oscillations which corrupt the deformation scenario, especially when very fine meshes are adopted. Recourse is made to a hierarchical family of grids linked by suitable restriction and prolongation operators and defined over an image pyramid. Multi‐grid cycles are performed ascending and descending along the pyramid, with only one Newton iteration per level irrespective of tolerance satisfaction, as if the problem were linear. At each level, the conventional least‐squares matching functional is enriched by a Tikhonov regularization provision, preserving the solution against an unstable response. The algorithm is assessed on both synthetic and truly experimental image pairs. Copyright © 2013 John Wiley & Sons, Ltd.
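The stabilizing role of the Tikhonov provision can be illustrated on a generic ill‐posed 1‐D analogue: a Gaussian blur operator standing in for the ill‐conditioned matching problem. The operator, noise level, and regularization weight below are invented for the demonstration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
# Blur operator: row-normalized Gaussian kernel matrix, severely ill-conditioned.
x = np.arange(n)
A = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 4.0 ** 2))
A /= A.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x / n)            # smooth "displacement" field
b = A @ u_true + rng.normal(0, 1e-3, n)       # observations with small noise

# Naive least squares: the small singular values amplify the noise enormously.
u_naive = np.linalg.solve(A.T @ A + 1e-14 * np.eye(n), A.T @ b)

# Tikhonov provision: penalize the first differences of the solution.
L = np.diff(np.eye(n), axis=0)                # (n-1) x n first-difference operator
alpha = 1e-3
u_reg = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ b)
```

The regularized solve trades a small bias for suppression of the wild oscillations, which is exactly the effect sought on fine DIC meshes.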

4.
A high‐efficiency, contrast‐enhanced halftoning approach is presented in this article. The contrast of a digital image depends strongly on the nature of the ambient lighting and the quality of the lens, and the required processing time skyrockets when printed work of high resolution and quality is needed. In this article, we combine the benefits of high‐speed ordered dithering and high‐quality error diffusion to solve both problems simultaneously. Experimental results demonstrate that this technique is able to enlarge the dynamic range of the histogram and achieve an improvement in execution efficiency of 15–47% on the tested images. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 356–361, 2009
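The two building blocks named above, fast ordered dithering and high-quality error diffusion, can be sketched as follows. This is a generic Bayer-matrix / Floyd-Steinberg pairing under assumed parameters, not the article's specific hybrid:

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither(img):
    """Fast halftoning: compare each pixel against a tiled Bayer threshold matrix."""
    h, w = img.shape
    thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (img > thresh).astype(np.uint8)

def floyd_steinberg(img):
    """High-quality halftoning: diffuse the quantization error to unvisited neighbors."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            out[y, x] = f[y, x] > 0.5
            err = f[y, x] - out[y, x]
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 64), (16, 1))
od = ordered_dither(gradient)
fs = floyd_steinberg(gradient)
```

Ordered dithering is a single vectorized comparison, while error diffusion is sequential but preserves local mean intensity, which is the speed/quality trade-off the article exploits.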

5.
Super‐resolution fluorescence microscopy enables imaging of fluorescent structures beyond the diffraction limit. However, this technique cannot be applied to weakly fluorescent cellular components or labels. As an alternative, photothermal microscopy, based on nonradiative transformation of absorbed energy into heat, has demonstrated imaging of nonfluorescent structures including single molecules and ~1‐nm gold nanoparticles. However, photothermal imaging has previously been performed only at diffraction‐limited resolution. Herein, super‐resolution, far‐field photothermal microscopy based on the nonlinear dependence of the signal on laser energy is introduced. Among various nonlinear phenomena, including absorption saturation, multiphoton absorption, and temperature dependence of the signal, signal amplification by laser‐induced nanobubbles around overheated nano‐objects is explored. A Gaussian laser beam profile is used to demonstrate image spatial sharpening for calibrated 260‐nm metal strips, resolution of a plasmonic nanoassembly, and visualization of 10‐nm gold nanoparticles in graphene and of hemoglobin nanoclusters in live erythrocytes, with resolution down to 50 nm. These nonlinear phenomena can be used for 3D imaging with improved lateral and axial resolution in most photothermal methods, including photoacoustic microscopy.

6.
The advances in material characterization by means of imaging techniques require powerful computational methods for numerical analysis. The present contribution focuses on highlighting the advantages of coupling the extended finite element method and the level set method, applied to solve microstructures with complex geometries. The process of obtaining the level set data starting from a digital image of a material structure and its input into an extended finite element framework is presented. The coupled method is validated using reference examples and applied to obtain homogenized properties for heterogeneous structures. Although the computational applications presented here are mainly two‐dimensional, the method is equally applicable to three‐dimensional problems. Copyright © 2010 John Wiley & Sons, Ltd.
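The first step described above, obtaining level set data from a digital image, can be sketched as a brute-force signed distance computation on a binary micrograph. The sign convention (negative inside the inclusion) and the test geometry are assumptions for the demo:

```python
import numpy as np

def level_set_from_image(binary):
    """Signed distance to the material interface of a binary image:
    negative inside the inclusion, positive outside (a common convention).
    Brute force: distance of every pixel to the nearest interface pixel."""
    h, w = binary.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside = binary.ravel().astype(bool)
    boundary = []
    for y in range(h):
        for x in range(w):
            if binary[y, x]:
                nb = binary[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                if nb.size != 9 or not nb.all():   # touches background or image edge
                    boundary.append((y, x))
    bpts = np.array(boundary, dtype=float)
    d = np.sqrt(((pts[:, None, :] - bpts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.where(inside, -d, d).reshape(h, w)

# circular inclusion in a 32x32 "digital image" of a two-phase material
ys, xs = np.mgrid[0:32, 0:32]
img = ((ys - 16) ** 2 + (xs - 16) ** 2 <= 8 ** 2).astype(np.uint8)
phi = level_set_from_image(img)
```

The zero level set of `phi` then locates the material interface for enrichment in an XFEM framework; fast marching or fast sweeping would replace the brute-force distance in practice.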

7.
Unit‐cell homogenization techniques are frequently used together with the finite element method to compute effective mechanical properties for a wide range of different composites and heterogeneous materials systems. For systems with very complicated material arrangements, mesh generation can be a considerable obstacle to usage of these techniques. In this work, pixel‐based (2D) and voxel‐based (3D) meshing concepts borrowed from image processing are thus developed and employed to construct the finite element models used in computing the micro‐scale stress and strain fields in the composite. The potential advantage of these techniques is that generation of unit‐cell models can be automated, thus requiring far less human time than traditional finite element models. Essential ideas and algorithms for implementation of proposed techniques are presented. In addition, a new error estimator based on sensitivity of virtual strain energy to mesh refinement is presented and applied. The computational costs and rate of convergence for the proposed methods are presented for three different mesh‐refinement algorithms: uniform refinement; selective refinement based on material boundary resolution; and adaptive refinement based on error estimation. Copyright © 2003 John Wiley & Sons, Ltd.
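The pixel-based meshing idea maps each pixel of the material image to one quadrilateral element with shared nodes and a per-element material id. A minimal 2D sketch (the 8x8 two-phase image is an invented example):

```python
import numpy as np

def pixel_mesh(image):
    """Turn each pixel of a 2-D material image into a 4-node quadrilateral
    element; nodes on the (h+1) x (w+1) grid are shared between neighbors."""
    h, w = image.shape

    def nid(i, j):                         # node (i, j) -> global node id
        return i * (w + 1) + j

    nodes = np.array([(i, j) for i in range(h + 1) for j in range(w + 1)], dtype=float)
    elems, mats = [], []
    for i in range(h):
        for j in range(w):
            # counter-clockwise connectivity of the pixel's four corners
            elems.append([nid(i, j), nid(i, j + 1), nid(i + 1, j + 1), nid(i + 1, j)])
            mats.append(int(image[i, j]))  # per-element material id from the pixel
    return nodes, np.array(elems), np.array(mats)

# two-phase microstructure: a stiff square inclusion in a matrix
img = np.zeros((8, 8), dtype=int)
img[2:6, 2:6] = 1
nodes, elems, mats = pixel_mesh(img)
```

Because the mesh is generated directly from the image, the whole unit-cell model construction can be automated, which is the advantage the abstract highlights.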

8.
This article addresses the problem of reconstructing a magnetic resonance image from highly undersampled data, which frequently arises in accelerated magnetic resonance imaging. We propose to impose sparsity on the first‐ and second‐order difference coefficients within the complement of the known support. Second‐order variation is included to overcome blocky effects, and support information is used to further reduce the sampling rate. The resulting optimization problem consists of a data fidelity term and first‐ and second‐order variation terms penalizing entries within the complement of the known support, and is solved with the efficient split Bregman algorithm. Reconstruction results from magnetic resonance imaging data at different sampling rates illustrate the performance of the proposed method, and we also briefly assess the new method's tolerance to noise. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 277–284, 2015
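The core proximal step that split Bregman applies to the difference coefficients is soft-thresholding, and the support idea amounts to exempting known-nonzero entries from the penalty. A small sketch of both operators (the vector and support pattern are invented):

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding: the closed-form proximal step split Bregman applies
    to the first/second order difference coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def masked_shrink(x, lam, known_support):
    """Penalize only entries in the COMPLEMENT of the known support:
    coefficients known to be significant are left untouched."""
    out = shrink(x, lam)
    out[known_support] = x[known_support]
    return out

d = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])          # difference coefficients
support = np.array([False, False, False, True, False])
s1 = shrink(d, 1.0)
s2 = masked_shrink(d, 1.0, support)
```

Exempting the support means small but known coefficients (here 0.4) survive the shrinkage, which is how support information lowers the sampling rate needed for faithful recovery.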

9.
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with their important details and features intact. Reconstructed images always suffer from noise and artifact pollution because the acquired projection data may be insufficient or undersampled. In practice, some “isolated noise points” (similar to impulse noise) exist in low‐dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back‐projection (FBP) algorithm to significantly reduce quantum noise while maintaining reconstruction image quality. Although typical total variation‐based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts remain unavoidable. To address both impulse‐noise pollution and patchy‐artifact pollution, this work proposes, for the first time, a joint regularization constrained SIR algorithm for sparse‐view CT image reconstruction, named “SIR‐JR” for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yields high‐order smoothness, and the neighborhood median prior, which is a powerful filtering tool for impulse noise. A new alternating iterative algorithm is then utilized to solve the objective function. Experiments on different head phantoms show that the reconstructed images are of superior quality and that the presented method is feasible and effective.
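The neighborhood median idea for the "isolated noise points" can be sketched on a single projection profile. The smooth sinusoidal projection, spike positions, and window size are invented for the demo and are not the paper's phantom data:

```python
import numpy as np

def median_prior(proj, win=3):
    """Neighborhood median applied along a projection (sinogram row):
    a simple stand-in for the impulse-noise prior described above."""
    pad = win // 2
    padded = np.pad(proj, pad, mode="edge")
    return np.array([np.median(padded[i:i + win]) for i in range(len(proj))])

clean = np.sin(np.linspace(0, np.pi, 100)) * 100   # smooth projection profile
noisy = clean.copy()
for i in (10, 30, 50, 70, 90):                      # "isolated noise points"
    noisy[i] += 500.0
filtered = median_prior(noisy)
```

Because the spikes are isolated, the window median replaces each one by a neighboring sample while leaving the smooth profile essentially unchanged, which is why the median makes a good prior for this noise type.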

10.
It is well known that cone‐beam data acquired with a circular orbit are insufficient for exact image reconstruction. Despite this, because a cone‐beam scanning configuration with a circular orbit is easy to implement in practice, it has been widely employed for data acquisition in, e.g., micro‐CT and CT imaging in radiation therapy. The algorithm developed by Feldkamp, Davis, and Kress (FDK) and its modifications, such as the Tent–FDK (T‐FDK) algorithm, have been used for image reconstruction from circular cone‐beam data. In this work, we present an algorithm with spatially shift‐variant filtration for image reconstruction in circular cone‐beam CT. We performed computer‐simulation studies to compare the proposed and existing algorithms. Numerical results in these studies demonstrated that the proposed algorithm has resolution properties comparable to, and noise properties better than, the FDK algorithm. As compared to the T‐FDK algorithm, our proposed algorithm reconstructs images with an improved in‐plane spatial resolution. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 213–221, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20026

11.
Super‐resolution fluorescence microscopy allows for unprecedented in situ visualization of biological structures, but its application to materials science has so far been comparatively limited. One of the main reasons is the lack of powerful dyes that allow for labeling and photoswitching in materials science systems. In this study it is shown that appropriate substitution of diarylethenes bearing a fluorescent closed and dark open form paves the way for imaging nanostructured materials with three of the most popular super‐resolution fluorescence microscopy methods that are based on different concepts to achieve imaging beyond the diffraction limit of light. The key to obtain optimal resolution lies in a proper control over the photochemistry of the photoswitches and its adaption to the system to be imaged. It is hoped that the present work will provide researchers with a guide to choose the best photoswitch derivative for super‐resolution microscopy in materials science, just like the correct choice of a Swiss Army Knife's tool is essential to fulfill a given task.

12.
In object‐oriented image coding applications, the use of the SA‐DCT (Shape‐Adaptive Discrete Cosine Transform) and of block‐based DCT schemes associated with the EI (Extension‐Interpolation) padding technique has shown promising results at high and low bit rates, respectively. Both the SA‐DCT and the EI padding algorithm consist of two 1D‐DCT processing passes. Recent works have shown that their efficiency can be further improved by appropriate selection of the first direction to be processed. This paper introduces novel methods to determine the preferential processing direction of boundary blocks for the SA‐DCT and for the EI padding technique. These methods are based on two known strategies related to the variances of the lengths of object segments in both directions, and a third one, which measures the energy compaction efficiency of the transforms. The proposed methods use the morphological feature TNOP (distribution of Texture according to the Number of Object Pixels) to adaptively select the most adequate strategy for each group of boundary blocks exhibiting a similar number of object pixels. Finally, an adaptive switching scheme is introduced that selects between the proposed schemes for the block‐based DCT and for the SA‐DCT when the available bit rate is known. This novel scheme allows different switching rules for distinct groups of boundary blocks, according to the TNOP feature, and outperforms the isolated use of both the SA‐DCT and block‐based DCT schemes at any specific bit rate. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 219–227, 2008; Published online in Wiley InterScience (www.interscience.wiley.com).

13.
One of the challenging tasks in the application of compressed sensing to magnetic resonance imaging (MRI) is designing a reconstruction algorithm that can faithfully recover the MR image from randomly undersampled k‐space data. Nonlinear recovery algorithms based on iterative shrinkage start from a single initial guess and use soft‐thresholding to recover the original MR image from the partial Fourier data. This article presents a novel method based on the projection onto convex sets (POCS) algorithm that instead takes two images and randomly combines them at each iteration to estimate the original MR image. The performance of the proposed method is validated using original data taken from the MRI scanner at St. Mary's Hospital, London. The experimental results show that the proposed method can reconstruct the original MR image from a variable‐density undersampling scheme in fewer iterations and exhibits better performance in terms of improved signal‐to‐noise ratio, artifact power, and correlation compared to reconstruction through low‐resolution and POCS algorithms. © 2014 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 24, 203–207, 2014
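The baseline POCS recovery that this article improves upon alternates two projections: a sparsifying soft-threshold in image space and data consistency in k-space. A 1‐D sketch on a synthetic sparse signal follows; the sampling fraction, threshold, and iteration count are invented, and real MRI data would replace the toy signal:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
x_true = np.zeros(n)
x_true[[10, 50, 90]] = [1.0, 0.8, 0.6]           # sparse synthetic "image"

full_k = np.fft.fft(x_true)
mask = rng.random(n) < 0.4                        # random undersampling pattern
mask[0] = True                                    # always keep the DC sample
k_meas = full_k * mask

def pocs_recover(k_meas, mask, lam=0.1, n_iter=200):
    """Alternate projections: soft-threshold in image space (sparsity set),
    then restore the measured samples in k-space (data-consistency set)."""
    x = np.fft.ifft(k_meas).real                  # zero-filled starting guess
    for _ in range(n_iter):
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
        k = np.fft.fft(x)
        k[mask] = k_meas[mask]                    # enforce the measured data
        x = np.fft.ifft(k).real
    return x

x0 = np.fft.ifft(k_meas).real                     # zero-filled reconstruction
x_rec = pocs_recover(k_meas, mask)
```

The proposed method's twist, maintaining two estimates and randomly combining them per iteration, would replace the single `x` in this loop.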

14.
Intended to avoid the complicated computations of elasto‐plastic incremental analysis, limit analysis is an appealing direct method for determining the load‐carrying capacity of structures. On the basis of the static limit analysis theorem, a solution procedure for lower‐bound limit analysis is first presented, making use of the element‐free Galerkin (EFG) method rather than traditional numerical methods such as the finite element method and the boundary element method. The numerical implementation is very simple and convenient, because it is only necessary to construct an array of nodes in the domain under consideration. The reduced‐basis technique is adopted to solve the mathematical programming problem iteratively in a sequence of reduced self‐equilibrium stress subspaces with very low dimensions. The self‐equilibrium stress field is expressed as a linear combination of several self‐equilibrium stress basis vectors with parameters to be determined, where the basis vectors are generated by performing an equilibrium iteration procedure during elasto‐plastic incremental analysis. The Complex method is used to solve these non‐linear programming sub‐problems and determine the maximal load amplifier. Numerical examples show that it is feasible and effective to solve limit analysis problems by combining the EFG method with non‐linear programming. Copyright © 2007 John Wiley & Sons, Ltd.

15.
The simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) recording technique has recently received considerable attention and has been used in many studies on cognition and neurological disease. EEG‐fMRI simultaneous recording has the advantage of enabling the monitoring of brain activity with both high temporal resolution and high spatial resolution in real time. The successful removal of the ballistocardiographic (BCG) artifact from the EEG signal recorded during an MRI is an important prerequisite for real‐time EEG‐fMRI joint analysis. We have developed a new framework dedicated to real‐time BCG artifact removal. This framework includes a new real‐time R‐peak detection method combining a k‐Teager energy operator, a thresholding detector, and a correlation detector, as well as a real‐time BCG artifact reduction procedure combining average artifact template subtraction and a new multi‐channel referenced adaptive noise cancelling method. Our results demonstrate that this new framework is efficient in the real‐time removal of the BCG artifact. The multi‐channel adaptive noise cancellation (mANC) method performs better than the traditional ANC method in eliminating the residual BCG artifact. In addition, the computational speed of the mANC method fulfills the requirements of real‐time EEG‐fMRI analysis. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 209–215, 2016
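The Teager-energy stage of the R-peak detector can be sketched on a synthetic signal. This uses the plain Teager operator plus a threshold (not the paper's k-Teager variant or its correlation detector), and the waveform, threshold ratio, and refractory period are invented:

```python
import numpy as np

def teager(x):
    """Teager energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1]:
    it emphasizes sharp, high-frequency R-peaks over slower waves."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_r_peaks(ecg, thresh_ratio=0.5, refractory=30):
    """Threshold the Teager energy; a refractory window suppresses doubles."""
    psi = teager(ecg)
    thresh = thresh_ratio * psi.max()
    peaks, last = [], -refractory
    for i in range(len(psi)):
        if psi[i] > thresh and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# synthetic "ECG": slow baseline wave plus a narrow spike every 100 samples
t = np.arange(600)
ecg = 0.2 * np.sin(2 * np.pi * t / 200)
for p in range(50, 600, 100):
    ecg[p - 1:p + 2] += [0.5, 1.0, 0.5]
peaks = detect_r_peaks(ecg)
```

Because the operator responds to the product of amplitude and local frequency, the slow baseline contributes almost nothing, so a simple global threshold suffices on this toy signal.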

16.
Acoustic holography is a transmission‐based ultrasound imaging method that uses optical image reconstruction and provides a larger field of view than pulse‐echo ultrasound imaging. A focus parameter controls the position of the focal plane along the optical axis, and the images obtained contain defocused content from objects not near the focal plane. Moreover, it is not always possible to bring all objects of interest into simultaneous focus. In this article, digital image processing techniques are presented to (1) identify a “best focused” image from a sequence of images taken with different focus settings and (2) simultaneously focus every pixel in the image through fusion of pixels from different frames in the sequence. Experiments show that the three‐dimensional image information provided by acoustic holography requires position‐dependent filtering for the enhancement step. It is found that filtering in the spatial domain is more computationally efficient than in the frequency domain. In addition, spatial domain processing gives the best performance. © 2002 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 101–111, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10017
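The two tasks above, picking a best-focused frame and fusing an all-in-focus image, can be sketched with a Laplacian-based focus measure. The checkerboard test pattern, 3x3 energy window, and variance-of-Laplacian metric are generic assumptions, not the article's position-dependent filters:

```python
import numpy as np

def sharpness(img):
    """Global focus measure: variance of a discrete Laplacian response."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def fuse_stack(stack):
    """All-in-focus fusion: per pixel, take the frame with the highest local
    Laplacian energy (a simple spatial-domain criterion)."""
    h, w = stack[0].shape
    energies = []
    for img in stack:
        lap = np.zeros((h, w))
        lap[1:-1, 1:-1] = np.abs(-4 * img[1:-1, 1:-1] + img[:-2, 1:-1]
                                 + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:])
        e = np.zeros((h, w))      # Laplacian magnitude summed over 3x3 windows
        e[1:-1, 1:-1] = sum(lap[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        energies.append(e)
    choice = np.argmax(energies, axis=0)
    return np.choose(choice, stack), choice

# frame 0: sharp checkerboard; frame 1: its blurred (defocused) version
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 1.0
blur = sharp.copy()
blur[1:-1, 1:-1] = (sharp[:-2, 1:-1] + sharp[2:, 1:-1] + sharp[1:-1, :-2]
                    + sharp[1:-1, 2:] + sharp[1:-1, 1:-1]) / 5.0
best = max([sharp, blur], key=sharpness)
fused, choice = fuse_stack([sharp, blur])
```

In a real focus sequence each frame is sharp in a different region, so `choice` varies across the image rather than selecting one frame everywhere.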

17.
This article presents an approach that extends our previous work on the minimum‐weighted norm method in computerized tomography. Concentrating in particular on applications in ultrasonic nondestructive testing, resolution enhancement is achieved in image reconstruction from B‐scans. To combat the degradation caused by the physical focus of the finite‐sized ultrasonic transducer and the incompleteness of B‐scan data, profile‐oriented prior knowledge about the object being inspected is incorporated into the image reconstruction in the form of a weighted summation of specific basis functions. Each basis function is characterized by an image of the coherent illumination pattern associated with a specific measuring time and measuring position. Demonstrations with both simulated and experimental data show that this technique has great potential for improving image quality. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 185–193, 2012

18.
The sum‐modified‐Laplacian (SML) plays an important role in medical image fusion. However, fusion rules that simply select the larger SML lead to image distortion in transform‐domain fusion or to information loss in spatial‐domain fusion. Combining an average filter and a median filter, a new medical image fusion method based on an improved SML (ISML) is proposed. First, a basic fused image is obtained by ISML, which serves as the selection map for the medical images. Second, difference images are obtained by subtracting the average image of all source medical images. Finally, the basic fused image is refined using the difference images. The algorithm both preserves the information of the source images well and suppresses pixel distortion. Experimental results demonstrate that the proposed method outperforms state‐of‐the‐art medical image fusion methods. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 206–212, 2015
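The baseline rule this article improves on, choosing per pixel the source with the larger SML, can be sketched as follows. This is the plain SML rule without the ISML average/median pre-filtering, and the random two-source test case is an invented stand-in for medical images:

```python
import numpy as np

def modified_laplacian(img):
    """ML(x, y) = |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(img, 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def sml_map(img, win=3):
    """Sum the modified Laplacian over a small window around each pixel."""
    ml = modified_laplacian(img)
    pad = win // 2
    p = np.pad(ml, pad, mode="edge")
    return sum(p[pad + dy:pad + dy + ml.shape[0], pad + dx:pad + dx + ml.shape[1]]
               for dy in range(-pad, pad + 1) for dx in range(-pad, pad + 1))

def fuse_sml(a, b):
    """Basic rule: per pixel, keep the source whose SML is larger."""
    return np.where(sml_map(a) >= sml_map(b), a, b)

def smooth(img):
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            + p[1:-1, 1:-1]) / 5.0

# complementary sources: each is blurred in the half where the other is sharp
rng = np.random.default_rng(4)
base = rng.random((32, 32))
a = base.copy(); a[:, 16:] = smooth(base)[:, 16:]
b = base.copy(); b[:, :16] = smooth(base)[:, :16]
fused = fuse_sml(a, b)
```

The fused result keeps the sharp half of each source; the pixel distortion this plain rule produces near selection boundaries is what the ISML refinement is designed to suppress.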

19.
The general framework of super‐resolution in a computed tomography (CT) system is introduced. Two data acquisition approaches, applied before and after reconstruction respectively, are described. Three models, including the sinogram model, the in‐plane model, and the z‐axis model, are addressed to adapt super‐resolution to the CT system. The improved iterative back projection algorithm is used in this work. Experimental results based on simulated data, a GE performance phantom scanned by a GE LightSpeed VCT system, a patient volunteer scanned by a TOSHIBA Aquilion system, and a special experimental apparatus demonstrate that super‐resolution is effective in improving the resolution of CT images. The sinogram model is suitable for future CT systems; the in‐plane model is restricted to some special clinical diagnoses; and the z‐axis model is practicable for current general clinical CT images. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 92–101, 2015
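The plain iterative back projection (IBP) loop underlying the improved algorithm can be sketched in 1‐D: simulate the LR data from the current HR estimate, then back-project the residual. The blur-then-decimate forward model, nearest-neighbor back projection, and step size are assumptions for the demo:

```python
import numpy as np

def downsample(hr):
    """Assumed forward imaging model: blur with [0.25, 0.5, 0.25], then decimate by 2."""
    return np.convolve(hr, [0.25, 0.5, 0.25], mode="same")[::2]

def upsample(lr):
    """Simple back-projection operator: nearest-neighbor replication."""
    return np.repeat(lr, 2)

def ibp_super_resolve(lr, n_iter=50, beta=1.0):
    """Iterative back projection: repeatedly simulate the LR data from the
    current HR estimate and back-project the residual."""
    hr = upsample(lr)                       # initial HR guess
    for _ in range(n_iter):
        residual = lr - downsample(hr)
        hr = hr + beta * upsample(residual)
    return hr

x_hr = np.sin(np.linspace(0, 4 * np.pi, 64))
lr = downsample(x_hr)                       # simulated low-resolution scan
sr = ibp_super_resolve(lr)
```

At convergence the estimate exactly reproduces the measured LR data; the three CT models in the abstract differ mainly in where this forward model lives (sinogram, in-plane, or z-axis).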

20.
In this paper, we formulate a semi‐implicit time‐stepping model for multibody mechanical systems with frictional, distributed compliant contacts. Employing a polyhedral pyramid model for the friction law and a distributed, linear, viscoelastic model for the contact, we obtain mixed linear complementarity formulations for the discrete‐time, compliant contact problem. We establish the existence and finite multiplicity of solutions, demonstrating that such solutions can be computed by Lemke's algorithm. In addition, we obtain limiting results of the model as the contact stiffness tends to infinity. The limit analysis elucidates the convergence of the dynamic models with compliance to the corresponding dynamic models with rigid contacts within the computational time‐stepping framework. Finally, we report numerical simulation results with an example of a planar mechanical system with a frictional contact that is modelled using a distributed, linear viscoelastic model and Coulomb's frictional law, verifying empirically that the solution trajectories converge to those obtained by the more traditional rigid‐body dynamic model. Copyright © 2004 John Wiley & Sons, Ltd.  相似文献
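The flavor of complementarity-based time stepping can be shown on the simplest possible case: a frictionless point mass above rigid ground, where the one-dimensional LCP has a closed-form solution (the general frictional, multibody case needs Lemke's algorithm). The mass, time step, and drop height are invented for the demo:

```python
def semi_implicit_contact_step(q, v, dt, m=1.0, g=9.81):
    """One semi-implicit step for a point mass at height q above rigid ground.
    The contact impulse lam and next velocity satisfy the complementarity
    condition  0 <= lam  perp  q + dt*v_next >= 0, solved in closed form."""
    v_free = v - dt * g                   # velocity ignoring contact
    if q + dt * v_free >= 0.0:
        lam = 0.0                         # separation: no contact impulse
        v_next = v_free
    else:
        v_next = -q / dt                  # contact: end the step exactly on the ground
        lam = m * (v_next - v_free)       # the impulse is then nonnegative
    q_next = q + dt * v_next
    return q_next, v_next, lam

# drop a particle from q = 1 and step until it settles on the ground
q, v = 1.0, 0.0
for _ in range(200):
    q, v, lam = semi_implicit_contact_step(q, v, dt=0.01)
```

The two branches are exactly the two complementarity modes: either the gap stays nonnegative with zero impulse, or the impulse is positive and the end-of-step gap is zero, so penetration never occurs.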


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号