Fusion of infrared and visible images is a technology that combines information from two different sensors viewing the same scene. It provides highly effective information complementation and is widely used in monitoring systems and military applications. Because of the limited depth of field of imaging devices, visible images may fail to reveal targets that are obscured by poor lighting conditions or whose color is similar to the background. To address this problem, a simple and efficient fusion approach for infrared and visible images is proposed that extracts target details from infrared images and enhances the visible view, improving the performance of monitoring systems. The method relies on maximum and minimum operations in neutrosophic fuzzy sets. First, each image is transformed from the spatial domain into the neutrosophic domain, which is described by three membership sets: truth membership, indeterminacy membership, and falsity membership. The indeterminacy in the input data is handled to produce a comprehensive fusion result. Finally, a deneutrosophication step transforms the membership values back into the ordinary image space. Experimental results evaluate the performance of this approach and compare it with recent image fusion methods using several objective evaluation criteria. These experiments demonstrate that the proposed method achieves outstanding visual performance and excellent objective indicators.
In the image fusion field, the design of deep learning-based fusion methods is far from routine. It is invariably task-specific and requires careful consideration. The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand. Thus, devising a learnable fusion strategy is a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional handcrafted fusion strategy. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. The fusion model is learned with a novel two-stage training strategy: in the first stage, we train an auto-encoder based on an innovative nest connection (Nest) concept; in the second, the RFN is trained using the proposed loss functions. Experimental results on public data sets show that our end-to-end fusion network delivers better performance than the state-of-the-art methods in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.
Most current research on gender recognition focuses on visible facial images, which are sensitive to illumination changes. In this paper, we propose hybrid methods for gender recognition that fuse visible and thermal infrared images. First, the active appearance model is used to extract features from visible images, while local binary pattern features and several statistical temperature features are extracted from thermal infrared images. Second, feature selection is performed using the F-test statistic. Third, we propose Bayesian networks to perform explicit and implicit fusion of visible and thermal infrared image features. For explicit fusion, we propose two Bayesian networks to perform decision-level and feature-level fusion. For implicit fusion, we propose using features from one modality as privileged information to improve gender recognition by the other modality. Finally, we evaluate the proposed methods on the Natural Visible and Infrared facial Expression spontaneous database and the Equinox face database. Experimental results show that both feature-level and decision-level fusion improve gender recognition performance compared with that achieved from a single modality. The proposed implicit fusion methods successfully exploit the privileged information of one modality, thus enhancing gender recognition from the other modality.
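The F-test feature-selection step mentioned in this abstract can be sketched as a two-class one-way ANOVA F-statistic computed per feature, with the top-scoring features retained. The abstract does not specify the exact statistic or selection threshold the authors used, so the formulation and the top-k cutoff below are generic assumptions:

```python
import numpy as np

def f_scores(x, y):
    """Two-class one-way ANOVA F-statistic for each feature column.
    x: (n_samples, n_features) feature matrix; y: labels in {0, 1}."""
    g0, g1 = x[y == 0], x[y == 1]
    n0, n1 = len(g0), len(g1)
    grand = x.mean(axis=0)
    # Between-group and within-group sums of squares, per column
    ss_between = n0 * (g0.mean(0) - grand) ** 2 + n1 * (g1.mean(0) - grand) ** 2
    ss_within = ((g0 - g0.mean(0)) ** 2).sum(0) + ((g1 - g1.mean(0)) ** 2).sum(0)
    df_between, df_within = 1, n0 + n1 - 2
    return (ss_between / df_between) / (ss_within / df_within + 1e-12)

def select_top_k(x, y, k):
    """Indices of the k features with the largest F-statistic."""
    return np.argsort(f_scores(x, y))[::-1][:k]
```

A feature whose class-conditional means are well separated relative to its within-class spread receives a large F-score and survives selection, which is the usual rationale for F-test filtering before fusion.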
We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object's shape is described by a prototype template, consisting of representative contours/edges, together with a set of probabilistic deformation transformations on the template. A Bayesian scheme, based on this prior knowledge and the edge information in the input image, is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex backgrounds. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.
Multimedia Tools and Applications - Conventional saliency detection algorithms usually achieve good detection performance at the cost of high computational complexity, and most of them focus on...
A simple method for the correction of the relative shift between the visible and thermal infrared GOES sensor images is introduced. It makes use of the variance operator and the cross-correlation between two patterns. Results indicate that the proposed method is very promising. 相似文献
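The cross-correlation component of such a shift-correction scheme can be illustrated as a brute-force search for the offset that maximizes normalized cross-correlation between the two images. The search window size and the normalization below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def estimate_shift(ref, mov, max_shift=5):
    """Estimate the (row, col) shift of `mov` relative to `ref` by
    maximizing normalized cross-correlation over a small search window."""
    ref, mov = ref.astype(float), mov.astype(float)
    h, w = ref.shape
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            # Overlapping regions of ref and mov under the candidate offset
            a = ref[max(0, -dr):h - max(0, dr), max(0, -dc):w - max(0, dc)]
            b = mov[max(0, dr):h - max(0, -dr), max(0, dc):w - max(0, -dc)]
            a0, b0 = a - a.mean(), b - b.mean()
            denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-12
            score = (a0 * b0).sum() / denom
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc
```

The returned offset is positive when `mov` is displaced down/right with respect to `ref`; applying the opposite shift registers the two sensor images.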
Multimedia Tools and Applications - Conventional panorama techniques create a wide-angle image by stitching images taken from the same viewpoint. In contrast, the method proposed in this work...