Similar Documents (20 results)
1.
Li  Zhen  Li  Qilei  Wu  Wei  Wu  Zongjun  Lu  Lu  Yang  Xiaomin 《Multimedia Tools and Applications》2020,79(13-14):9019-9035

Owing to the limitations of optical sensors, it is often hard to obtain an image at the desired resolution. Image super-resolution (SR) technology can generate a high-resolution image from the corresponding low-resolution image. Recently, deep learning (DL) based SR methods have drawn much attention due to their impressive reconstruction results. However, these methods often neglect the diversity of image patches, so their reconstruction quality is limited. To fully exploit the texture variability across different image patches, we propose a universal, flexible, and effective framework that can be applied to any DL based SR method. It significantly improves SR accuracy while maintaining a comparable running time. In the proposed framework, K-means is employed to cluster image patches into different categories, and multiple CNN branches are designed for these categories to reconstruct the SR image. Each branch is weighted according to the Euclidean distance to the cluster centers. Experimental results demonstrate that applying the proposed framework significantly improves the performance of DL based SR methods.
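
To make the patch-clustering idea concrete, the following minimal sketch clusters LR patches with K-means and fuses the outputs of per-cluster branches with weights derived from the Euclidean distance to the cluster centers. The `branches` callables stand in for trained CNN branches, and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

def fuse_branch_outputs(lr_image, branches, n_clusters=3, patch_size=(8, 8)):
    """Weight the outputs of per-cluster SR branches by the inverse
    Euclidean distance between each patch and the K-means cluster centers."""
    patches = extract_patches_2d(lr_image, patch_size)          # (N, ph, pw)
    feats = patches.reshape(len(patches), -1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)

    # Distance of every patch to every cluster center -> soft weights.
    dists = np.linalg.norm(feats[:, None, :] - km.cluster_centers_[None], axis=2)
    weights = 1.0 / (dists + 1e-8)
    weights /= weights.sum(axis=1, keepdims=True)                # (N, n_clusters)

    # Each branch is a callable patch -> SR patch (hypothetical trained models).
    sr_patches = np.stack([np.stack([b(p) for p in patches]) for b in branches], axis=1)
    return (weights[:, :, None, None] * sr_patches).sum(axis=1)  # weighted fusion

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    branches = [lambda p: p, lambda p: p, lambda p: p]           # identity stand-ins
    print(fuse_branch_outputs(img, branches).shape)              # (625, 8, 8)
```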


2.
Recent research has shown that sparse-representation-based techniques can achieve state-of-the-art super-resolution image reconstruction (SRIR) results. They rely on the idea that low-resolution (LR) image patches can be regarded as downsampled versions of high-resolution (HR) patches, which are assumed to have a sparse representation with respect to a dictionary of prototype patches. To avoid a large database of training patches and to obtain more accurate recovery of HR images, this paper introduces example-aided redundant dictionary learning into single-image super-resolution reconstruction and proposes a multiple-dictionary learning scheme inspired by multitask learning. Compact redundant dictionaries are learned from samples classified by K-means clustering, so that each sample is assigned a more appropriate dictionary for image reconstruction. Compared with available SRIR methods, the proposed method has the following characteristics: (1) it introduces example-patch-aided dictionary learning into sparse-representation-based SRIR, reducing the heavy computational complexity caused by an enormous dictionary; (2) it uses multitask learning and priors from HR example images to reconstruct similar HR images, yielding better reconstruction results; and (3) it adopts offline dictionary learning and online reconstruction, making rapid reconstruction possible. Experiments on natural images show that a small set of randomly chosen raw patches from training images and a small number of atoms can produce good reconstruction results. Both the visual results and the numerical metrics demonstrate its superiority over some state-of-the-art SRIR methods.
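
A rough illustration of the offline multi-dictionary learning and online sparse reconstruction described above, using scikit-learn's K-means, mini-batch dictionary learning and orthogonal matching pursuit; dictionary sizes, the sparsity level and the random training data are placeholder assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def learn_cluster_dictionaries(train_patches, n_clusters=4, n_atoms=64):
    """Offline stage: cluster training patches and learn one compact
    dictionary per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(train_patches)
    dicts = []
    for c in range(n_clusters):
        members = train_patches[km.labels_ == c]
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                         max_iter=200).fit(members)
        dicts.append(dl.components_)                       # (n_atoms, dim)
    return km, dicts

def reconstruct(patch, km, dicts, sparsity=5):
    """Online stage: sparse-code a patch with the dictionary of its cluster."""
    c = km.predict(patch[None])[0]
    D = dicts[c].T                                         # atoms as columns
    code = orthogonal_mp(D, patch, n_nonzero_coefs=sparsity)
    return D @ code                                        # sparse approximation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.standard_normal((2000, 64))                # flattened 8x8 patches
    km, dicts = learn_cluster_dictionaries(train)
    print(reconstruct(train[0], km, dicts).shape)          # (64,)
```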

3.
Gao  Min  Han  Xian-Hua  Li  Jing  Ji  Hui  Zhang  Huaxiang  Sun  Jiande 《Multimedia Tools and Applications》2020,79(7-8):4831-4846

In recent years, CNNs have been used for single image super-resolution (SR) owing to their success in the field of computer vision. However, during recovery there are always some high-frequency components that cannot be restored from low-resolution images by existing CNN-based methods. In this paper, we propose a CNN-based image super-resolution method that uses a two-level residual learning network to learn the residual components, i.e., the high-frequency components. We use the Super-Resolution Convolutional Neural Network (SRCNN) as the network structure at each level, so that the proposed method can produce high-resolution images containing high-frequency components that cannot be obtained by existing methods. In addition, we analyze the proposed method with three kinds of residual learning networks that differ in structure and in the number of superimposed layers. In the experiments, we investigate the performance of the proposed method with the various residual learning networks and the effect of image super-resolution on the image captioning task.
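
A minimal PyTorch sketch of the two-level residual idea: each level is an SRCNN-style three-layer CNN that predicts a high-frequency residual added back to its input. Layer widths and kernel sizes follow the classic SRCNN (9-1-5) pattern but are otherwise assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SRCNNBlock(nn.Module):
    """SRCNN-style 3-layer CNN (9-1-5 kernels) used here as one residual level."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),                  nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 5, padding=2),
        )
    def forward(self, x):
        return self.body(x)

class TwoLevelResidualSR(nn.Module):
    """Two-level residual learning: each level predicts the high-frequency
    residual that is added back to its input."""
    def __init__(self, channels=1):
        super().__init__()
        self.level1 = SRCNNBlock(channels)
        self.level2 = SRCNNBlock(channels)
    def forward(self, x_up):                 # x_up: bicubic-upscaled LR image
        x1 = x_up + self.level1(x_up)        # first-level residual correction
        return x1 + self.level2(x1)          # second-level residual correction

if __name__ == "__main__":
    net = TwoLevelResidualSR()
    y = net(torch.randn(1, 1, 64, 64))
    print(y.shape)                           # torch.Size([1, 1, 64, 64])
```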


4.
Objective: To address the seams that arise in texture mapping for image-based 3D reconstruction, a multi-parameter weighted seamless texture mapping algorithm is proposed. Method: The algorithm clusters and segments the triangular mesh according to the calibration information of the images, grouping the reconstructed model into mesh patches associated with different reference images; the patches are then sorted to generate a texture image. The pixels of each texture patch are generated by a weighted fusion of the normal angles of the reconstructed vertices, the image viewpoints, the model depth, and other information. Finally, multi-resolution decomposition and blending are applied to eliminate the seams between texture patches, achieving seamless texture mapping. Results: Validation on different test datasets shows that the algorithm removes texture seams while maintaining sharpness, yields satisfactory results even in regions with large meshing errors, and supports large-scale 3D texture mapping. Conclusion: A seamless texture mapping algorithm is proposed that eliminates texture seams by constructing a smooth weighting equation to fuse multi-source information. Experimental results demonstrate its effectiveness and practicality, producing high-fidelity seamless texture mapping that can be applied to city-scale 3D reconstruction.
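
A toy sketch of the multi-parameter weighting step described above: each reference view contributes to a texture pixel with a weight fusing the angle between the surface normal and the view direction, the viewing distance (model depth) and an optional quality score, and the contributions are normalized and blended. The smooth weighting form shown here is an assumption; the multi-resolution seam removal is not reproduced.

```python
import numpy as np

def view_weight(normal, view_dir, depth, quality=1.0, eps=1e-6):
    """Assumed smooth weighting: favour head-on, close, high-quality views."""
    n = normal / (np.linalg.norm(normal) + eps)
    v = view_dir / (np.linalg.norm(view_dir) + eps)
    cos_angle = max(np.dot(n, v), 0.0)           # normal-view angle term
    return quality * cos_angle / (depth + eps)   # penalise distant views

def blend_pixel(colors, weights):
    """Normalised weighted blend of the candidate view colors for one pixel."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum() + 1e-12
    return (w[:, None] * np.asarray(colors, dtype=float)).sum(axis=0)

if __name__ == "__main__":
    normal = np.array([0.0, 0.0, 1.0])
    views = [np.array([0.0, 0.0, 1.0]), np.array([0.7, 0.0, 0.7])]
    ws = [view_weight(normal, v, depth=d) for v, d in zip(views, (2.0, 3.0))]
    print(blend_pixel([[200, 180, 160], [190, 175, 150]], ws))
```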

5.
Yan  Jianqiang  Zhang  Kaibing  Luo  Shuang  Xu  Jian  Lu  Jian  Xiong  Zenggang 《Applied Intelligence》2022,52(10):10867-10884

Learning cascade regression has been shown to be an effective strategy for further enhancing the perceptual quality of the resulting high-resolution (HR) images. However, previous cascade-regression-based SR methods have two obvious weaknesses: (1) edge structures cannot be preserved well when texture features are used to represent low-resolution (LR) images, and (2) the local manifold structures spanned by the LR-HR feature spaces cannot be revealed by the learned local linear mappings. To alleviate these problems, a novel example-regression-based super-resolution (SR) approach called learning graph-constrained cascade regressors (LGCCR) is presented, which learns a group of multi-round residual regressors in a unique way. Specifically, we improve edge preservation by synthesizing the whole HR image rather than local image patches, which facilitates extracting edge features to represent LR images. Moreover, we utilize a graph-constrained regression model to build the local linear regressors, where each local linear regressor corresponds to an anchored atom in the learned over-complete dictionary. Both quantitative and qualitative evaluations on seven benchmark databases indicate the superiority of the proposed LGCCR-based SR approach compared with other state-of-the-art SR predecessors.
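
The following simplified sketch illustrates cascaded, anchor-wise linear regression: at each round, samples are assigned to their nearest anchor (cluster center) and that anchor's ridge regressor predicts the residual that refines the current estimate. The graph-constraint term of LGCCR is omitted, and all hyper-parameters and data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class CascadeAnchoredRegressor:
    """Multi-round residual regression with per-anchor linear regressors."""
    def __init__(self, n_anchors=16, n_rounds=3, alpha=0.1):
        self.n_anchors, self.n_rounds, self.alpha = n_anchors, n_rounds, alpha
        self.stages = []                       # [(kmeans, {anchor: regressor})]

    def fit(self, X, Y):
        pred = np.zeros_like(Y)
        for _ in range(self.n_rounds):
            resid = Y - pred                   # residual left by earlier rounds
            km = KMeans(n_clusters=self.n_anchors, n_init=4).fit(X)
            regs = {}
            for a in range(self.n_anchors):
                idx = km.labels_ == a
                regs[a] = Ridge(alpha=self.alpha).fit(X[idx], resid[idx])
            self.stages.append((km, regs))
            pred = pred + self._stage_predict(km, regs, X)
        return self

    def _stage_predict(self, km, regs, X):
        labels = km.predict(X)
        out = np.zeros((len(X), regs[0].coef_.shape[0]))
        for a, reg in regs.items():
            out[labels == a] = reg.predict(X[labels == a])
        return out

    def predict(self, X):
        return sum(self._stage_predict(km, regs, X) for km, regs in self.stages)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, Y = rng.standard_normal((1000, 36)), rng.standard_normal((1000, 64))
    model = CascadeAnchoredRegressor().fit(X, Y)
    print(model.predict(X[:5]).shape)          # (5, 64)
```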


6.
The human visual system (HVS) is quite adept at swiftly detecting objects of interest in complex visual scenes. Simulating the human visual system to detect visually salient regions of an image has been one of the active topics in computer vision. Inspired by the random-sampling-based bagging ensemble learning method, an ensemble dictionary learning (EDL) framework for saliency detection is proposed in this paper. Instead of learning a universal dictionary, which requires a large number of training samples collected from natural images, multiple over-complete dictionaries are independently learned from small portions of randomly selected samples drawn from the input image itself, resulting in more flexible multiple sparse representations for each image patch. To increase the distinctness of salient patches from the background region, we present a reconstruction-residual-based method for dictionary atom reduction. Finally, the multiple probabilistic saliency responses obtained for each patch are combined from a probabilistic perspective to achieve better predictive performance on salient regions. Experimental results on several open test datasets and natural images demonstrate that the proposed EDL framework for saliency detection is highly competitive with existing state-of-the-art algorithms.
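
A compact sketch of the bagging-style ensemble idea: several small dictionaries are learned from random subsets of the image's own patches, the reconstruction residual of each patch under each dictionary serves as a saliency score, and the per-dictionary scores are averaged. The atom-reduction step and the paper's probabilistic combination are simplified away; parameter values are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def ensemble_saliency(patches, n_dicts=5, subset=200, n_atoms=32, seed=0):
    """Average reconstruction residuals over several dictionaries learned
    from random subsets of the input image's own patches."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_dicts):
        idx = rng.choice(len(patches), size=subset, replace=False)
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                         max_iter=100).fit(patches[idx])
        codes = dl.transform(patches)                     # sparse codes
        recon = codes @ dl.components_
        residual = np.linalg.norm(patches - recon, axis=1)
        scores.append(residual)                           # high residual = salient
    return np.mean(scores, axis=0)

if __name__ == "__main__":
    patches = np.random.rand(1000, 64)                    # flattened 8x8 patches
    print(ensemble_saliency(patches).shape)               # (1000,)
```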

7.
Objective: Real-world textures are diverse in type, variable in appearance, and complex in structure, which directly affects the accuracy of texture image segmentation. Traditional unsupervised texture segmentation algorithms are limited and cannot extract stable texture features well. This paper proposes a texture feature extraction algorithm for complex texture images based on Gabor filters and an improved LTP (local ternary pattern) operator. Method: Gabor filters and an extended LTP operator are used to extract, respectively, the features of identical or similar texture patterns and the features that discriminate between different textures; these features are then incorporated into a level-set framework to segment the texture image. Results: Experiments show that for images with large variations in texture orientation and scale, texture images with complex backgrounds, and images with weak texture patterns, the overall segmentation results of the proposed method are clearly better than those of traditional texture segmentation methods based on Gabor filters, the structure tensor, the extended structure tensor, and local similarity factors. The method also outperforms an LTP-based method. In quantitative terms, compared with various unsupervised texture segmentation methods, the proposed method reaches a segmentation accuracy above 97% on typical texture images, higher than the other methods. Conclusion: An unsupervised multi-feature texture segmentation method combining Gabor filters and an extended LTP operator is proposed. It extracts the features of similar texture patterns and the discriminative features between textures, these features fuse well into the level-set framework, and it achieves good segmentation results on real-world complex texture images.
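
A generic sketch of the two feature channels mentioned above: magnitude responses of a small Gabor filter bank for recurring texture patterns, and a simple local ternary pattern (LTP) code for texture differences. This uses a standard LTP rather than the paper's extended operator, and the level-set segmentation stage is not shown.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.3), n_orient=4):
    """Magnitude responses of a small Gabor filter bank (frequencies x orientations)."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.sqrt(real ** 2 + imag ** 2))
    return np.stack(feats, axis=-1)

def ltp_codes(image, t=0.05):
    """Standard LTP: split the ternary pattern into upper/lower binary maps."""
    padded = np.pad(image, 1, mode="edge")
    center = image
    upper = np.zeros_like(image)
    lower = np.zeros_like(image)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = padded[1 + dy:1 + dy + image.shape[0],
                       1 + dx:1 + dx + image.shape[1]]
        upper += (neigh > center + t) * (1 << bit)
        lower += (neigh < center - t) * (1 << bit)
    return upper, lower

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    print(gabor_features(img).shape, ltp_codes(img)[0].shape)   # (64,64,8) (64,64)
```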

8.
Jia  Lingyao  Shi  Xueyu  Sun  Qiule  Tang  Xingqiang  Li  Peihua 《Applied Intelligence》2022,52(10):11273-11287

Iris recognition in less constrained environments is challenging because the images taken in such environments contain severe noise factors. How to represent iris texture for accurate and robust recognition in these environments is still an open issue. To address this problem, this paper proposes a novel convolutional network (ConvNet) for effective iris texture representation. The key component of the proposed ConvNet is an interaction block that computes an affinity matrix among all pairs of high-level features to learn second-order relationships. The interaction block can model relationships between neighboring and long-range features and is architecture-agnostic, making it suitable for different deep network architectures. To further improve the robustness of the iris representation, we encode the affinity matrix based on an ordinal measure. In addition, we develop a mask network corresponding to the feature learning network, which excludes the noise factors during iris matching. We perform thorough ablation studies to evaluate the effectiveness of the proposed networks. Experiments show that the proposed networks outperform state-of-the-art (SOTA) methods, achieving false reject rates (FRR) of 5.49%, 10.41% and 5.80% at a false accept rate (FAR) of 10^-6 on ND-IRIS-0405, CASIA-IrisV4-Thousand and CASIA-IrisV4-Lamp, respectively. The improvements in equal error rate (EER) over the SOTA methods are 0.41%, 0.72% and 0.40%, respectively.
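
A generic PyTorch sketch of the interaction-block idea: spatial feature vectors are embedded, an affinity matrix over all pairs of positions is computed, and the affinity is binarised with a simple ordinal (threshold-by-row-mean) measure. The embedding dimension and the ordinal rule are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class InteractionBlock(nn.Module):
    """Second-order interaction: pairwise affinity of spatial feature vectors,
    followed by an ordinal binarisation for robustness."""
    def __init__(self, in_channels, embed_dim=64):
        super().__init__()
        self.embed = nn.Conv2d(in_channels, embed_dim, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        f = self.embed(x).flatten(2)           # (B, D, H*W)
        f = nn.functional.normalize(f, dim=1)
        affinity = torch.bmm(f.transpose(1, 2), f)                 # (B, HW, HW)
        ordinal = (affinity > affinity.mean(dim=-1, keepdim=True)).float()
        return affinity, ordinal

if __name__ == "__main__":
    block = InteractionBlock(in_channels=128)
    aff, code = block(torch.randn(2, 128, 8, 8))
    print(aff.shape, code.shape)               # (2, 64, 64) twice
```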


9.
Dual-sparsity single image super-resolution in the wavelet domain
Objective: Sparse-representation-based single-image super-resolution has been widely studied over the past few years; this paper proposes a dual-sparsity image super-resolution method in the wavelet domain. Method: Exploiting the sparsity of the high-frequency image in the wavelet domain and the sparsity of the representation coefficients of high-frequency image patches under a spatially redundant dictionary, a dual-sparsity super-resolution model is established to recover the detail coefficients of the high-resolution image. Then, using the multi-scale property of wavelets and the assumption that the low-resolution image can serve as an approximation of the low-frequency coefficients of the high-resolution image, the super-resolved image is reconstructed by a two-level inverse wavelet transform from the wavelet decomposition of the low-resolution image and the estimated high-frequency coefficients of the high-resolution image. Results: Extensive experiments show that the dual-sparsity method not only recovers local textures and edges well but also performs well in the super-resolution of noisy images. Conclusion: Compared with popular sparse-representation-based super-resolution methods, the dual-sparsity method achieves better super-resolution of noisy images with reduced computational complexity.
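
A minimal sketch of the reconstruction step only, using PyWavelets: the LR image is treated as the approximation subband of the HR image's wavelet decomposition, and the high-frequency subbands (assumed to come from the dual-sparsity model; here zero placeholders) are combined by an inverse 2-D wavelet transform. A single inverse level is shown for brevity instead of the two-level transform used in the paper.

```python
import numpy as np
import pywt

def wavelet_sr_reconstruct(lr_image, detail_coeffs=None, wavelet="haar"):
    """Inverse DWT with the LR image as the approximation subband and
    estimated (here placeholder) high-frequency subbands."""
    if detail_coeffs is None:
        zeros = np.zeros_like(lr_image)            # stand-in for estimated
        detail_coeffs = (zeros, zeros, zeros)      # (cH, cV, cD) subbands
    return pywt.idwt2((lr_image, detail_coeffs), wavelet)

if __name__ == "__main__":
    lr = np.random.rand(32, 32)
    hr = wavelet_sr_reconstruct(lr)                # one inverse level: 64x64
    print(hr.shape)
```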

10.
Sparse Modeling of Textures

11.
12.
Chen  Yanfang  Wang  Liwei  Li  Chuankun  Hou  Yonghong  Li  Wanqing 《Multimedia Tools and Applications》2020,79(3-4):1707-1725

With the advance of deep learning, deep-learning-based action recognition has become an important research topic in computer vision. A skeleton sequence is often encoded into an image, such as a Joint Trajectory Map (JTM), to better exploit Convolutional Neural Networks (ConvNets). However, this encoding cannot effectively capture long-range temporal information. To solve this problem, this paper presents an effective method to encode spatial-temporal information from skeleton sequences into color texture images, referred to as Temporal Pyramid Skeleton Motion Maps (TPSMMs), and ConvNets are applied to capture discriminative features from the TPSMMs for human action recognition. The TPSMMs capture not only short-term temporal information but also the long-range dynamics over the duration of an action. The proposed method has been verified and achieves state-of-the-art results on the widely used UTD-MHAD, MSRC-12 Kinect Gesture and SYSU-3D datasets.


13.

In this paper we propose a distributed locality sensitive hashing (LSH) based framework for image super-resolution that exploits the computational and storage efficiency of the cloud. Nowadays huge amounts of multimedia data are available on the cloud and can be utilized through a store-anywhere, access-anywhere model, and super-resolution is required for consumer electronics display devices for various reasons. The proposed framework exploits image correlation for super-resolution using LSH for manifold learning. Manifold learning benefits image super-resolution but is in turn a highly time-complex operation, because approximate nearest neighbors must be found among trillions of image patches for the locally linear embedding (LLE) operation. In our approach this is mitigated by a distributed framework that internally uses hash tables to map patches in the target image from a database of internet picture collections. The proposed super-resolution framework provides promising results in comparison with existing approaches.
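
A minimal random-hyperplane LSH sketch of the patch lookup described above: database patches are hashed into buckets offline, and a query LR patch searches only its own bucket for approximate nearest neighbours, which would then feed the LLE-style combination. The distributed/cloud machinery is omitted and the bit width is an arbitrary assumption.

```python
import numpy as np

class PatchLSH:
    """Random-hyperplane LSH over flattened image patches."""
    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def index(self, patches):
        for i, p in enumerate(patches):
            self.buckets.setdefault(self._key(p), []).append(i)

    def query(self, patch, patches, k=5):
        cand = self.buckets.get(self._key(patch), [])
        if not cand:
            return []
        d = np.linalg.norm(patches[cand] - patch, axis=1)
        return [cand[i] for i in np.argsort(d)[:k]]       # approximate k-NN

if __name__ == "__main__":
    db = np.random.rand(5000, 64)                         # flattened patch database
    lsh = PatchLSH(dim=64)
    lsh.index(db)
    print(lsh.query(db[0] + 0.01, db))                    # indices of near neighbours
```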


14.
High resolution (HR) infrared (IR) images play an important role in many areas. However, it is difficult to obtain images at a desired resolution level because of hardware limitations and the imaging environment, so improving the spatial resolution of infrared images has become increasingly urgent. Methods based on sparse coding have been successfully used in single-image super-resolution (SR) reconstruction, but existing sparse-representation-based SR methods for IR images usually encounter three problems. First, IR images always lack detailed information, which leads to unsatisfying reconstruction results with conventional methods. Second, existing dictionary learning methods for SR aim at learning a universal and over-complete dictionary to represent various image structures, yet a single dictionary cannot capture all of the many different structural patterns in an image. Finally, the optimization for dictionary learning and image reconstruction requires highly intensive computation, which restricts practical application in real-time systems. To overcome these problems, we propose a fast IR image SR scheme. First, we integrate information from visible (VI) images and IR images to improve the resolution of IR images, because images acquired by different sensors provide complementary information about the same scene. Second, we divide the training patches into several clusters and learn multiple dictionaries, one per cluster, so that each patch is provided with a more accurate dictionary. Finally, we propose a method of Soft-assignment based Multiple Regression (SMR). SMR reconstructs a high resolution (HR) patch using the dictionaries corresponding to its K nearest training-patch clusters. The method has low computational complexity and is readily suitable for real-time processing applications. Numerous experiments validate that this scheme gives better results in terms of quantitative evaluation and visual perception than many state-of-the-art methods, while maintaining relatively low time complexity. Since the main computation of this scheme is matrix multiplication, it can easily be implemented in an FPGA system.
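
A simplified sketch of the soft-assignment multiple regression (SMR) step: one LR-to-HR linear regressor is trained per patch cluster offline, and a test patch is reconstructed by the regressors of its K nearest clusters, weighted by a soft assignment derived from the distances to the cluster centers. The visible-image fusion stage and the paper's exact regression form are not reproduced; data and parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def train_smr(lr_feats, hr_patches, n_clusters=8, alpha=0.1):
    """Offline: cluster LR features and fit one ridge regressor per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=4).fit(lr_feats)
    regs = [Ridge(alpha=alpha).fit(lr_feats[km.labels_ == c],
                                   hr_patches[km.labels_ == c])
            for c in range(n_clusters)]
    return km, regs

def smr_predict(lr_feat, km, regs, k=3, beta=1.0):
    """Online: soft-assign the patch to its K nearest clusters and blend
    the per-cluster regression outputs."""
    d = np.linalg.norm(km.cluster_centers_ - lr_feat, axis=1)
    nearest = np.argsort(d)[:k]
    w = np.exp(-beta * d[nearest])
    w /= w.sum()
    preds = np.stack([regs[c].predict(lr_feat[None])[0] for c in nearest])
    return (w[:, None] * preds).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    lr, hr = rng.standard_normal((1500, 36)), rng.standard_normal((1500, 64))
    km, regs = train_smr(lr, hr)
    print(smr_predict(lr[0], km, regs).shape)       # (64,)
```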

15.
16.
Li  Si-Qi  Gao  Yue  Dai  Qiong-Hai 《International Journal of Automation and Computing》2022,19(4):307-318

Seeing through dense occlusions and reconstructing scene images is an important but challenging task. Traditional frame-based image de-occlusion methods may lead to fatal errors when facing extremely dense occlusions due to the lack of valid information available from the limited input occluded frames. Event cameras are bio-inspired vision sensors that record the brightness changes at each pixel asynchronously with high temporal resolution. However, synthesizing images solely from event streams is ill-posed since only the brightness changes are recorded in the event stream, and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on the spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently. A comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that our proposed method achieves state-of-the-art performance.


17.
The goal of example-based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural looking visual elements. While existing non-parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low-resolution exemplars. Real-world high resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general-purpose and fully automatic self-tuning non-parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self-tune its various parameters and weights and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near-regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high-resolution texture exemplars.

18.
Objective: Fusing a hyperspectral image with a multispectral image yields a spectral image with both high spatial resolution and high spectral resolution, improving the quality of spectral imagery. Existing deep-learning-based fusion methods perform well but lack joint exploration of the long-range spectral and spatial dependencies in the features of the multi-source images. To exploit spectral correlation and spatial similarity effectively, a Transformer network with joint self-attention is proposed for multispectral and hyperspectral image fusion and super-resolution. Method: First, a joint self-attention module extracts spectral-correlation features of the hyperspectral image via a spectral attention mechanism and spatial-similarity features of the multispectral image via a spatial attention mechanism; the resulting joint similarity features guide the fusion of the hyperspectral and multispectral images. The fused features are then fed into a residual Transformer network based on sliding windows to explore long-range dependencies and learn deep fusion priors. Finally, a convolutional layer maps the features to a hyperspectral image with high spatial resolution. Results: Experiments at different sampling ratios on the CAVE and Harvard spectral datasets show that the proposed method outperforms the compared methods in both quantitative metrics and visual quality. Compared with the second-best method, EDBIN (enhanced deep blind iterative network), the proposed method improves PSNR by 0.5 dB on the CAVE dataset and 0.6 dB on the Harvard dataset. Conclusion: The proposed method fuses spectral and spatial information more effectively and significantly improves the quality of hyperspectral fusion super-resolution images.
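
A toy PyTorch sketch of the joint self-attention idea: spectral self-attention over the channels of the hyperspectral features and spatial self-attention over the positions of the multispectral features, producing the joint guidance features that would drive the fusion. Layer sizes, scaling and shapes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class JointSelfAttention(nn.Module):
    """Spectral attention on HSI features plus spatial attention on MSI features."""
    def __init__(self, hsi_bands, msi_bands, dim=32):
        super().__init__()
        self.hsi_proj = nn.Conv2d(hsi_bands, dim, 1)
        self.msi_proj = nn.Conv2d(msi_bands, dim, 1)

    def forward(self, hsi, msi):                # hsi: (B,Ch,h,w)  msi: (B,Cm,H,W)
        fh = self.hsi_proj(hsi).flatten(2)      # (B, D, h*w)
        fm = self.msi_proj(msi).flatten(2)      # (B, D, H*W)

        # Spectral attention: channel-by-channel affinity of the HSI features.
        spec = torch.softmax(fh @ fh.transpose(1, 2) / fh.shape[-1] ** 0.5, dim=-1)
        fh = spec @ fh                          # (B, D, h*w)

        # Spatial attention: position-by-position affinity of the MSI features.
        spat = torch.softmax(fm.transpose(1, 2) @ fm / fm.shape[1] ** 0.5, dim=-1)
        fm = (spat @ fm.transpose(1, 2)).transpose(1, 2)   # (B, D, H*W)
        return fh, fm                           # joint guidance features

if __name__ == "__main__":
    block = JointSelfAttention(hsi_bands=31, msi_bands=3)
    fh, fm = block(torch.randn(1, 31, 16, 16), torch.randn(1, 3, 64, 64))
    print(fh.shape, fm.shape)                   # (1, 32, 256) and (1, 32, 4096)
```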

19.
Objective: As the spatial resolution of remote sensing images increases, the spatial texture of the same ground object varies more widely and object features become more complex and diverse, so traditional change detection methods can no longer meet the requirements. To improve change detection accuracy for high-resolution remote sensing images, and especially to make effective decisions in regions of the same object with large texture differences, a change detection method combining deep learning and superpixel segmentation is proposed. Method: The limited labeled data are cut into slices as training samples, a multi-slice-scale feature fusion network is designed and trained on these samples, and a preliminary change detection result is obtained for the test image. A superpixel segmentation algorithm then partitions the test image into many non-overlapping homogeneous regions, and the segmentation is overlaid with the preliminary result to obtain a change detection result with segmentation labels. Finally, a majority-voting scheme counts the change status of the superpixels in the labeled result to produce the final change detection map. Results: The proposed multi-slice-scale feature fusion convolutional network outperforms single-slice-scale convolutional neural network models on the Guangdong and Hong Kong datasets, and combined with superpixels it reaches Kappa coefficients of 80% and 82%, respectively, 6% and 8% higher than the corresponding non-superpixel algorithm; it also outperforms comparison methods such as long short-term memory networks and deep belief networks on both test sets. Conclusion: The proposed convolutional-network-based change detection method fully learns the spatial information and other effective features of the slices and avoids overfitting; fusing multi-scale slice features outperforms training the network at a single slice scale; and combining deep learning with superpixel segmentation shifts the detection unit from slices to superpixels, enabling effective decisions in regions where the same object has different spectra and improving change detection accuracy.
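
A small sketch of the superpixel voting step described above: the pixel-wise change map predicted by the network is regularised by majority voting inside each SLIC superpixel, so the detection unit becomes the superpixel rather than the slice. SLIC parameters and the toy inputs are assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_vote(image, change_map, n_segments=500):
    """Majority vote of the binary change map inside each SLIC superpixel."""
    segments = slic(image, n_segments=n_segments, compactness=10,
                    start_label=0, channel_axis=-1)
    refined = np.zeros_like(change_map)
    for s in np.unique(segments):
        mask = segments == s
        refined[mask] = 1 if change_map[mask].mean() > 0.5 else 0
    return refined

if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)                   # test image (toy)
    raw = (np.random.rand(128, 128) > 0.7).astype(int)  # CNN change map (toy)
    print(superpixel_vote(img, raw).sum())
```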

20.

In machine learning, image classification accuracy generally depends on the image segmentation and feature extraction methods, as well as on the extracted features and their quality. The main focus of this paper is to determine the defective area of mangoes using an image segmentation algorithm in order to improve classification accuracy. An Enhanced Fuzzy based K-means clustering algorithm is designed to increase the efficiency of segmentation, and the proposed segmentation method is compared with K-means and Fuzzy C-means clustering. Geometric, texture and colour based features are used in feature extraction, and feature selection is performed by Maximally Correlated Principal Component Analysis (MCPCA). Finally, in the classification step, severely affected portions are analyzed by a Backpropagation Based Discriminant Classifier (BBDC), which is compared with BPNN and Naive Bayes classifiers. The images are classified into three classes: Class A - good quality mango, Class B - average quality mango, and Class C - poor quality mango. Evaluation of the proposed model on various defective and healthy mango images shows that it achieves the highest accuracy compared with existing methods.

