Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Combining block-based saliency-guided random sampling with a projected Landweber compressed-sensing reconstruction algorithm, a new method for image compressed-sensing coding and reconstruction is proposed. At the encoder, image saliency information is used to allocate different numbers of measurements to different blocks, making the measurement dimension adaptive; at the decoder, different directional transforms are applied within the projected Landweber reconstruction algorithm to obtain the reconstructed image. Compared with similar methods under the same number of measurements, both the peak signal-to-noise ratio and the subjective visual quality of the reconstructed images are greatly improved.
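For illustration only, a minimal NumPy sketch of a projected Landweber reconstruction loop of the kind referred to above; the function name, step size, and hard-thresholding projection are assumptions, and the paper's saliency-adaptive measurement allocation and directional transforms are not reproduced.

```python
import numpy as np

def projected_landweber(y, phi, n_iter=100, step=1.0, threshold=0.05):
    """Recover x from measurements y = phi @ x by Landweber (gradient)
    updates followed by a hard-thresholding projection (sparsity prior).
    The step size must be small enough (roughly < 2 / ||phi||^2) to converge."""
    x = phi.T @ y                                   # back-projection as the initial estimate
    for _ in range(n_iter):
        x = x + step * (phi.T @ (y - phi @ x))      # Landweber update toward the data
        x = np.where(np.abs(x) >= threshold, x, 0)  # project onto (approximately) sparse signals
    return x
```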

2.
Liu  Yizhi  Gu  Xiaoyan  Huang  Lei  Ouyang  Junlin  Liao  Miao  Wu  Liangran 《Multimedia Tools and Applications》2020,79(7-8):4729-4745
Multimedia Tools and Applications - Content-based adult video detection plays an important role in preventing pornography. However, existing methods usually rely on a single modality and seldom focus...

3.
Location information, i.e., the position of content in the image plane, is considered an important supplement in saliency detection. The effect of location information is usually evaluated by integrating it with selected saliency detection methods and measuring the improvement, which is highly influenced by the choice of saliency methods. In this paper, we provide a direct and quantitative analysis of the importance of location information for saliency detection in natural images. We first analyze the relationship between content location and saliency distribution on four public image datasets, and validate the distribution by simply treating a location-based Gaussian distribution as the saliency map. To further validate the effectiveness of location information, we propose a location-based saliency detection approach, which initializes saliency maps entirely from location information and propagates saliency among patches based on color similarity, and we discuss the robustness of location information's effect. The experimental results show that location information plays a positive role in saliency detection, and the proposed method can outperform most state-of-the-art saliency detection methods and handle natural images with different object positions and multiple salient objects.
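A minimal sketch of the location-only baseline described above: a centre-weighted 2-D Gaussian used directly as a saliency map. The sigma_ratio parameter is an assumption, and the colour-similarity propagation stage of the proposed approach is not shown.

```python
import numpy as np

def center_prior_saliency(height, width, sigma_ratio=0.3):
    """Location-only saliency: a 2-D Gaussian centred on the image centre,
    normalised to [0, 1] and used directly as a baseline saliency map."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma_y, sigma_x = sigma_ratio * height, sigma_ratio * width
    sal = np.exp(-(((ys - cy) ** 2) / (2 * sigma_y ** 2) +
                   ((xs - cx) ** 2) / (2 * sigma_x ** 2)))
    return sal / sal.max()
```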

4.
Text data present in images and video contain useful information for automatic annotation, indexing, and structuring of images. Extraction of this information involves detection, localization, tracking, extraction, enhancement, and recognition of the text from a given image. However, variations of text due to differences in size, style, orientation, and alignment, as well as low image contrast and complex backgrounds, make the problem of automatic text extraction extremely challenging. While comprehensive surveys of related problems such as face detection, document analysis, and image and video indexing can be found, the problem of text information extraction is not well surveyed. A large number of techniques have been proposed to address this problem, and the purpose of this paper is to classify and review these algorithms, discuss benchmark data and performance evaluation, and point out promising directions for future research.

5.
The requirements of spectral and spatial quality differ from region to region in remote sensing images. The employment of saliency in pan-sharpening methods is an effective approach to fulfil such demands. Common saliency feature analysis, which considers the mutual information between multiple images, can ensure consistency and accuracy when assigning saliency to regions in different images. Thus, we propose a pan-sharpening method based on common saliency feature analysis and multiscale spatial information extraction for multiple remote sensing images. First, we extract spatial information using a guided filter and accurate intensity-component estimation. Then, a common saliency feature analysis method based on global contrast calculation and intensity feature extraction is designed to obtain a preliminary pixel-wise saliency estimation, which is subsequently integrated with texture-feature-based compensation to generate adaptive injection gains. The introduction of common saliency feature analysis guarantees that the same pan-sharpening strategy will be applied to regions with similar features in multiple images. Finally, the injection gains are used to implement the detail injection. Our proposal satisfies diverse needs for spatial and spectral information across different regions within a single image and guarantees that regions with similar features in different images are treated consistently in the process of pan-sharpening. Both visual and quantitative results demonstrate that our method performs better in guaranteeing consistency across multiple images, improving spatial quality, and preserving spectral fidelity.
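For illustration, a minimal sketch of the generic detail-injection step named above, with all inputs assumed to be co-registered on the panchromatic grid; the guided-filter intensity estimation and the saliency-derived injection gains themselves are not reproduced here.

```python
import numpy as np

def inject_details(ms_bands, pan, intensity, gains):
    """Sharpen each multispectral band by injecting the PAN spatial detail
    (PAN minus an estimated intensity component), weighted by a per-pixel
    injection gain. All arrays are assumed to share the PAN pixel grid."""
    detail = pan - intensity                              # high-frequency spatial detail
    return [band + g * detail for band, g in zip(ms_bands, gains)]
```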

6.
Multimedia Tools and Applications - The segmentation of moving objects becomes challenging when the object motion is small, the shape of the object changes, and there is global background motion in...

7.
Zhang  Xufan  Wang  Yong  Yan  Jun  Chen  Zhenxing  Wang  Dianhong 《Multimedia Tools and Applications》2020,79(25-26):17331-17348
Multimedia Tools and Applications - Conventional saliency detection algorithms usually achieve good detection performance at the cost of high computational complexity, and most of them focus on...

8.
Niu  Yuzhen  Lin  Lening  Chen  Yuzhong  Ke  Lingling 《Multimedia Tools and Applications》2017,76(24):26329-26353
Multimedia Tools and Applications - Visual saliency detection is useful in carrying out image compression, image segmentation, image retrieval, and other image processing applications. The majority of...

9.
袁野  田中旭 《计算机应用》2012,32(11):3182-3184
To meet the low-cost requirements of video post-processing chips, a new edge-preserving image magnification algorithm that needs only two line buffers is proposed. The method uses representative points instead of the interpolation point itself to determine the correlation direction. Once the correlation direction is found, four neighbouring points and their corresponding positions along that direction are located and used for interpolation. Experimental results show that the algorithm enlarges images while removing edge blurring and jagging artifacts, and it can be applied in low-cost digital video post-processing chips.
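As a point of reference only, a plain (non edge-directed) 2x bilinear upscaler in NumPy, i.e. the kind of baseline the edge-directed scheme above improves on; the representative-point direction search and two-line-buffer organisation of the proposed algorithm are not modelled here.

```python
import numpy as np

def bilinear_upscale_2x(img):
    """Plain 2x bilinear upscaling of a single-channel image."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, 2 * h)          # target row coordinates in source space
    xs = np.linspace(0, w - 1, 2 * w)          # target column coordinates in source space
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy
```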

10.
Multimedia Tools and Applications - The human visual system is endowed with an innate capability of distinguishing the salient regions of an image. It does so even in the presence of noise and other...

11.
This paper presents a new attention model for detecting visual saliency in news video. In the proposed model, bottom-up (low-level) features and top-down (high-level) factors are used to compute bottom-up saliency and top-down saliency, respectively. Then, the two saliency maps are fused after a normalization operation. In the bottom-up attention model, we use the quaternion discrete cosine transform at multiple scales and in multiple color spaces to detect static saliency. Meanwhile, multi-scale local motion and global motion conspicuity maps are computed and integrated into a motion saliency map. To effectively suppress background motion noise, a simple histogram of average optical flow is adopted to calculate motion contrast. Then, the bottom-up saliency map is obtained by combining the static and motion saliency maps. In the top-down attention model, we utilize high-level stimuli in news video, such as faces, persons, cars, speakers, and flashes, to generate the top-down saliency map. The proposed method has been extensively tested using three popular evaluation metrics over two widely used eye-tracking datasets. Experimental results demonstrate the effectiveness of our method in saliency detection of news videos compared to several state-of-the-art methods.
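A minimal sketch of the final fusion step described above: both maps are normalised and combined with fixed weights. The weights and the min-max normalisation are assumptions; the quaternion DCT static saliency, motion saliency, and top-down cues that would produce the inputs are not implemented here.

```python
import numpy as np

def normalize(sal):
    """Rescale a saliency map to the [0, 1] range."""
    sal = sal.astype(np.float64) - sal.min()
    return sal / (sal.max() + 1e-8)

def fuse_saliency(bottom_up_sal, top_down_sal, w_bu=0.5, w_td=0.5):
    """Fuse bottom-up and top-down saliency maps after normalization."""
    return normalize(w_bu * normalize(bottom_up_sal) +
                     w_td * normalize(top_down_sal))
```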

12.
This article addresses the use of stereoscopic images in teleoperated tasks. Depth perception is a key point in the ability to skillfully manipulate in remote environments. Displaying three‐dimensional images is a complex process but it is possible to design a teleoperation interface that displays stereoscopic images to assist in manipulation tasks. The appropriate interface for image viewing must be chosen and the stereoscopic video cameras must be calibrated so that the image disparity is natural for the observer. Attention is given to the calculation of stereoscopic image disparity, and suggestions are made as to the limits within which adequate stereoscopic image perception takes place. The authors have designed equipment for image visualization in teleoperated systems. These devices are described and their performance evaluated. Finally, an architecture for the transmission of stereoscopic video images via network is proposed, which in the future will substitute for current image processing devices. © 2005 Wiley Periodicals, Inc.
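As a simple worked example of the disparity calculation mentioned above, assuming an idealized parallel stereo rig; the function name and numeric values are illustrative, not the authors' design limits.

```python
def pixel_disparity(baseline_m, focal_px, depth_m):
    """Horizontal disparity in pixels for a point at distance depth_m,
    seen by a parallel stereo rig with baseline baseline_m (metres) and
    focal length focal_px (pixels): d = f * b / Z."""
    return focal_px * baseline_m / depth_m

# Example: a 65 mm baseline and 1000 px focal length give a 65 px disparity
# for an object 1 m away; much larger disparities may be hard to fuse.
print(pixel_disparity(0.065, 1000.0, 1.0))   # -> 65.0
```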

13.
Pedestrian detection is a fundamental problem in video surveillance and has achieved great progress in recent years. However, the performance of a generic pedestrian detector trained on public datasets drops significantly when it is applied to specific scenes, owing to the difference between the source training samples and the pedestrian samples in the target scenes. We propose a novel transfer learning framework, which automatically transfers a generic detector to a scene-specific pedestrian detector without manually labeling training samples from the target scenes. In our method, initial detection results are obtained, and several cues are used to select from them target templates whose labels we are confident about. A Gaussian mixture model (GMM) is used to obtain the motion areas in each video frame and additional target samples. The relevancy between target samples and target templates and the relevancy between source samples and target templates are estimated by sparse coding and then used to compute weights for the source and target samples. Saliency detection is an essential step before computing the relevancy between source samples and target templates, as it eliminates the interference of non-salient regions. We demonstrate the effectiveness of our scene-specific detector on a public dataset and compare it with the generic detector. Detection rates improve significantly, and the result is comparable to a detector trained on a large number of manually labeled samples from the target scene.
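A minimal sketch of the GMM-based motion-area step, using OpenCV's MOG2 background subtractor as one concrete realisation; the parameter values are assumed, and the sparse-coding relevancy estimation and sample weighting of the framework are not shown.

```python
import cv2

# Gaussian-mixture background model; parameter values are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

def motion_mask(frame):
    """Return a binary foreground (motion) mask for one video frame."""
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels with the value 127; keep only confident foreground.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    return mask
```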

14.
In spite of the ever-increasing prevalence of low-cost, color printing devices, gray-scale printers remain in widespread use. Authors producing documents with color images for any venue must account for the possibility that the color images might be reduced to gray scale before they are viewed. Because conversion to gray scale reduces the number of color dimensions, some loss of visual information is generally unavoidable. Ideally, we can restrict this loss to features that vary minimally within the color image. Nevertheless, with standard procedures in widespread use, this objective is not often achieved, and important image detail is often lost. Consequently, algorithms that convert color images to gray scale in a way that preserves information remain important. Human observers with color-deficient vision may experience the same problem, in that they may perceive distinct colors to be indistinguishable and thus lose image detail. The same strategy that is used in converting color images to gray scale provides a method for recoloring the images to deliver increased information content to such observers.
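One simple variance-preserving strategy of the kind argued for above, sketched with NumPy: project the RGB pixels onto their first principal colour direction so that the colour axis with the largest variation survives the reduction. This is an illustrative alternative to fixed luminance weights, not the authors' algorithm.

```python
import numpy as np

def pca_decolorize(rgb):
    """Convert an RGB image (H x W x 3) to grey by projecting its pixels onto
    their first principal colour direction, preserving the colour axis with
    the largest variance."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)                        # centre the colour cloud
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)
    gray = pixels @ vt[0]                                # first principal direction
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    return gray.reshape(rgb.shape[:2])
```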

15.
A human motion retargeting method*   Total citations: 1  Self-citations: 0  Citations by others: 1
The concept of the lower-limb vector is introduced, and an analysis of human motion shows that this vector preserves the main characteristics of the motion. Based on the invariance of the lower-limb vector, a human motion retargeting method is proposed to improve the reusability of motion-capture data. The method targets retargeting of lower-limb motion: it can retarget motion data from the original skeleton model to a target skeleton model with different bone-length proportions while preserving the main characteristics of the original motion. Experimental results show that the method achieves good retargeting quality with fast computation.
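A minimal sketch of one reading of the invariance idea above: keep the direction of the lower-limb (hip-to-foot) vector and rescale its length by the leg-length ratio of the target skeleton. The function and parameter names are hypothetical, and inverse-kinematics solving for the intermediate joints is omitted.

```python
import numpy as np

def retarget_foot_position(hip_pos, foot_pos, target_leg_len, source_leg_len):
    """Return the target foot position obtained by keeping the direction of
    the source lower-limb vector and scaling its length by the ratio of the
    target and source leg lengths."""
    hip_pos, foot_pos = np.asarray(hip_pos), np.asarray(foot_pos)
    limb_vec = foot_pos - hip_pos                        # hip-to-foot vector
    return hip_pos + limb_vec * (target_leg_len / source_leg_len)
```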

16.
In this paper, we propose a method to jointly transfer the color and detail of multiple source images to a target video or image. Our method is based on a probabilistic segmentation scheme using a Gaussian mixture model (GMM) to divide each source image, as well as the target video frames or image, into soft regions and determine the relevant source regions for each target region. For detail transfer, we first decompose each source image, as well as the target video frames or image, into base and detail components. Then histogram matching is performed on the detail components to transfer the detail of matching regions from the source images to the target. We propose a unified framework to perform both color and detail transfer in an integrated manner. We also propose a method to maintain consistency for video targets, by enforcing consistent region segmentations for consecutive video frames using GMM-based parameter propagation and adaptive scene change detection. Experimental results demonstrate that our method automatically produces consistent color and detail transferred videos and images from a set of source images.
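For illustration, a NumPy sketch of the two generic operations the pipeline above relies on: a base/detail decomposition (a Gaussian blur is assumed here, not necessarily the paper's filter) and classic CDF-based histogram matching applied to detail components. The GMM soft segmentation and region matching are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_base_detail(image, sigma=5.0):
    """Decompose an image into a smooth base layer and a residual detail layer."""
    base = gaussian_filter(image.astype(np.float64), sigma)
    return base, image - base

def match_histogram(source, reference):
    """Remap `source` values so their histogram matches that of `reference`
    (classic CDF matching), e.g. for transferring detail statistics."""
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    matched = np.interp(src_cdf, ref_cdf, ref_vals)   # invert the reference CDF
    return matched[src_idx].reshape(source.shape)
```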

17.
Li  Yongjun  Li  Yunsong 《Multimedia Tools and Applications》2017,76(24):26273-26295
Multimedia Tools and Applications - Research on and applications of human fixation detection in the video compressed domain have gained increasing attention in recent years. However, both...

18.
Effective annotation and content-based search for videos in a digital library require a preprocessing step of detecting, locating and classifying scene transitions, i.e., temporal video segmentation. This paper proposes a novel approach—spatial-temporal joint probability image (ST-JPI) analysis—for temporal video segmentation. A joint probability image (JPI) is derived from the joint probabilities of the intensity values of corresponding points in two images. The ST-JPI, which is a series of JPIs derived from consecutive video frames, presents the evolution of the intensity joint probabilities in a video. The evolution in an ST-JPI during various transitions falls into one of several well-defined linear patterns. Based on the patterns in an ST-JPI, our algorithm detects and classifies video transitions effectively. Our study shows that temporal video segmentation based on ST-JPIs is distinguished from previous methods in the following ways: (1) it is effective and relatively robust not only for video cuts but also for gradual transitions; (2) it classifies transitions on the basis of predefined evolution patterns of ST-JPIs during transitions; (3) it is efficient, scalable and suitable for real-time video segmentation. Theoretical analysis and experimental results of our method are presented to illustrate its efficacy and efficiency.
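A minimal sketch of the JPI construction described above for two 8-bit grey-level frames, using a normalised 2-D histogram of corresponding intensity pairs; this is a simplified reading, and the temporal stacking into an ST-JPI and the pattern classification are not shown.

```python
import numpy as np

def joint_probability_image(frame_a, frame_b, bins=256):
    """Joint probability image of two grey-level frames: the normalised 2-D
    histogram of intensity pairs taken at corresponding pixel positions."""
    jpi, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(),
                               bins=bins, range=[[0, 256], [0, 256]])
    return jpi / jpi.sum()
```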

19.
Qian  Shenyi  Shi  Yongsheng  Wu  Huaiguang  Liu  Jinhua  Zhang  Weiwei 《Applied Intelligence》2022,52(2):1770-1792
Applied Intelligence - In order to improve the brightness and contrast of low-illumination color images and avoid over-enhancement, an adaptive image enhancement algorithm based on visual saliency...

20.
Pattern Analysis and Applications - This paper presents a novel compressed domain saliency estimation method based on analyzing block motion vectors and transform residuals extracted from the...
