20 similar documents found; search time: 46 ms
1.
Gloria Haro Marcelo Bertalmío Vicent Caselles 《International Journal of Computer Vision》2006,69(1):109-117
In film production, it is sometimes not convenient or directly impossible to shoot some night scenes at night. The film budget,
schedule or location may not allow it. In these cases, the scenes are shot at daytime, and the ‘night look’ is achieved by
placing a blue filter in front of the lens and under-exposing the film. This technique, which the American film industry has
used for many decades, is called ‘Day for Night’ (or ‘American Night’ in Europe). But the images thus obtained do not usually
look realistic: they tend to be too bluish, and the objects’ brightness seems unnatural for night light. In this article we
introduce a digital Day for Night algorithm that achieves very realistic results. We use a set of very simple equations, based
on real physical data and visual perception experimental data. To simulate the loss of visual acuity we introduce a novel
diffusion Partial Differential Equation (PDE) which takes luminance into account and respects contrast, produces no ringing,
is stable, very easy to implement and fast. The user only provides the original day image and the desired level of darkness
of the result. The whole process from original day image to final night image runs in a few seconds, the computations being mostly local.
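As a rough illustration of the idea only (not the paper's physically based equations or its luminance-aware diffusion PDE), a toy day-for-night grade can be written as a global darkening plus a cool tint; the channel weights below are invented for illustration:

```python
import numpy as np

def day_for_night(rgb, darkness=0.25, blue_shift=1.3):
    """Toy day-for-night grade: globally under-expose the image and
    push its balance toward blue. The parameters and channel weights
    are illustrative, not taken from the paper."""
    img = rgb.astype(np.float64) / 255.0
    img = img * darkness                                        # global under-exposure
    img[..., 0] *= 0.8                                          # suppress red
    img[..., 2] = np.clip(img[..., 2] * blue_shift, 0.0, 1.0)   # cool (bluish) tint
    return (img * 255.0).round().astype(np.uint8)
```

Applied to a daylight frame, this yields a darker, bluish image; the paper's contribution is doing this with physically grounded curves plus a contrast-respecting diffusion to simulate the loss of visual acuity.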
2.
A fast algorithm for computing moments of gray images based on NAM and extended shading approach
Computing moments on images is very important in the fields of image processing and pattern recognition. The non-symmetry
and anti-packing model (NAM) is a general pattern representation model that has been developed to help design some efficient
image representation methods. In this paper, inspired by the idea of computing moments based on the S-Tree coding (STC) representation
and by using the NAM and extended shading (NAMES) approach, we propose a fast algorithm for computing lower order moments
based on the NAMES representation, which takes O(N) time, where N is the number of NAM blocks. Taking the three standard test
gray images ‘Lena’, ‘F16’, and ‘Peppers’ as typical test objects, and comparing our proposed
algorithm with the conventional algorithm and the popular STC representation algorithm for computing the lower order moments,
the theoretical and experimental results presented in this paper show that the average execution-time improvement ratios of
the proposed NAMES approach over the STC approach and the conventional approach are 26.63% and 82.57%, respectively,
while maintaining the image quality.
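The per-block idea behind computing moments in time proportional to the number of blocks can be sketched for a simplified representation of axis-aligned, constant-gray blocks (a stand-in for the actual NAMES coding, which is not reproduced here):

```python
import numpy as np

def block_moments(blocks):
    """Lower-order moments (m00, m10, m01) of an image described as a
    list of non-overlapping axis-aligned blocks (x0, y0, w, h, gray)
    with constant gray level. Each block contributes a closed-form
    term, so the cost is O(number of blocks), not O(number of pixels)."""
    m00 = m10 = m01 = 0.0
    for x0, y0, w, h, g in blocks:
        sx = w * x0 + w * (w - 1) / 2.0   # sum of x over the block's columns
        sy = h * y0 + h * (h - 1) / 2.0   # sum of y over the block's rows
        m00 += g * w * h
        m10 += g * h * sx
        m01 += g * w * sy
    return m00, m10, m01
```

The closed forms agree with a pixelwise summation over the rendered image, which is what makes block representations attractive for moment computation.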
3.
In this paper, we demonstrate how craft practice in contemporary jewellery opens up conceptions of ‘digital jewellery’ to
possibilities beyond merely embedding pre-existing behaviours of digital systems in objects, which follow shallow interpretations
of jewellery. We argue that a design approach that understands jewellery only in terms of location on the body is likely to
lead to a world of ‘gadgets’, rather than anything that deserves the moniker ‘jewellery’. In contrast, by adopting a craft
approach, we demonstrate that the space of digital jewellery can include objects where the digital functionality is integrated
as one facet of an object that can be personally meaningful for the holder or wearer.
4.
Image Fusion for Enhanced Visualization: A Variational Approach
Gemma Piella 《International Journal of Computer Vision》2009,83(1):1-11
We present a variational model to perform the fusion of an arbitrary number of images while preserving the salient information
and enhancing the contrast for visualization. We propose to use the structure tensor to simultaneously describe the geometry
of all the inputs. The basic idea is that the fused image should have a structure tensor which approximates the structure
tensor obtained from the multiple inputs. At the same time, the fused image should appear ‘natural’ and ‘sharp’ to a human
interpreter. We therefore propose to combine the geometry merging of the inputs with perceptual enhancement and intensity
correction. This is performed through a minimization functional approach which implicitly takes into account a set of human
vision characteristics.
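The structure tensor used to describe input geometry can be illustrated with a minimal global version; the paper works with per-pixel, smoothed tensors over multiple inputs, so this sketch only shows the basic construction:

```python
import numpy as np

def structure_tensor(img):
    """Global structure tensor of a gray image: the average outer
    product of the image gradient over all pixels. Its eigenstructure
    summarizes the dominant orientation; fusion approaches use local
    (windowed) versions of the same quantity."""
    gy, gx = np.gradient(img.astype(np.float64))  # derivatives along y (axis 0) and x (axis 1)
    Jxx = (gx * gx).mean()
    Jxy = (gx * gy).mean()
    Jyy = (gy * gy).mean()
    return np.array([[Jxx, Jxy], [Jxy, Jyy]])
```

For a pure horizontal ramp I(x, y) = x, the tensor reduces to [[1, 0], [0, 0]], i.e. all variation is along x.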
5.
Direct linear sub-pixel correlation by incorporation of neighbor pixels' information and robust estimation of window transformation
Standard methods for sub-pixel matching are iterative and nonlinear; they are also sensitive to false initialization and
window deformation. In this paper, we present a linear method that incorporates information from neighboring pixels. Two algorithms
are presented: one ‘fast’ and one ‘robust’. They both start from an initial rough estimate of the matching. The fast one is
suitable for pairs of images requiring negligible window deformation. The robust method is slower but more general and more
precise. It eliminates false matches in the initialization by using robust estimation of the local affine deformation. The
first algorithm attains an accuracy of 0.05 pixels for interest points and 0.06 for random points in the translational case.
For the general case, if the deformation is small, the second method gives an accuracy of 0.05 pixels; while for large deformation,
it gives an accuracy of about 0.06 pixels for points of interest and 0.10 pixels for random points. There are very few false
matches in all cases, even if there are many in the initialization.
Received: 24 July 1997 / Accepted: 4 December 1997
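The paper's linear, neighbour-incorporating method is not reproduced here, but the general notion of refining a correlation peak to sub-pixel accuracy can be illustrated with standard one-dimensional parabolic interpolation:

```python
def subpixel_peak(c_left, c_peak, c_right):
    """Parabolic interpolation of a correlation peak from three samples
    at integer offsets -1, 0, +1. Returns the fractional offset of the
    true maximum. A standard refinement step, not the method of the
    paper above."""
    denom = c_left - 2.0 * c_peak + c_right
    if denom == 0.0:
        return 0.0          # flat neighbourhood: keep the integer peak
    return 0.5 * (c_left - c_right) / denom
```

Sampling an exact parabola recovers its true peak offset, which is why three-point fits are a common baseline for sub-pixel matching.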
6.
Stefan Klink Thomas Kieninger 《International Journal on Document Analysis and Recognition》2001,4(1):18-26
Document image processing is a crucial process in office automation and begins at the ‘OCR’ phase with difficulties in document
‘analysis’ and ‘understanding’. This paper presents a hybrid and comprehensive approach to document structure analysis:
hybrid in the sense that it makes use of layout (geometrical) as well as textual features of a given document. These features are
the base for potential conditions which in turn are used to express fuzzy matched rules of an underlying rule base. Rules
can be formulated based on features which might be observed within one specific layout object. However, rules can also express
dependencies between different layout objects. In addition to its rule driven analysis, which allows an easy adaptation to
specific domains with their specific logical objects, the system contains domain-independent markup algorithms for common
objects (e.g., lists).
Received June 19, 2000 / Revised November 8, 2000
7.
Marcin Grzegorzek 《Pattern Analysis & Applications》2010,13(3):333-348
This article presents a system for texture-based probabilistic classification and localisation of three-dimensional objects
in two-dimensional digital images and discusses selected applications. In contrast to shape-based approaches, our texture-based
method does not rely on object features extracted using image segmentation techniques. Rather, the objects are described by
local feature vectors computed directly from image pixel values using the wavelet transform. Both gray level and colour images
can be processed. In the training phase, object features are statistically modelled as normal density functions. In the recognition
phase, the system classifies and localises objects in scenes with real heterogeneous backgrounds. Feature vectors are calculated
and a maximisation algorithm compares the learned density functions with the extracted feature vectors and yields the classes
and poses of objects found in the scene. Experiments carried out on a real dataset of over 40,000 images demonstrate the robustness
of the system in terms of classification and localisation accuracy. Finally, two important real application scenarios are
discussed, namely recognising museum exhibits from visitors’ own photographs and classification of metallography images.
8.
Remote sensing imaging techniques make use of data derived from high resolution satellite sensors. Image classification identifies
and organises pixels of similar spatial distribution or similar statistical characteristics into the same spectral class (theme).
Contextual data can be incorporated, or ‘fused’, with spectral data to improve the accuracy of classification algorithms.
In this paper we use Dempster–Shafer’s theory of evidence to achieve this data fusion. Incorporating a Knowledge Base of evidence
within the classification process represents a new direction for the development of reliable systems for image classification
and the interpretation of remotely sensed data.
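Dempster's rule of combination, the core of this kind of evidence fusion, can be sketched as follows; the two mass functions and the class labels are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over a
    common frame of discernment. Masses are dicts mapping frozensets
    of hypotheses to belief mass; conflicting mass (empty
    intersections) is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

Combining, say, spectral evidence with contextual evidence this way concentrates mass on hypotheses both sources support, which is the sense in which fusion improves classification reliability.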
9.
Natural image categorisation and retrieval is the main challenge for image indexing. With the increase of available images
and video databases, there is a real need, first, to organise the database automatically according to different semantic groups,
and second, to take into account these large databases, where most of the data is stored in compressed form. The global
distribution of orientation features is a very powerful tool to semantically organise the database into groups, such as outdoor
urban scenes, indoor scenes, ‘closed’ landscapes (valleys, mountains, forests, etc.) and ‘open’ landscapes (deserts, fields,
beaches, etc.). The constraint of a JPEG compressed database is completely integrated with an efficient implementation of
an orientation estimator in the DCT (Discrete Cosine Transform) domain. The proposed estimator is analysed from different
points of view (accuracy and discrimination power). The images are then globally characterised by a set of a few parameters
(two or three), allowing a fast scenes categorisation and organisation which is very robust to the quantisation effect, up
to a quality factor of 10 in the JPEG format.
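A minimal sketch of reading orientation from DCT coefficients, much cruder than the estimator analysed in the paper: blocks that vary along x concentrate energy in the purely horizontal-frequency coefficients, and vice versa.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (direct matrix form)."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def dominant_orientation(block):
    """Crude orientation cue: compare the energy of purely
    horizontal-frequency coefficients (row 0) with purely
    vertical-frequency coefficients (column 0), DC term excluded."""
    d = dct2(block.astype(np.float64))
    eh = np.sum(d[0, 1:] ** 2)   # variation along x -> vertical structure
    ev = np.sum(d[1:, 0] ** 2)   # variation along y -> horizontal structure
    return "vertical" if eh > ev else "horizontal"
```

Because JPEG stores DCT coefficients directly, cues like this can be computed without full decompression, which is the efficiency argument made in the abstract.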
10.
《Information Fusion》2005,6(3):235-241
Radiometric normalization is often required in remote sensing image analysis, particularly in land-change analysis. Normalization minimizes the effects of differing imaging conditions and rectifies the radiometry of the images as if they had been acquired under the same conditions. Relative radiometric normalization, normally applied at the image preprocessing stage, does not remove all unwanted effects. In this paper, an automatic normalization method is developed based on regression applied to unchanged pixels within urban areas. The proposed method efficiently selects unchanged pixels through image-difference histogram modeling using the available spectral bands, and calculates the relevant coefficients for dark, gray and bright pixels in each band. The coefficients are then applied to produce the normalized image. The idea has been implemented on two TM image datasets. In the evaluation stage, the approach showed good performance in accounting for imaging-condition differences while effectively excluding real land-change pixels from the normalization process.
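The regression step at the heart of such relative normalization can be sketched as a simple gain/offset least-squares fit; the paper's unchanged-pixel selection and its separate dark/gray/bright coefficients are not reproduced here:

```python
import numpy as np

def radiometric_gain_offset(subject, reference):
    """Least-squares gain/offset fit mapping a subject band onto a
    reference band over (presumed) unchanged pixels. A sketch of
    relative radiometric normalization, assuming the unchanged-pixel
    mask has already been applied."""
    a = np.vstack([subject.ravel(), np.ones(subject.size)]).T
    gain, offset = np.linalg.lstsq(a, reference.ravel(), rcond=None)[0]
    return gain, offset
```

Applying `gain * subject + offset` then brings the subject band onto the radiometric scale of the reference, which is the sense in which both images appear acquired under the same conditions.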
11.
The classic image processing method for flaw detection uses one image of the scene, or multiple images without correspondences
between them. To improve this scheme, automated inspection using multiple views has been developed in recent years. This strategy’s
key idea is to consider as real flaws those regions that can be tracked in a sequence of multiple images because they are
located in positions dictated by geometric conditions. In contrast, false alarms (or noise) can be successfully eliminated
in this manner, since they do not appear in the predicted places in the following images, and thus cannot be tracked. This
paper presents a method, called automatic multiple view inspection, that inspects aluminum wheels using images taken from different
positions. Our method can be applied to uncalibrated image sequences; therefore, it is not necessary to determine
optical and geometric parameters normally present in the calibrated systems. In addition, to improve the performance, we designed
a false alarm reduction method in two and three views called intermediate classifier block (ICB). The ICB method takes advantage
of the classifier ensemble methodology by making use of feature analysis in multiple views. Using this method, real flaws
can be detected with high precision while most false alarms can be discriminated.
12.
Stuart Jackson Nuala Brady Fred Cummins Kenneth Monaghan 《Artificial Intelligence Review》2006,26(1-2):141-154
Recent findings in neuroscience suggest an overlap between those brain regions involved in the control and execution of movement
and those activated during the perception of another’s movement. This so called ‘mirror neuron’ system is thought to underlie
our ability to automatically infer the goals and intentions of others by observing their actions. Kilner et al. (Curr Biol
13(6):522–525, 2003) provide evidence for a human ‘mirror neuron’ system by showing that the execution of simple arm movements
is affected by the simultaneous perception of another’s movement. Specifically, observation of ‘incongruent’ movements made
by another human, but not by a robotic arm, leads to greater variability in the movement trajectory than observation of movements
in the same direction. In this study we ask which aspects of the observed motion are crucial to this interference effect by
comparing the efficacy of real human movement to that of sparse ‘point-light displays’. Eight participants performed whole
arm movements in both horizontal and vertical directions while observing either the experimenter or a virtual ‘point-light’
figure making arm movements in the same or in a different direction. Our results, however, failed to show an effect of ‘congruency’
of the observed movement on movement variability, regardless of whether a human actor or point-light figure was observed.
The findings are discussed, and future directions for studies of perception-action coupling are considered.
13.
Ching-Liang Su 《Journal of Intelligent and Robotic Systems》2006,45(4):295-305
This research uses the ring-to-line mapping technique to map the object image to straight-line signals. The ‘vector magnitude invariant transform’ is then used to convert the object signal into an invariant vector magnitude quantity for object identification, which solves the image-rotation problem. Various vertical magnitude strips are generated to cope with the image-shifting problem. In this research, 105 comparisons are conducted to determine the accuracy rate of the developed algorithm: 15 are self-comparisons, and the other 90 compare two different object images. The algorithm developed in this research can precisely classify the object images.
14.
Fingerprint classification is a challenging pattern recognition problem which plays a fundamental role in most of the large
fingerprint-based identification systems. Due to the intrinsic class ambiguity and the difficulty of processing very low quality
images (which constitute a significant proportion), automatic fingerprint classification performance is currently below operating
requirements, and most of the classification work is still carried out manually or semi-automatically. This paper explores
the advantages of combining the MASKS and MKL-based classifiers, which we have specifically designed for the fingerprint classification
task. In particular, a combination at the ‘abstract level’ is proposed for exclusive classification, whereas a fusion at the
‘measurement level’ is introduced for continuous classification. The advantages of coupling these distinct techniques are
well evident; in particular, in the case of exclusive classification, the FBI challenge, requiring a classification error
≤ 1% at 20% rejection, was met on NIST-DB14.
Received: 06 November 2000, Received in revised form: 25 October 2001, Accepted: 03 January 2002
15.
Ivan Bajla František Rublík Barbora Arendacká Igor Farkaš Klára Hornišová Svorad Štolc Viktor Witkovský 《Machine Vision and Applications》2009,20(4):243-259
A software system Gel Analysis System for Epo (GASepo) has been developed within an international WADA project. As recent
WADA criteria of rEpo positivity are based on identification of each relevant object (band) in Epo images, the development of
suitable methods of image segmentation and object classification was needed for the GASepo system. In this paper we address
two particular problems: segmentation of disrupted bands and classification of the segmented objects into three or two classes.
A novel band projection operator is based on convenient object-merging measures and their discrimination analysis using a specifically
generated training set of segmented objects. A weighted-ranks classification method is proposed, which is new in the field
of image classification. It is based on the ranks of the values of a specific criterion function. The weighted-ranks classifiers
proposed in our paper have been evaluated on real samples of segmented objects of Epo images and compared to three selected
well-known classifiers: Fisher linear classifier, Support Vector Machine, and Multilayer Perceptron.
16.
Meihong Shi Rong Fu Yong Guo Shixian Bai Bugao Xu 《Multimedia Tools and Applications》2011,52(1):147-157
Defect inspection is a vital step for quality assurance in fabric production. The development of a fully automated fabric
defect detection system requires robust and efficient fabric defect detection algorithms. The inspection of real fabric defects
is particularly challenging due to delicate features of defects complicated by variations in weave textures and changes in
environmental factors (e.g., illumination, noise, etc.). Based on characteristics of fabric structure, an approach of using
local contrast deviation (LCD) is proposed for fabric defect detection in this paper. LCD is a parameter used to describe features of the contrast
difference in four directions between the analyzed image and a defect-free image of the same fabric, and is used with a bilevel
threshold function for defect segmentation. The validation tests on the developed algorithms were performed with fabric images
from TILDA’s Textile Texture Database and captured by a line-scan camera on an inspection machine. The experimental results
show that the proposed method is more robust and simpler than the approach of using modified local binary patterns (LBP).
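A hedged sketch of the local-contrast-deviation idea (the paper's LCD statistic and bilevel threshold function are more elaborate than this): compare directional contrasts of the test image against a defect-free reference of the same fabric, then threshold the deviation.

```python
import numpy as np

def defect_map(test, reference, threshold=0.5):
    """Flag pixels whose directional contrast (absolute differences to
    the 4-neighbourhood) deviates from that of a defect-free reference
    by more than a threshold. Illustrative only; the deviation measure
    and thresholding differ from the paper's LCD method."""
    t = test.astype(np.float64)
    r = reference.astype(np.float64)
    dev = np.zeros_like(t)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        ct = np.abs(t - np.roll(t, shift, axis=axis))   # contrast in one direction
        cr = np.abs(r - np.roll(r, shift, axis=axis))
        dev = np.maximum(dev, np.abs(ct - cr))
    return dev > threshold
```

Because the reference carries the normal weave contrast, regular texture cancels out and only anomalies survive the threshold.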
17.
Objective: To address the blurred detail and low contrast of images captured by outdoor vision systems in adverse environments, an image dehazing algorithm based on the variogram and morphological filtering (IDA_VAM) is proposed. Method: The algorithm first uses the variogram to obtain an accurate global atmospheric-light value; it then applies multi-structuring-element morphological opening-closing filters to the minimum-channel map to obtain a rough atmospheric veil, from which the atmospheric transmission is estimated and corrected and then smoothed with bilateral filtering; finally, the restored image is obtained through the physical model and tone-adjusted to yield a bright, clear, haze-free image. Results: Compared with several dehazing algorithms, the method removes haze well from hazy close-range images, distant images, and images containing bright regions; information entropy improves by 38.0%, contrast by 34.1%, and sharpness by 134.5%, giving a natural, bright, haze-free restoration. Conclusion: Extensive simulation results confirm that IDA_VAM restores the color and clarity of close-range images, distant images, and images with bright regions in non-complex scenes, producing clear, bright haze-free images with high detail visibility; the algorithm's time complexity is linear in the number of image pixels, giving good real-time performance.
18.
This paper describes and analyzes a new architecture for file systems in which ‘metadata’, lock control, etc., are distributed
among diverse resources. The basic data structure is a segment, viz. a logical group of files, folders, or other objects.
The file system requires only one root, and can be non-hierarchical without a complete tree structure within segments. For
‘embarrassingly parallel’ data distributions, scalability is trivially perfect for all N,where N is the number of servers. Even for random file access, a new extreme statistical mechanics is used to show that data I/O is ‘perfectly’ scalable with probability 1, with degradation from perfect scaling that is small
and bounded by f ln N/ ln (ln N). Here f is the fraction of data that is metadata. In contrast, earlier solutions degrade much faster, like Nf. No structural changes in classical metadata are required. 相似文献
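A quick numeric check makes the gap between the two degradation laws concrete (the values of f and N below are chosen purely for illustration):

```python
import math

def degradation_bound(f, n):
    """Claimed degradation bound from the abstract: f * ln(N) / ln(ln(N))."""
    return f * math.log(n) / math.log(math.log(n))

def earlier_degradation(f, n):
    """Degradation of earlier designs, growing like N**f."""
    return n ** f

# For f = 0.1, the bound stays below 1 even at a million servers,
# while N**f has already grown close to 4.
```

This is why the abstract can call the scaling ‘perfect’ up to a small, slowly growing correction.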
19.
In this paper we present a system to enhance the performance of feature correspondence based alignment algorithms for laser
scan data. We show how this system can be utilized as a new approach for evaluation of mapping algorithms. Assuming a certain
a priori knowledge, our system augments the sensor data with hypotheses (‘Virtual Scans’) about ideal models of objects in
the robot’s environment. These hypotheses are generated by analysis of the current aligned map estimated by an underlying
iterative alignment algorithm. The augmented data is used to improve the alignment process. Feedback between data alignment
and data analysis confirms, modifies, or discards the Virtual Scans in each iteration. Experiments with a simulated scenario
and real world data from a rescue robot scenario show the applicability and advantages of the approach. By replacing the estimated
‘Virtual Scans’ with ground truth maps our system can provide a flexible way for evaluating different mapping algorithms in
different settings.
20.
Todd Zickler Satya P. Mallick David J. Kriegman Peter N. Belhumeur 《International Journal of Computer Vision》2008,79(1):13-30
Complex reflectance phenomena such as specular reflections confound many vision problems since they produce image ‘features’
that do not correspond directly to intrinsic surface properties such as shape and spectral reflectance. A common approach
to mitigate these effects is to explore functions of an image that are invariant to these photometric events. In this paper
we describe a class of such invariants that result from exploiting color information in images of dichromatic surfaces. These
invariants are derived from illuminant-dependent ‘subspaces’ of RGB color space, and they enable the application of Lambertian-based
vision techniques to a broad class of specular, non-Lambertian scenes. Using implementations of recent algorithms taken from
the literature, we demonstrate the practical utility of these invariants for a wide variety of applications, including stereo,
shape from shading, photometric stereo, material-based segmentation, and motion estimation.
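The subspace idea can be sketched for the simplest case of a single known illuminant: projecting RGB vectors onto the plane orthogonal to the source color removes the specular (source-colored) component of a dichromatic surface, leaving a quantity that behaves Lambertian-like.

```python
import numpy as np

def specular_invariant(rgb, source_color):
    """Project RGB vectors onto the 2-D subspace orthogonal to the
    (assumed known) illuminant color. Under the dichromatic model the
    specular component is parallel to the source color, so it vanishes
    in this subspace. A minimal sketch of the subspace idea only."""
    s = np.asarray(source_color, dtype=np.float64)
    s = s / np.linalg.norm(s)
    rgb = np.asarray(rgb, dtype=np.float64)
    coeff = np.einsum('...c,c->...', rgb, s)   # component along the source color
    return rgb - coeff[..., None] * s
```

A pure highlight (color proportional to the illuminant) maps to zero, while diffuse colors keep a non-trivial residual, which is what lets Lambertian-based algorithms run on the transformed image.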