Similar Documents
20 similar documents found (search time: 22 ms)
1.
In this article, we present an adaptive color similarity function defined in a modified hue‐saturation‐intensity color space, which can be used directly as a metric to obtain pixel‐wise segmentation of color images, among other applications. The color information of every pixel is integrated as a unit by an adaptive similarity function, thus avoiding color information scattering. As a direct application we present an efficient interactive, supervised color segmentation method with linear complexity with respect to the number of pixels of the input image. The process has three steps: (1) manual selection of a few pixels in a sample of the color to be segmented; (2) automatic generation of the so-called color similarity image (CSI), a gray-level image whose gray tonalities encode similarity to the selected color; (3) automatic thresholding of the CSI to obtain the final segmentation. The proposed technique is direct, simple, and computationally inexpensive. An evaluation of the efficiency of the color segmentation method is presented, showing good performance in all cases studied. A comparative study is made between the behavior of the proposed method and two comparable segmentation techniques for color images using (1) the Euclidean metric of the a* and b* color channels, rejecting L*, and (2) a probabilistic approach on a* and b* in the CIE L*a*b* color space. Our testing system can be used either to explore the behavior of a similarity function (or metric) in different color spaces or to explore different metrics (or similarity functions) in the same color space. The results indicate that the color parameters a* and b* are not independent of the luminance parameter L*, as one might initially assume in the CIE L*a*b* color space. We show that our solution improves the quality of the proposed color segmentation technique and that its speed is significant with respect to other solutions found in the literature.
The method also performs well on low-chromaticity, gray-level, and low-contrast images. © 2016 Wiley Periodicals, Inc. Col Res Appl, 42, 156–172, 2017
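The three-step workflow above can be sketched in a few lines. This is an illustrative stand-in only: a fixed Gaussian kernel replaces the paper's adaptive similarity function, and the "HSI" pixels are treated as plain numeric triplets.

```python
import numpy as np

def color_similarity_image(img_hsi, samples, sigma=0.1):
    """Build a gray-level Color Similarity Image (CSI): each pixel's
    similarity (0..1) to the mean of the user-selected sample pixels.
    A Gaussian kernel stands in for the paper's adaptive function."""
    mean = samples.mean(axis=0)
    d2 = ((img_hsi - mean) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def segment(img_hsi, samples, threshold=0.5):
    """Step 3: threshold the CSI to obtain the binary segmentation."""
    csi = color_similarity_image(img_hsi, samples)
    return csi >= threshold
```

Raising or lowering `threshold` tightens or loosens the mask around the sampled color.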

2.
In digital image reproduction, it is often desirable to compute the image difference between reproductions and the original images. The traditional CIE color difference formula, designed for simple color patches in controlled viewing conditions, is not adequate for computing image difference for spatially complex image stimuli. Zhang and Wandell [Proceedings of the SID Symposium, 1996; p 731–734] introduced the S‐CIELAB model to account for complex color stimuli using spatial filtering as a preprocessing stage. Building on S‐CIELAB, iCAM was designed to serve as both a color appearance model and an image difference metric for complex color stimuli [IS&T/SID 10th Color Imaging Conference, 2002; p 33–38]. These image difference models follow a similar image processing path to approximate the behavior of human observers. Generally, image pairs are first converted into device‐independent coordinates such as CIE XYZ tristimulus values or approximate human cone responses (LMS), and then further transformed into opponent‐color channels approximating white‐black, red‐green, and yellow‐blue color perceptions. Once in the opponent space, the images are filtered with approximations of human contrast sensitivity functions (CSFs) to remove information that is invisible to the human visual system. The images are then transformed back to a color difference space such as CIELAB, and pixel‐by‐pixel color differences are calculated. The shape and effectiveness of the CSF spatial filters used in this type of modeling are highly dependent on the choice of opponent color space. For image difference calculations, the ideal opponent color space would be both linear and orthogonal, such that the linear filtering is correct and any spatial processing on one channel does not affect the others.
This article presents a review of historical opponent color spaces and an experimental derivation of a new color space and corresponding spatial filters specifically designed for image color difference calculations. © 2010 Wiley Periodicals, Inc. Col Res Appl, 2010
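The shared processing path described above (device-independent coordinates, then opponent channels, then CSF filtering, then pixel-wise differences) can be sketched roughly as follows. Both the opponent matrix and the box-filter "CSF" here are crude placeholders, not the transforms used in S-CIELAB or iCAM.

```python
import numpy as np

# Illustrative opponent transform (rows: white-black, red-green,
# yellow-blue); the actual matrices in S-CIELAB/iCAM differ.
OPP = np.array([[0.30,  0.59,  0.11],
                [0.50, -0.50,  0.00],
                [0.25,  0.25, -0.50]])

def csf_blur(channel, width):
    """Crude low-pass stand-in for a contrast sensitivity function:
    a normalized box filter applied separably along rows and columns."""
    k = np.ones(width) / width
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def image_difference(img1, img2, widths=(1, 3, 5)):
    """Pixel-wise difference after opponent transform + per-channel blur.
    The chromatic channels get wider (lower-pass) filters than luminance."""
    filtered = []
    for img in (img1, img2):
        opp = img @ OPP.T
        filtered.append(np.stack([csf_blur(opp[..., i], w)
                                  for i, w in enumerate(widths)], axis=-1))
    return np.sqrt(((filtered[0] - filtered[1]) ** 2).sum(axis=-1))
```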

3.
This work addresses the twin goals of reducing storage space and increasing transmission rates for information in the form of high-quality color images. Two experiments were conducted to investigate and compare the performance of the JPEG 1992 and JPEG 2000 compression standards and a newly developed CSI‐JPEG. CSI‐JPEG is an amalgamation of Cubic Spline Interpolation (CSI) with the baseline JPEG 1992 algorithm. The performance of the different image compression algorithms was evaluated using different color models/spaces in terms of compression rate, color accuracy, and visual quality. The results from three assessment methods consistently showed that JPEG 2000 and CSI‐JPEG performed significantly better than JPEG 1992 for small color differences (in the range of acceptability). Moreover, CAM02‐UCS performed best among the selected models in terms of compression rate and image performance for all three image compression algorithms; the visual assessment results confirmed this. It was also found that CIEDE2000 can be reliably used for assessing the quality of compressed images with low levels of distortion. © 2016 Wiley Periodicals, Inc. Col Res Appl, 42, 460–473, 2017
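An evaluation loop of this kind pairs each codec's compression rate with its mean color error. In the outline below, `delta_e_76` (the simple Euclidean CIELAB difference) stands in for the CIEDE2000 formula used in the study, and the encode/decode codec interface is hypothetical.

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE 1976 color difference (Euclidean distance in L*a*b*); a
    simple stand-in for the CIEDE2000 formula used in the study."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

def rate_vs_accuracy(original_lab, codecs):
    """Evaluate hypothetical codecs: bytes used vs. mean color error.
    Each codec is an (encode, decode) pair; encode returns bytes."""
    results = {}
    for name, (encode, decode) in codecs.items():
        blob = encode(original_lab)
        mean_de = delta_e_76(original_lab, decode(blob)).mean()
        results[name] = (len(blob), mean_de)
    return results
```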

4.
Color is the most widely used attribute in image retrieval and object recognition. A technique known as histogram intersection has been widely studied and is considered effective for color‐image indexing. The key issues for this algorithm are the selection of an appropriate color space and optimal quantization of the selected color space. The goal of this article is to measure the model's performance in predicting human judgments of similarity for various images, to explore the capability of the model across a wide set of color spaces, and to find the optimal quantization of the selected color spaces. Six color spaces and twelve quantization levels were involved in evaluating the performance of histogram intersection. Categorical judgment and rank-order experiments were conducted to measure image similarity. The CIELAB color space was found to perform at least as well as or better than the other color spaces tested, and the ability to predict image similarity increased with the number of bins used in the histograms, up to 512 bins (8 per channel). Beyond 512 bins, further improvement was negligible for the image datasets used in this study. © 2005 Wiley Periodicals, Inc. Col Res Appl, 30, 265–274, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20122
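Histogram intersection itself is compact. A sketch with 8 bins per channel (the 512-bin plateau reported above), assuming pixel values normalized to [0, 1]:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized 3D color histogram with `bins` levels per channel
    (8 per channel = 512 bins total)."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 1),) * 3)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Swain & Ballard intersection: sum of bin-wise minima.
    Equals 1.0 for identical normalized histograms, less otherwise."""
    return np.minimum(h1, h2).sum()
```

Because only bin-wise minima are summed, the score is insensitive to where in the image the colors occur, which is what makes the method robust for indexing.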

5.
A color image space based on a set of merit color image scales (WIP) is derived using the psychophysical method of magnitude estimation, usually used in visual assessment of color appearance; the space can describe the color image meanings of the colors in works of art in line with people's perception. The results show that a new color image space, HRU, is developed, and the relationship between this space and the CIE L*a*b* color space is also discussed. A good relationship between the HRU color image space and the CIE L*a*b* color space is found. This is a great advantage for the new HRU color image space in predicting the color image of a single color from its CIE L*a*b* coordinates. © 2009 Wiley Periodicals, Inc. Col Res Appl, 34, 452–457, 2009

6.
Image sources, such as digital camera captures and photographic negatives, typically have more information than can be reproduced on a photographic print or a video display. The information that is lost during the tone/color rendering process relates to both the extended dynamic range and color gamut of the original scene. In conventional photographic systems, most of this additional information is archived on the photographic negative and can be accessed by adjusting the way the negative is printed. However, most digital imaging systems have traditionally archived only a rendered video RGB image. As a result, it is not possible to make the same sorts of image manipulations that historically have been possible with conventional photographic systems. This suggests that there would be an advantage to storing images using an extended dynamic range/color gamut color encoding. However, because of file compatibility issues, digital imaging systems that store images using color encoding other than a standard video RGB representation (e.g., sRGB) would be significantly disadvantaged in the marketplace. In this article, we describe a solution that has been developed to maintain compatibility with existing file formats and software applications, while simultaneously retaining the extended dynamic range and color gamut information associated with the original scenes. With this approach, the input raw digital camera image or film scan is first transformed to the scene‐referred ERIMM RGB color encoding. Next, a rendered sRGB image is formed in the usual way and stored in a conventional image file (e.g., a standard JPEG file). A residual image representing the difference between the original extended dynamic range image and the final rendered image is formed and stored in the image file using proprietary metadata tags. 
This provides a mechanism for archiving the extended dynamic range/color gamut information, which is normally discarded during the rendering process, without sacrificing interoperability. Appropriately enabled applications can decode the residual image metadata and use it to reconstruct the ERIMM RGB image, whereas applications that are not aware of the metadata will ignore it and only have access to the sRGB image. The residual image is formed such that it will have negligible pixel values for those portions of the image that lie within the sRGB gamut, and will therefore be highly compressible. Tests on a population of 950 real customer images have demonstrated that the extended dynamic range scene information can be stored with an average file size overhead of about 8% compared to the sRGB images alone. © 2003 Wiley Periodicals, Inc. Col Res Appl, 28, 251–266, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.10160
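The archive/reconstruct cycle can be illustrated with a toy residual scheme. Here simple clipping stands in for the full tone/color rendering to sRGB, and zlib compression stands in for the proprietary metadata-tag packaging; the compressibility claim holds because the residual is near-zero wherever the scene was already in range.

```python
import numpy as np
import zlib

def render_srgb(scene):
    """Toy rendering: clip scene-referred values into the [0, 1]
    display range. (The real pipeline is a full sRGB rendering.)"""
    return np.clip(scene, 0.0, 1.0)

def archive(scene):
    """Store a rendered image plus a compressed residual that
    captures everything the rendering discarded."""
    rendered = render_srgb(scene)
    residual = scene - rendered          # ~0 for in-gamut pixels
    return rendered, zlib.compress(residual.astype(np.float32).tobytes())

def reconstruct(rendered, residual_blob):
    """Metadata-aware path: recover the extended-range scene."""
    residual = np.frombuffer(zlib.decompress(residual_blob),
                             dtype=np.float32).reshape(rendered.shape)
    return rendered + residual
```

An application unaware of the residual simply uses `rendered` and ignores the blob, which mirrors the compatibility behavior described above.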

7.
Although a number of methods have been developed for image adjustment in various applications, very little work has been done in the context of visual design. This article therefore introduces a novel and practical context for image color adjustment and develops a method to adjust an image for harmony with a target color. An experiment with designers revealed that designers made significant changes in the hue dimension and preferred to promote color similarity between the image and the target color. Based on these insights, we propose a method to achieve a harmonious combination of an image and a color element by increasing the hue similarity between them. A user test revealed that our method is particularly useful for images of nonliving objects but less effective for images involving human skin, foods, and so on. It is expected that the practical context investigated in this study will promote a variety of related studies that satisfy the tangible needs of industry and academia.
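Increasing hue similarity toward a target color could be sketched as a partial hue rotation in HSV space. This illustrates the general idea only; it is not the authors' method, and the per-pixel loop is deliberately simple rather than fast.

```python
import colorsys
import numpy as np

def shift_hue_toward(img_rgb, target_hue, strength=0.5):
    """Move every pixel's hue part-way toward a target hue (0..1),
    increasing hue similarity while keeping saturation and value.
    strength=0 leaves the image unchanged; 1 maps all hues to target."""
    out = np.empty_like(img_rgb)
    for idx in np.ndindex(img_rgb.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*img_rgb[idx])
        # shortest signed arc on the hue circle
        diff = (target_hue - h + 0.5) % 1.0 - 0.5
        out[idx] = colorsys.hsv_to_rgb((h + strength * diff) % 1.0, s, v)
    return out
```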

8.
In recent years, new display technologies have emerged that are capable of producing colors that exceed the color gamut of broadcast standards. On the other hand, most video content currently remains compliant with the EBU standard and, as such, there is a need for color mapping algorithms that make optimal use of the wider gamut of these new displays. To identify appropriate color mapping strategies, we have developed, implemented, and evaluated several approaches to gamut extension. The color rendering performance and robustness to different image content of these algorithms were evaluated against a reference (true‐color) mapping. To this end, two psychophysical experiments were conducted using a simulated and an actual wide‐gamut display. Results show that the preferred algorithm depended on image content, especially for images with skin tones. In both experiments, however, a preference was shown for the algorithm that balances chroma and lightness modulations as a function of the input lightness. The newly designed extension algorithms consistently outperformed true‐color mapping, thus confirming the benefit of appropriate mapping on wide‐gamut displays. © 2009 Wiley Periodicals, Inc. Col Res Appl, 34, 443–451, 2009
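A gamut-extension rule that modulates chroma as a function of input lightness, in the spirit of the preferred strategy above, could be sketched like this. The boost shape and all numeric values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def extend_gamut(lab, max_boost=1.2, pivot=50.0):
    """Toy gamut extension in CIELAB: scale chroma (a*, b*) by a
    factor that peaks at mid lightness and falls off toward black
    and white, so extension is modulated by the input lightness."""
    L = lab[..., 0]
    weight = 1.0 - np.abs(L - pivot) / pivot       # 1 at L=50, 0 at L=0/100
    boost = 1.0 + (max_boost - 1.0) * np.clip(weight, 0.0, 1.0)
    out = lab.copy()
    out[..., 1] = lab[..., 1] * boost
    out[..., 2] = lab[..., 2] * boost
    return out
```

Keeping the boost at 1.0 near the lightness extremes avoids pushing near-black and near-white colors (including skin-tone highlights) out of their expected appearance.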

9.
10.
It is widely believed that children will choose furniture that has the same color as their preferred color, and that this preference is consistent across different categories of furniture. A study of 508 adolescent Chinese children between the ages of 12 and 16 was carried out to explore whether color preference influences their choice of furniture when they are provided with various color options (16 chromatic and 5 achromatic colors). This work tested six items of furniture in two functional spaces (study and bedroom). Findings indicate that adolescent children's color preferences did indeed affect their furniture choices, but the extent varies with the category of furniture. Furthermore, this study reveals that children's preferences for furniture in different functional spaces differ slightly. Some effects of gender and age were also explored. This work discusses the implications of adolescent color preference and color choice for the color design of children's furniture.

11.
The pattern‐driven design method is an important data‐driven design method for printed fabric motif design in the textiles and clothing industry. We introduce a novel framework for the automatic design of color patterns in real‐world fabric motif images. The novelty of our work is to formulate the recognition of an underlying color pattern element as a spatial multi‐target tracking, classification, segmentation, and similarity association process using a new and efficient color feature encoding method. The proposed design method is based on pattern‐driven color pattern recognition and indexing from an element image database. A series of color pattern recognition algorithms are used for color and pattern feature extraction. Local statistical corner features and a Markov random field model are used for motif unit tiling detection and conversion. The color feature encoding problem is modeled as a gray‐scale color difference optimization problem, which can be solved quickly by existing algorithms. Color pattern feature matching, segmentation, and indexing techniques are then used to locate and replace the elements in the motif unit image with similar elements from the database. Experiments show that the proposed approach is effective for color pattern recognition and printed fabric motif design.

12.
The CIECAM02 color‐appearance model has enjoyed popularity in scientific research and industrial applications since it was recommended by the CIE in 2002. However, computational failures can occur in certain cases, such as during the image processing stages of cross‐media color reproduction applications. Some proposals have been developed to repair the CIECAM02 model. However, all of them retain the structure of the original model and solve the problems at the expense of reduced accuracy in predicting visual data compared with the original. In this article, the structure of the CIECAM02 model is changed so that the color and luminance adaptations to the illuminant are completed in the same space, rather than in two different spaces as in the original model. It has been found that the new model (named CAM16) not only overcomes the previous problems, but also predicts the visual results as well as, if not better than, the original CIECAM02 model. Furthermore, the new CAM16 model is simpler than the original. In addition, for applications requiring only chromatic adaptation, a new transform, CAT16, is proposed to replace the previous CAT02 transform. Finally, the new CAM16‐UCS uniform color space is proposed to replace the previous CAM02‐UCS space. Together these offer a complete solution for color‐appearance prediction and color‐difference evaluation.
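The von Kries step at the heart of CAT16 can be sketched with the M16 matrix published by Li et al. (2017). This is a simplified sketch: the full model's degree-of-adaptation handling is more involved than the plain `D` blend used here.

```python
import numpy as np

# CAT16 matrix (Li et al., 2017): maps XYZ into a sharpened RGB-like
# space in which von Kries scaling is applied.
M16 = np.array([[ 0.401288, 0.650173, -0.051461],
                [-0.250268, 1.204414,  0.045854],
                [-0.002079, 0.048952,  0.953127]])

def cat16(xyz, white_src, white_dst, D=1.0):
    """Chromatic adaptation via CAT16 (full adaptation when D=1):
    forward through M16, scale by the white-point ratio, invert."""
    rgb = M16 @ xyz
    rw_src, rw_dst = M16 @ white_src, M16 @ white_dst
    gain = D * (rw_dst / rw_src) + (1.0 - D)
    return np.linalg.solve(M16, gain * rgb)
```

Unlike CIECAM02's CAT02 stage, CAM16 performs its color and luminance adaptation in this single space, which is what removes the computational failure cases mentioned above.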

13.
14.
Digital tongue images are usually acquired by a camera under a specific illumination environment. To guarantee better color representation of the tongue body, we propose a novel tongue Color Rendition Chart that acts as a color reference for the color calibration algorithms used to standardize captured tongue images. First, based on a large tongue image database captured with our digital tongue image acquisition system, we establish a statistical tongue color gamut. Then, different numbers of colors for the Color Rendition Chart are determined experimentally from this gamut. Afterwards, results using X‐Rite's ColorChecker® Color Rendition Chart (a standard in the color calibration field) are compared with the proposed tongue Color Rendition Chart, using the CIELAB and CIEDE2000 color difference formulas as measures of the mean color calibration error. The results show that the proposed tongue Color Rendition Chart, which has 24 colors, produces a much smaller error (CIELAB 1976: 8.0755; CIEDE2000: 6.3482) than X‐Rite's ColorChecker® Color Rendition Chart (CIELAB 1976: 14.7836; CIEDE2000: 11.7686). This demonstrates the effectiveness of the novel tongue Color Rendition Chart.
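Chart-based calibration is commonly posed as fitting a linear correction from measured patch colors to their chart reference values, with the mean color difference over the patches as the calibration error. The sketch below works under that assumption (the paper's actual calibration algorithm may differ) and uses the simple CIE 1976 difference rather than CIEDE2000.

```python
import numpy as np

def fit_calibration(measured, reference):
    """Least-squares 3x3 matrix mapping measured patch colors (N x 3)
    to their chart reference values (N x 3) - a common linear
    calibration model."""
    M, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def mean_delta_e(lab1, lab2):
    """Mean CIE 1976 color difference over the chart patches."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=1)).mean()
```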

15.
Color image is one of the most important factors in art and design. In general, artists and designers apply their own personal image meanings in their work. However, the image meaning of a specific work frequently conflicts with that of the general observer. It is therefore necessary and important to derive a set of merit color image scales that can be used to predict the color image meanings of works in line with the average person's perception, and that can also serve as a guide for artists and designers. In this study, the psychophysical method of magnitude estimation, usually used in visual measurement of color appearance, was employed to establish new color image scales for evaluating the color image meanings of works so that they match those of the average person. The results show that new color image scales, WIP, are developed, and the relationship between these scales and the color attributes (lightness L*, hue h, and chroma C*) of the CIELAB color space is also discussed. The overall mean coefficient of variation for the visual assessments of the 207 color specimens used is about 36, similar to that of comparable experiments conducted with psychophysical methods. A good relationship between the new color image scales and the color attributes of the CIELAB color space is found. © 2007 Wiley Periodicals, Inc. Col Res Appl, 32, 463–468, 2007

16.
High dynamic range imaging (HDRI) by bracketing of low dynamic range (LDR) images is demanding, as the sensor is deliberately operated at saturation. This exacerbates any crosstalk, interpixel capacitance, blooming and smear, all causing interpixel correlations (IC) and a deteriorated modulation transfer function (MTF). Established HDRI algorithms exclude saturated pixels, but generally overlook IC. This work presents a calibration method to estimate the affected region from saturated pixels for a color filter array (CFA) sensor, using the native CFA as a matched filter. The method minimizes color crosstalk given a set of candidates for proximity regions, and requires no special setup. Results are shown for a 21‐bit HDR output image with improved color fidelity and reduced noise. The calibration reduces IC in the LDR images and is performed only once for a given sensor. The improvement is applicable to any HDRI algorithm based on CFA image bracketing, irrespective of sensor technology. Generalizations to subsaturated and supersaturated pixels are described, facilitating a suggested irradiance‐exposure dependent point spread function charge repatriation strategy.

17.
Recent breakthroughs in retinal imaging have raised new questions for color vision research, and existing color vision models should be re‐evaluated. Many color vision models assume that there are no differences in the detection phase, neither in the spatial configuration nor in the spectral sensitivities of cells. In this article, we run experiments with four different color vision models. This evaluation gives us more knowledge about the essential properties of the models. We show how well the tested color vision models replicate the behaviour of human color vision by evaluating their performance on the Farnsworth‐Munsell 100‐Hue color vision test. The wavelength discrimination power of each model is also presented, and the properties of the color spaces spanned by the models are examined using samples from the Munsell Book of Color. Our experiments show that there are large differences in the properties of the different models. © 2009 Wiley Periodicals, Inc. Col Res Appl, 34, 341–350, 2009

18.
Since the adoption of the CIELAB and CIELUV color spaces by the CIE in 1976, several other uniform spaces have been developed. We studied most of these spaces and evaluated their uniformity for small as well as larger color differences. To this end, several criteria were defined based on color discrimination data and appearance systems. The main difference between color spaces based on discrimination data and spaces that model appearance systems is reflected in a different size of the chroma distance unit relative to the lightness unit. If spaces based on the same kind of data (discrimination data or appearance systems) are compared with each other, they are all almost equally uniform. BFD(l:c), for example, is said to be more uniform than CMC(l:c), but, based on 65% confidence intervals, there is no significant difference between them. If the proposed color difference formula of the CIE is compared with these distance functions, it also performs equally well. The SVF and OSA 90 spaces, on the other hand, should be better than OSA 74. However, contrary to expectation, OSA 74 is slightly better; but here too the difference between the spaces is insignificant.

19.
The present article describes a color classification method that partitions a color image into a set of uniform color regions. The input image data are first mapped from device coordinates into the CIE L*a*b* color space, an approximately uniform perceptual color space. Colors used to represent a natural color image are classified by means of cluster detection in the uniform color space. The basic process of color classification is based on histogram analysis to detect color clusters sequentially. The principal components of a color distribution are extracted for effective discrimination of clusters. We present an algorithm for sequential detection of color clusters in the uniform color space, and the related algorithms for region processing and color computation. The performance of the method is discussed in an experiment using three kinds of natural color images.
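Sequential cluster detection can be sketched as repeatedly peeling off the densest remaining neighborhood of pixels. This simplified stand-in replaces the paper's histogram-peak analysis and principal-component discrimination with a brute-force density search, and the `radius`/`min_size` values are illustrative.

```python
import numpy as np

def sequential_clusters(lab_pixels, radius=15.0, min_size=10):
    """Sequentially detect color clusters in L*a*b*: take the densest
    remaining pixel's neighborhood as a cluster, record its mean as
    the cluster color, remove those pixels, and repeat."""
    remaining = lab_pixels.copy()
    centers = []
    while len(remaining) >= min_size:
        # densest pixel = the one with the most neighbors within radius
        d = np.linalg.norm(remaining[:, None] - remaining[None, :], axis=-1)
        counts = (d < radius).sum(axis=1)
        seed = counts.argmax()
        members = d[seed] < radius
        if members.sum() < min_size:
            break
        centers.append(remaining[members].mean(axis=0))
        remaining = remaining[~members]
    return np.array(centers)
```

Each detected center can then seed the region-processing stage that assigns image pixels to their nearest cluster color.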

20.
Comparing colour histograms of images has been shown to be a powerful technique for discriminating among large sets of images. However, these histograms depend not only on the properties of imaged objects but also on the illumination under which the objects are captured. If this illumination dependence is not accounted for prior to constructing the colour histograms, colour‐based image indexing will fail when illumination changes. This failure can be addressed by correcting the RGBs in an image to corresponding RGBs representing the same scene but under a standard reference illuminant prior to constructing the histograms. To perform this correction of RGBs, it is necessary to have a measurement or, more commonly, an estimate of the illumination in the original scene. Many authors have proposed illuminant estimation (or colour constancy) algorithms to obtain such an estimate. Unfortunately, the results of colour histogram matching experiments under varying illumination conditions have shown that existing estimation algorithms do not provide a sufficiently good estimate of the scene illuminant to enable this approach to work. In this article we report on the results of our repetition of those experiments, but this time using a new illuminant estimation algorithm—the so‐called color by correlation approach, which has been shown to afford significantly better performance than previous algorithms. The results of this new experiment show that when this new algorithm is used to preprocess images, a significant improvement in colour histogram matching performance is achieved. Indeed, performance is close to the theoretically optimal level of performance, that is, close to that which can be obtained using actual measurements of the scene illumination. © 2002 Wiley Periodicals, Inc. Col Res Appl, 27, 260–270, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.10064
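The correction step, mapping image RGBs so that the estimated illuminant goes to a reference white, is a diagonal (von Kries) scaling. In the sketch below, the simple grey-world estimator stands in for the far more accurate color-by-correlation approach discussed above.

```python
import numpy as np

def grey_world_estimate(img):
    """Grey-world illuminant estimate: the mean RGB of the scene.
    (Color by correlation is far more accurate; grey-world merely
    illustrates the estimate-then-correct pipeline.)"""
    return img.reshape(-1, 3).mean(axis=0)

def correct_to_reference(img, reference_white):
    """Diagonal (von Kries) correction: scale each channel so the
    estimated scene illuminant maps to the reference illuminant."""
    est = grey_world_estimate(img)
    return img * (reference_white / est)
```

Histograms built from the corrected RGBs are then comparable across captures taken under different illuminants, which is the precondition for reliable histogram matching.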


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号