Similar Articles
4 similar articles found (search time: 2 ms)
1.
Progressive addition lenses are a relatively new approach to compensate for defects of the human visual system. While traditional spectacles use rotationally symmetric lenses, progressive lenses require the specification of free-form surfaces. This poses difficult problems for the optimal design and its visual evaluation.
This paper presents two new techniques for the visualization of optical systems and the optimization of progressive lenses. Both are based on the same wavefront tracing approach to accurately evaluate the refraction properties of complex optical systems.
We use the results of wavefront tracing for continuously re-focusing the eye during rendering. Together with distribution ray tracing, this yields high-quality images that accurately simulate the visual quality of an optical system. The design of progressive lenses is difficult due to the trade-off between the desired properties of the lens and unavoidable optical errors, such as astigmatism and distortions. We use wavefront tracing to derive an accurate error functional describing the desired properties and the optical error across a lens. Minimizing this error yields optimal free-form lens surfaces.
While the basic approach is much more general, in this paper we describe its application to the particular problem of designing and evaluating progressive lenses, and we demonstrate the benefits of the new approach with several example images.
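The idea of minimizing an error functional over a free-form surface can be illustrated with a deliberately tiny sketch. Everything below is hypothetical and not the paper's implementation: the lens is reduced to a 1-D height profile, its finite-difference curvature stands in for optical power, and a smoothness penalty stands in for the astigmatism and distortion terms that the paper derives via wavefront tracing.

```python
# Hypothetical sketch: fit a 1-D lens height profile so its discrete
# curvature h''[i] tracks a target power ramp (distance zone -> reading
# zone), with a smoothness penalty as a crude stand-in for astigmatism
# and distortion terms. Minimized by plain gradient descent.

def curvature(h):
    # central second difference, grid spacing 1
    return [h[i - 1] - 2 * h[i] + h[i + 1] for i in range(1, len(h) - 1)]

def error_functional(h, target, smooth_weight=0.1):
    curv = curvature(h)
    power_err = sum((c - t) ** 2 for c, t in zip(curv, target))
    smooth_err = smooth_weight * sum(
        (h[i + 1] - h[i]) ** 2 for i in range(len(h) - 1))
    return power_err + smooth_err

def optimize_profile(target, iters=2000, lr=0.02, eps=1e-6):
    n = len(target) + 2          # two extra samples for the stencil
    h = [0.0] * n
    for _ in range(iters):
        e0 = error_functional(h, target)
        grad = []
        for i in range(n):       # coordinate-wise finite differences
            h[i] += eps
            grad.append((error_functional(h, target) - e0) / eps)
            h[i] -= eps
        h = [hi - lr * gi for hi, gi in zip(h, grad)]
    return h
```

The real design problem is 2-D, couples the functional to wavefront-traced refraction data, and needs far better optimizers than this, but the structure (error functional, then minimization) is the same.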

2.
Image quantization and digital halftoning are fundamental problems in computer graphics, arising when high-color images are displayed on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure.
In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient multiscale optimization algorithm is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithm are evaluated on a representative set of artificial and real-world images, as well as on a collection of icons. A significant improvement in image quality is observed compared to standard color reduction approaches.
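The joint cost-function view can be sketched in miniature. The toy below is hypothetical and much simpler than the paper's method: a 1-D grayscale signal, a fixed binary palette {0, 1}, a cost that blends per-pixel error with a local-average term (a crude stand-in for the perceptual low-pass model), and iterated conditional modes instead of the paper's multiscale algorithm.

```python
# Hypothetical toy of "dithered quantization" on a 1-D signal with a
# binary palette. The cost couples per-pixel quantization error with a
# 3-tap local-average term, so minimizing it trades off both at once.

def cost(signal, labels, w=4.0):
    # per-pixel quantization error
    c = sum((s - l) ** 2 for s, l in zip(signal, labels))
    # local averages should match too (perceived tone after blurring)
    for i in range(1, len(signal) - 1):
        s_avg = (signal[i - 1] + signal[i] + signal[i + 1]) / 3
        l_avg = (labels[i - 1] + labels[i] + labels[i + 1]) / 3
        c += w * (s_avg - l_avg) ** 2
    return c

def dithered_quantize(signal, sweeps=10):
    labels = [round(s) for s in signal]       # plain threshold start
    for _ in range(sweeps):
        for i in range(len(labels)):
            # iterated conditional modes: pick the label that minimizes
            # the global cost with all other labels held fixed
            labels[i] = min((0, 1), key=lambda v: cost(
                signal, labels[:i] + [v] + labels[i + 1:]))
    return labels
```

Because each update only ever keeps or lowers the cost, the result is never worse (under this cost) than independent per-pixel thresholding, which is exactly the point the abstract makes about optimizing quantization and dithering jointly.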

3.
A good image metric is often needed in digital image synthesis. It can be used to check the convergence behavior of progressive methods, to compare images produced by different rendering methods, and so on. Since images are rendered to be observed by humans, an image metric should also correspond to human perception. We propose a new algorithm that operates in the original image space; no Fourier or wavelet transform is needed. Furthermore, the new metric is viewing-distance dependent and uses the contrast sensitivity function. The main idea is to place a number of rectangles of various sizes in the images and to compute the CIE LUV average color difference between corresponding rectangles. Errors are then weighted according to the rectangle size and the contrast sensitivity function.
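The rectangle idea is simple enough to sketch. The version below is a hypothetical simplification: it compares per-channel average colors inside corresponding rectangles and weights each difference by rectangle area, whereas the actual metric works in CIE LUV and weights by the contrast sensitivity function at the given viewing distance.

```python
# Hypothetical sketch of the rectangle-based metric: average the color
# inside each rectangle of both images, take the Euclidean difference of
# the averages, and weight by rectangle size. Images are row-major lists
# of (r, g, b) tuples; rectangles are (x0, y0, x1, y1), half-open.

def avg_color(img, x0, y0, x1, y1):
    n = (x1 - x0) * (y1 - y0)
    total = [0.0, 0.0, 0.0]
    for y in range(y0, y1):
        for x in range(x0, x1):
            for c in range(3):
                total[c] += img[y][x][c]
    return [t / n for t in total]

def rect_metric(img_a, img_b, rects):
    err = 0.0
    for (x0, y0, x1, y1) in rects:
        ca = avg_color(img_a, x0, y0, x1, y1)
        cb = avg_color(img_b, x0, y0, x1, y1)
        diff = sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5
        area = (x1 - x0) * (y1 - y0)
        err += area * diff   # the CSF weight would also enter here
    return err
```

Averaging before differencing is what makes the metric cheap and transform-free: large rectangles capture low spatial frequencies, small ones high frequencies, so the set of rectangle sizes plays the role a frequency decomposition would otherwise play.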

4.
Computing the visibility of outdoor scenes is often much harder than that of indoor scenes. A typical urban scene, for example, is densely occluded, and it is effective to precompute its visibility space, since from a given point only a small fraction of the scene is visible. The difficulty is that although the majority of objects are hidden, some parts might be visible at a distance from an arbitrary location, and it is not clear how to detect them quickly. In this paper we present a method to partition the viewspace into cells, each associated with a conservative superset of the objects visible from it. For a given cell the method tests the visibility of all the objects in the scene. For each object it searches for a strong occluder, which guarantees that the object is not visible from any point within the cell. We show analytically that in a densely occluded scene the vast majority of objects are strongly occluded, and the overhead of using conservative visibility (rather than exact visibility) is small. These results are further supported by our experimental results. We also analyze the cost of the method and discuss its effectiveness.
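The strong-occluder test can be sketched in 2-D. The code below is a hypothetical reduction, not the paper's algorithm: the cell is a convex polygon, the occluder a single wall segment, and the object a point. Since the set of viewpoints from which a convex occluder hides a given point is itself convex, it suffices to check that the sight line from every cell corner is blocked; this then holds for all interior viewpoints too, which is what makes the answer conservative-safe.

```python
# Hypothetical 2-D strong-occluder check: a point object is declared
# strongly occluded from a convex view cell by one wall segment if the
# segment properly crosses the sight line from every cell corner.

def cross(o, u, v):
    """2-D cross product of vectors o->u and o->v."""
    return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

def blocks(viewpoint, obj, a, b):
    """True if segment a-b properly crosses segment viewpoint-obj."""
    d1 = cross(a, b, viewpoint)
    d2 = cross(a, b, obj)
    d3 = cross(viewpoint, obj, a)
    d4 = cross(viewpoint, obj, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def strongly_occluded(cell_corners, occluder, obj):
    a, b = occluder
    return all(blocks(c, obj, a, b) for c in cell_corners)
```

For example, with a unit-square cell and a tall wall at x = 2, a point far behind the wall is reported strongly occluded, while a point above the wall's shadow is not; the real method does this in 3-D and searches among candidate occluders per object.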


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号