Search results: 23 articles found.
1.
We advocate the use of quickly-adjustable, computer-controlled color spectra in photography, lighting and displays. We present an optical relay system that allows mechanical or electronic color spectrum control and use it to modify a conventional camera and projector. We use a diffraction grating to disperse the rays into different colors, and introduce a mask (or LCD/DMD) in the optical path to modulate the spectrum. We analyze the trade-offs and limitations of this design, and demonstrate its use in a camera, projector and light source. We propose applications such as adaptive color primaries, metamer detection, scene contrast enhancement, photographing fluorescent objects, and high dynamic range photography using spectrum modulation.
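A minimal sketch of the spectral modulation this relay enables: once the grating has dispersed the light into a "rainbow plane", the mask acts as a per-wavelength transmission applied to the source's spectral power distribution. The wavelength grid, the `modulate_spectrum` helper and the notch example below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): model the "mask in the
# rainbow plane" as a per-wavelength transmission curve applied to the
# spectral power distribution (SPD) of the light passing through the relay.

wavelengths = np.arange(400, 701, 5)          # nm, visible range (assumption)

def modulate_spectrum(spd, transmission):
    """Elementwise attenuation of an SPD by a programmable mask.

    spd          : spectral power per wavelength bin
    transmission : values in [0, 1], one per wavelength bin
                   (binary for a mechanical slide, continuous for an LCD/DMD)
    """
    return spd * np.clip(transmission, 0.0, 1.0)

# Example: a flat white source with a notch that suppresses 570-590 nm,
# e.g. to separate two metameric materials that differ only in that band.
white = np.ones_like(wavelengths, dtype=float)
notch = np.where((wavelengths >= 570) & (wavelengths <= 590), 0.0, 1.0)
filtered = modulate_spectrum(white, notch)
print(filtered.sum() / white.sum())           # fraction of power transmitted
```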
2.
We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene in post-processing. In contrast, a lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype which provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
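A toy forward model of the multiplexing idea (an illustration under assumed sizes and random codes, not the authors' optical design): one snapshot integrates the time-varying light field against a per-sub-exposure aperture code and a static near-sensor mask.

```python
import numpy as np

# Minimal forward-model sketch: a single exposure integrates a time-varying
# light field L(x, u, t) weighted by a dynamic aperture code A(u, t) and a
# static mask S(x) near the sensor. All sizes and codes below are assumptions.

rng = np.random.default_rng(0)
X, U, T = 64, 8, 8                      # spatial, angular, temporal samples
L = rng.random((X, U, T))               # toy time-varying light field
A = rng.integers(0, 2, (U, T)).astype(float)   # dynamic aperture code per sub-exposure
S = rng.random(X)                       # static near-sensor mask

# One snapshot: integrate over angle and time with the two codes applied.
sensor = np.einsum('xut,ut,x->x', L, A, S)
print(sensor.shape)                     # (64,) -- one coded 1D "photo"
# Post-processing would invert this multiplexing per region, trading spatial,
# angular and temporal resolution depending on local scene redundancy.
```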
3.
Analyzing the polarimetric properties of reflected light is a potential source of shape information. However, it is well known that polarimetric information contains fundamental shape ambiguities, leading to an underconstrained problem of recovering 3D geometry. To address this problem, we use additional geometric information, from coarse depth maps, to constrain the shape information from polarization cues. Our main contribution is a framework that combines surface normals from polarization (hereafter polarization normals) with an aligned depth map. The additional geometric constraints are used to mitigate physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We believe our work may have practical implications for optical engineering, demonstrating a new option for state-of-the-art 3D reconstruction.
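A hedged sketch of the polarization cue and the azimuthal ambiguity it leaves, together with a toy disambiguation against a coarse depth map. The `polarization_azimuth` and `disambiguate` helpers and the sample intensities are hypothetical; the paper's full physics-based optimization is not reproduced.

```python
import numpy as np

def polarization_azimuth(i0, i45, i90):
    """Angle of linear polarization (AoLP) from three polarizer orientations.
    The returned phase is only known modulo pi (the azimuthal ambiguity)."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of polarization
    aolp = 0.5 * np.arctan2(s2, s1)                         # in (-pi/2, pi/2]
    return aolp, dolp

def disambiguate(aolp, coarse_azimuth):
    """Pick aolp or aolp+pi, whichever is closer (on the circle) to the
    azimuth implied by an aligned coarse depth map."""
    cand = np.stack([aolp, aolp + np.pi], axis=0)
    d = np.abs(np.angle(np.exp(1j * (cand - coarse_azimuth))))
    return np.where(d[0] <= d[1], cand[0], cand[1])

# Toy example: hypothetical intensities for a surface whose true azimuth is ~2.0 rad.
aolp, dolp = polarization_azimuth(np.array(0.8), np.array(0.3), np.array(0.2))
print(float(disambiguate(aolp, coarse_azimuth=2.0)), float(dolp))
```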
4.
We present a novel algorithm for generating the mean structure of non-rigid stretchable shapes. Following an alignment process, which supports local affine deformations, we translate the search for the mean shape into a diagonalization problem in which the structure is hidden within the kernel of a matrix. This is the first step required in many practical applications where one needs to model bendable and stretchable shapes from multiple observations.
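The abstract only states that the mean structure lies in the kernel of a matrix built after alignment, so the sketch below shows just the generic numerical step one might use to extract that kernel (an SVD null-space computation) on a toy rank-deficient matrix; the construction of the matrix itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    """Return an orthonormal basis of the (numerical) kernel of M."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    return vt[rank:].T                      # columns span the kernel

# Toy example: a rank-deficient matrix with a one-dimensional kernel.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
basis = null_space_basis(M)
print(basis.shape)                          # (3, k) with k = kernel dimension
print(np.allclose(M @ basis, 0.0))          # True -- vectors lie in the kernel
```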
5.
Our method for capturing geometric features of real-world scenes relies on a simple modification of the capture setup, and the system could conceivably be packaged into a portable, self-contained device. The multiflash imaging method bypasses 3D geometry acquisition and directly acquires depth edges from images. In place of expensive, elaborate equipment for geometry acquisition, we use a camera with multiple strategically positioned flashes. Instead of having to estimate the full 3D coordinates of points in the scene (using, for example, 3D cameras) and then look for depth discontinuities, our technique reduces the general 3D problem of depth edge recovery to one of 2D intensity edge detection. Our method could, in fact, help improve current 3D cameras, which tend to produce incorrect results near depth discontinuities. Exploiting the imaging geometry for rendering provides a simple and inexpensive solution for creating stylized images from real scenes. We believe that our camera will be a useful tool for professional artists and photographers, and we expect that it will also let the average user easily create stylized imagery. This article is available with a short video documentary on CD-ROM.
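A toy 1D sketch of the depth-edge idea, under the simplifying assumptions noted in the comments (the full method reasons about each flash's direction and works in 2D): divide each flash image by the per-pixel maximum over all flash images to isolate the cast shadows, then detect the sharp drop attached to each depth discontinuity. The function name and the toy scene are illustrative.

```python
import numpy as np

def depth_edges_1d(flash_images, threshold=0.5):
    """flash_images: list of 1D intensity profiles, one per flash position.
    Returns a boolean array marking candidate depth-edge pixels."""
    stack = np.stack(flash_images, axis=0)
    max_img = stack.max(axis=0) + 1e-9
    edges = np.zeros(stack.shape[1], dtype=bool)
    for ratio in stack / max_img:
        # A sharp drop in the ratio image signals a cast shadow attached to a
        # depth discontinuity (per-flash direction handling is omitted here).
        drop = ratio[:-1] - ratio[1:]
        edges[:-1] |= drop > threshold
    return edges

# Toy scene: a foreground strip over pixels 4..7; the "left flash" image has a
# shadow at pixel 8, the "right flash" image at pixel 3 (values are made up).
left  = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0.1, 1, 1, 1.0])
right = np.array([1, 1, 1, 0.1, 1, 1, 1, 1, 1, 1, 1, 1.0])
print(np.flatnonzero(depth_edges_1d([left, right])))   # pixels next to the shadows
```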
6.
Interaction using a handheld projector
Handheld projection lets users create opportunistic displays on any suitable nearby surface, and it is a practical and useful addition to mobile computing. Incorporating a projector into a handheld device raises the issue of how to interact with the projection. The focus of this article is interactive projection: a technique for moving a cursor across a projection that allows all the familiar mouse interactions from the desktop to be transposed to a handheld projector, requiring only natural, one-handed pointing motion by the user. This article is available with a short video documentary on CD-ROM.
7.
We present a real-time framework which allows interactive visualization of relativistic effects for time-resolved light transport. We leverage data from two different sources: real-world data acquired with an effective exposure time of less than 2 picoseconds, using an ultra-fast imaging technique termed femto-photography, and a transient renderer based on ray tracing. We explore the effects of time dilation, light aberration, frequency shift and radiance accumulation by modifying existing models of these relativistic effects to take into account the time-resolved nature of light propagation. Unlike previous works, we do not impose limiting constraints on the visualization, allowing the virtual camera to freely explore a reconstructed 3D scene depicting dynamic illumination. Moreover, we consider not only linear motion, but also acceleration and rotation of the camera. We further introduce, for the first time, a pinhole camera model into our relativistic rendering framework, and account for the subsequent changes in focal length and field of view as the camera moves through the scene.
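For concreteness, a sketch of the textbook per-ray transformations such a renderer applies, under the convention stated in the comments; the functions and example numbers are illustrative assumptions, and the coupling with time-resolved propagation that the paper introduces is not shown.

```python
import numpy as np

# Convention assumed here: the camera moves with speed beta (in units of c)
# along +x relative to the scene, and theta is the angle between a photon's
# propagation direction and +x, measured in the scene frame.

def aberration(cos_theta, beta):
    """Photon direction as seen in the moving camera frame."""
    return (cos_theta - beta) / (1.0 - beta * cos_theta)

def doppler(nu, cos_theta, beta):
    """Observed frequency in the moving camera frame."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * nu * (1.0 - beta * cos_theta)

def transformed_radiance(L_nu, nu, nu_obs):
    """Spectral radiance transforms so that L_nu / nu^3 is Lorentz invariant."""
    return L_nu * (nu_obs / nu) ** 3

# Light arriving head-on (propagating along -x) while the camera moves at 0.5c:
nu = 5.45e14                               # ~550 nm, green
nu_obs = doppler(nu, cos_theta=-1.0, beta=0.5)
print(nu_obs / nu)                         # ~1.73: blueshift
print(aberration(-1.0, 0.5))               # still -1: head-on stays head-on
```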
8.
We introduce a new approach to capturing refraction in transparent media, which we call light field background oriented Schlieren photography. By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Our prototype capture setup consists of inexpensive off-the-shelf hardware, including inkjet-printed transparencies, lenslet arrays, and a conventional camera. By carefully encoding the color and intensity variations of 4D light field probes, we show how to code both spatial and angular information of refractive phenomena. Such coding schemes are demonstrated to allow for a new, single-image approach to reconstructing transparent surfaces, such as thin solids or surfaces of fluids. The captured visual information is used to reconstruct refractive surface normals and a sparse set of control points independently from a single photograph.
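A deliberately rough sketch of the decoding step, under a thin, single-refraction approximation that is an assumption here rather than the paper's model: the probe's color code yields a per-pixel ray deflection relative to the undistorted reference, and for small slopes that deflection is roughly proportional to the surface gradient, which can then be integrated into a height field.

```python
import numpy as np

N_REFR = 1.33                              # refractive index of water (assumed)

def surface_from_deflection(deflection, dx=1.0, n=N_REFR):
    """1D toy reconstruction: per-pixel deflection [rad] -> height profile.
    Uses the small-angle, thin-prism relation deflection ~ (n - 1) * slope."""
    slope = deflection / (n - 1.0)
    return np.cumsum(slope) * dx           # integrate slope into height

# Hypothetical decoded deflections along one scanline (e.g. a hue-to-angle lookup
# applied to the captured probe colors, minus the reference ray angles):
deflection = 0.01 * np.sin(np.linspace(0, 2 * np.pi, 50))
height = surface_from_deflection(deflection)
print(height.min(), height.max())
```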
9.
Ray-based representations can model complex light transport but are limited in modeling diffraction effects, which require the simulation of wavefront propagation. This paper provides a new paradigm that has the simplicity of light path tracing and yet provides an accurate characterization of both Fresnel and Fraunhofer diffraction. We introduce the concept of a light field transformer at the interface of transmissive occluders. This generates mathematically sound, virtual, and possibly negative-valued light sources after the occluder. From a rendering perspective, the only change required is simple: radiance can be temporarily negative. We demonstrate the correctness of our approach both analytically and by comparing values against standard physics experiments such as Young's double slit. Our implementation is a shader program in OpenGL that can generate wave effects on arbitrary surfaces.
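As a reference for such a validation, the analytic Fraunhofer pattern for Young's double slit that the rendered values can be compared against; the slit geometry and wavelength below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def double_slit_intensity(theta, a, d, lam):
    """Normalized far-field intensity: single-slit envelope times two-slit fringes.
    a = slit width, d = slit separation, lam = wavelength, theta = viewing angle."""
    s = np.sin(theta)
    envelope = np.sinc(a * s / lam) ** 2      # np.sinc(x) = sin(pi x)/(pi x)
    fringes = np.cos(np.pi * d * s / lam) ** 2
    return envelope * fringes

a, d, lam = 40e-6, 250e-6, 550e-9             # 40 um slits, 250 um apart, green light
print(double_slit_intensity(0.0, a, d, lam))              # 1.0 at the central maximum
print(double_slit_intensity(lam / (2 * d), a, d, lam))     # ~0 at the first dark fringe
print(lam / d)                                             # fringe spacing, ~2.2 mrad
```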
10.
Traditional stereo matching algorithms are limited in their ability to produce accurate results near depth discontinuities, due to partial occlusions and violation of smoothness constraints. In this paper, we use small-baseline multi-flash illumination to produce a rich set of feature maps that enable acquisition of discontinuity-preserving point correspondences. First, from a single multi-flash camera, we compute a qualitative depth map using a gradient-domain method that encodes relative object distances. Then, in a multiview setup, we exploit the shadows created by the light sources to compute an occlusion map. Finally, we demonstrate the usefulness of these feature maps by incorporating them into two different dense stereo correspondence algorithms, the first based on local search and the second based on belief propagation. Experimental results show that our enhanced stereo algorithms are able to extract high-quality, discontinuity-preserving correspondence maps from scenes that are extremely challenging for conventional stereo methods. We also demonstrate that small-baseline illumination can be useful for handling specular reflections in stereo imagery. Unlike most existing active illumination techniques, our method is simple, inexpensive, compact, and requires no calibration of light sources.
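A simplified 1D sketch of how a depth-edge map can be folded into local-search stereo (the `edge_aware_disparity` helper and the toy scene are assumptions; the paper's algorithms, including the belief-propagation variant, are more involved): cost-aggregation windows are not allowed to straddle a detected depth edge, so matching near discontinuities does not mix foreground and background pixels.

```python
import numpy as np

def edge_aware_disparity(left, right, edges, max_disp=4, radius=2):
    """left, right: 1D intensity signals; edges[i]=True marks a depth edge
    between pixels i and i+1 of `left`. Returns an integer disparity per pixel."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for x in range(n):
        # Grow the aggregation window around x, but stop at depth edges.
        lo = x
        while lo > 0 and lo > x - radius and not edges[lo - 1]:
            lo -= 1
        hi = x
        while hi < n - 1 and hi < x + radius and not edges[hi]:
            hi += 1
        best, best_d = np.inf, 0
        for d in range(max_disp + 1):
            if lo - d < 0:
                continue
            cost = np.abs(left[lo:hi + 1] - right[lo - d:hi + 1 - d]).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp

# Toy scene: a bright foreground strip shifted by 2 px between views, with depth
# edges at its boundaries (as the multi-flash pass would provide).
left  = np.array([0, 0, 0, 5, 5, 5, 0, 0, 0, 0], dtype=float)
right = np.array([0, 5, 5, 5, 0, 0, 0, 0, 0, 0], dtype=float)
edges = np.zeros(10, dtype=bool); edges[2] = edges[5] = True
print(edge_aware_disparity(left, right, edges))   # disparity 2 on the strip, 0 elsewhere
```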