Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
Riza Mehdi, Hao Guangbo 《Microsystem Technologies》2019, 25(8):3185-3191

This paper presents the design of an improved tip-tilt-piston compliant/flexure motion stage for steering a light beam. The motion stage is actuated by three linear stepper motors under open-loop control. Using a laser and optical setup, the completed device was tested by steering a laser beam, demonstrating its range of movement and level of precision. The testing proved that the new motion stage system has a maximum bidirectional rotation range of at least 2.89° with a precision and repeatability of 0.0213°, demonstrating micro-positioning ability.


2.
Distortion is the biggest problem with fisheye lenses. To address it, this paper proposes a distortion-correction algorithm for fisheye lenses based on a dual-ellipse model, which improves fisheye distortion while guaranteeing real-time output. The fisheye image is edge-scanned and edge-detected, and linear fitting is used to obtain the optical center and radius of the fisheye image. The dual-ellipse model then establishes the mapping between the image before and after correction, and GPU acceleration is invoked to achieve real-time output. Experimental comparisons show that the distortion caused by the fisheye lens is corrected while real-time output is maintained.
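The dual-ellipse mapping itself is not given in the abstract; as background, here is a minimal sketch (with assumed intrinsics) of the standard equidistant fisheye back-projection, r = f·θ, that radial correction schemes of this kind build on:

```python
import math

def undistort_point(u, v, cx, cy, f):
    """Map a pixel (u, v) of an equidistant fisheye image (r = f * theta)
    to the rectilinear (pinhole) image with the same focal length.
    (cx, cy) is the optical centre and f the focal length in pixels --
    the kind of quantities the paper recovers by edge scanning and
    linear fitting."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)            # radial distance in the fisheye image
    if r < 1e-9:
        return cx, cy                 # the optical centre maps to itself
    theta = r / f                     # incidence angle under r = f * theta
    r_pinhole = f * math.tan(theta)   # pinhole radius for the same angle
    scale = r_pinhole / r
    return cx + dx * scale, cy + dy * scale

# Away from the centre the corrected point moves outward, since tan(t) > t:
u2, v2 = undistort_point(500.0, 300.0, 320.0, 240.0, 300.0)
```

A full corrector would evaluate this mapping per output pixel and resample, which is the part the paper offloads to the GPU.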

3.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a light‐weight and low‐cost omnidirectional perception system, which consists of two ultrawide field‐of‐view (FOV) fisheye cameras and a low‐cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward directions and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and whole spherical for monocular. We present a novel optimization‐based dual‐fisheye visual‐inertial state estimator to provide highly accurate state‐estimation. Real‐time omnidirectional three‐dimensional (3D) mapping is combined with stereo‐based depth perception for the horizontal direction and monocular depth perception for upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed‐loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate individual modules as well as the overall system in both indoor and outdoor environments.

4.
This paper proposes an approximate model of a fisheye camera based on optical refraction. The fisheye lens model is first derived from optical refraction and the structure of fisheye lenses. Second, a suitable linearization of the fisheye model is developed to obtain an approximate model with two parameters. Finally, an estimation algorithm for the model parameters is presented using the epipolar constraint between two fisheye images. Extensive experiments with synthetic data and real fisheye images are provided. First, the feasibility of the approximate model is tested by fitting five common fisheye lens design models with synthetic data. Two groups of experiments with real fisheye images are then performed to estimate the model parameters. In practical situations, this method can automatically establish image correspondences using an improved random sample consensus algorithm without calibration objects.

5.
6.
The miniaturization and integration of in-plane micro-lenses into microfluidic networks for improving fluorescence detection has been widely investigated recently. This article describes the design and demonstration of an optofluidic in-plane bi-concave lens to perform both light focusing and diverging. The concave lens is hydrodynamically formed in a rectangular chamber with a liquid-core liquid-cladding (L2) configuration. In the focusing mode, an auxiliary cladding stream is introduced to sandwich the L2 configuration for protecting the light rays from scattering at the rough chamber wall. In the diverging mode, the auxiliary cladding liquid changes its role from avoiding light-scattering to being the low-refractive-index cladding of the lens. The focal length in the focusing mode and the divergent angle of the light beam in the diverging mode can be tuned by adjusting the flow rate ratio between core and cladding streams.

7.
《Advanced Robotics》2013,27(8-9):947-967
Abstract

A wide field of view is required for many robotic vision tasks. Such an aperture may be acquired with a fisheye camera, which provides a full image compared to catadioptric visual sensors and does not increase the size or the fragility of the imaging system with respect to perspective cameras. While a unified model exists for all central catadioptric systems, many different models approximating the radial distortions exist for fisheye cameras. This paper shows that the unified projection model proposed for central catadioptric cameras is also valid for fisheye cameras in the context of robotic applications. This model consists of a projection onto a virtual unitary sphere followed by a perspective projection onto an image plane, and is shown to be equivalent to almost all fisheye models. Calibration with four cameras and partial Euclidean reconstruction are performed using this model and lead to persuasive results. Finally, an application to a mobile robot navigation task is proposed and correctly executed along a 200-m trajectory.
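The unified model described above, projection onto a unit sphere followed by a perspective projection from a centre shifted by a parameter ξ, can be sketched in a few lines (ξ and the intrinsics below are illustrative values, not the paper's calibration results):

```python
import math

def unified_project(X, xi, fx, fy, cx, cy):
    """Unified (sphere) model: project the 3-D point X onto a unit sphere,
    then perspectively from a centre displaced by xi along the sphere axis.
    xi = 0 degenerates to the plain pinhole model."""
    x, y, z = X
    n = math.sqrt(x * x + y * y + z * z)
    xs, ys, zs = x / n, y / n, z / n      # point on the unit sphere
    denom = zs + xi                       # projection centre at (0, 0, -xi)
    return fx * xs / denom + cx, fy * ys / denom + cy

# With xi = 0 this is plain perspective projection: (f*x/z, f*y/z).
u, v = unified_project((1.0, 2.0, 10.0), 0.0, 100.0, 100.0, 0.0, 0.0)
```

Choosing xi > 0 reproduces the radial compression characteristic of fisheye and catadioptric images, which is why one model can cover both families.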

8.
In this paper, we propose a liquid prism based on electrowetting-on-dielectric and the gravity effect. The device is filled with two immiscible liquids: the conductive liquid fills the upper part of the device, and the oil fills the bottom. Unlike a conventional liquid prism, the density of the conductive liquid is larger than that of the oil. The liquid–liquid interface forms a tilted shape when different voltages are applied to the opposite sidewalls, and the beam is deflected when light passes through the device. Our results show that the tilt angle of the liquid–liquid interface is ~45°, and the light beam can be deflected by 22.2° (−10.9° to +11.3°). The actuation time of the device is ~240 ms. The proposed liquid prism for light-beam steering and tracking has potential applications in laser direction, telecommunications, and scanners.
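The deflection at the tilted liquid–liquid interface is governed by Snell's law. A minimal sketch follows; the refractive indices are assumed round numbers and refraction at the flat device windows is neglected, so the figure is only the in-liquid deviation, not the paper's full measured 22.2° swing:

```python
import math

def interface_deflection(n1, n2, tilt_deg):
    """In-liquid deviation of a vertical ray crossing a liquid-liquid
    interface tilted by tilt_deg from horizontal, via Snell's law
    n1*sin(i) = n2*sin(r). Window refraction is ignored (a simplifying
    assumption, not the paper's full model)."""
    i = math.radians(tilt_deg)               # incidence angle equals tilt
    r = math.asin(n1 / n2 * math.sin(i))     # refraction angle
    return math.degrees(i - r)               # deviation from vertical

# Assumed indices: water-like conductive liquid (~1.33) into oil (~1.47),
# across the ~45-degree interface reported in the abstract.
dev = interface_deflection(1.33, 1.47, 45.0)
```

Refraction at the exit window into air further amplifies this deviation, which is how the device reaches its larger overall steering angle.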

9.
Abstract— Microdisplays, whether of the liquid-crystal-on-silicon (LCOS) or organic light-emitting diode (OLED) type, have until now been used mainly in multimedia applications or head-mounted displays. Thanks to their interesting possibilities, these displays are opening up more and more alternative applications, for example in optical metrology. Projection lenses for this application area need to be specially designed because the requirements on these systems differ completely from those for multimedia applications. The lenses must have very low geometrical image distortion and must be adapted to small object and/or image distances. On the other hand, they often work with light sources of small spectral bandwidth and consequently do not need to be corrected for chromatic aberrations. In addition, the numerical aperture has to be large enough to collect and transfer as much light as possible, while the size of the projection lens has to be as small as possible to ensure compact measurement systems. All these requirements lead to a compromise in optical lens design. Three optical system designs and realizations, one with an OLED microdisplay and two with an LCOS microdisplay, are presented.

10.
In an embedded system, a fisheye lens is used to achieve panoramic vision for real-time target tracking. An FPGA+DSP+PC hardware architecture serves as the recognizer: an SAA7113H chip captures the images, which the FPGA then stores into SRAM. Because images captured through a fisheye lens are distorted, the FPGA is used to rectify them, and Freeman chain codes are used for target recognition, in particular line recognition. These tasks require heavy computation, so an efficient FPGA together with IP cores provided by Altera accelerates target recognition, achieving the goal of target tracking.

11.
Abstract— The modification of the properties of existing LCs by doping them with ferroelectric micro‐ and nano‐particles will be reported. This approach, in contrast to the conventional time‐consuming and expensive chemical synthetic methods, enriches and enhances the electro‐optical performance of many liquid‐crystal materials. The effect of the ferroelectric particles on the nematic, smectic, and cholesteric phases will be discussed. The performance of these new composite systems in various devices, including displays, light modulators, and beam‐steering devices, will be reported.

12.
Visual environment perception plays a key role in the development of autonomous vehicles and is also widely used in smart rear-view mirrors, parking radar, 360° surround view, dashcams, collision warning, traffic-light recognition, lane-departure warning, lane-change assistance, and automatic parking. Environmental information is traditionally acquired with narrow-angle pinhole cameras, whose limited field of view leaves blind spots. One solution is to perceive the environment with fisheye lenses: a wide-angle view provides a full 180° hemispherical field, so in theory only two cameras are needed to cover 360°, supplying visual perception with more information. There are currently two main approaches to processing surround-view images. The first rectifies the image to remove distortion; its drawback is that de-distortion degrades image quality and loses information. The second models the deformed fisheye image directly, but no modeling method with good results exists yet. In addition, the scarcity of surround-view fisheye image datasets is a major obstacle to related research. Addressing these challenges, this paper surveys research on surround-view fisheye images, including rectification of surround-view fisheye images, object detection in surround-view fisheye images, semantic segmentation in surround-view fisheye images, methods for generating pseudo-surround-view fisheye datasets, and other fisheye image modeling methods. Against the application background of environment perception for autonomous vehicles, it analyzes the efficiency of these models and the strengths and weaknesses of these processing methods, gives a detailed introduction to the publicly available general-purpose surround-view fisheye datasets, and offers predictions and prospects for the open problems and future research directions in surround-view fisheye imaging.

13.
Abstract— Using a single-lens digital camera with an attached 180° fisheye lens, the incident light at the surface of a mobile display was measured in 112 different environments, including outdoor and indoor environments and the inside of an automobile. The data were analyzed for some typical environments in which mobile displays are used. The results of this research can be used in the design of reflective and transflective LCDs, making maximum use of ambient light.

14.
The sensorial system developed is based on the emission of an infrared beam, recovering the reflected beam and measuring distances to significant points in the environment. This system is able to detect and model obstacles in unknown environments. Features of the capture system allow large fields of view to be captured at short distances, aiding the robot's mobility. Several algorithms are used to find the formation centre of the image and so model the distortion introduced by the wide-angle lens. Parameters of the optical model are inserted into the calibration matrix to obtain the camera model. We also present an algorithm that extracts the points of the image that belong to the laser beam. All of the above works in unknown environments with variable illumination conditions. The robot's trajectory is obtained and modified in real time, with a spline function, using four different reference systems. Finally, empirical tests have been carried out on a mobile platform, using a CCD camera with wide-angle lenses of 65° and 110°, a 15 mW laser emitter and a frame grabber for image processing.

15.
Optics technology is being increasingly used in mainstream industrial and research domains such as terrestrial telescopes, biomedical imaging and optical communication. One of the most widely used modeling approaches for such systems is Gaussian optics, which describes light as a beam. In this paper, we propose to use higher-order-logic theorem proving for the analysis of Gaussian optical systems. In particular, we present the formalization of Gaussian beams and verify the corresponding properties such as beam transformation, beam waist radius and location. Consequently, we build formal reasoning support for the analysis of quasi-optical systems. In order to demonstrate the effectiveness of our approach, we present a case study about the receiver module of a real-world Atacama Pathfinder Experiment (APEX) telescope.
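The Gaussian-beam quantities the paper formalizes, such as the beam radius and waist, follow the standard textbook relations w(z) = w0·sqrt(1 + (z/zR)²) with Rayleigh range zR = π·w0²/λ. A small numeric sketch (the wavelength and waist values are illustrative, not from the paper's case study):

```python
import math

def beam_radius(w0, z, wavelength):
    """Gaussian beam radius w(z) = w0 * sqrt(1 + (z/zR)^2), with
    Rayleigh range zR = pi * w0**2 / wavelength. All lengths in metres."""
    zR = math.pi * w0 ** 2 / wavelength
    return w0 * math.sqrt(1.0 + (z / zR) ** 2)

# Assumed example: a 1 mm waist at a 1.55 um wavelength. At z = zR the
# beam radius has grown by exactly sqrt(2) over the waist radius.
w0 = 1e-3
wavelength = 1.55e-6
zR = math.pi * w0 ** 2 / wavelength
w_at_zR = beam_radius(w0, zR, wavelength)
```

These are exactly the kinds of closed-form identities (waist radius, waist location, transformation through optics) that lend themselves to verification in a higher-order-logic theorem prover.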

16.
Abstract

A calculation of the Field Of View (FOV) profile of the Non-Imaging Tubular Flux Collector (NITFC) is presented and the optical efficiency of the system is examined with reference to its application in hand-portable ground-based radiometers. FOVs are calculated for both circular and square detectors and non-intuitive results are obtained. The effect of the optical properties upon the choice of filters is also considered.

17.
Many computational imaging applications involve manipulating the incoming light beam in the aperture and image planes. However, accessing the aperture, which is conventionally located inside the imaging lens, is still challenging. In this paper, we present an approach that provides access to the aperture plane and enables dynamic control of its transmissivity, position, and orientation. Specifically, we present two kinds of compound imaging systems (CIS), CIS1 and CIS2, which reposition the aperture in front of and behind the imaging lens, respectively. CIS1 repositions the aperture plane in front of the imaging lens and enables dynamic control of the light beam entering the lens; this control is quite useful in panoramic imaging from a single viewpoint. CIS2 uses a rear-attached relay system (lens) to reposition the aperture plane behind the imaging lens, and enables dynamic control of the imaging light formed jointly by the imaging lens and the relay lens. In this way, the common imaging beam can be coded or split in the aperture plane to achieve many imaging functions, such as coded-aperture imaging, high-dynamic-range (HDR) imaging and light-field sampling. In addition, CIS2 repositions the aperture behind, instead of inside, the relay lens, which allows an optimized relay lens to be employed to preserve high imaging quality. Finally, we present physical implementations of CIS1 and CIS2 to demonstrate (1) their effectiveness in providing access to the aperture and (2) the advantages of aperture manipulation in computational imaging applications.

18.

In a conventional steering system for a multi-axle crane, the steering angle of each axle is determined according to Ackermann's steering principle, which minimizes the slip angles of the tires. The role of optimal steering control in improving a driver's steering efficiency is hardly considered in Ackermann's principle. To address this problem, this paper proposes a control strategy that determines the optimal steering angles for a multi-axle crane, and thereby improves a driver's steering efficiency, by applying the model predictive control (MPC) algorithm and defining the driver's intentions. A simplified crane model for the steering system was developed using a bicycle model, and a comparative simulation study was carried out to analyze the steering performance of the conventional (Ackermann) and proposed steering control systems in all-wheel steering and road steering modes. The simulation results show that both the minimum turning radius and the driver's steering effort are decreased more by the proposed steering control system than by the conventional system, and the proposed control strategy therefore yields better steering performance.

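Ackermann's principle, the baseline the paper compares against, requires every wheel axis to pass through one common turning centre, so each axle at longitudinal offset x from the centre's lateral line must steer by tan(δ) = x/R. A minimal sketch (the axle offsets and radius are assumed values, not the paper's crane parameters):

```python
import math

def ackermann_angles(axle_offsets, turn_radius):
    """Ackermann steering angles (degrees) for a multi-axle vehicle.
    Each axle sits at longitudinal offset x from the line through the
    turning centre; its wheels must steer by atan(x / R) so that every
    wheel axis passes through the common centre."""
    return [math.degrees(math.atan2(x, turn_radius)) for x in axle_offsets]

# Hypothetical five-axle crane: axle offsets in metres (positive ahead of
# the turning-centre line), turning radius 12 m. Axles behind the line
# steer the opposite way; the axle on the line does not steer at all.
angles = ackermann_angles([4.0, 2.0, 0.0, -2.0, -4.0], 12.0)
```

The paper's MPC strategy replaces this purely geometric rule with angles optimized against a cost that also encodes the driver's intent.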

19.
We demonstrate a tunable in-plane optofluidic microlens with a 9× light intensity enhancement at the focal point. The microlens is formed by a combination of a tunable divergent air–liquid interface and a static polydimethylsiloxane lens, and is fabricated using standard soft lithography procedures. When liquid flows through a straight channel with a side opening (air reservoir) on the sidewall, the sealed air in the side opening bends into the liquid, forming an air–liquid interface. The curvature of this air–liquid interface can be conveniently and predictably controlled by adjusting the flow rate of the liquid stream in the straight channel. This change in the interface curvature generates a tunable divergence in the incident light beam, in turn tuning the overall focal length of the microlens. The tunability and performance of the lens are experimentally examined, and the experimental data match well with the results from a ray-tracing simulation. Our method features simple fabrication, easy operation, continuous and rapid tuning, and a large tunable range, making it an attractive option for use in lab-on-a-chip devices, particularly in microscopic imaging, cell sorting, and optical trapping/manipulating of microparticles.
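How a tunable divergent interface shifts the focus of the fixed PDMS lens can be illustrated with the thin-lens combination formula 1/f = 1/f1 + 1/f2 − d/(f1·f2); the focal lengths below are assumed values for illustration, not the paper's measurements:

```python
def combined_focal_length(f1, f2, d=0.0):
    """Effective focal length of two thin lenses separated by d:
    1/f = 1/f1 + 1/f2 - d/(f1*f2). With d = 0 this is the in-contact
    limit. Signs follow the usual convention: negative f diverges."""
    inv = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return 1.0 / inv

# Hypothetical values (mm): a divergent air-liquid interface (negative f1)
# paired with a +5 mm static PDMS lens. Weakening the divergent element
# (f1 from -10 to -20) shortens the combined focal length.
f_strong = combined_focal_length(-10.0, 5.0)
f_weak = combined_focal_length(-20.0, 5.0)
```

This is the same mechanism the abstract describes: tuning the air–liquid curvature changes f1 continuously, sweeping the overall focal length.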

20.
Yang H., Shyu R. F., Huang J.-W. 《Microsystem Technologies》2006, 12(10):907-912

A new method for producing microlens arrays with large sag heights is proposed for integrated fluorescence detection in microfluidic systems. The production technique comprises three steps for forming concave microlens arrays to be integrated into microfluidic systems. First, a hexagonal microchannel array is produced using SU-8 photoresist. Second, UV-curable glue is injected into the hexagonal microchannel array. Third, the surplus glue is spun off at high velocity and the remainder is exposed to a UV lamp to harden it. The concave microlens molds are then finished and ready to produce convex microlenses in polydimethylsiloxane (PDMS). These convex PDMS microlenses can be used for detecting fluorescence in microfluidic channels, because a convex microlens converges light for optical-fiber detection.


