Color-guided depth map extraction from light field based on focusness detection
Cite this article: Hu Liangmei, Ji Changdong, Zhang Xudong, Zhang Jun, Wang Lijuan. Color-guided depth map extraction from light field based on focusness detection[J]. Journal of Image and Graphics, 2016, 21(2): 155-164.
Authors: Hu Liangmei  Ji Changdong  Zhang Xudong  Zhang Jun  Wang Lijuan
Affiliation (all authors): School of Computer and Information, Hefei University of Technology, Hefei 230009, China
Funding: National Natural Science Foundation of China (61403116, 61273237, 61271121, 61471154); Fundamental Research Funds for the Central Universities (2013HGBH0045); China Postdoctoral Science Foundation (2014M560507); Graduate Teaching Reform Research Program of Hefei University of Technology (YJG2014Y13)
Abstract: Objective A light field camera captures 4D light field data of a scene in a single shot, from which a focal stack of refocused images can be rendered; depth information can then be extracted from the stack with a focusness detection function. However, different focusness detection functions have different response characteristics, and no single one generalizes to all scenes; moreover, the depth maps produced by most existing methods suffer from large defocusing errors and poor robustness. To address this, we propose a new depth extraction method based on focusness detection over light field images (focal slices and the all-focus image) that yields high-accuracy depth information. Method We design a windowed gradient mean-square-deviation focusness detection function and use it to extract depth from the focal stack. We then mark the defocused regions using the all-focus color image and a defocus function, and correct the defocusing errors with a neighborhood search algorithm. Finally, a Markov random field (MRF) fuses the corrected depth map obtained with the Laplacian operator and the depth map obtained with the gradient mean-square-deviation function into a single high-accuracy depth map. Result On the Lytro dataset and on test data we captured ourselves, the depth maps extracted by our method contain less noise than those of other state-of-the-art algorithms: precision improves by about 9.29% on average, and mean squared error decreases by about 0.056 on average. Conclusion The proposed method produces depth maps with less speckle noise, and, guided by color information, it effectively corrects defocusing errors. It performs especially well on scenes with large smooth regions.
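As a rough illustration of the kind of focusness-based depth extraction the abstract describes (not the authors' exact implementation), the sketch below computes a windowed squared-gradient focus response for each focal slice and picks, per pixel, the slice with the strongest response. The window size, the gradient operator, and the argmax depth selection are illustrative assumptions; the paper's defocus-error correction and MRF fusion steps are omitted.

```python
import numpy as np

def windowed_focus_measure(slice_img, win=7):
    """Windowed squared-gradient focus measure: mean gradient energy
    over a win x win neighborhood (a simplified stand-in for the
    paper's windowed gradient mean-square-deviation function)."""
    gy, gx = np.gradient(slice_img.astype(np.float64))
    g2 = gx ** 2 + gy ** 2
    # Box-filter the squared gradient with an integral image.
    pad = win // 2
    padded = np.pad(g2, pad, mode="edge")
    ii = padded.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/col for clean sums
    h, w = g2.shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    return s / (win * win)

def depth_from_focal_stack(stack):
    """Per pixel, return the index of the focal slice that is most
    in focus -- a coarse depth label before any error correction."""
    responses = np.stack([windowed_focus_measure(s) for s in stack])
    return responses.argmax(axis=0)
```

In flat, textureless regions every slice responds weakly, so the argmax is unreliable there; that is precisely the failure mode the paper addresses by marking defocused regions with the all-focus color image and refining the labels with a neighborhood search and MRF fusion.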

Keywords: depth extraction  light field camera  focal stack images  focusness detection  defocusing error
Received: 10 August 2015
Revised: 12 October 2015

This article is indexed by Wanfang Data and other databases.