Similar Documents
17 similar documents found (search time: 453 ms)
1.
Based on the characteristics of fabric texture images, a new fabric defect detection method is proposed that exploits the periodicity and local directionality of texture edges. The periodicity and directionality of normal texture edges are used to remove the background texture from defective images of the same fabric class, highlighting the defect information, so that fabric defects with no specific orientation can be detected quickly and effectively. Detection experiments on a large number of common fabric defect images show that the method works well for defect images whose texture edges are sharp and directionally consistent.

2.
Considering how the visual information stream is processed across the multi-level structure of the visual pathway, a new image contour detection model is proposed. First, based on the orientation-sensitive, triple-lobed receptive fields of simple cells in layer 4B of the primary visual cortex (V1), image orientation information is perceived, and an edge-contour response is extracted by complex cells. Second, following the inhibitory properties of cells in V1 layers 2/3, a sparsity measure and a dynamic synaptic coding mechanism are introduced to suppress the edge-contour response, yielding a texture-inhibited response. Finally, a fusion-correction mechanism modeled on higher visual cortex combines the complementary strengths of the edge-contour and texture-inhibited responses to produce the final contour detection result. Experiments on the RuG40 and BSDS500 image datasets show that the algorithm effectively separates contour from texture information and highlights the principal contours. The proposed contour detection model, built on multi-level responses along the information stream, offers a useful reference for subsequent image analysis based on biological vision mechanisms.

3.
Based on the hierarchical responses and dynamic information transfer along the visual pathway, this paper proposes a new image contour detection method. Modeling the scotopic properties of retinal photoreceptors, a luminance-adaptive dark-field adjustment model is built, and the orientation selectivity of multi-scale classical receptive fields is used to construct detection paths for higher-level and global contours. Texture information is sparsely coded by simulating the properties of lateral geniculate nucleus (LGN) cells, and strong background texture is suppressed through the lateral inhibition of non-classical receptive fields. A micro-movement integration mechanism is further introduced at the LGN stage to reduce redundant texture information, which is then relayed through adaptive synapses. Finally, the primary contour response is fed forward across visual areas to V1, corrected by the global contour, and rapidly fused with the higher-level contour response. On natural images from the RuG40 and BSDS500 databases, the mean optimal P indices against the ground-truth contours are 0.50 and 0.32 respectively, showing that the method separates contours from texture edges more effectively and highlights the principal contours. By exploiting the intrinsic mechanisms of visual neurons and the dynamic transfer of neural information, the method encodes and detects image contour information, and offers a new direction for studying visual perception in higher visual cortex.

4.
Edges are a fundamental feature of objects, and traditional edge detection methods have clear limitations. Since the human visual system perceives object edges efficiently and accurately, a brain-inspired feedforward LGN-V1 (FLV) visual perception model is proposed, based on the receptive field properties of the lateral geniculate nucleus (LGN) and of simple cells in the primary visual cortex (V1). A difference of Gaussian functions models the concentric receptive field of a single LGN cell; LGN cells of the same type are then combined into cell groups; finally, two such groups, arranged collinearly and placed in parallel, model a V1 simple cell with a specific preferred orientation. The responses of the full population of V1 simple cells are obtained by integrating the responses of many simple cells. Experimental results show that the FLV model reproduces the biological properties of real simple cells; compared with traditional edge detection methods, the proposed model performs better and is more robust.
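The DoG-and-collinear-alignment construction described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the kernel size, the two sigmas, the subunit count, and the subunit offset are assumed values, and the antagonistic (sign-flipped) second row is one plausible reading of "two cell groups placed in parallel".

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """Difference of two normalized Gaussians: a concentric
    center-surround receptive field for a single LGN cell."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

def v1_simple_cell(size, sigma_c, sigma_s, theta, offset=3, n_sub=3):
    """Collinearly align ON-center DoG subunits along orientation theta,
    with a sign-flipped row displaced perpendicular to it, to obtain an
    orientation-selective simple-cell kernel."""
    dog = dog_kernel(size, sigma_c, sigma_s)
    ux, uy = np.cos(theta), np.sin(theta)   # unit vector along preferred axis
    px, py = -uy, ux                        # unit vector perpendicular to it
    kernel = np.zeros((size, size))
    for k in range(-(n_sub // 2), n_sub // 2 + 1):
        dy, dx = round(k * offset * uy), round(k * offset * ux)
        kernel += np.roll(np.roll(dog, dy, axis=0), dx, axis=1)    # ON row
        dy2, dx2 = round(dy + offset * py), round(dx + offset * px)
        kernel -= np.roll(np.roll(dog, dy2, axis=0), dx2, axis=1)  # OFF row
    return kernel

rf = v1_simple_cell(size=21, sigma_c=1.0, sigma_s=2.0, theta=0.0)
```

Filtering an image with `rf` at several values of `theta` and taking the maximum over orientations would give an edge map in the spirit of the model.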

5.
Starting from the periodicity of image texture, an important visual feature, and exploiting the characteristics of fabric images, a fabric defect detection method based on texture periodicity analysis is proposed. Detection experiments on a large number of images with different defects show that the method is effective and reliable for fabric defect detection, handles many defect types, and is practical to use.

6.
Compressed sensing image reconstruction based on mixed-basis sparse image representation
A single basis function cannot optimally reconstruct, under compressed sensing, natural images that contain both edge and texture information. Following Meyer's cartoon-texture image model and principles of biological vision, this paper represents the smooth and edge components of an image with a Laplacian pyramid decomposition and circularly symmetric contourlets respectively, and constructs a narrow-band contourlet transform for sparse representation of the texture component. The basis functions of the three sparse transforms resemble the receptive fields of the lateral geniculate nucleus, simple cells, and grid cells in the visual cortex. Combining the three sparse image representations with an alternating projection onto convex sets algorithm yields a compressed sensing image reconstruction algorithm based on mixed-basis sparse representation. Experiments show that the algorithm achieves higher reconstruction quality than an image reconstruction algorithm based on block-matching 3D transform iterative shrinkage.
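The Laplacian pyramid used here to isolate the smooth component can be illustrated with a minimal NumPy sketch. The 5-tap binomial kernel and the level count are conventional pyramid choices, not taken from the paper; the contourlet stages are omitted.

```python
import numpy as np

# Burt-Adelson 5-tap binomial kernel, the usual generator for
# Gaussian/Laplacian pyramids
W = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def blur(img):
    """Separable binomial smoothing with reflected borders."""
    pad = np.pad(img, 2, mode="reflect")
    h = sum(w * pad[:, k:k + img.shape[1]] for k, w in enumerate(W))
    return sum(w * h[k:k + img.shape[0], :] for k, w in enumerate(W))

def laplacian_pyramid(img, levels=3):
    """Split an image into band-pass (edge/detail) layers plus a
    low-pass residual -- the 'smooth component' of the decomposition."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        pyr.append(cur - low)   # band-pass layer at this octave
        cur = low[::2, ::2]     # decimate for the next octave
    pyr.append(cur)             # low-pass residual (smooth component)
    return pyr

pyr = laplacian_pyramid(np.random.default_rng(0).random((32, 32)))
```

Each band-pass layer is sparse for piecewise-smooth images, which is what makes the pyramid useful as one of the mixed bases.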

7.
To address the difficult trade-off between edge detection accuracy and noise robustness, an orientation-selective model of primary visual cortex (V1) cells, inspired by their static and dynamic perceptual properties, is built and applied to image edge detection. Spatiotemporal filters simulate the receptive fields of simple cells; an energy model and normalization then integrate the simple-cell responses into the V1 cell model, whose static perceptual properties are used to detect edges in natural images. Simulation results show that the proposed V1 cell model fits the biological data well and is biologically general; compared with traditional edge detection operators, the model performs better and is more robust. Building a biological vision model from physiological findings and applying it to image processing is a useful exploration of the fusion of biological and computer vision.
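The energy-model integration of quadrature simple-cell responses followed by divisive normalization can be sketched roughly as below. Filter size, scale, frequency, the orientation count, and the normalization constant are illustrative assumptions, and this static sketch omits the temporal dimension of the paper's spatiotemporal filters.

```python
import numpy as np

def gabor_pair(size, sigma, freq, theta):
    """Even/odd (quadrature) Gabor pair: two simple-cell receptive
    fields 90 degrees apart in spatial phase."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    arg = 2 * np.pi * freq * xr
    return env * np.cos(arg), env * np.sin(arg)

def filt(img, kern):
    """Same-size linear filtering via FFT (circular boundary)."""
    kp = np.zeros(img.shape)
    kh, kw = kern.shape
    kp[:kh, :kw] = kern
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def v1_energy(img, size=15, sigma=3.0, freq=0.2, n_theta=8):
    """Sum of squared quadrature responses gives a phase-invariant
    energy per orientation; divisive normalization then balances
    the orientation channels."""
    energies = []
    for t in np.arange(n_theta) * np.pi / n_theta:
        even, odd = gabor_pair(size, sigma, freq, t)
        energies.append(filt(img, even) ** 2 + filt(img, odd) ** 2)
    E = np.stack(energies)
    return E / (E.sum(axis=0, keepdims=True) + 1e-8)

resp = v1_energy(np.random.default_rng(0).random((32, 32)))
```

Taking the maximum of `resp` over the orientation axis yields a simple edge-strength map.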

8.
Objective: To improve the overall performance of contour detection, and in particular the extraction of weak contour edges, a method grounded in visual mechanisms is proposed. Method: The method simulates the transfer and processing of visual information along the visual pathway. First, the center-surround antagonism of retinal ganglion cells is used for fast extraction of primary contour information. Next, the difference between a Gaussian and a difference-of-Gaussians function models the modulation of the non-classical receptive field of the lateral geniculate body, suppressing texture background. A multi-orientation V1 simple-cell receptive field model is then constructed, together with an improved evaluation scheme for the DOG (difference of Gaussians) response based on negative-value effects. Finally, given the ability of V1 complex cells to represent higher-level visual features, a parallel fusion model of visual-pathway responses detects and enhances the target contour. Result: To validate the method on natural scenes, contour detection experiments were run on 40 natural images from the RuG contour detection database, and compared with three representative natural-image contour detectors: the two-dimensional Gaussian derivative model (DG), the combination of receptive fields model (CORF), and the spatially sparsity-constrained texture suppression model (SSC). The principal contours extracted by the proposed method are more complete, with higher image purity, reflecting the biological plausibility of the approach; its average P index is 0.45, better than the compared methods. Conclusion: The method extracts natural contours well, especially for images containing weak contour edges. The new model should aid understanding of the functions and internal mechanisms of each level of the visual pathway, and offers a new approach to image analysis and understanding based on visual mechanisms.

9.
The visual receptive field model, the basic unit of biological visual computation, plays an important role in the processing of biological visual information. Drawing on the receptive field properties of animals specialized for motion vision is a potentially feasible route to efficient motion-vision computation. Based on the receptive field of frog retinal R3 cells, this paper extends the difference of Gaussians (DOG) model with temporally and spatially anisotropic motion representations, proposing an asymmetric anisotropy receptive field (AARF) model that captures the spatiotemporal sensitivity of the frog visual system to moving targets. On top of this motion-vision model, a frog-based spatio-temporal motion filter (FSTMF) for analyzing moving targets in image sequences is further proposed, to support accurate detection and analysis of moving targets. Experimental results show that the filter blurs the background of image sequences while making dynamic targets salient, consistent with the frog's blurred-background, sharp-foreground vision, and provides efficient preprocessing for subsequent accurate detection of moving targets.

10.
Popular deep-neural-network image inpainting methods typically use feature extractors with large receptive fields; when restoring local patterns and textures they produce artifacts or distorted textures and fail to recover the overall semantics and visual structure of the image. To address this, an inpainting method based on an optimized receptive field strategy (ORFNet) is proposed, combining coarse and fine restoration. First, a generative adversarial network with a large receptive field produces an initial coarse result; then a model with a small receptive field refines local texture details; finally, an attention-based encoder-decoder network performs global refinement. Validation on the CelebA, Paris StreetView, and Places2 datasets shows that, compared with representative existing inpainting methods, ORFNet improves PSNR and SSIM by an average of 1.98 dB and 2.49% respectively, while LPIPS drops by an average of 2.4%. The experiments demonstrate that, guided by different receptive fields, the proposed method scores better on inpainting metrics and looks more realistic and natural, verifying its effectiveness.

11.
This paper addresses the raw textile defect detection problem using an independent components approach, with insights from the human vision system. The human vision system is known to have specialized receptive fields that respond to certain types of input signal. Orientation-selective bar cells and grating cells are examples of receptive fields in the primary visual cortex that are selective to aperiodic and periodic patterns, respectively. Regularity and anisotropy are two high-level features of texture perception, and disruption in the regularity and/or orientation field of a texture pattern causes structural defects. In our research, we observed that independent components extracted from texture images give bar- or grating-cell-like results depending on the structure of the texture. For textures with lower regularity and dominant local anisotropy (orientation or directionality), the independent components look similar to bar cells, whereas textures with high regularity and lower anisotropy have independent components acting like grating cells. Thus, we expect different bar- or grating-cell-like independent components to respond to defective and defect-free regions. With this motivation, statistical analysis of the structure of the texture by means of independent components, followed by extraction of the disturbance in that structure, is a promising approach to understanding the perception of local texture disorder in the human vision system. In this paper, we show how to detect regions of structural defects in raw textile data that have certain regularity and local orientation characteristics by applying independent component analysis (ICA), and we present results on real textile images with detailed discussion.

12.
We have developed a computational model of texture perception that has physiological relevance and correlates well with human performance. The model simulates visual processing characteristics by incorporating mechanisms tuned to detect luminance polarity, orientation, spatial frequency, and color, which are characteristic features of any textural image. We obtained very good correlation between the model's simulation results and data from psychophysical experiments with a systematically selected set of visual stimuli whose texture patterns were defined by spatial variations in color, luminance, and orientation. In addition, the model correctly predicts texture segregation performance on key benchmarks and natural textures. This represents a first effort to incorporate chromatic signals into texture segregation models of psychophysical relevance, most of which have so far treated only grey-level images. Another novel feature of the model is the extension of the concept of spatial double opponency to domains beyond color, such as orientation and spatial frequency. The model has potential applications in image processing, machine vision and pattern recognition, and scientific visualization.

13.
Many interesting real-world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture's spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example-based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non-uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.

14.
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions, such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human-imitating behavior, and may be useful for analyzing properties of natural vision systems.
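The core measurement here, mutual information between correlated flow and scene variables, can be estimated with a simple histogram sketch. The bin count and the synthetic depth/speed variables are illustrative assumptions; the authors' estimator may differ in its details.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of I(X;Y) in bits: the KL divergence
    between the joint histogram and the product of its marginals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
depth = rng.normal(size=4000)
speed = depth + 0.2 * rng.normal(size=4000)   # tightly tied to depth
noise = rng.normal(size=4000)                 # statistically independent
```

`mutual_information(depth, speed)` is large while `mutual_information(depth, noise)` is near zero, mirroring the paper's finding that retinal speed carries depth information while other pairings are only weakly dependent.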

15.
Texture-based image analysis techniques have been widely employed in the interpretation of earth cover images obtained using remote sensing techniques, seismic trace images, and medical images, and in query by content in large image databases. Developments in multi-resolution analysis such as the wavelet transform have provided adequate tools to characterize different scales of texture effectively. However, the wavelet transform lacks the ability to decompose an input image into multiple orientations, which limits its application to rotation-invariant image analysis. This paper presents a new approach to rotation-invariant texture classification using Gabor wavelets. Gabor wavelets are a mathematical model of visual cortical cells in the mammalian brain, and with them an image can be decomposed into multiple scales and multiple orientations. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains, and has found widespread use in computer vision. Texture features are found by calculating the mean and variance of the Gabor-filtered image. Rotation normalization is achieved by a circular shift of the feature elements, so that all images share the same dominant direction. The texture similarity between the query image and each target image in the database is computed by a minimum distance criterion.
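The pipeline described in this abstract — Gabor filter bank, mean/std features, circular shift to the dominant orientation, minimum-distance matching — can be sketched as below. Filter sizes, scales, frequencies, and the orientation count are assumed values, not the paper's parameters.

```python
import numpy as np

def filt(img, kern):
    """Same-size linear filtering via FFT (circular boundary)."""
    kp = np.zeros(img.shape)
    kh, kw = kern.shape
    kp[:kh, :kw] = kern
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def gabor_bank(size=15, sigmas=(2.0, 4.0), n_orient=6, cycles=0.25):
    """Even Gabor filters over several scales and orientations; the
    spatial frequency is tied to the envelope scale."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    bank = []
    for sigma in sigmas:
        for k in range(n_orient):
            t = k * np.pi / n_orient
            xr = xx * np.cos(t) + yy * np.sin(t)
            env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            bank.append(env * np.cos(2 * np.pi * cycles * xr / sigma))
    return bank

def texture_features(img, bank, n_orient=6):
    """Mean and std of filtered magnitudes per filter, then a circular
    shift of the orientation axis so every image shares the same
    dominant direction (rotation normalization)."""
    stats = []
    for g in bank:
        r = np.abs(filt(img, g))
        stats.append((r.mean(), r.std()))
    f = np.array(stats).reshape(-1, n_orient, 2)  # (scale, orientation, stat)
    dom = int(np.argmax(f[..., 0].sum(axis=0)))   # dominant orientation bin
    return np.roll(f, -dom, axis=1).ravel()

def classify(query, targets):
    """Minimum-distance matching of query features against a database."""
    return int(np.argmin([np.linalg.norm(query - t) for t in targets]))

x = np.arange(64)
grating = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
noise = np.random.default_rng(0).random((64, 64))
bank = gabor_bank()
f1 = texture_features(grating, bank)
f2 = texture_features(grating.T, bank)  # same texture rotated 90 degrees
fn = texture_features(noise, bank)
```

After the circular shift, the rotated grating's features nearly coincide with the original's, so minimum-distance matching assigns it to the right texture.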

16.
Is the early visual system optimised to be energy efficient?
This paper demonstrates that a representation which balances natural image encoding with metabolic energy efficiency shows many similarities to the neural organisation observed in the early visual system. A simple linear model was constructed that learned receptive fields by optimally balancing information coding with metabolic expense for an entire visual field in a 2-stage visual system. The input to the model consists of a space-variant retinal array of photoreceptors. Natural images were then encoded through a bottleneck such as the retinal ganglion cells that form the optic nerve. The natural images represented by the activity of retinal ganglion cells were then encoded by many more 'cortical' cells in a divergent representation. Qualitatively, the system learnt by optimising information coding and energy expenditure matched (1) the centre-surround organisation of retinal ganglion cells; (2) the Gabor-like organisation of cortical simple cells; (3) higher densities of receptive fields in the fovea decreasing in the periphery; (4) smaller receptive fields in the fovea increasing in size in the periphery; (5) spacing ratios of retinal cells; and (6) aspect ratios of cortical receptive fields. Quantitatively, however, there are small but significant discrepancies between density slopes, which may be accounted for by taking optic blur and fixation-induced image statistics into account. In addition, the model cortical receptive fields are more broadly tuned than biological cortical neurons; this may be accounted for by the computational limitation of modelling a relatively low number of neurons. This paper shows that retinal receptive field properties can be understood in terms of balancing coding with synaptic energy expenditure, and cortical receptive fields with firing-rate energy expenditure, and provides a sound biological explanation of why 'sparse' distributions are beneficial.

17.
Objective: A contour is a sparse representation of an image target, and extracting effective object contours from an image supports subsequent visual cognition tasks, so contour detection has wide application in computer vision. Considering how visual information is transferred and processed along the early visual pathway, a contour detection method based on a computational model of the early visual pathway is proposed. Method: At the retinal ganglion stage, an improved classical receptive field (CRF) model with orientation selectivity is proposed, and a multi-scale feature fusion strategy simulates the primary contour response of retinal ganglion cells to image targets. Along the pathway from the retinal ganglion cells to the lateral geniculate nucleus (LGN), a spatiotemporal coding mechanism reflecting the spatiotemporal scale of visual information simulates the redundancy removal that this pathway applies to the primary contour response. The non-subsampled contourlet transform and the Gabor transform act jointly to simulate the lateral inhibition of the non-classical receptive field (NCRF). Finally, the feedforward mechanism by which primary visual cortex handles global contours fuses the local contour details into a complete result. Result: All images of the RuG40 database were used as the test set; after non-maximum suppression and thresholding, the resulting binary contour maps were compared with the ground truth, giving optimal mean P indices of 0.49 over the whole dataset and 0.56 per image. Averaged over the per-image optimal-parameter results, the method improves on the non-classical receptive field inhibition model (ISO) and the multi-cue surround inhibition model (MCI) by 19.1% and 7.7% respectively, showing that it effectively highlights the principal contours while suppressing texture background. Conclusion: This computational model of the early visual pathway, oriented toward image-processing applications, offers a new approach for subsequent image understanding and analysis.
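The NCRF lateral-inhibition step that suppresses texture while sparing isolated contours can be illustrated generically. This sketch uses a plain gradient front end and a DoG-annulus surround weight as stand-ins for the paper's contourlet-plus-Gabor machinery; `sigma` and `alpha` are assumed values.

```python
import numpy as np

def filt(img, kern):
    """Same-size linear filtering via FFT (circular boundary)."""
    kp = np.zeros(img.shape)
    kh, kw = kern.shape
    kp[:kh, :kw] = kern
    kp = np.roll(kp, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def surround_inhibited_contours(img, sigma=2.0, alpha=1.0):
    """Edge energy minus a DoG-annulus average of the surrounding edge
    energy: texture (dense surround) is suppressed, isolated contours
    (sparse surround) survive."""
    gy, gx = np.gradient(img.astype(float))
    e = np.hypot(gx, gy)                       # local edge energy
    size = int(8 * sigma) | 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    broad = np.exp(-r2 / (2 * (4 * sigma) ** 2)) / (2 * np.pi * (4 * sigma) ** 2)
    narrow = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    w = np.maximum(broad - narrow, 0)          # keep only the surround annulus
    w /= w.sum()                               # normalized inhibition weights
    return np.maximum(e - alpha * filt(e, w), 0)

# step edge (isolated contour) plus a noisy textured patch
img = np.zeros((64, 64))
img[:, 32:] = 1.0
img[4:28, 4:28] += 0.5 * np.random.default_rng(1).random((24, 24))
out = surround_inhibited_contours(img)
```

On this toy input the step edge keeps most of its response while the textured patch, whose surround is full of edge energy, is suppressed much more strongly.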


Copyright©北京勤云科技发展有限公司  京ICP备09084417号