Similar Documents
1.
A 3D time-of-flight camera was applied to develop a crop plant recognition system for broccoli and green bean plants under weedy conditions. The developed system overcame previously unsolved problems caused by occluded canopies and illumination variation. An efficient noise filter was developed to remove sparse noise points from the 3D point cloud. Both 2D and 3D features, including the gradients of the amplitude and depth images, surface curvature, an amplitude percentile index, normal directions, and the neighbor point count in 3D space, were extracted and found effective for recognizing the two types of plants. Separate segmentation algorithms were developed for the broccoli and green bean plants in accordance with their 3D geometry and 2D amplitude characteristics. Under experimental conditions in which the crops were heavily infested with various weed species, detection rates of over 88.3% and 91.2% were achieved for broccoli and green bean plant leaves, respectively. Additionally, the crop plants were segmented out with nearly complete shape. Moreover, the algorithms were computationally optimized, resulting in an image processing speed of over 30 frames per second.
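The sparse-noise removal step described above can be sketched as a neighbor-count filter over the point cloud. This is an illustrative reimplementation under assumed parameters (search radius, minimum neighbor count), not the authors' optimized filter:

```python
import numpy as np

def filter_sparse_noise(points, radius=0.05, min_neighbors=5):
    """Keep a 3D point only if enough other points lie within `radius`
    of it; isolated (sparse) points are treated as sensor noise."""
    points = np.asarray(points, dtype=float)
    # brute-force pairwise squared distances, O(n^2); fine for a sketch
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.einsum("ijk,ijk->ij", diff, diff)
    # count neighbors inside the radius, excluding the point itself
    neighbor_counts = (d2 <= radius ** 2).sum(axis=1) - 1
    return points[neighbor_counts >= min_neighbors]

# dense cluster near the origin plus one isolated outlier
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(100, 3))
noisy = np.vstack([cloud, [[1.0, 1.0, 1.0]]])
clean = filter_sparse_noise(noisy)
```

A production version would replace the quadratic distance matrix with a spatial index such as a k-d tree.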

2.
Conventional farming still relies on large quantities of agrochemicals for weed management, which have several negative side effects on the environment. Autonomous robots offer the potential to reduce the amount of chemicals applied, as they can monitor and treat each plant in the field individually, thereby circumventing the uniform chemical treatment of the whole field. Such agricultural robots need the ability to identify individual crops and weeds in the field using sensor data, and must additionally select effective treatment methods based on the type of weed. For example, certain types of weeds can only be treated effectively by mechanical means due to their resistance to herbicides, whereas other types can be treated through selective spraying. In this article, we present a novel system that provides the necessary information for effective plant-specific treatment. It estimates the stem location for weeds, which enables robots to perform precise mechanical treatment, and at the same time provides the pixel-accurate area covered by weeds for treatment through selective spraying. The major challenge in developing such a system is the large variability in visual appearance across different fields. Thus, an effective classification system has to robustly handle substantial environmental changes, including varying weed pressure, various weed types, different growth stages, and the changing visual appearance of the plants and the soil. Our approach uses an end-to-end trainable fully convolutional network that simultaneously estimates plant stem positions and the spatial extent of crop plants and weeds. It jointly learns stem detection and pixel-wise semantic segmentation, and incorporates spatial information by considering image sequences of local field strips. The jointly learned feature representation for both tasks furthermore exploits the crop arrangement information that is often present in crop fields. This information is exploited even when it is only observable from image sequences rather than from a single image. Such image sequences, as typically provided by robots navigating over the field along crop rows, enable our approach to robustly estimate the semantic segmentation and stem positions despite the large variations encountered in different fields. We implemented and thoroughly tested our approach on images from multiple farms in different countries. The experiments show that our system generalizes well to previously unseen fields under varying environmental conditions, a key capability for deploying such systems in the real world. Compared to state-of-the-art approaches, our approach not only substantially improves stem detection accuracy, that is, distinguishing crop and weed stems, but also improves semantic segmentation performance.

3.
To address pixel errors and missing values in plant depth images, which common filtering methods cannot accurately repair, a plant depth-image inpainting method based on target features is proposed. First, an image segmentation algorithm based on color and spatial information segments the targets in the plant color image; the outer contour of each target is then retrieved and approximated by a polygon. Second, pixels with correct depth values in the depth image are selected within each target region as sampling points, and the leaf-region images are normalized. Finally, a spatial fitting method computes the plane equation of each target region to repair small areas of erroneous or missing depth values, while support vector machines and spatial transformations repair leaf regions with large areas of erroneous or missing depth values. Experimental results show that the method accurately repairs erroneous and missing depth data in plant depth images while effectively preserving the edge information of the target regions.

4.
This paper presents a weed/crop classification method using computer vision and morphological analysis. Successive supervised and unsupervised learning methods are applied to extract the dominant morphological characteristics of weeds present in corn and soybean fields. The novelty of the presented technique lies in the feature extraction process, which is based on the spatial localization of vegetation in the field. Features of the weed leaf-area distribution are extracted from the cultivation inter-rows, and features of the crop are then inferred from a mixture-model equation. The extracted features are passed to a naive Bayesian classifier and a Gaussian mixture clustering algorithm to discriminate weeds from crop plants. The presented technique correctly classifies an average of 94% of corn and soybean plants and 85% of weeds (multiple species) without any prior knowledge of the species present in the field.
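The classification stage, a naive Bayesian classifier over morphological features, can be illustrated with a minimal Gaussian version. The two features (leaf area, elongation) and their class distributions below are invented for the example and are not the paper's data:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: fit per-class feature means and
    variances, predict by maximizing the posterior log-probability."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(c) + sum over features of log N(x_f; mean_cf, var_cf)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var_[None]) +
                          (X[:, None, :] - self.mean_[None]) ** 2 /
                          self.var_[None]).sum(axis=2)
        return self.classes_[np.argmax(np.log(self.prior_)[None] + log_lik, axis=1)]

# synthetic morphology-style features: (leaf area, elongation)
rng = np.random.default_rng(1)
crop = rng.normal([10.0, 1.2], 0.5, size=(50, 2))
weed = rng.normal([3.0, 4.0], 0.5, size=(50, 2))
X = np.vstack([crop, weed])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianNB().fit(X, y)
```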

5.
A weed recognition method for wheat fields combining color and morphological features   (cited by 1: 0 self-citations, 1 other)
Accurately identifying weeds with machine vision is one of the hot topics in precision agriculture research. For cases where weed and wheat leaves overlap, a recognition method combining color and morphological features is proposed. In the L*a*b* color space, the a* component is selected as the feature, and an improved maximum between-class variance (Otsu) method performs threshold segmentation to obtain the plant image; in the HSI color space, a multi-layer homogeneity segmentation algorithm separates wheat from weeds; morphological opening and closing filters and binary logical AND operations then yield the weed image. A chemical weeding system is simulated to evaluate the recognition efficiency of the whole system theoretically. Experimental results show a correct weed recognition rate above 92.6% and a reduction in herbicide use of more than 72.4%.

6.
This work presents several computer-vision-based methods for estimating the percentages of weed, crop, and soil present in an image showing a region of interest of the crop field. Visual detection of weed, crop, and soil is an arduous task due to the physical similarities between weeds and crops and to the natural, and therefore complex, environments (with uncontrolled illumination) encountered. The image processing was divided into three stages, each extracting a different agricultural element: (1) segmentation of vegetation against non-vegetation (soil), (2) crop-row elimination (crop), and (3) weed extraction (weed). For each stage, different and interchangeable methods are proposed, each using a series of input parameters whose values can be changed to further refine the processing. A genetic algorithm was then used to find the best parameter values and method combination for different sets of images. The whole system was tested on images from different years and fields, resulting in an average correlation coefficient with real data (biomass) of 84%, with up to 96% correlation using the best methods on winter cereal images and up to 84% on maize images. Moreover, the methods' low computational complexity opens the possibility, as future work, of adapting them to real-time processing.
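The parameter search could be carried out with a small real-coded genetic algorithm like the sketch below. The toy fitness function stands in for the image-processing quality score; the selection, crossover, and mutation settings are assumptions, not the paper's configuration:

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=40,
                   mutation_rate=0.2, seed=0):
    """Tiny real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation, elitism. Maximizes `fitness` over box `bounds`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                       # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(scored[:pop_size // 2], 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy "segmentation quality" score with a known optimum at (3, -1)
best = genetic_search(lambda p: -(p[0] - 3) ** 2 - (p[1] + 1) ** 2,
                      bounds=[(-10, 10), (-10, 10)])
```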

7.
牛杰, 卜雄洙, 钱堃. 《计算机应用》 (Journal of Computer Applications), 2014, 34(5): 1463-1466
To address the sensitivity to lighting of target segmentation algorithms based on color information alone, a real-time target detection method is proposed that fuses color and depth information for foreground segmentation. A Kinect sensor captures low-cost RGB-D images; an improved ViBe algorithm and a multi-frame differencing method model the RGB and depth images, respectively. After foreground segmentation, a selection-criterion (SC) fusion strategy refines the target result, and the rg chromaticity color model is used to compute the histogram of the foreground region, which is matched against a template to complete target labeling. Experimental results show that the method is robust to ambient lighting and noise. It handles well the same-color false detections and "ghosting" of the ViBe algorithm, as well as false detections in depth-image segmentation caused by the foreground and background being too close together.
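The multi-frame differencing applied to the depth stream can be sketched as a three-frame difference, which marks a pixel as foreground only when it differs from both neighboring frames (this also suppresses ghosting). The ViBe background model is more involved and is not shown; the frames below are synthetic:

```python
import numpy as np

def frame_difference_mask(frames, threshold=15):
    """Three-frame differencing: a pixel is foreground only if it differs
    from BOTH the previous and the next frame by more than `threshold`."""
    prev, cur, nxt = (f.astype(np.int16) for f in frames)
    d1 = np.abs(cur - prev) > threshold
    d2 = np.abs(nxt - cur) > threshold
    return d1 & d2

# static 20x20 background with a small bright object moving one pixel per frame
frames = [np.zeros((20, 20), dtype=np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    f[10, 5 + i] = 200
mask = frame_difference_mask(frames)
```

Only the object's position in the middle frame survives the conjunction, which is what suppresses the trailing "ghost" left at its previous position.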

8.
Computer-vision-based methods for detecting weeds in lawns   (cited by 4: 0 self-citations, 4 other)
In this paper, two methods for detecting weeds in lawns using computer vision techniques are proposed. The first is based on an assumption about the differences in statistical values between weed and grass areas in edge images, and uses a Bayes classifier to discriminate them. The second also uses the differences in texture between the two areas in edge images, but applies only simple morphology operators. Correct weed detection rates range from 77.70% to 82.60% for the first method and from 89.83% to 91.11% for the second. The results show that the methods are robust against lawn color change. In addition, the proposed methods, together with a chemical weeding system and a non-chemical weeding system based on pulsed high-voltage discharge, are simulated, and the efficiency of the overall systems is evaluated theoretically. With a chemical-based system, more than 72% of the weeds can be destroyed with a herbicide reduction rate of 90–94% for both methods. For the latter weeding system, the weed-kill rate varies from 58% to 85%.

9.
Saliency detection is an important component of computer vision, but most saliency-detection work focuses on 2D images and does not transfer well to RGB-D images. Inspired by the strong results that complementary saliency relations have achieved in 2D detection, and taking into account the depth features contained in RGB-D images, a multi-angle-fusion RGB-D saliency detection method is proposed. It consists of three parts. First, a graph model fusing color and depth features is constructed to provide accurate similarity relations for saliency computation. Second, region compactness drives a globally and locally fused saliency computation, yielding a relatively accurate initial saliency map. Finally, boundary connectivity weights and manifold ranking perform a background-and-foreground fused saliency refinement, producing a uniform and smooth final saliency map. Comparative experiments on the RGBD1000 dataset show that the proposed method outperforms current popular methods, indicating that fusing complementary relations from multiple angles effectively improves saliency-detection accuracy.

10.
王书朋, 赵瑶. 《计算机应用》 (Journal of Computer Applications), 2020, 40(1): 252-257
To address the incomplete preservation of color and detail in traditional multi-exposure image fusion, a new multi-exposure fusion algorithm based on adaptive segmentation is proposed. First, superpixel segmentation divides the input images into color-consistent patches, and structural decomposition splits each patch into three independent components. Different fusion rules are designed according to the characteristics of each component so as to preserve the color and detail of the source images. Then, guided filtering smooths the weight maps as well as the signal-strength and brightness components, effectively overcoming blocking artifacts, preserving edge information from the source images, and reducing artifacts. Finally, the three fused components are reconstructed into the final fused image. Experimental results show that, compared with traditional fusion algorithms, the proposed algorithm improves mutual information (MI) by 53.6% and standard deviation (SD) by 24.0% on average, and effectively preserves the color and detail texture of the input images.
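The guided-filtering step, smoothing a weight map while keeping the edges of a guide signal, can be sketched in one dimension. This is a generic guided filter, not the authors' implementation; the signal, window radius, and regularizer `eps` are illustrative:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a sliding window of radius r, edge-padded, 1-D."""
    pad = np.pad(x, r, mode="edge")
    c = np.concatenate([[0.0], np.cumsum(pad)])
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def guided_filter(guide, src, r=8, eps=1e-3):
    """1-D guided filter: the output is locally a linear function of the
    guide, so edges present in the guide survive the smoothing of src."""
    mean_I, mean_p = box_mean(guide, r), box_mean(src, r)
    corr = box_mean(guide * src, r)
    var = box_mean(guide * guide, r) - mean_I ** 2
    a = (corr - mean_I * mean_p) / (var + eps)   # local slope
    b = mean_p - a * mean_I                      # local offset
    return box_mean(a, r) * guide + box_mean(b, r)

# weight map with a hard edge, corrupted by noise
rng = np.random.default_rng(5)
guide = np.repeat([0.0, 1.0], 50)            # clean edge to preserve
src = guide + rng.normal(0.0, 0.05, 100)     # noisy weights to smooth
out = guided_filter(guide, src)
```

The noise is averaged away in the flat regions while the step at the midpoint stays sharp, which is exactly why guided filtering avoids the halos that plain box blurring of weight maps would introduce.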

11.
RGB-D images add depth to the RGB information of a scene and can effectively describe both its color and its 3D geometry. Exploiting the characteristics of RGB and depth images, a reverse-fusion instance segmentation algorithm is proposed that fuses high-level semantic features back into low-level edge-detail features. The method uses feature pyramid networks (FPN) of different depths to extract RGB and depth features separately; high-level features are upsampled to the size of the lowest-level features and then reverse-fused into the low levels, while a mask-refinement structure is introduced in the mask branch, realizing reverse-fusion RGB-D instance segmentation. Experimental results show that the reverse-fusion feature model achieves superior results in RGB-D instance segmentation and effectively fuses the two different types of image features. With ResNet-101 as the backbone, average precision improves by 10.6% over Mask R-CNN without depth information and by 4.5% over direct forward fusion of the two feature types.

12.
Among many applications of machine vision, plant image analysis has recently begun to gain more attention due to its potential impact on plant visual phenotyping, particularly in understanding plant growth, assessing the quality/performance of crop plants, and improving crop yield. Despite its importance, the lack of publicly available research databases containing plant imagery has substantially hindered the advancement of plant image analysis. To alleviate this issue, this paper presents a new multi-modality plant imagery database named "MSU-PID," with two distinct properties. First, MSU-PID is captured using four types of imaging sensors: fluorescence, infrared, RGB color, and depth. Second, the imaging setup and the variety of manual labels make MSU-PID suitable for a diverse set of plant image analysis applications, such as leaf segmentation, leaf counting, leaf alignment, and leaf tracking. We provide detailed information on the plants, imaging sensors, calibration, labeling, and baseline performances of this new database.

13.
To address the heavy noise and difficult shadow detection in high-resolution remote-sensing images of high-rise buildings, this paper proposes a shadow-detection method combining improved threshold segmentation with an attention residual network. First, an improved maximum between-class / minimum within-class variance threshold segmentation algorithm builds the threshold-segmentation model, and broken contours are repaired with a Euclidean-metric algorithm based on the connectivity between contours and constraints on endpoint positions, yielding the shadow contours. Then, a generative adversarial network model augments the misclassified dataset. Finally, the residual network is improved by adding an attention mechanism to the feature maps for global feature fusion. In different scenes, comparative experiments were conducted against radiation-model, histogram-threshold, and color-model shadow-detection methods, and against support vector machine, VGG, Inception, and ResNet classification networks; the proposed method achieves an overall false-detection rate of 2.1% and a miss rate of 1.5%. The results show that the proposed algorithm completes shadow-region segmentation and detection well, saves labor and material resources, and assists staff in remote-sensing interpretation and archiving, giving it practical value.

14.

Agriculture is the primary source of livelihood for about 70% of the rural population in India. The crop varieties cultivated in India are very diverse: more than 500 crop varieties are grown. Despite technological advances, agricultural practices are still largely manual and involve less automation than in Western countries. Most diseases affecting a plant show their damage in the leaves, so plant diseases can be identified from leaf images. This paper presents an automatic plant leaf damage detection and disease identification system. The first stage of the proposed method identifies the type of disease from the plant leaf image using DenseNet. The DenseNet model is trained on images categorized by their nature, i.e., healthy or by disease type, and is then used to test new leaf images. The proposed DenseNet model produced a classification accuracy of 100% with fewer images used during training. The second stage identifies the damage in the leaf using deep-learning-based semantic segmentation: each RGB pixel value combination in the image is extracted, and supervised training is performed on the pixel values using a 1D convolutional neural network (CNN). The trained model detects the damage present in the leaves at the pixel level. Evaluation of the proposed semantic segmentation yielded an accuracy of 97%. The third stage suggests a remedy for the disease based on the disease type and the damage state. In the experimental analysis, the proposed method detects various defects in different plants, namely apple, grape, potato, and strawberry, and obtained better performance than existing techniques.

15.
Objective: Affected by illumination changes, viewing angle, object count, and object size, multi-object detection in indoor scenes often suffers from low accuracy and poor real-time performance. To solve such problems, this paper proposes an object recognition and detection method based on stepwise superpixel merging and multi-modal information fusion, using paired color and depth images. Method: In the object-proposal stage, following the theory that the human eye first attends to a salient object's color and then judges its spatial depth, the image is first segmented into superpixels; then, combining color and depth information, the segmented pixel blocks are merged stepwise by multi-threshold, scale-adaptive superpixel merging to obtain proposal regions with color and spatial consistency. In the recognition stage, to fully express the different kinds of object information, multiple kernel learning fuses the extracted multi-modal features of color, texture, contour, and depth, and the fused kernel is fed into a multi-class support vector machine for learning and classification. Results: Experiments on the University of Washington RGB-D benchmark and on real scenes compare the method with current mainstream algorithms: overall detection precision improves by 4.7%, with a substantial speedup. The stepwise superpixel-merging method outperforms current mainstream object-proposal methods in object localization, using about 1/4 as many sampling windows at the same recall, and the multi-information fusion outperforms single features and simple color-depth feature fusion in the recognition stage. Conclusion: The results show that, in multi-feature object detection, the proposed method effectively uses color and depth information for object localization and recognition and plays an important role in improving detection precision and efficiency.

16.
Space plant cultivation experiments, an important area of space science, typically produce large numbers of plant image sequences, which are traditionally inspected manually before further analysis. This paper proposes a space-plant segmentation algorithm based on multi-scale deep feature fusion. The method applies a fully convolutional deep neural network to extract multi-scale features and fuses them hierarchically from deep to shallow layers, achieving pixel-level recognition of the plants. The hierarchical features fuse semantic information, intermediate-layer information, and geometric features, improving segmentation accuracy. Experiments show that the method performs well in segmentation accuracy and can automatically extract useful information from space plant experiments.

17.
In current practice, broccoli heads are selectively harvested by hand. The goal of our work is to develop a robot that can selectively harvest broccoli heads, thereby reducing labor costs. An essential element of such a robot is an image-processing algorithm that can detect broccoli heads. In this study, we developed a deep learning algorithm for this purpose, using the Mask Region-based Convolutional Neural Network. To be applied on a robot, the algorithm must detect broccoli heads from any cultivar, meaning that it can generalize on the broccoli images. We hypothesized that our algorithm can be generalized through network simplification and data augmentation. We found that network simplification decreased the generalization performance, whereas data augmentation increased it. In data augmentation, the geometric transformations (rotation, cropping, and scaling) led to better image generalization than the photometric transformations (light, color, and texture). Furthermore, the algorithm generalized to a broccoli cultivar when 5% of the training images were images of that cultivar. Our algorithm detected 229 of the 232 harvestable broccoli heads from three cultivars. We also tested our algorithm on an online broccoli data set on which it had not previously been trained. On this data set, our algorithm detected 175 of the 176 harvestable broccoli heads, proving that the algorithm was successfully generalized. Finally, we performed a cost-benefit analysis for a robot equipped with our algorithm. We concluded that the robot was more profitable than manual harvesting and that our algorithm provides a sufficient basis for robot commercialization.
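The geometric augmentations found most helpful above can be sketched as a simple numpy pipeline. The transform set here (90-degree rotation, horizontal flip, random crop) and the crop ratio are illustrative stand-ins, not the study's exact configuration:

```python
import numpy as np

def geometric_augment(image, rng):
    """Random geometric augmentation: 90-degree rotation, horizontal flip,
    and a random crop to a fixed size (pixel values are left untouched)."""
    out = np.rot90(image, k=rng.integers(0, 4))
    if rng.random() < 0.5:
        out = out[:, ::-1]                  # horizontal flip
    h, w = out.shape[:2]
    ch, cw = int(h * 0.8), int(w * 0.8)     # crop to 80% of each side
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    return out[top:top + ch, left:left + cw]

rng = np.random.default_rng(3)
img = np.arange(100 * 100 * 3, dtype=np.uint8).reshape(100, 100, 3)
augmented = [geometric_augment(img, rng) for _ in range(8)]
```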

18.
This work presents a method for plant species identification using images of flowers. It focuses on extracting stable flower features such as color, texture, and shape, in addition to the fractal dimension. Color-based segmentation using K-means clustering and an active contour model is used to extract the color features. Texture segmentation using a texture filter is used to segment the image and obtain the texture features. Sobel, Prewitt, and Roberts operators are used to extract the image boundary and obtain the shape features. Classification of the plants is done using Proximal Support Vector Machine (PSVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) classifiers.
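The K-means color clustering used in the segmentation can be sketched in plain numpy. The deterministic initialization and the synthetic petal/leaf color distributions are illustrative assumptions:

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=20):
    """Plain k-means on pixel color vectors: alternate assignment to the
    nearest centroid and centroid recomputation."""
    # naive deterministic init (evenly spaced pixels); real code would
    # use k-means++ style seeding instead
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centroids = pixels[idx].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# synthetic flower pixels: red petals vs. green leaves
rng = np.random.default_rng(4)
petals = rng.normal([200.0, 30.0, 40.0], 10.0, size=(300, 3))
leaves = rng.normal([40.0, 160.0, 50.0], 10.0, size=(300, 3))
pixels = np.vstack([petals, leaves])
labels, centroids = kmeans_colors(pixels)
```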

19.
Research on keypoint detection and localization based on binocular vision   (cited by 2: 1 self-citation, 1 other)
Against the background of binocular stereo-vision measurement, and targeting keypoints on salient markers, a real-time keypoint detection and localization method based on color threshold segmentation is proposed. Keypoint detection and localization comprise three stages. The first is image preprocessing, which provides the basis for subsequent detection and localization. The second performs keypoint detection on the preprocessed image: the salient marker containing the keypoint is first extracted through color threshold segmentation, contour extraction, polygon approximation, and a rectangular-contour constraint; then, according to the marker's characteristics, the Hough transform extracts line segments, least-squares fitting estimates the lines, and the precise pixel coordinates of the keypoint are computed. The third stage applies the triangulation principle of stereo vision to compute an accurate pose for the detected keypoint. The method is real-time and accurate, providing a theoretical basis for subsequent robot visual obstacle avoidance.
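The step of fitting lines by least squares and intersecting them to obtain the keypoint's pixel coordinates can be sketched as below. A total-least-squares (PCA) fit is used here in place of the paper's Hough-plus-least-squares pipeline, and the sample points are synthetic:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit: returns (point_on_line, unit_direction)
    via SVD of the centered point set (handles vertical lines, unlike y=ax+b)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def intersect(p1, d1, p2, d2):
    """Solve p1 + t*d1 = p2 + s*d2 for the 2D intersection point."""
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

# samples from a horizontal and a vertical line crossing at (4, 2)
horiz = np.array([[x, 2.0] for x in range(9)])
vert = np.array([[4.0, y] for y in range(5)])
key = intersect(*fit_line(horiz), *fit_line(vert))
```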

20.
Information on which weed species are present within agricultural fields is a prerequisite for using robots for site-specific weed management. This study proposes a method for improving the robustness of shape-based seedling classification against the natural shape variations within each plant species. To do so, leaves are separated from the plants and classified individually, alongside the classification of the whole plant. The classification is based on common rotation-invariant features. From previous classifications of leaves and plants, a confidence of correct assignment is built up for the plants and leaves, and this confidence is used to determine the species of the plant. Using this approach, the classification accuracy for eight plant species at early growth stages increases from 93.9% to 96.3%.
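The confidence-based combination of the whole-plant and per-leaf predictions can be sketched as weighted voting. The species names and confidence values below are invented for illustration; the paper's confidence estimation is more involved:

```python
from collections import Counter

def combine_votes(plant_pred, leaf_preds, plant_conf, leaf_conf):
    """Weight the whole-plant prediction and each individual leaf
    prediction by its per-class confidence, then pick the highest total."""
    scores = Counter()
    scores[plant_pred] += plant_conf.get(plant_pred, 0.0)
    for leaf in leaf_preds:
        scores[leaf] += leaf_conf.get(leaf, 0.0)
    return scores.most_common(1)[0][0]

# whole-plant classifier says "chamomile" but with low historical confidence;
# three of four leaves say "cleavers", where leaf confidence is high
species = combine_votes(
    plant_pred="chamomile",
    leaf_preds=["cleavers", "cleavers", "cleavers", "chamomile"],
    plant_conf={"chamomile": 0.55},
    leaf_conf={"cleavers": 0.90, "chamomile": 0.40},
)
```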

