Funding: Natural Science Foundation of Hunan Province (2021JJ50049, 2022JJ50077); Key Project of the Education Department of Hunan Province (21A0607)

Received: 2022-06-17
Revised: 2022-07-18

Improved YOLOv4 Framework for Gastric Polyp Detection
WU Yu-Jie, XIAO Man-Sheng, FAN Ming-Kai, HU Yi-Fan. Improved YOLOv4 Framework for Gastric Polyp Detection[J]. Computer Systems & Applications, 2023, 32(2): 250-257.
Authors:WU Yu-Jie  XIAO Man-Sheng  FAN Ming-Kai  HU Yi-Fan
Affiliation:School of Computer Science, Hunan University of Technology, Zhuzhou 412007, China
Abstract: In endoscopic gastric polyp detection based on computer vision, efficiently extracting the features of small polyps is a major difficulty in designing a deep learning-based detection model. To solve this problem, this study proposes a detection model based on an improved you only look once version 4 (YOLOv4), namely YOLOv4-polyp. Specifically, on the basis of YOLOv4, this study adds a convolutional block attention module (CBAM) to enhance the feature extraction capability of the model in complex environments. Then, a lightweight CSPDarknet-49 network is designed to reduce the complexity of the model while improving both detection accuracy and detection speed. Finally, considering the characteristics of the gastric polyp datasets, the K-means++ clustering algorithm is applied to the datasets to obtain optimized anchor boxes. The experimental comparison results show that, compared with the classical YOLOv4 model, the proposed YOLOv4-polyp improves the average detection accuracy on the two datasets by 5.21% and 2.05%, respectively, without compromising the detection speed, demonstrating favorable detection performance.
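The anchor-optimization step described in the abstract can be illustrated with a minimal, self-contained sketch (this is not the authors' code): K-means++ seeding over bounding-box widths and heights, using the (1 − IoU) distance commonly used for YOLO-family anchor clustering, followed by a few Lloyd update iterations. The box sizes in the usage example are hypothetical.

```python
import random

def iou_wh(box, cluster):
    """IoU between two (w, h) boxes assumed to share a top-left corner."""
    inter = min(box[0], cluster[0]) * min(box[1], cluster[1])
    union = box[0] * box[1] + cluster[0] * cluster[1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, seed=0):
    """K-means++ seeding with (1 - IoU) distance, then Lloyd iterations."""
    rng = random.Random(seed)
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        # d(x): distance from each box to its nearest chosen center
        dists = [min(1 - iou_wh(b, c) for c in centers) for b in boxes]
        r = rng.uniform(0, sum(dists))
        acc = 0.0
        for b, d in zip(boxes, dists):
            acc += d
            if acc >= r:          # sample next center proportional to d(x)
                centers.append(b)
                break
    for _ in range(10):           # Lloyd updates: assign, then recompute means
        groups = [[] for _ in range(k)]
        for b in boxes:
            i = min(range(k), key=lambda j: 1 - iou_wh(b, centers[j]))
            groups[i].append(b)
        centers = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)

# Hypothetical (w, h) boxes: two small polyps and two large ones
boxes = [(10, 12), (11, 10), (50, 60), (55, 58)]
anchors = kmeanspp_anchors(boxes, 2)
```

With this toy input, the two returned anchors settle near the means of the small and large box groups, which is the behavior the paper relies on to match anchors to the polyp size distribution.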
Keywords:YOLOv4  attention mechanism  K-means++  target detection
