Similar Documents
20 similar documents retrieved (search time: 15 ms).
1.
《微型机与应用》2015,(21):43-46
The texture of fabric defects is complex, and a single feature cannot adequately capture the texture information. This paper therefore proposes a multi-feature fusion algorithm based on the Local Binary Pattern (LBP) operator and the Gray Level Co-occurrence Matrix (GLCM). First, the LBP operator is improved into a center-symmetric LBP operator based on the median of the neighborhood pixels; then the texture features it extracts are fused with those extracted by the GLCM; finally, classification experiments with an extreme learning machine and a support vector machine verify the ability of the fused features to describe fabric-defect texture. Experiments show that the method improves the fabric defect detection rate and is highly robust to interference.
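A minimal sketch (not the authors' implementation) of the fusion idea: a 4-bit center-symmetric LBP histogram, with the local 3x3 median used as an adaptive comparison threshold (my reading of the median-based operator), concatenated with GLCM statistics and fed to an SVM. All variable names are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix in skimage < 0.19
from sklearn.svm import SVC

def cs_lbp_median(gray):
    """4-bit center-symmetric LBP: each bit compares one opposing neighbour pair,
    thresholded by the pixel's deviation from its 3x3 median (assumed variant)."""
    g = gray.astype(np.float32)
    t = np.abs(g - median_filter(g, size=3))          # adaptive per-pixel threshold
    inner = (slice(1, -1), slice(1, -1))
    # opposing neighbour pairs, given as top-left offsets inside the 3x3 window
    pairs = [((0, 1), (2, 1)), ((1, 0), (1, 2)), ((0, 0), (2, 2)), ((0, 2), (2, 0))]
    h, w = g.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = g[r1:r1 + h - 2, c1:c1 + w - 2]
        b = g[r2:r2 + h - 2, c2:c2 + w - 2]
        code |= ((a - b) > t[inner]).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=16, range=(0, 16), density=True)
    return hist

def glcm_features(gray):
    """Contrast/energy/homogeneity/correlation over a 4-direction GLCM."""
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "energy", "homogeneity", "correlation")])

def fused_feature(gray_uint8):
    return np.hstack([cs_lbp_median(gray_uint8), glcm_features(gray_uint8)])

# clf = SVC(kernel="rbf").fit([fused_feature(p) for p in patches], labels)  # placeholders
```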

2.
3.
Yan Na, Chen Aibin, Zhou Guoxiong, Zhang Zhiqiang, Liu Xiangyong, Wang Jianwu, Liu Zhihua, Chen Wenjie. Multimedia Tools and Applications, 2021, 80(30): 36529-36547
Multimedia Tools and Applications - The classification of birdsong is of great significance for monitoring bird populations in their habitats. Aiming at the birdsong dataset with complex and...

4.
Recently, the use of old or irrelevant images in microblogs to spread false rumors has become increasingly rampant, so tracking and verifying the sources of images has become essential. To solve this problem, this paper presents a real-time, large-scale duplicate image detection method based on multi-feature fusion. The method first uses multi-feature fusion to improve retrieval accuracy; then, through HBase optimization, it uses a Bloom filter and range queries to improve retrieval efficiency. Experimental results show that, compared with existing algorithms, the method achieves higher precision and recall, while its real-time responsiveness and scalability also meet real-world needs.
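A minimal Bloom-filter sketch (sizes and hashing scheme are illustrative and unrelated to the paper's HBase deployment) showing the constant-time membership test that lets a retrieval system skip range queries for keys it has never stored.

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, key: bytes):
        # derive n_hashes bit positions from salted SHA-256 digests
        for i in range(self.n_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
bf.add(b"image-feature-hash-001")
print(b"image-feature-hash-001" in bf)   # True: added keys are always reported
print(b"unseen-hash" in bf)              # False with high probability: skip the range query
```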

5.
Traffic light detection and recognition is an important research topic in autonomous and assisted driving: it can prevent accidents caused by misjudging traffic lights at intersections and improves driving safety. Complex real-world traffic scenes make detection and recognition harder. This work implements traffic light detection and recognition based on Faster R-CNN and collects and annotates traffic-scene data, filling the gap in publicly available Chinese traffic light datasets. The optimal feature extraction network is selected through comparative experiments, and the effectiveness of the method is verified on an intelligent-vehicle experimental platform.
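An illustrative fine-tuning sketch using torchvision's off-the-shelf Faster R-CNN; the class list, input size, and weight handling are assumptions, not the paper's configuration (which also compares several feature extraction backbones).

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 3   # background + red / yellow / green (assumed label set)

# torchvision >= 0.13; older versions take pretrained=True instead of weights=
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
# ... fine-tune on the annotated traffic-scene dataset here ...

model.eval()
with torch.no_grad():
    frame = torch.rand(3, 600, 800)    # stand-in for a road-scene image in [0, 1]
    prediction = model([frame])[0]     # dict with "boxes", "labels", "scores"
print(prediction["boxes"].shape, prediction["scores"].shape)
```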

6.
Driver fatigue is a major cause of traffic accidents. To detect the driver's state effectively in real time, a detection algorithm that fuses multiple fatigue features to judge the fatigue state was designed, and an in-vehicle embedded detection platform based on a field-programmable gate array (FPGA) was built. The algorithm fuses eye and mouth fatigue features, so that when detection of one feature is impaired the other can still be used to judge the fatigue state, giving higher detection efficiency than traditional single-feature fatigue detection algorithms. Experimental results show that the system's algorithm is simple, reliable, and performs well in real time.
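A toy decision rule (thresholds and ratio definitions are assumptions; the paper's FPGA implementation is not reproduced here) showing how the eye and mouth cues can back each other up when one detection fails.

```python
def fatigue_state(eye_closure_ratio, mouth_open_ratio,
                  eye_valid=True, mouth_valid=True,
                  eye_thresh=0.4, mouth_thresh=0.6):
    """Return True if the current frame window suggests driver fatigue."""
    eye_tired = eye_valid and eye_closure_ratio > eye_thresh
    mouth_tired = mouth_valid and mouth_open_ratio > mouth_thresh
    if eye_valid and mouth_valid:
        return eye_tired or mouth_tired   # either cue triggers an alert
    if eye_valid:
        return eye_tired                  # mouth detection failed: use eyes only
    if mouth_valid:
        return mouth_tired                # eye detection failed: use mouth only
    return False                          # no usable feature in this window

print(fatigue_state(0.55, 0.2))           # True: eyes closed for too large a fraction of frames
```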

7.
Zou Wei, Zhang Dong, Lee Dah-Jye. Applied Intelligence, 2022, 52(3): 2918-2929
Applied Intelligence - Using lightweight networks for facial expression recognition (FER) has become an important research topic in recent years. The key to the success of FER with lightweight...

8.
《传感器与微系统》2019,(11):147-150
Traffic sign detection and recognition is important in autonomous and assisted driving. Current YOLO-based detectors achieve real-time performance but lose some accuracy; to address this, a traffic sign detection method based on regions of interest (ROI) is proposed. Candidate regions are first obtained from the color characteristics of traffic signs; the sign ROI is then determined using the layout rules of traffic-scene images; finally, traffic signs are detected and recognized within the ROI using YOLO v3. Experimental results show that, because the proposed method removes some of the distracting content in the image, detection accuracy improves while the real-time requirement is still met; the method was also verified on an autonomous vehicle.
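A sketch of the colour-based candidate stage only: threshold typical red/blue sign colours in HSV and crop candidate regions that a YOLOv3 detector would then process. The HSV ranges and minimum area are rough illustrative values, not the paper's calibration.

```python
import cv2
import numpy as np

def sign_color_rois(bgr, min_area=400):
    """Return candidate sign crops found by colour thresholding."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    red1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))     # red wraps around hue 0
    red2 = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    blue = cv2.inRange(hsv, (100, 80, 60), (130, 255, 255))
    mask = cv2.bitwise_or(cv2.bitwise_or(red1, red2), blue)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:
            rois.append(bgr[y:y + h, x:x + w])   # crop passed on to the detector
    return rois
```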

9.
Recent research increasingly emphasizes analyzing multiple features to improve face recognition (FR) performance. One popular scheme is to extend the sparse-representation-based classification framework with various sparse constraints. Although these methods study multiple features jointly through the constraints, they still process each feature individually and therefore overlook the possible high-level relationships among different features. It is reasonable to assume that low-level features of facial images, such as edge information and smoothed/low-frequency images, can be fused into a more compact and more discriminative representation based on this latent high-level relationship. FR on the fused features is expected to outperform FR on the original features, since the fused features have more favorable properties. Focusing on this, we propose two strategies that first fuse multiple features and then exploit the dictionary learning (DL) framework for better FR performance. The first strategy is a simple and efficient two-step model, which learns a fusion matrix from training face images to fuse multiple features and then learns class-specific dictionaries from the fused features. The second is a more effective but more computationally expensive model that learns the fusion matrix and the class-specific dictionaries simultaneously within an iterative optimization procedure; it also separates the shared common components from the class-specific dictionaries to enhance their discriminative power. The proposed strategies, which integrate the multi-feature fusion process with the dictionary learning framework for FR, achieve the following goals: (1) exploiting multiple features of face images for better FR performance; (2) learning a fusion matrix that merges the features into a more compact and more discriminative representation; (3) learning class-specific dictionaries that account for common patterns for better classification performance. We perform a series of experiments on publicly available databases to evaluate the methods, and the experimental results demonstrate the effectiveness of the proposed models.
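A plausible formulation of the two strategies, written from the abstract alone; the targets t_i (e.g. class-indicator codes), the regularizers, and the shared dictionary D_0 are assumptions rather than the paper's exact objectives.

```latex
% x_i = [x_i^{(1)}; ...; x_i^{(K)}] stacks the K low-level features of face i,
% P is the fusion matrix, D_c and A_c the dictionary and codes of class c.
\begin{align}
  &\text{Step 1 (fusion):} &&\min_{P}\ \sum_{i}\bigl\|P x_i - t_i\bigr\|_2^2 + \lambda\|P\|_F^2, \\
  &\text{Step 2 (per-class DL):} &&\min_{D_c,\,A_c}\ \bigl\|P X_c - D_c A_c\bigr\|_F^2
      + \gamma\|A_c\|_1, \quad c = 1,\dots,C, \\
  &\text{Joint model (strategy 2):} &&\min_{P,\,D_0,\,\{D_c,A_c\}}\ \sum_{c}
      \bigl\|P X_c - D_0 A_c^{(0)} - D_c A_c\bigr\|_F^2
      + \gamma\sum_{c}\|A_c\|_1 + \lambda\|P\|_F^2 .
\end{align}
```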

10.
A mean-shift color image segmentation method fusing multiple features
Mean-shift image segmentation considers only color and spatial information and therefore cannot effectively segment richly textured images. To address this, a new segmentation method is proposed that fuses low-level features including color, texture, and spatial information. Polarity, anisotropy, and contrast represent the texture information and are combined with color and spatial information to form the segmentation features; mean shift is then used to filter the image; finally, regions are merged to obtain the segmentation result. Experimental results show that the method segments richly textured natural scenery images well.
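A small sketch of joint-feature mean shift with scikit-learn; local contrast stands in for the paper's polarity/anisotropy/contrast texture triple, the weights and bandwidth are illustrative, and the final region-merging step is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import MeanShift

def segment(rgb, bandwidth=0.3, spatial_weight=0.5, texture_weight=1.0):
    """Cluster pixels on a joint (colour, texture, position) feature vector."""
    h, w, _ = rgb.shape
    img = rgb.astype(np.float32) / 255.0
    gray = img.mean(axis=2)
    # local standard deviation as a stand-in texture cue
    mean = uniform_filter(gray, size=7)
    sq_mean = uniform_filter(gray ** 2, size=7)
    contrast = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([img[..., 0], img[..., 1], img[..., 2],
                      texture_weight * contrast,
                      spatial_weight * yy / h,
                      spatial_weight * xx / w], axis=-1).reshape(-1, 6)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w)

# labels = segment(small_rgb_image)   # small images only: MeanShift scales poorly
```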

11.
To address unstable tracking against complex backgrounds and improve the robustness and accuracy of target tracking, this work studies a kernelized correlation filter tracking algorithm that extends the traditional KCF with linear fusion of multiple features and a multi-peak detection update mechanism. Multiple experts are used for evaluation so that the strengths of each feature are fully combined and the optimal correlation filter is trained. Validated on all video sequences of the public OTB-2013 dataset, the algorithm achieves a precision of 81.7% and a success rate of 69.2%, showing that it performs well under rotation, motion blur, fast motion, deformation, illumination change, and out-of-view conditions, and is a stable target tracking algorithm.
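A sketch of the fusion and update-gating logic only (no full KCF tracker): per-feature correlation response maps are linearly fused, and the model update is gated by a peak-to-sidelobe ratio, used here as an assumed stand-in confidence test for the multi-peak detection mechanism.

```python
import numpy as np

def fuse_responses(responses, weights):
    """Weighted linear fusion of per-feature correlation response maps."""
    weights = np.asarray(weights, dtype=np.float64)
    weights /= weights.sum()
    return sum(w * r for w, r in zip(weights, responses))

def peak_to_sidelobe_ratio(response, exclude=5):
    """Confidence of a response map: peak height relative to the sidelobe."""
    y, x = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, y - exclude):y + exclude + 1, max(0, x - exclude):x + exclude + 1] = False
    sidelobe = response[mask]
    return (response[y, x] - sidelobe.mean()) / (sidelobe.std() + 1e-6)

def should_update(response, psr_thresh=8.0):
    # skip the filter update when the fused response is ambiguous (multiple peaks)
    return peak_to_sidelobe_ratio(response) > psr_thresh
```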

12.
13.
For lidar scan images of four types of low-altitude wind shear (microburst, low-level jet, head/tail wind, and crosswind), a recognition method combining shape and texture features is proposed. Zernike moments and rotation-invariant uniform Local Binary Patterns (LBP) extract, respectively, shape features reflecting the global variation of the wind field and texture features reflecting its local variation; the two feature sets are concatenated and then reduced in dimension by principal component analysis (PCA) to extract effective features; a k-nearest-neighbor classifier then classifies the four types of wind shear images. Experimental results show that, compared with several other algorithms, this algorithm achieves the highest average recognition rate and more stable recognition performance.
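A pipeline sketch in which Hu moments stand in for the Zernike shape moments (an explicit substitution); the rotation-invariant uniform LBP, PCA reduction, and k-NN stages follow the abstract. Dataset variables and parameter values are placeholders.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def shape_texture_feature(gray, P=8, R=1):
    """Concatenate global shape moments with a rotation-invariant uniform LBP histogram."""
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()                 # 7 shape moments (uint8 input)
    lbp = local_binary_pattern(gray, P, R, method="uniform")      # P + 2 possible codes
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return np.hstack([np.sign(hu) * np.log1p(np.abs(hu)), hist])  # log-scale the moments

# X = np.vstack([shape_texture_feature(img) for img in lidar_scan_images])   # placeholders
# clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
# clf.fit(X, wind_shear_labels)   # 4 classes: microburst, jet, head/tail wind, crosswind
```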

14.
Huan Ruohong, Zhan Ziwei, Ge Luoqi, Chi Kaikai, Chen Peng, Liang Ronghua. Multimedia Tools and Applications, 2021, 80(30): 36159-36182
Multimedia Tools and Applications - A hybrid convolutional neural network (CNN) and bidirectional long short-term memory (BLSTM) network for human complex activity recognition with multi-feature...

15.
In special application domains, color image edge detection must not only detect target edges accurately but also remove non-target edges. A new multi-feature color image edge detection method based on support vector machines is proposed. According to the characteristics of color image edges, the method builds multi-dimensional feature vectors on the luminance and chrominance channels by combining weighted pixel gradients with pixel-neighborhood correlation information; a trained support vector machine can then accurately identify the target edges. Experimental results show that the method identifies target edges better than traditional edge detection methods.
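A sketch of the per-pixel feature construction: gradient magnitudes on the luminance and chrominance channels, weighted and paired with a neighbourhood mean, then classified by an SVM into target-edge versus non-edge pixels. The channel weights and neighbourhood size are illustrative assumptions.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def pixel_edge_features(bgr):
    """One 6-dimensional feature row per pixel: weighted gradient + local mean per channel."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    feats = []
    for ch, weight in zip(cv2.split(ycrcb), (1.0, 0.5, 0.5)):   # luminance weighted highest
        gx = cv2.Sobel(ch, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(ch, cv2.CV_32F, 0, 1, ksize=3)
        mag = weight * cv2.magnitude(gx, gy)
        local_mean = cv2.blur(mag, (3, 3))                      # neighbourhood context
        feats.extend([mag, local_mean])
    return np.stack(feats, axis=-1).reshape(-1, 6)

# clf = SVC(kernel="rbf").fit(pixel_edge_features(train_img), edge_labels)   # placeholders
# edge_map = clf.predict(pixel_edge_features(test_img)).reshape(test_img.shape[:2])
```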

16.
17.
Multimedia Tools and Applications - Intelligent Transportation Systems (ITS), including unmanned vehicles, have gradually matured on the road. How to eliminate the interference due to...

18.
Ears have rich structural features that are almost invariant with increasing age and with facial expression variations, so ear recognition has become an effective and appealing approach to non-contact biometric recognition. This paper gives an up-to-date review of research on ear recognition. Current 2D ear recognition approaches achieve good performance in constrained environments; however, recognition performance degrades severely under pose, lighting, and occlusion changes. This paper proposes a 2D ear recognition approach based on local information fusion to deal with ear recognition under partial occlusion. First, the whole 2D image is divided into sub-windows. Then, Neighborhood Preserving Embedding is used for feature extraction on each sub-window, and the most discriminative sub-windows are selected according to their recognition rates; each sub-window corresponds to a sub-classifier. Finally, a sub-classifier fusion approach is used for recognition on partially occluded images. Experimental results on the USTB and UND ear datasets show that only a few sub-windows are needed to represent the most meaningful region of the ear, and that the multi-classifier model achieves a higher recognition rate than using the whole image for recognition.
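A sketch of the sub-window classifier-fusion idea, with PCA standing in for Neighborhood Preserving Embedding (an explicit substitution) and a simple majority vote in place of the paper's sub-window selection and fusion rule; grid size and component count are placeholders.

```python
import numpy as np
from collections import Counter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def split_windows(img, grid=(4, 4)):
    """Split a grayscale ear image into a grid of flattened sub-windows."""
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    return [img[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].ravel()
            for r in range(grid[0]) for c in range(grid[1])]

def train_subclassifiers(images, labels, grid=(4, 4), n_components=20):
    """Train one embedding + nearest-neighbour classifier per sub-window position."""
    per_window = list(zip(*[split_windows(im, grid) for im in images]))
    clfs = []
    for window_samples in per_window:
        clf = make_pipeline(PCA(n_components=n_components),
                            KNeighborsClassifier(n_neighbors=1))
        clfs.append(clf.fit(np.vstack(window_samples), labels))
    return clfs

def predict(clfs, img, grid=(4, 4)):
    """Fuse sub-classifier decisions by majority vote, so occluded windows cannot dominate."""
    votes = [clf.predict(win.reshape(1, -1))[0]
             for clf, win in zip(clfs, split_windows(img, grid))]
    return Counter(votes).most_common(1)[0][0]
```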

19.
To explore effective feature extraction methods for face recognition, an algorithm based on feature-level fusion is proposed. The method combines Locality Preserving Projections (LPP) and the Maximum Margin Criterion (MMC). LPP discriminant analysis is first applied to the training samples to obtain each sample's projection in the LPP subspace; MMC is then applied to all the projections to extract more effective discriminative features; classification uses a nearest-neighbor classifier. Tests on the ORL face database show that the algorithm maintains a good recognition rate under variations in pose, illumination, expression, and number of training samples.
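For reference, the two criteria in their standard forms; the cascade projects samples with the LPP basis first, applies MMC in the reduced space, and classifies with a nearest-neighbour rule.

```latex
% S is the neighbourhood affinity matrix, D its degree matrix, L = D - S the
% graph Laplacian, and S_b, S_w the between- and within-class scatter matrices.
\begin{align}
  \text{LPP:}\quad &\min_{W}\ \sum_{i,j}\bigl\|W^{\top}x_i - W^{\top}x_j\bigr\|^2 S_{ij}
      \;=\; \min_{W}\ \operatorname{tr}\!\bigl(W^{\top} X L X^{\top} W\bigr)
      \quad \text{s.t. } W^{\top} X D X^{\top} W = I, \\
  \text{MMC:}\quad &\max_{W}\ \operatorname{tr}\!\bigl(W^{\top}(S_b - S_w)W\bigr)
      \quad \text{s.t. } W^{\top}W = I .
\end{align}
```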

20.
Matching objects across multiple cameras with non-overlapping views is a necessary but difficult task in wide-area video surveillance. Owing to the lack of spatio-temporal information, only visual information can be used in some scenarios, especially when the cameras are widely separated. This paper proposes a novel framework based on multi-feature fusion and incremental learning to match objects across disjoint views in the absence of space-time cues. We first develop a competitive major feature histogram fusion representation (CMFH) to formulate the appearance model characterizing potentially matching objects. Because the appearance of an object can change over time, the models must be updated continuously; we therefore adopt an improved incremental general multicategory support vector machine algorithm (IGMSVM) to update the appearance models online and match objects with a classification method. Only a small number of samples is needed to build an accurate classification model. Tests are performed on the CAVIAR, ISCAPS, and VIPeR databases, where the objects change significantly due to variations in viewpoint, illumination, and pose. Experimental results demonstrate the advantages of the proposed methodology in computational efficiency, storage, and matching accuracy over other state-of-the-art classification-based matching approaches. The system developed in this research can be used in real-time video surveillance applications.
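A sketch of the online-update idea only, with scikit-learn's SGDClassifier (hinge loss, i.e. a linear SVM trained incrementally) standing in for the paper's IGMSVM, and random vectors standing in for the fused CMFH appearance features.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="hinge")                     # hinge loss ~ linear SVM
rng = np.random.default_rng(0)

# initial small batch of fused appearance features and identity labels (synthetic)
X0, y0 = rng.normal(size=(20, 64)), rng.integers(0, 10, size=20)
clf.partial_fit(X0, y0, classes=np.arange(10))        # classes required on the first call

# online update as objects reappear under new viewpoints or lighting
X1, y1 = rng.normal(size=(5, 64)), rng.integers(0, 10, size=5)
clf.partial_fit(X1, y1)

# match a new observation against the continuously updated appearance models
print(clf.predict(rng.normal(size=(1, 64))))
```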
