Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Hazard identification is a key step in civil aviation management, and its results must be highly accurate to guarantee flight safety. To this end, a hazard identification algorithm based on deep extreme learning machines, HIELM (Hazard Identification Algorithm Based on Extreme Learning Machine), is proposed, with a deep network structure consisting of several deep stacked extreme learning machines (S-ELMs) and one single-hidden-layer extreme learning machine (ELM). In the algorithm, the deep S-ELMs are arranged in parallel, each allowed a different number of hidden nodes; grouped by hazard domain, they receive hazard state information for pre-learning, and the generation of the network input weights is improved using identification features. The single-hidden-layer ELM takes the pre-learning results of the deep S-ELMs as its input, and an improved back-propagation procedure raises the identification accuracy of the network. Training each deep S-ELM separately also relieves the memory pressure of high-dimensional training data and the overfitting caused by an excessive number of nodes.

2.
Pavement cracks pose a serious latent threat to driving safety, and traditional manual inspection is inefficient. Existing crack detection models generalize poorly, and segment cracks against complex backgrounds inaccurately and slowly. To solve these problems, this paper proposes Crack U-Net, an improved encoder-decoder network aimed at raising both the generalization ability and the detection accuracy of pavement crack detection. First, Crack U-Net uses dense connection...

3.
To improve the accuracy of lane detection and thereby the driving safety of autonomous vehicles, a lane detection network model based on an improved FCN is proposed, building on traditional lane detection methods such as edge extraction, the Hough transform, color-space thresholding, and perspective transformation. The model accurately extracts lane feature information, and it is trained on a lane detection dataset to evaluate its detection performance. Comparative experiments show that the improved FCN model raises detection accuracy by 1% over the traditional FCN network model and delivers good segmentation effectiveness.

4.
赵鑫, 强彦, 葛磊. 《计算机科学》, 2017, 44(8): 312-317.
In recent years, deep learning has been widely applied to lung cancer diagnosis, but existing studies focus mainly on lung CT images. To improve the diagnosis of pulmonary nodules, a diagnosis method based on a dual-modality deep denoising autoencoder is proposed. First, feature information of the nodule region is extracted separately from lung CT and PET images; then the PET/CT images of candidate nodules are fed as input to the deep autoencoder network, which learns high-level representations; finally, a fusion strategy combines the multiple features into the output of the framework. Experimental results show that the proposed method achieves 92.81% accuracy, 91.75% sensitivity, and 1.58% specificity, outperforming the diagnostic performance of other methods and making it better suited to computer-aided benign/malignant diagnosis of pulmonary nodules.

5.
    
In image segmentation and classification tasks, utilizing filters matched to the target object improves performance and requires less training data. We use Gabor filters as initialization to gain more discriminative power. Because the error backpropagation procedure updates the filters to fit the data, they lose their initial structure after a few updates. In this paper, we modify the gradient descent updating rule to maintain the properties of the Gabor filters. We use a left ventricle (LV) segmentation task and a handwritten digit classification task to evaluate the proposed method, comparing Gabor initialization with random initialization and transfer-learning initialization using convolutional autoencoders and convolutional networks. We also experimented with noisy data and with reduced amounts of training data to compare how the different initialization methods cope with these conditions. The results show that the pixel predictions for the segmentation task are highly correlated with the ground truth. In the classification task, in addition to Gabor and random initialization, we initialized the network with pre-trained weights obtained from a convolutional autoencoder on two different datasets and with pre-trained weights from a convolutional neural network. The experiments confirm that Gabor initialization outperforms the other initialization methods even with noisy inputs and less training data.
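The Gabor initialization described above can be sketched in NumPy using the standard real-valued Gabor parameterization; this is an illustrative bank with evenly spaced orientations, not the paper's exact filter settings, and all parameter defaults (sigma, lambd, gamma, psi) are assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel of shape (size, size): a Gaussian
    envelope modulated by a cosine carrier, rotated by theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def gabor_bank(n_filters, size=5):
    """Filter bank with evenly spaced orientations, e.g. to initialize
    a first convolutional layer instead of random weights."""
    thetas = np.linspace(0, np.pi, n_filters, endpoint=False)
    return np.stack([gabor_kernel(size, t) for t in thetas])

bank = gabor_bank(8, size=5)
print(bank.shape)  # (8, 5, 5)
```

The modified update rule in the paper would then constrain gradient descent so these kernels keep their Gabor structure; that constraint is not shown here.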

6.
To address blurred organ boundaries and large differences in organ scale in abdominal CT multi-organ segmentation, a complex-organ segmentation method based on edge constraints and an improved Swin Unetr is proposed. To extract features at different levels of detail for organs of different voxel proportions, a mask attention module is designed that computes per-organ mask information and extracts the corresponding features. Guided by dataset priors and the mask information, features are then extracted at matching window and patch sizes to obtain the fine-grained features needed to segment small organs, and these are fused with the encoder's output features. In addition, after the preliminary semantic segmentation result is produced, to fully exploit boundary information and strengthen the model's handling of it, the output semantic features pass through convolutional layers that further extract edge information, and minimizing an edge loss constrains the semantic segmentation result with the edge prediction task. The proposed method was trained and tested on the BTCV and TCIA Pancreas-CT datasets; the proposed modules were added to the convolution-based UNet++ and the Transformer-based Swin Unetr and trained, with comparative experiments against classic networks such as Unetr. On the BTCV dataset, the proposed models reach Dice coefficients of 0.8479 and 0.8406 with HD distances of 11.76 and 8.35, overall outperforming the compared methods and validating the effectiveness and feasibility of the approach.

7.
    
Image segmentation is an important issue in many industrial processes, with high potential to enhance the manufacturing process derived from raw-material imaging. For example, the metal phases contained in microstructures yield information on the physical properties of the steel. Prior literature has been devoted to developing specific computer vision techniques able to tackle a single problem involving a particular type of metallographic image. However, the field lacks a comprehensive tutorial on the different types of techniques, the methodologies, their generalizations, and the algorithms that can be applied in each scenario. This paper aims to fill this gap. First, the typologies of computer vision techniques for segmenting metallographic images are reviewed and organized into a taxonomy. Second, the potential utilization of pixel similarity is discussed by introducing novel deep learning-based ensemble techniques that exploit this information. Third, a thorough comparison of the reviewed techniques is carried out on two openly available real-world datasets, one of them a newly published dataset provided directly by ArcelorMittal, which opens up the discussion on the strengths and weaknesses of each technique and the appropriate application framework for each one. Finally, the open challenges in the topic are discussed to guide future research in covering the existing gaps.

8.
This paper studies text detection in images with complex natural backgrounds and proposes a detection method based on a double-threshold gradient pattern. First, in the coarse detection stage, Maximally Stable Extremal Regions (MSER) are extracted as candidate text regions, avoiding a scan of the whole image and greatly improving detection speed and real-time performance. Second, in the feature extraction part of the fine detection stage, a double-threshold gradient pattern feature is proposed to describe the texture of text regions, overcoming color-contrast inversion in text regions and noise interference in natural images. Finally, in the detector design of the fine detection stage, an Extreme Learning Machine (ELM) is used to construct a new cascaded ELM detector, greatly shortening classifier training time. Experimental results show that the method not only delivers excellent detection performance but also sharply reduces both classifier training time and detection time.

9.
    
Many neural network methods, such as ML-RBF and BP-MLL, have been used for multi-label classification. Recently, the extreme learning machine (ELM) has been used as the basic element for multi-label classification because of its fast training time. The extreme learning machine based autoencoder (ELM-AE) is a neural network method that, like an autoencoder, can reproduce the input signal, but it cannot handle the overfitting problem in neural networks elegantly. Introducing weight uncertainty into ELM-AE, we treat the input weights as random variables following a Gaussian distribution and propose weight-uncertainty ELM-AE (WuELM-AE). In this paper, a neural network named multi-layer ELM-RBF for multi-label learning (ML-ELM-RBF) is proposed, derived from the radial basis function for multi-label learning (ML-RBF) and WuELM-AE. ML-ELM-RBF first stacks WuELM-AE to create a deep network, and then conducts clustering analysis on the sample features of each possible class to compose the last hidden layer. ML-ELM-RBF has achieved satisfactory results on single-label and multi-label datasets. Experimental results show that WuELM-AE and ML-ELM-RBF are effective learning algorithms.
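The ELM-AE building block stacked above can be illustrated with a minimal NumPy sketch: random (here orthogonalized) input weights, a sigmoid hidden layer, and closed-form output weights beta chosen so that H·beta reconstructs the input, after which beta.T serves as the learned feature mapping. The weight-uncertainty (WuELM-AE) extension is not shown, and the ridge term, sizes, and the assumption n_hidden ≤ n_features are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder(X, n_hidden, reg=1e-3):
    """Minimal ELM-AE sketch. Input weights are random and never
    trained; only beta is solved in closed form so H @ beta ≈ X."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))
    W, _ = np.linalg.qr(W)              # orthonormal columns (n_hidden <= n_features)
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    # ridge-regularized least squares: beta = (H'H + reg*I)^-1 H' X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    features = X @ beta.T               # beta.T maps data to feature space
    return beta, features

X = rng.standard_normal((50, 10))
beta, features = elm_autoencoder(X, n_hidden=5)
print(features.shape)  # (50, 5)
```

Stacking several such blocks, each fed the features of the previous one, gives the deep network described in the abstract.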

10.
A study on effectiveness of extreme learning machine   (total citations: 7; self-citations: 0; other citations: 7)
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden-layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm called EELM that makes a proper selection of the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves to some extent the learning performance (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both benchmark function approximation and real-world problems, including classification and regression applications, show the good performance of EELM.

11.
Ensemble of online sequential extreme learning machine   (total citations: 3; self-citations: 0; other citations: 3)
Yuan Lan, Yeng Chai Soh, Guang-Bin Huang. 《Neurocomputing》, 2009, 72(13-15): 3391.
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411–1423] proposed an online sequential learning algorithm called online sequential extreme learning machine (OS-ELM), which can learn the data one-by-one or chunk-by-chunk with fixed or varying chunk size. The same work shows that OS-ELM runs much faster and provides better generalization performance than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.

12.
    
Deep convolutional neural networks are finding their way into modern machine learning tasks and have proved to be among the strongest contenders for future development in the field. Several methods proposed for image segmentation and classification problems give satisfactory results and can even outperform humans in image recognition tasks. This performance comes at a cost, however: they require a huge number of training images and a great deal of computing power and time, which makes them impractical in situations where obtaining a large dataset is not feasible. In this work, an attempt is made to segment Synthetic Aperture Radar (SAR) images, which are usually not abundant enough for training and are heavily affected by a multiplicative noise called speckle. For the segmentation task, pre-defined filters are first applied to the images, which are then fed to a hybrid CNN combining the concepts of Inception and U-Net. The effectiveness of the proposed method is examined on a complete set of SAR images not used in training, and its accuracy is compared against manually annotated SAR images.

13.
Studies have shown that extreme learning machines and discriminative dictionary learning algorithms are highly efficient and accurate for image classification. However, each has its drawbacks: the extreme learning machine is not robust to noise, while discriminative dictionary learning is time-consuming during classification. To unify these complementary strengths and improve classification performance, this paper proposes a discriminative analysis dictionary learning model that incorporates an extreme learning machine. The model uses an iterative optimization algorithm to learn the optimal discriminative analysis dictionary and extreme lear...

14.
王长宝, 李青雯, 于化龙. 《计算机科学》, 2017, 44(12): 221-226, 254.
To address the widespread failure and long training times of existing active learning algorithms under imbalanced class distributions, this paper adopts the faster-to-train extreme learning machine (ELM) as the base classifier for active learning, uses a weighted ELM algorithm for balance control during the active learning process, and derives its online learning procedure in theory, sharply reducing the time cost of active learning; the resulting hybrid algorithm is named AOW-ELM. Its effectiveness and feasibility are verified on 12 benchmark binary imbalanced datasets.
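A common way to realize the weighted ELM used for balance control is to weight each sample by the inverse of its class size in the regularized least-squares solution for the output weights, so the minority class contributes as much to the fit as the majority. The sketch below follows that standard scheme on toy two-class data; the abstract does not specify AOW-ELM's exact weighting or its online update, so these details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_elm(X, y, n_hidden=30, C=100.0):
    """Weighted ELM sketch for binary imbalance. Targets coded +1/-1;
    beta = (I/C + H' Omega H)^-1 H' Omega T with per-sample weights
    Omega set to the inverse class frequency."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    T = np.where(y == 1, 1.0, -1.0)
    w = np.where(y == 1, 1.0 / np.sum(y == 1), 1.0 / np.sum(y == 0))
    HtW = H.T * w                       # equivalent to H.T @ diag(w)
    beta = np.linalg.solve(np.eye(n_hidden) / C + HtW @ H, HtW @ T)
    return lambda Xn: (np.tanh(Xn @ W + b) @ beta > 0).astype(int)

# imbalanced toy data: 200 majority vs 20 minority samples
X_maj = rng.normal(0.0, 0.5, (200, 2))
X_min = rng.normal(2.0, 0.5, (20, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 200 + [1] * 20)

predict = weighted_elm(X, y)
minority_recall = predict(X[y == 1]).mean()
majority_accuracy = 1.0 - predict(X[y == 0]).mean()
print(minority_recall, majority_accuracy)
```

Without the weights, the least-squares fit would be dominated by the 200 majority samples; the inverse-frequency weighting is what keeps minority recall high.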

15.
Collaborative filtering has been widely applied in many fields in recent years due to the increase in web-based activities such as e-commerce and online content distribution. Current collaborative filtering techniques such as correlation-based, SVD-based, and supervised learning-based approaches provide good accuracy but are computationally very expensive and can only be deployed in static off-line settings, where the known rating information does not change with time. However, a number of practical scenarios require dynamic adaptive collaborative filtering that allows new users, items, and ratings to enter the system at a rapid rate. In this paper, we consider a novel adaptive personalized recommendation based on adaptive learning. Fast adaptive learning runs through all aspects of the proposed approach, including training, prediction, and updating. Empirical evaluation of our approach on the MovieLens dataset demonstrates that it is possible to obtain accuracy comparable to that of the correlation-based, SVD-based, and supervised learning-based approaches at a much lower computational cost.

16.
    
In statistical machine translation (SMT), re-ranking the huge number of randomly generated translation hypotheses is one of the essential components in determining the quality of the translation result. In this work, a novel re-ranking modelling framework called cascaded re-ranking modelling (CRM) is proposed by cascading a classification model and a regression model. The proposed CRM effectively and efficiently selects the good but rare hypotheses, simultaneously alleviating the issues of translation quality and computational cost. CRM can be partnered with any classifier, such as support vector machines (SVM) or the extreme learning machine (ELM). Compared to other state-of-the-art methods, experimental results show that CRM partnered with ELM (CRM-ELM) improves translation quality by up to 11.6% on the popular benchmark Chinese–English corpus (IWSLT 2014) and the French–English parallel corpus (WMT 2015), with extremely fast training on huge corpora.

17.
This paper introduces a novel interactive framework for segmenting images using probabilistic hypergraphs, which model the spatial and appearance relations among image pixels. The probabilistic hypergraph provides us with a means to pose image segmentation as a machine learning problem. In particular, we assume that a small set of pixels, referred to as seed pixels, are labeled as object and background. The seed pixels are used to estimate the labels of the unlabeled pixels by learning on a hypergraph: a quadratic smoothness term formed by a hypergraph Laplacian matrix is minimized subject to the known label constraints. We derive a natural probabilistic interpretation of this smoothness term and provide a detailed discussion of the relation of our method to other hypergraph- and graph-based learning methods. We also present a front-to-end image segmentation system based on the proposed method, which achieves promising quantitative and qualitative results on the commonly used GrabCut dataset.
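The seeded-learning step, minimizing a quadratic Laplacian smoothness term f'Lf subject to the seed constraints, reduces to one linear solve for the unlabeled nodes. The sketch below shows this for an ordinary weighted graph with the combinatorial Laplacian; the paper's method would substitute the hypergraph Laplacian, which this sketch does not construct.

```python
import numpy as np

def propagate_labels(W, seeds):
    """Seeded label propagation: minimize f' L f with f fixed on the
    seeds, i.e. solve L_uu f_u = -L_ul f_l for the unlabeled nodes.
    W: symmetric affinity matrix; seeds: dict node -> label in {0, 1}.
    Returns a soft score per node."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    labeled = np.array(sorted(seeds))
    unlabeled = np.array([i for i in range(n) if i not in seeds])
    f = np.zeros(n)
    f[labeled] = [seeds[i] for i in labeled]
    f[unlabeled] = np.linalg.solve(
        L[np.ix_(unlabeled, unlabeled)],
        -L[np.ix_(unlabeled, labeled)] @ f[labeled])
    return f

# chain graph 0-1-2-3-4 with seeds at the two ends
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
f = propagate_labels(W, {0: 0.0, 4: 1.0})
print(f)  # linear interpolation: [0, 0.25, 0.5, 0.75, 1]
```

On the chain the solution is the harmonic (linear) interpolation between the seeds; thresholding f at 0.5 would give the hard object/background segmentation.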

18.
    
X-ray computed tomography (CT) is one of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose CT imaging ensures image quality but usually raises concerns about the potential health risks of radiation exposure, especially for cancer patients. The conflict between reducing radiation exposure and retaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) into high-quality full-dose CT (F-CT) images. In this paper, we propose a Spatial Deformable Generative Adversarial Network (SDGAN) to achieve efficient L-CT reconstruction and analysis. SDGAN consists of three modules: a conformer-based generator, a dual-scale discriminator, and a spatial deformable fusion module (SDFM). A sequence of consecutive L-CT slices is first fed into the conformer-based generator with the dual-scale discriminator to generate F-CT images. These estimated F-CT images are then fed into SDFM, which fully explores the inter- and intra-slice spatial information, to synthesize the final high-quality F-CT images. Experimental results show that the proposed SDGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.

19.
Image restoration is an important problem in image processing. The restoration process can be viewed as the inverse of the degradation process; its main aim is to remove degradation from an image and effectively improve its quality. Image restoration is a fundamental, prerequisite topic in image processing tasks, yet blur is ubiquitous and restoration carries real difficulties. A number of restoration methods have developed rapidly with notable results, such as filtering-based, regularization-based, and differential-equation-based methods. Many current methods restore synthetic images well but perform poorly on real images. This paper focuses on exploiting the representational strengths of deep learning and analyzes general-purpose networks for the restoration process, which is of real significance for the development of image restoration.

20.
Images captured under low illumination suffer from low contrast, low brightness, and missing detail, which complicates image processing. An improved zero-reference deep-curve low-light enhancement algorithm is proposed. Introducing a parameter tied to the convolution kernel size into the spatial consistency loss unifies the enhancement effect across images of different sizes; linking the color constancy loss and illumination smoothness loss functions to the input image type raises the PSNR of the enhanced result by 17.75% and the contrast by 26.75%; a symmetric convolution structure addresses the heavy computation of the original algorithm; and the zero-reference deep curve network (Zero-DCE) is optimized with the lightweight MobileNetV2 network, reducing the computational complexity of the model while preserving good enhancement quality.
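At the core of Zero-DCE is the iterated light-enhancement curve LE(x) = x + a·x·(1 − x), applied pixel-wise once per curve parameter; in the network the per-pixel parameter maps are predicted, while the sketch below applies the curve with fixed illustrative parameters to show its effect on normalized intensities.

```python
import numpy as np

def enhance(img, alphas):
    """Apply the Zero-DCE light-enhancement curve pixel-wise:
    LE(x) = x + a * x * (1 - x), iterated once per parameter in
    `alphas`. For a in [0, 1] and x in [0, 1] the output stays in
    [0, 1] and dark values are lifted more than bright ones."""
    out = np.asarray(img, dtype=float)
    for a in alphas:
        out = out + a * out * (1.0 - out)
    return out

dark = np.array([0.05, 0.2, 0.5, 0.9])       # normalized intensities
bright = enhance(dark, alphas=[0.8] * 4)     # four iterations, a = 0.8
print(np.round(bright, 3))
```

Because the curve is monotonic and fixes the endpoints 0 and 1, it brightens without clipping, which is why the network only needs to learn the parameter maps, guided by the zero-reference losses described above.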
