Similar Documents (20 results)
1.
Li  Zhao  Lu  Wei  Sun  Zhanquan  Xing  Weiwei 《Neural computing & applications》2016,28(1):513-524

Text classification is a popular research topic in data mining, and many classification methods have been proposed. Feature selection is an important technique for text classification, since it reduces dimensionality, removes irrelevant data, increases learning accuracy, and improves the comprehensibility of results. In recent years, data in many applications have grown larger in both the number of instances and the number of features, and classical feature selection methods no longer work well on large-scale datasets because of their computational cost. To address this issue, this paper proposes a parallel feature selection method based on MapReduce. Specifically, mutual information based on Rényi entropy measures the relationship between feature variables and class variables, and maximum mutual information theory is then employed to choose the most informative combination of feature variables. The selection process is implemented in MapReduce, which is efficient and scalable for large-scale problems. Finally, a practical example demonstrates the efficiency of the proposed method.
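As a rough illustration of the scoring step described in this abstract, the sketch below ranks discrete features by a mutual information built from Rényi entropies (here α = 2, via the common additive surrogate I(X;Y) ≈ H(X) + H(Y) − H(X,Y)). The estimator, the α value, and the toy data are assumptions; the paper's exact estimator and its MapReduce partitioning are not reproduced.

```python
# Hypothetical sketch: rank discrete features against the class variable
# with a Rényi-entropy-based mutual information surrogate.
import numpy as np

def renyi_entropy(counts, alpha=2.0):
    """Rényi entropy of a discrete distribution given raw counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_mutual_information(x, y, alpha=2.0):
    """Surrogate MI: H(X) + H(Y) - H(X, Y) with Rényi entropies."""
    joint = np.histogram2d(x, y, bins=(len(np.unique(x)), len(np.unique(y))))[0]
    hx = renyi_entropy(joint.sum(axis=1), alpha)   # marginal over X
    hy = renyi_entropy(joint.sum(axis=0), alpha)   # marginal over Y
    hxy = renyi_entropy(joint.ravel(), alpha)      # joint
    return hx + hy - hxy

def rank_features(X, y, alpha=2.0):
    """Score every column of X against the labels, rank descending."""
    scores = [renyi_mutual_information(X[:, j], y, alpha) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
X = np.column_stack([y ^ (rng.random(1000) < 0.1),   # informative feature
                     rng.integers(0, 2, 1000)])       # pure noise
print(rank_features(X, y))   # the informative column should rank first
```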


2.

Predicting the direction of stock price movement is significant in both financial circles and academia. Stock prices carry complex, incomplete, and fuzzy information, which makes predicting their trend extremely difficult; forecasting and analysing financial data is a nonlinear, time-dependent problem. With the rapid development of machine learning and deep learning, this task can be performed more effectively by a purposely designed network. This paper aims to improve prediction accuracy and minimize forecasting error through a deep learning architecture based on Generative Adversarial Networks. A generic model is proposed that combines the Phase-space Reconstruction (PSR) method for reconstructing price series with a Generative Adversarial Network (GAN) built from two neural networks: a Long Short-Term Memory (LSTM) network as the generative model and a Convolutional Neural Network (CNN) as the discriminative model, trained adversarially to forecast the stock market. The LSTM generates new instances from historical basic-indicator information, and the CNN then estimates whether the data were predicted by the LSTM or are real. The GAN outperformed a standalone LSTM: it was 4.35% more accurate in predicting direction and reduced processing time and RMSE by 78 s and 0.029, respectively. By concentrating on minimizing the root mean square error and processing time while improving direction prediction accuracy, the proposed system yields better accuracy on the stock index.
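The generator/discriminator pairing described above can be sketched compactly in PyTorch. Everything below — layer sizes, window length, which column holds the close price, the training step — is an illustrative assumption rather than the paper's configuration.

```python
# Minimal adversarial sketch: an LSTM generates the next price from a
# window of past indicators; a 1-D CNN judges whether a (window, next
# value) sequence is real or generated. All sizes are assumptions.
import torch
import torch.nn as nn

WINDOW, FEATURES = 30, 5

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, 64, batch_first=True)
        self.head = nn.Linear(64, 1)
    def forward(self, x):                  # x: (batch, WINDOW, FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predicted next close price

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1))
    def forward(self, closes):             # closes: (batch, WINDOW + 1)
        return self.net(closes.unsqueeze(1))   # real/fake logit

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x, next_close):
    past_close = x[:, :, 0]                # assume column 0 is the close price
    fake = G(x)
    # discriminator: real sequences vs. generated sequences
    real_seq = torch.cat([past_close, next_close], dim=1)
    fake_seq = torch.cat([past_close, fake.detach()], dim=1)
    loss_d = bce(D(real_seq), torch.ones(len(x), 1)) + \
             bce(D(fake_seq), torch.zeros(len(x), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator: try to fool the discriminator
    loss_g = bce(D(torch.cat([past_close, fake], dim=1)), torch.ones(len(x), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(8, WINDOW, FEATURES), torch.randn(8, 1))
```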


3.
Qi  Jinwei  Huang  Xin  Peng  Yuxin 《Multimedia Tools and Applications》2017,76(23):25109-25127

As a prominent research topic in the multimedia area, cross-media retrieval aims to capture the complex correlations among multiple media types. Learning a better shared representation and distance metric for multimedia data is important for boosting cross-media retrieval. Motivated by the strong ability of deep neural networks to learn feature representations and comparison functions, we propose the Unified Network for Cross-media Similarity Metric (UNCSM), which associates cross-media shared representation learning with a distance metric in a unified framework. First, we design a two-pathway deep network pretrained with a contrastive loss and fine-tuned with a double triplet similarity loss to learn the shared representation for each media type by modeling relative semantic similarity. Second, a metric network is designed to effectively calculate the cross-media similarity of the shared representation by modeling pairwise similar and dissimilar constraints. Compared to existing methods, which mostly ignore the dissimilar constraints and use only a simple distance metric such as Euclidean distance, our UNCSM approach unifies representation learning and the distance metric to preserve relative similarity and embrace more complex similarity functions, further improving cross-media retrieval accuracy. The experimental results show that our UNCSM approach outperforms 8 state-of-the-art methods on 4 widely used cross-media datasets.
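For concreteness, here is a hedged PyTorch sketch of the two losses the abstract names. The margins, embedding size, and the way negatives are formed (rolling the batch) are assumptions; the paper's double-triplet pairing across media types is only approximated by one triplet term per direction.

```python
# Illustrative sketch of the contrastive and triplet terms described above.
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, same, margin=1.0):
    """Pull matched cross-media pairs together, push mismatched apart."""
    d = F.pairwise_distance(a, b)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Relative similarity: anchor closer to positive than to negative."""
    return F.relu(F.pairwise_distance(anchor, positive)
                  - F.pairwise_distance(anchor, negative) + margin).mean()

img = torch.randn(16, 128)   # image embeddings from one pathway
txt = torch.randn(16, 128)   # text embeddings from the other pathway
same = torch.randint(0, 2, (16,)).float()
loss = (contrastive_loss(img, txt, same)
        + triplet_loss(img, txt, txt.roll(1, dims=0))    # image anchors
        + triplet_loss(txt, img, img.roll(1, dims=0)))   # text anchors
```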


4.
Objective: Urban built-up areas are fundamental information for urban studies and a prerequisite for implementing regional planning and the spatial arrangement of urban functions. However, built-up areas in remote sensing imagery sit in complex environments, and different built-up areas vary considerably in location and scale, which makes their extraction difficult. Method: Building on a deep convolutional neural network for semantic image segmentation, this paper improves the original DeepLab network with a feature-map enhancement module and a channel attention module, and extracts built-up areas more accurately through sliding-window prediction and fully connected conditional random field post-processing. To counter the overfitting and limited robustness common to deep learning methods, data augmentation is used to further strengthen the model. Results: The experimental data are Gaofen-2 (GF-2) remote sensing images of parts of Sanya and Haikou. The results show that the accuracy of the proposed method exceeds 93% with a Kappa coefficient above 0.837; it effectively extracts built-up areas from large-scale high-resolution remote sensing imagery, and its results are the closest to the actual situation. Conclusion: Facing the diverse spectral information and complex textures of built-up areas in high-resolution satellite imagery, the proposed algorithm captures more feature information in the feature extraction network. The two processing techniques proposed alongside the improved deep learning method markedly raise model accuracy, perform well on large real-world remote sensing images, and have substantial practical value and broad application prospects.
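As one way to picture the sliding-window prediction step mentioned in the method, the sketch below tiles a large scene with overlap, scores each tile, and averages overlapping logits back into a full-size map. The tile size, stride, and stand-in model are assumptions; the modified DeepLab network and the fully connected CRF post-processing are not reproduced.

```python
# Hypothetical sketch of sliding-window inference over a large image.
import numpy as np

def sliding_window_predict(image, model, tile=512, stride=256, n_classes=2):
    h, w, _ = image.shape
    logits = np.zeros((h, w, n_classes))
    hits = np.zeros((h, w, 1))
    for y in range(0, max(h - tile, 0) + 1, stride):
        for x in range(0, max(w - tile, 0) + 1, stride):
            patch = image[y:y + tile, x:x + tile]
            logits[y:y + tile, x:x + tile] += model(patch)  # (tile, tile, C)
            hits[y:y + tile, x:x + tile] += 1
    return (logits / np.maximum(hits, 1)).argmax(axis=-1)

# Toy stand-in "network": classify by mean brightness of each pixel.
fake_model = lambda p: np.stack([np.full(p.shape[:2], 0.5),
                                 p.mean(axis=-1)], axis=-1)
scene = np.random.rand(1024, 1024, 3)
mask = sliding_window_predict(scene, fake_model)
```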

5.
Yang  Xinbo  Zhang  Yan 《Multimedia Tools and Applications》2021,80(11):16537-16547

Multi-atlas segmentation is widely accepted as an essential image segmentation approach. By leveraging information from the atlases instead of relying on model-based segmentation techniques, multi-atlas segmentation can significantly enhance segmentation accuracy. However, label fusion, which plays a central role in multi-atlas segmentation, remains the primary challenge. With this in mind, a deep learning-based approach is presented that integrates feature extraction and label fusion. The proposed architecture consists of two independent channels composed of consecutive convolutional layers. To evaluate the performance of our approach, we conducted comparison experiments against state-of-the-art techniques on publicly available datasets. Experimental results demonstrate that the proposed approach outperforms state-of-the-art techniques in both efficiency and effectiveness.
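Label fusion is the step this paper targets; for contrast with the learned fusion, here is the classic majority-vote baseline over registered atlas label maps, sketched in NumPy. This illustrates what label fusion combines, not the paper's two-channel network.

```python
# Baseline label fusion: per-voxel majority vote across registered atlases.
import numpy as np

def majority_vote(atlas_labels):
    """atlas_labels: (n_atlases, H, W) integer label maps, already
    registered to the target image. Returns the per-voxel mode."""
    n_classes = atlas_labels.max() + 1
    votes = np.stack([(atlas_labels == c).sum(axis=0)
                      for c in range(n_classes)])    # (C, H, W) vote counts
    return votes.argmax(axis=0)

atlases = np.random.randint(0, 3, size=(5, 64, 64))  # 5 toy atlases
fused = majority_vote(atlases)
```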


6.
As the core problem in text mining, text classification has become an important topic in natural language processing. Short-text classification, owing to its sparsity, real-time nature, and irregular style, is one of its pressing challenges. In certain scenarios short texts carry a great deal of implicit semantics, which makes mining latent semantic features from limited text difficult. Existing approaches to short-text classification mainly adopt traditional machine learning or deep learning algorithms, but such models are complex to build, labor-intensive, and inefficient. Moreover, short texts contain little effective information and are highly colloquial, placing high demands on a model's feature learning ability. To address these problems, this paper proposes the KAeRCNN model, which extends TextRCNN with knowledge awareness and a dual attention mechanism. Knowledge awareness comprises knowledge graph entity linking and knowledge graph embedding, introducing external knowledge to obtain semantic features, while the dual attention mechanism improves the efficiency with which the model extracts effective information from short texts. Experimental results show that KAeRCNN significantly outperforms traditional machine learning algorithms in classification accuracy, F1 score, and practical effectiveness. We validated the algorithm's performance and adaptability: accuracy reaches 95.54% and F1 reaches 0.901, an average improvement of about 14% in accuracy and about 13% in F1 over four traditional machine learning algorithms. Compared with TextRCNN, KAeRCNN improves accuracy by about 3%. Comparative experiments with deep learning algorithms also show that our model performs well on short-text classification in other domains. Both theory and experiments demonstrate that the proposed KAeRCNN model is more effective for short-text classification.

7.
Liang  Yunji  Guo  Bin  Yu  Zhiwen  Zheng  Xiaolong  Wang  Zhu  Tang  Lei 《World Wide Web》2021,24(1):205-228

With the exponential growth of user-generated content, policies and guidelines are not always enforced in social media, resulting in the prevalence of deviant content that violates them. The adverse effects of deviant content are devastating and far-reaching. However, detecting deviant content in sparse and imbalanced textual data is challenging, as a large number of stakeholders are involved with different stances and the subtle linguistic cues are highly dependent on complex context. To address this problem, we propose a multi-view attention-based deep learning system that combines random subspace and binary particle swarm optimization (RS-BPSO) to distill content of interest (candidates) from imbalanced data, and applies context and view attention mechanisms in a convolutional neural network (dubbed SSCNN) to extract structural and semantic features. We evaluate the proposed approach on a large-scale dataset collected from Facebook and find that RS-BPSO detects whether content is associated with marijuana with an accuracy of 87.55%, while SSCNN outperforms baselines with an accuracy of 94.50%.
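As a hedged sketch of the BPSO half of RS-BPSO: each particle is a binary mask, and a sigmoid of its velocity gives the probability of setting each bit. The swarm parameters and the toy fitness below are assumptions; in the paper the fitness would be coupled to random-subspace classification accuracy.

```python
# Illustrative binary particle swarm optimization over bit masks.
import numpy as np

rng = np.random.default_rng(0)

def bpso(fitness, n_bits, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    X = rng.integers(0, 2, (n_particles, n_bits))        # particle positions
    V = rng.normal(0, 1, (n_particles, n_bits))          # particle velocities
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        V = (w * V + c1 * rng.random(V.shape) * (pbest - X)
                   + c2 * rng.random(V.shape) * (gbest - X))
        X = (rng.random(V.shape) < 1 / (1 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

# Toy fitness: prefer masks matching a hidden "useful feature" pattern.
target = rng.integers(0, 2, 30)
best = bpso(lambda m: -np.abs(m - target).sum(), n_bits=30)
```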


8.
Huang  Ting  Zhang  Qiang  Tang  Xiaoan  Zhao  Shuangyao  Lu  Xiaonong 《Artificial Intelligence Review》2022,55(2):1289-1315

Fault diagnosis plays an important role in actual production activities. As large amounts of data can be collected efficiently and economically, data-driven methods based on deep learning have achieved remarkable results in the fault diagnosis of complex systems thanks to their superior feature extraction. However, existing techniques rarely consider the time delay in the occurrence of faults, which affects diagnostic performance. In this paper, by jointly considering feature extraction and the time delay of fault occurrence, we propose a novel fault diagnosis method consisting of two parts: sliding-window processing, and a CNN-LSTM model combining a Convolutional Neural Network (CNN) with a Long Short-Term Memory network (LSTM). First, samples obtained from multivariate time series by sliding-window processing integrate feature information with time-delay information. The samples are then fed into the proposed CNN-LSTM model: the CNN layers perform feature learning without relying on prior knowledge, and the LSTM layers capture time-delay information. The method is applied to fault diagnosis of the Tennessee Eastman chemical process, where it greatly improves predictive accuracy and robustness to noise. Comparisons with five existing fault diagnosis methods show the superiority of the proposed method.
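The sliding-window construction is concrete enough to sketch: each window over the multivariate series becomes one (window × variables) sample whose label is taken a few steps past the window's end, so the sample carries both feature and time-delay information. The window length and delay horizon below are illustrative assumptions.

```python
# Sliding-window sample construction for a multivariate time series.
import numpy as np

def make_windows(series, labels, window=20, horizon=5):
    """series: (T, n_vars); labels: (T,). Pair each window with the
    label `horizon` steps past its end to absorb delayed fault onset."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(labels[t + window + horizon - 1])
    return np.stack(X), np.array(y)

T, n_vars = 500, 52                      # 52 variables as in Tennessee Eastman
series = np.random.randn(T, n_vars)
labels = np.random.randint(0, 21, T)     # normal + 20 fault classes
X, y = make_windows(series, labels)
print(X.shape, y.shape)                  # (476, 20, 52) (476,)
```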


9.
In the past decade, granular computing (GrC) has been an active research topic in machine learning and computer vision, though granularity division is itself an open and complex problem. Deep learning, championed by Geoffrey Hinton, simulates the hierarchical structure of the human brain, processing data from lower to higher levels and gradually composing more and more semantic concepts. Information similarity, proximity, and functionality constitute the key points in Zadeh's original insight into granular computing. Many GrC studies are based on the equivalence relation or the more general tolerance relation, either of which can be described by distance functions. Information similarity and proximity, which depend on the sample distribution, can easily be described by fuzzy logic. From this point of view, GrC can be considered a set of fuzzy logical formulas, geometrically defined as a layered framework in a multi-scale granular system. The necessity of such a multi-scale layered granular system is supported by the columnar organization of the neocortex, so the granular system proposed in this paper can be viewed as a new explanation of deep learning that simulates the hierarchical structure of the human brain. In view of this, a novel learning approach that combines fuzzy logical design with machine learning is proposed to construct a GrC system and explore a new direction for deep learning. Unlike previous works on the theoretical framework of GrC, our granular system is abstracted from brain science and information science, so it can guide research in image processing and pattern recognition. Finally, we take haze removal as an example to demonstrate that our multi-scale GrC system markedly increases texture information entropy and improves the haze-removal effect.

10.

Semantic segmentation has a wide array of applications, such as scene understanding, autonomous driving, and robot manipulation. While existing segmentation models have achieved good performance using bottom-up deep neural processing, this paper describes a novel deep learning architecture that integrates top-down and bottom-up processing. The resulting model achieves higher accuracy at a relatively low computational cost. In the proposed model, higher-level top-down information is transmitted to the lower layers through recurrent connections in an encoder and a decoder, and the recurrent connection weights are trained using backpropagation. Experiments demonstrate that this use of top-down information improves the mean intersection over union by more than 3% compared with a state-of-the-art bottom-up-only network on the CamVid, SUN-RGBD, and PASCAL VOC 2012 benchmark datasets. Additionally, the proposed model is successfully applied to a dataset designed for robotic grasping tasks.
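The reported gain is measured in mean intersection over union (mIoU); for reference, this is the standard computation of that metric from a confusion matrix accumulated over predicted and ground-truth label maps.

```python
# Mean IoU from a confusion matrix (toy label maps for illustration).
import numpy as np

def mean_iou(pred, gt, n_classes):
    cm = np.bincount(n_classes * gt.ravel() + pred.ravel(),
                     minlength=n_classes ** 2).reshape(n_classes, n_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    valid = union > 0                    # ignore classes absent from both
    return (inter[valid] / union[valid]).mean()

pred = np.random.randint(0, 12, (360, 480))   # CamVid-sized toy example
gt = np.random.randint(0, 12, (360, 480))
print(mean_iou(pred, gt, n_classes=12))
```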


11.
To improve the intelligence of track association in radar data processing, fully exploit target feature information, and simplify the processing pipeline, an end-to-end track association algorithm based on a deep learning network model is proposed. The paper first analyzes the problems of neural-network-based track association, namely scarce sample detail and cumbersome processing pipelines, and then proposes an end-to-end deep learning model. According to the processing characteristics of track association data, the model improves the convolutional neural network structure for feature extraction, fully exploits the long short-term memory network's ability to process historical and future information, and analyzes the association between preceding and subsequent tracks. After Kalman filtering of the raw data, all track feature information is taken as input, and the LSTM deep neural network model with CNN-based feature extraction directly outputs the track association results. Simulation results show that the proposed model can fully learn multiple features of the targets, achieves high track association accuracy, and offers a useful reference for intelligent track association analysis.
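The pipeline Kalman-filters the raw plots before the network sees them. Below is a hedged NumPy sketch of that smoothing step for a 2-D constant-velocity target (state [x, y, vx, vy]); the noise levels and time step are illustrative assumptions.

```python
# Constant-velocity Kalman filter smoothing noisy position measurements.
import numpy as np

def kalman_smooth(measurements, dt=1.0, q=1e-2, r=1.0):
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]])   # constant-velocity motion
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])    # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = np.zeros(4), np.eye(4) * 10.0
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q             # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (z - H @ x)                   # update with measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2])
    return np.array(out)

t = np.arange(50)
truth = np.stack([t * 2.0, t * 0.5], axis=1)
track = kalman_smooth(truth + np.random.randn(50, 2))
```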

12.
陶传奇  包盼盼  黄志球  周宇  张智轶 《软件学报》2021,32(11):3351-3371
At the programming site of software development, abundant information related to the current development task is available, such as code context and the developer's intent. Recommending the current code line from the existing programming-site context would not only help developers complete their tasks but also improve development efficiency. Existing approaches typically perform code repair or completion, or rely on keyword-matching search, and can hardly recommend complete code lines. A feasible solution is to apply deep learning to large-scale existing source code to extract contextual factors of code lines and mine implicit context as a basis for accurate recommendation. This paper therefore proposes a deep-learning-based code line recommendation method with deep awareness of the programming-site context. It learns the latent associations among contexts from a large-scale code dataset, derives candidate code lines from the source code and task data already at the programming site, and recommends the Top-N lines to the programmer. Deep context awareness uses an RNN Encoder-Decoder: the framework encodes several preceding code lines at the programming site into a vector containing their contextual information, and then decodes that vector to produce the predicted Top-N code lines. Experiments on a large-scale code line dataset collected from open-source platforms show that the method can recommend relevant code lines from the existing context, with a Top-10 accuracy of about 60% and an MRR of about 0.3, meaning that recommendations users are satisfied with rank near the front of the N results.
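Top-10 accuracy and MRR are the two metrics quoted above; the short sketch below shows how both are computed from ranked recommendation lists, with toy lists standing in for the decoder's Top-N output.

```python
# Top-N accuracy and mean reciprocal rank over ranked recommendations.
def top_n_accuracy(ranked_lists, truths, n=10):
    hits = sum(truth in ranked[:n] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

def mean_reciprocal_rank(ranked_lists, truths):
    rr = []
    for ranked, truth in zip(ranked_lists, truths):
        rr.append(1.0 / (ranked.index(truth) + 1) if truth in ranked else 0.0)
    return sum(rr) / len(rr)

ranked_lists = [["a", "b", "c"], ["x", "y", "z"]]   # model's Top-N lines
truths = ["b", "z"]                                  # lines actually written
print(top_n_accuracy(ranked_lists, truths, n=10))    # 1.0
print(mean_reciprocal_rank(ranked_lists, truths))    # (1/2 + 1/3) / 2 ≈ 0.417
```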

13.
Zhang  Guangsheng  Liu  Bo  Zhu  Tianqing  Zhou  Andi  Zhou  Wanlei 《Artificial Intelligence Review》2022,55(6):4347-4401

Concerns about visual privacy have been raised increasingly along with the dramatic growth in image and video capture and sharing. Meanwhile, with recent breakthroughs in deep learning technologies, visual data can now be easily gathered and processed to infer sensitive information. Visual privacy in the context of deep learning is therefore an important and challenging topic, yet there has been no systematic study of it to date. In this survey, we discuss algorithms for visual privacy attacks and the corresponding defense mechanisms in deep learning. We analyze the privacy issues in both visual data and visual deep learning systems, and show that deep learning can serve as a powerful privacy attack tool as well as a preservation technique with great potential. We also point out possible directions and suggestions for future work. By thoroughly investigating the relationship between visual privacy and deep learning, this article sheds light on incorporating privacy requirements in the deep learning era.


14.
Hu  Yuxi  Fu  Taimeng  Niu  Guanchong  Liu  Zixiao  Pun  Man-On 《The Journal of supercomputing》2022,78(14):16512-16528

Large-scale high-resolution three-dimensional (3D) maps play a vital role in the development of smart cities. In this work, a novel deep learning-based multi-view-stereo method is proposed for reconstructing the 3D maps in large-scale urban environments by exploiting a monocular camera. Compared with other existing works, the proposed method can perform 3D depth estimation more efficiently in terms of computational complexity and graphics processing unit memory usage. As a result, the proposed method can practically perform depth estimation for each pixel before generating 3D maps for even large-scale scenes. Extensive experiments on the well-known DTU dataset and real-life data collected on our campus confirm the good performance of the proposed method.


15.
Objective: To address deep learning's heavy dependence on large samples, a two-stream deep transfer learning method with multi-source domain confusion is proposed, improving the applicability of transferred features over traditional deep transfer learning. Method: A multi-source-domain transfer strategy is adopted to increase the coverage of the target domain's transferable features by the source domains. A two-stage adaptation learning method is proposed to obtain domain-invariant deep feature representations and similar recognition results across domain classifiers. Two-dimensional features of natural-light images and three-dimensional features of depth images are fused, raising the feature dimensionality of small-sample data while suppressing the interference of complex backgrounds on target recognition. In addition, to improve classifier performance in small-sample machine learning, a center loss is introduced into the traditional softmax loss, strengthening the penalizing supervision of the classification loss. Results: Comparative experiments on a public small-sample gesture dataset show that the proposed model achieves higher recognition accuracy than traditional recognition and transfer models, reaching 97.17% with DenseNet-169 as the pre-trained network. Conclusion: Using multi-source datasets, two-stage adaptation learning, two-stream convolutional fusion, and a composite loss function, a two-stream deep transfer learning model with multi-source domain confusion is constructed. The model increases the matching rate of source and target data distributions, enriches the feature dimensionality of target samples, improves the supervision of the loss function, and improves the applicability of transferred features in arbitrary small-sample scenarios.
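The composite loss is the most self-contained piece of the method: softmax cross-entropy plus a center loss pulling each feature toward its class center. Below is a hedged PyTorch sketch; the weight lam and the feature size are assumptions, and the two-stream fusion network itself is not shown.

```python
# Composite loss sketch: cross-entropy + center loss on fused features.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, n_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))
    def forward(self, features, labels):
        # squared distance of each feature to its own class center
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

n_classes, feat_dim, lam = 10, 256, 0.01
ce, center = nn.CrossEntropyLoss(), CenterLoss(n_classes, feat_dim)

features = torch.randn(32, feat_dim, requires_grad=True)  # fused features
logits = torch.randn(32, n_classes, requires_grad=True)   # classifier output
labels = torch.randint(0, n_classes, (32,))
loss = ce(logits, labels) + lam * center(features, labels)
loss.backward()
```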

16.
Multi-label text classification is an increasingly important field, as large amounts of text data are available and extracting relevant information matters in many application contexts. Probabilistic generative models are the basis of a number of popular text mining methods such as Naive Bayes or Latent Dirichlet Allocation. However, Bayesian models for multi-label text classification are often overly complicated in accounting for label dependencies and skewed label frequencies while also preventing overfitting. To solve this problem we employ the same technique that contributed to the success of deep learning in recent years: greedy layer-wise training. Applying this technique in the supervised setting prevents overfitting and leads to better classification accuracy. The intuition behind this approach is to learn the labels first and subsequently add a more abstract layer to represent dependencies among the labels. This allows using a relatively simple hierarchical topic model which can easily be adapted to the online setting. We show that our method successfully models dependencies online for large-scale multi-label datasets with many labels and improves over a baseline that does not model dependencies. The same strategy, layer-wise greedy training, also makes the batch variant competitive with existing, more complex multi-label topic models.

17.
Recommendation algorithms that learn representation vectors of users and items have achieved good results on large-scale data. Compared with early classical matrix factorization (MF) based recommendation algorithms, the deep-learning-based methods popular in recent years generalize better on sparse datasets. However, many methods consider only the two-dimensional rating matrix, or simply embed various attributes while ignoring the internal relationships among them. A heterogeneous information network (HIN) can store richer semantic features than a homogeneous network. In recent years, recommender systems that combine heterogeneous information networks with deep learning and mine key semantic information through meta-paths have become a research hotspot.
To better mine the associations between auxiliary information and user preferences, this paper combines tensor factorization, heterogeneous information networks, and deep learning, and proposes a new model, hin-dcf. First, a scenario-specific heterogeneous information network is built from the dataset; for a given meta-path, its relevance matrix is generated from the path information in the heterogeneous graph. Second, merging the relevance matrices of different meta-paths yields a tensor with three dimensions: users, items, and meta-paths. Then, a classical tensor factorization algorithm maps users, items, and meta-paths into latent semantic vector spaces of the same dimensionality, and the factorized latent vectors initialize the input layer of a deep neural network. Considering that different users have different preferences for different meta-paths, an attention mechanism is incorporated to learn the preference weights of different users and items for different meta-paths. In the experiments, the model effectively improves accuracy and better handles data sparsity. Finally, possible future research directions are proposed.
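The tensor-factorization step can be sketched directly: a (users × items × meta-paths) relevance tensor is factorized into same-dimensional latent vectors with a CP (CANDECOMP/PARAFAC) model fitted by gradient descent. The sizes and learning rate below are toy assumptions, and the attention network that consumes these vectors is not shown.

```python
# CP factorization of a user-item-metapath relevance tensor (toy sizes).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_paths, k = 50, 40, 3, 8
T = rng.random((n_users, n_items, n_paths))   # relevance tensor

U = rng.normal(0, 0.1, (n_users, k))          # user latent vectors
V = rng.normal(0, 0.1, (n_items, k))          # item latent vectors
P = rng.normal(0, 0.1, (n_paths, k))          # meta-path latent vectors

lr = 1.0
for step in range(2000):
    err = np.einsum('uk,ik,pk->uip', U, V, P) - T    # reconstruction error
    gU = np.einsum('uip,ik,pk->uk', err, V, P) / (n_items * n_paths)
    gV = np.einsum('uip,uk,pk->ik', err, U, P) / (n_users * n_paths)
    gP = np.einsum('uip,uk,ik->pk', err, U, V) / (n_users * n_items)
    U, V, P = U - lr * gU, V - lr * gV, P - lr * gP

print(np.sqrt((err ** 2).mean()))   # RMSE over the tensor should shrink
```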

18.
Liu  Huafeng  Han  Xiaofeng  Li  Xiangrui  Yao  Yazhou  Huang  Pu  Tang  Zhenmin 《Multimedia Tools and Applications》2019,78(17):24269-24283

Robust road detection is a key challenge in safe autonomous driving. Recently, with the rapid development of 3D sensors, more and more researchers are trying to fuse information across different sensors to improve road detection performance. Although much successful work has been done in this field, data fusion under a deep learning framework remains an open problem. In this paper, we propose a Siamese deep neural network based on FCN-8s to detect the road region. Our method uses data collected from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the LiDAR point clouds onto the image plane to generate LiDAR images and feed them into one branch of the network, while the RGB images are fed into the other branch. The feature maps extracted by these two branches at multiple scales are fused before each pooling layer via additional fusion layers. Extensive experimental results on the public KITTI ROAD dataset demonstrate the effectiveness of the proposed approach.
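The projection step is easy to make concrete: 3-D LiDAR points are mapped onto the image plane with camera intrinsics and extrinsics to form a sparse LiDAR depth image. The calibration values below are toy assumptions, not KITTI's.

```python
# Hypothetical sketch: project LiDAR points into a sparse depth image.
import numpy as np

def project_lidar(points, K, Rt, h, w):
    """points: (N, 3) in LiDAR frame; K: (3, 3) intrinsics;
    Rt: (3, 4) extrinsics. Returns an (h, w) depth image."""
    cam = (Rt @ np.c_[points, np.ones(len(points))].T).T   # to camera frame
    cam = cam[cam[:, 2] > 0]                               # keep points ahead
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    keep = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    depth = np.zeros((h, w))
    depth[v[keep], u[keep]] = cam[keep, 2]                 # depth as pixel value
    return depth

K = np.array([[700.0, 0, 640], [0, 700.0, 180], [0, 0, 1]])  # toy intrinsics
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])                 # toy extrinsics
lidar = np.random.uniform([-10, -2, 2], [10, 2, 60], (5000, 3))
img = project_lidar(lidar, K, Rt, h=375, w=1280)
```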


19.
Police dispatch logs contain temporal, spatial, and case description information and are unstructured spatiotemporal data. Compared with spatiotemporal social media, dispatch log entries are only loosely related to one another and cannot form complex network relations, so they provide few valuable clues when mining data patterns; their analysis therefore relies more on semantic mining and the exploration of semantic spatiotemporal patterns. To address this problem, a visual analytics framework is proposed that supports interactive exploration of the spatiotemporal patterns of large-scale unstructured dispatch logs. First, a topic-model-ensemble method is proposed to extract topics from heterogeneous text. Second, the framework includes a data cube that responds rapidly to user queries. Third, a visual interactive system is designed and implemented to support visual interactive exploration of the data cube. Finally, experiments on real dispatch logs from a Chinese city demonstrate the effectiveness of the method through the rich patterns found and the accuracy of topic prediction.

20.