Similar documents
 20 similar documents found
1.
Sentiment analysis of financial news helps enterprises and investors assess investment risk and improve economic returns, and therefore has high practical value. For financial news texts, we propose a sentiment analysis method that applies dependency parsing within a graph convolutional network (Dependency Analysis-based Graph Convolutional Network, DA-GCN). The method analyzes the dependency relations among words in a document to capture word-order information and the salient syntactic constituents of each sentence, and then uses word co-occurrence information within the document to propagate information and update the graph's parameters. Experiments on a financial news dataset show that, compared with conventional deep learning methods, the proposed method achieves significant improvements on all evaluation metrics.
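The layer-wise propagation over a dependency graph that this abstract describes can be sketched as a single graph-convolution step. The dependency arcs, dimensions, and random initialization below are illustrative assumptions, not the paper's actual DA-GCN configuration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# Toy 4-token sentence; edges follow (hypothetical) dependency arcs,
# made symmetric so information flows both ways along each arc.
A = np.zeros((4, 4))
for head, dep in [(1, 0), (1, 2), (2, 3)]:
    A[head, dep] = A[dep, head] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                    # token embeddings
W = rng.normal(size=(8, 8))                    # layer weights
H1 = gcn_layer(A, H, W)                        # updated token representations
```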

2.
Fuzzy cognitive maps have been widely used as abstract models for complex networks. Traditional ways to construct fuzzy cognitive maps rely on domain knowledge. In this paper, we propose to use fuzzy cognitive map learning algorithms to discover domain knowledge, in the form of causal networks, from data. More specifically, we propose to infer gene regulatory networks from gene expression data. Furthermore, a new, efficient fuzzy cognitive map learning algorithm based on a decomposed genetic algorithm is developed to learn large-scale networks. In the proposed algorithm, the simulation error is used as the objective function, while the model error is the quantity we ultimately expect to minimize. Experiments are performed to explore the feasibility of this approach. The high accuracy of the generated models and the approximate correlation between simulation errors and model errors suggest that it is possible to discover causal networks using fuzzy cognitive map learning. We also compared the proposed algorithm with ant colony optimization, differential evolution, and particle swarm optimization in the same decomposed framework. The comparison reveals the advantage of the decomposed genetic algorithm on datasets with small data volumes, large network scales, or the presence of noise.
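The decomposed idea, learning each node's incoming weights independently with the simulation error as the fitness, can be sketched roughly as follows. The population size, genetic operators, and synthetic data are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def node_sim_error(w_col, states, i):
    """One-step simulation error for node i under candidate incoming weights."""
    pred = sigmoid(states[:-1] @ w_col)
    return np.abs(pred - states[1:, i]).mean()

def learn_node_ga(states, i, pop=40, gens=60, seed=0):
    """Tiny real-coded GA over the incoming-weight column of a single node."""
    rng = np.random.default_rng(seed)
    n = states.shape[1]
    P = rng.uniform(-1, 1, size=(pop, n))
    for _ in range(gens):
        fit = np.array([node_sim_error(p, states, i) for p in P])
        P = P[np.argsort(fit)]                       # best candidates first
        elite = P[: pop // 2]
        # uniform crossover between random elite parents + Gaussian mutation
        mates = rng.integers(0, len(elite), size=(pop - len(elite), 2))
        mask = rng.random((pop - len(elite), n)) < 0.5
        kids = np.where(mask, elite[mates[:, 0]], elite[mates[:, 1]])
        kids = np.clip(kids + rng.normal(0, 0.1, size=kids.shape), -1, 1)
        P = np.vstack([elite, kids])
    return P[0]                                      # best evaluated candidate

# Synthetic "expression" time series generated by a hidden 3-node map.
rng = np.random.default_rng(1)
W_true = rng.uniform(-1, 1, size=(3, 3))
states = [rng.random(3)]
for _ in range(30):
    states.append(sigmoid(states[-1] @ W_true))
states = np.asarray(states)

# Decomposition: each node's incoming weights are learned independently.
W_learned = np.column_stack([learn_node_ga(states, i, seed=i) for i in range(3)])
err = np.mean([node_sim_error(W_learned[:, i], states, i) for i in range(3)])
```

The decomposition is what makes the search tractable at scale: instead of one GA over all n² weights, each of the n columns is a separate n-dimensional problem.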

3.
Social media has become an important source of information and a medium for following and spreading trends, news, and ideas all over the world. Although determining the subjects of individual posts is important to extract users' interests from social media, this task is nontrivial because posts are highly contextualized and informal and have limited length. To address this problem, we propose a user modeling framework that maps the content of texts in social media to relevant categories in news media. In our framework, the semantic gaps between social media and news media are reduced by using Wikipedia as an external knowledge base. We map term-based features from a short text and a news category into Wikipedia-based features such as Wikipedia categories and article entities. A user's microposts are thus represented in a rich feature space of words. Experimental results show that our proposed method using Wikipedia-based features outperforms other existing methods of identifying users' interests from social media.

4.
王继成  吕维雪 《计算机学报》1995,18(12):949-952
This paper presents a knowledge acquisition method based on symbolic neural networks. The method first uses conventional machine learning to acquire rough knowledge about a domain, then maps this knowledge onto a neural network structure, and finally refines it into precise domain knowledge through the network's self-learning. This addresses both the knowledge-precision and knowledge-representation problems of conventional machine learning, and the long training times and weak explanatory ability of neural network knowledge acquisition.

5.
Cognitive maps are a tool for representing knowledge from a qualitative perspective, allowing the creation of models of complex systems for which an exact mathematical model cannot be built. In the literature, several tools have been proposed for developing cognitive maps and fuzzy cognitive maps (FCMs); one of them is FCM Designer. This paper designs and implements an extension to the FCM Designer tool that allows the creation of multilayer FCMs. With this extension, one can build several interlinked FCMs for the same problem, each expressing a different level of knowledge about the system under study. Thus, a first level can give a detailed abstraction of the system with specific information, followed by more general levels, and the variables of one level can depend on those of another. That is, the multilayer approach enriches the modeled systems with a flow of information between layers, so that information about the concepts in one layer can be derived from the concepts in other layers. In our multilayer approach, the relationships between the cognitive maps in different layers can be expressed in various ways: with fuzzy rules, with weighted connections, and with mathematical equations. This work presents the design and implementation of the extension of the FCM Designer tool, along with several test cases in different domains: an FCM for analyzing emergent properties of Wikipedia, an FCM for medical diagnosis, and another serving as a recommender system.

6.
In this paper we propose a two-stage segmentation approach for splitting TV broadcast news bulletins into sequences of news stories, together with codebooks derived from vector quantization for retrieving the segmented stories. In the first stage, speaker (newsreader) specific characteristics present in the initial headlines of a bulletin are used for gross-level segmentation. In the second stage, errors in the gross-level segmentation are corrected by exploiting the speaker-specific information captured from the individual news stories other than the headlines; during headlines the speaker-specific information is mixed with background music, so first-stage segmentation may be inaccurate. In this work, speaker-specific information is represented by mel-frequency cepstral coefficients and captured by Gaussian mixture models (GMMs). The proposed two-stage method is evaluated on manually segmented broadcast TV news bulletins: about 93% of the news stories are correctly segmented, 7% are missed, and 6% are spurious. For navigating the bulletins, a quick-navigation indexing method based on speaker change points is developed. The performance of the proposed segmentation and navigation methods is evaluated using GMM and neural network models. For retrieving target news stories from the news corpus, sequences of codebook indices derived from vector quantization are explored, and the retrieval approach is evaluated using queries of different sizes. The results indicate that retrieval accuracy is proportional to the size of the query.
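As a rough illustration of likelihood-based speaker scoring, the sketch below substitutes a single full-covariance Gaussian per speaker for the paper's GMMs, and random vectors for real MFCC frames; both are simplifying assumptions for illustration only:

```python
import numpy as np

def fit_gaussian(frames):
    """Single full-covariance Gaussian: a lightweight stand-in for a GMM."""
    mu = frames.mean(axis=0)
    cov = np.cov(frames.T) + 1e-6 * np.eye(frames.shape[1])  # regularized
    return mu, cov

def log_likelihood(frames, mu, cov):
    """Per-frame Gaussian log-likelihood."""
    d = frames.shape[1]
    diff = frames - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum("ij,jk,ik->i", diff, inv, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
# Hypothetical 13-dimensional cepstral frames for two "newsreaders".
spk_a = rng.normal(0.0, 1.0, size=(200, 13))
spk_b = rng.normal(2.0, 1.0, size=(200, 13))
model_a, model_b = fit_gaussian(spk_a), fit_gaussian(spk_b)

# Score an unseen segment (drawn from speaker B) against both models.
test_frames = rng.normal(2.0, 1.0, size=(50, 13))
score_a = log_likelihood(test_frames, *model_a).sum()
score_b = log_likelihood(test_frames, *model_b).sum()
label = "B" if score_b > score_a else "A"
```

Segment boundaries would then be hypothesized wherever the best-scoring speaker model changes from one window of frames to the next.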

7.
Although the use of convolutional neural networks (CNNs) for computer-aided diagnosis (CAD) has made tremendous progress in the last few years, small medical datasets remain the major bottleneck in this area. To address this problem, researchers have started looking for information beyond the medical datasets themselves. Previous efforts mainly leveraged information from natural images via transfer learning. More recent work focuses on integrating knowledge from medical practitioners, either by letting networks resemble how practitioners are trained or how they view images, or by using extra annotations. In this paper, we propose a scheme named Domain Guided-CNN (DG-CNN) to incorporate margin information, a feature described in the radiologists' consensus for diagnosing cancer in breast ultrasound (BUS) images. In DG-CNN, attention maps that highlight the margin areas of tumors are first generated and then incorporated into the networks via different approaches. We have tested the performance of DG-CNN on our own dataset (1485 ultrasound images) and on a public dataset. The results show that DG-CNN can be applied to different network structures such as VGG and ResNet to improve their performance. For example, on our dataset, with a certain integration mode, DG-CNN improves over the baseline ResNet18 by 2.17% in accuracy, 1.69% in sensitivity, 2.64% in specificity, and 2.57% in AUC (area under the curve). To the best of our knowledge, this is the first time that margin information has been used to improve the performance of deep neural networks in diagnosing breast cancer in BUS images.
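One plausible way to fold an attention map into CNN features is element-wise re-weighting; the abstract says several integration modes were tested, so the fusion rule, shapes, and mask below are purely illustrative assumptions:

```python
import numpy as np

def apply_margin_attention(features, attention, alpha=1.0):
    """Fuse a margin-attention map into CNN features by element-wise
    re-weighting: F' = F * (1 + alpha * M). One plausible fusion mode,
    not necessarily the one used in the paper."""
    return features * (1.0 + alpha * attention[None, :, :])

rng = np.random.default_rng(0)
features = rng.random((8, 32, 32))        # C x H x W feature maps
attention = np.zeros((32, 32))
attention[10:22, 10:22] = 1.0             # hypothetical tumor-margin region
fused = apply_margin_attention(features, attention)
# Inside the margin region the responses are boosted; elsewhere unchanged.
```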

8.
In this research work, a novel framework for the construction of augmented fuzzy cognitive maps (FCMs) based on fuzzy-rule-extraction methods for decision making in medical informatics is investigated. Specifically, we explore the design of augmented FCMs that combine knowledge from experts with knowledge extracted from data in the form of fuzzy rules generated by rule-based knowledge discovery methods. Fuzzy cognitive maps are knowledge-based techniques that combine elements of fuzzy logic and neural networks and operate as artificial cognitive networks. The knowledge extraction methods used in this study extract the available knowledge from data in the form of fuzzy rules and insert them into the FCM, contributing to the development of a dynamic decision support system. The fuzzy rules, derived by extraction algorithms such as fuzzy decision trees, association-rule-based methods, and neuro-fuzzy methods, are used to restructure the FCM model, producing new weights for the model initially structured by experts. In summary, our aim is to present a new methodology and framework for decision-making tasks using the soft computing technique of FCMs based on knowledge extraction methods. A well-known medical decision-making problem, the selection of a radiotherapy treatment plan, illustrates the application of the proposed framework and its operation.

9.
The soft computing technique of fuzzy cognitive maps (FCMs) is proposed for modeling and predicting autism spectrum disorder. An FCM models the behavior of a complex system and is used to develop new knowledge-based system applications; it combines the robust properties of fuzzy logic and neural networks. To overcome the limitations and improve the efficiency of FCMs, a good unsupervised training method can be applied. Here, a decision system based on human knowledge and experience is proposed, with an FCM trained using an unsupervised non-linear Hebbian learning algorithm. In this work, the Hebbian algorithm on non-linear units is used to train FCMs for the autism prediction problem. The investigated approach can serve as a guide in determining the prognosis and in planning appropriate therapies for special-needs children.
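A commonly published form of the non-linear Hebbian update for FCM weights can be sketched as follows; the exact update rule, decay factor, and toy map below are simplified assumptions and may differ from the paper's formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nhl_step(W, A, eta=0.05, gamma=0.98):
    """One non-linear Hebbian update, a simplified form of
    w_ji <- gamma*w_ji + eta*A_j*(A_i - w_ji*A_j), applied only to the
    connections the experts drew (the non-zero entries of W)."""
    mask = (W != 0).astype(float)
    dW = eta * np.outer(A, A) - eta * (A[:, None] ** 2) * W
    return (gamma * W + dW) * mask

# Hypothetical expert-drawn map over 4 concepts (signs and weights invented).
W = np.array([[0.0,  0.6, 0.0, -0.3],
              [0.0,  0.0, 0.7,  0.0],
              [0.4,  0.0, 0.0,  0.5],
              [0.0, -0.2, 0.0,  0.0]])
A = np.array([0.4, 0.7, 0.6, 0.5])        # initial concept activations

for _ in range(50):
    A = sigmoid(A @ W)                    # propagate activations
    W = nhl_step(W, A)                    # adapt only the existing weights
```

The mask preserves the expert-given topology: training tunes the strengths of existing causal links but never invents new ones.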

10.
To follow valuable information about bilateral international cooperation in real time, efficient and intelligent extraction of international-cooperation elements from Web diplomatic news is essential. We cast this extraction as a problem similar to named entity recognition. First, we define the scope of international-cooperation elements; second, we derive rules that encode domain knowledge; third, we propose an extraction method for diplomatic news texts that combines neural networks with this domain knowledge; finally, we compare it on the same corpus against a purely neural method and against combinations of its own rules. Experimental results show that the combined method performs best.

11.
To automatically mine the underlying relationships between famous persons in daily news (for example, to build a news-person network with faces as icons that facilitates face-based person finding), we need a tool that automatically labels faces in news images with their real names. This paper studies the problem of linking names with faces in large-scale news images with captions. In our previous work, we proposed a method called Person-based Subset Clustering, which is mainly based on clustering all face images associated with the same name. The location where a name appears in a caption, as well as the visual structural information within a news image, provides informative cues about who actually appears in the image. By combining this domain knowledge from the captions and the corresponding images, we propose a novel cross-modality approach that further improves the performance of linking names with faces. Experiments on a dataset of approximately half a million news images from Yahoo! News show that the proposed method achieves a significant improvement over clustering-only methods.

12.
The brain can be viewed as a complex modular structure with features of information processing through knowledge storage and retrieval. Modularity ensures that knowledge is stored in a manner where complications in certain modules do not affect the overall functionality of the brain. Although artificial neural networks have been very promising in prediction and recognition tasks, they lack learning algorithms that provide modularity in knowledge representation, which would be helpful for invoking knowledge modules when needed. Multi-task learning enables learning algorithms to capture knowledge in a general representation shared across several related tasks, yet little work has applied multi-task learning to modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by selected regions (modules) of a network topology. Modular knowledge representation remains effective even if some of the neurons and connections are disrupted or removed from the selected modules. We demonstrate the effectiveness of the method using single-hidden-layer feedforward networks on selected n-bit parity problems of varying difficulty, and we further apply it to benchmark pattern classification problems. The simulation and experimental results generally show that the proposed method retains performance quality although the knowledge is represented as modules.
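The module idea can be illustrated by a forward pass that masks out hidden units lying outside a task's module: damage to the rest of the network then cannot affect that task. The topology, masks, and tasks below are hypothetical:

```python
import numpy as np

def forward(x, W1, b1, W2, b2, mask):
    """Forward pass using only the hidden units inside one module (mask)."""
    h = np.tanh(x @ W1 + b1) * mask       # zero out units outside the module
    return h @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

# Two tasks share one network but own different hidden-unit modules.
mask_easy = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)  # module for an easy task
mask_full = np.ones(8)                                 # harder task uses all units

x = rng.normal(size=(5, 4))
y_easy = forward(x, W1, b1, W2, b2, mask_easy)

# "Disrupt" the neurons outside the easy module: its output is unaffected.
W2_damaged = W2.copy()
W2_damaged[4:] = 0.0
y_damaged = forward(x, W1, b1, W2_damaged, b2, mask_easy)
```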

13.

In the modern era of computing, the news ecosystem has transformed from traditional print media to social media outlets. Social media platforms let us consume news much faster and with less restrictive editing, which results in the spread of fake news at an incredible pace and scale. In recent research, many fake news detection methods employ sequential neural networks to encode news content and social-context information, analyzing the text sequence in only one direction. A bidirectional training approach is therefore a priority for modeling the relevant information in fake news, as it can improve classification performance by capturing semantic and long-distance dependencies in sentences. In this paper, we propose FakeBERT, a BERT-based (Bidirectional Encoder Representations from Transformers) deep learning approach that combines BERT with parallel blocks of single-layer deep convolutional neural networks (CNNs) having different kernel sizes and numbers of filters. Such a combination helps handle ambiguity, the greatest challenge in natural language understanding. Classification results demonstrate that our proposed model (FakeBERT) outperforms existing models with an accuracy of 98.90%.
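The parallel-CNN-blocks idea can be sketched in plain NumPy as multi-kernel convolution with max-over-time pooling, operating on pre-computed embeddings that stand in for BERT output; all sizes, filter counts, and the random weights below are assumptions:

```python
import numpy as np

def conv1d_relu_maxpool(X, kernels):
    """Valid 1-D convolution over the token axis, ReLU, max-over-time pooling.
    X: (seq_len, emb_dim); kernels: (n_filters, k, emb_dim)."""
    n_filters, k, _ = kernels.shape
    windows = np.stack([X[t:t + k] for t in range(X.shape[0] - k + 1)])
    feats = np.einsum("tkd,fkd->tf", windows, kernels)  # (seq_len-k+1, n_filters)
    return np.maximum(0, feats).max(axis=0)             # (n_filters,)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 768))   # e.g. BERT token embeddings for one article

# Parallel blocks with different kernel sizes, as in the FakeBERT design idea.
blocks = [rng.normal(scale=0.02, size=(32, k, 768)) for k in (3, 4, 5)]
pooled = np.concatenate([conv1d_relu_maxpool(X, K) for K in blocks])
# `pooled` would feed a dense classifier head (real/fake).
```

The different kernel sizes capture n-gram-like patterns of different widths; concatenating the pooled outputs gives the classifier a multi-scale view of the text.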


14.
Knowledge-based artificial neural networks
Hybrid learning methods use theoretical knowledge of a domain and a set of classified examples to develop a method for accurately classifying examples not seen during training. The challenge of hybrid learning systems is to use the information provided by one source of information to offset information missing from the other source. By so doing, a hybrid learning system should learn more effectively than systems that use only one of the information sources. KBANN (Knowledge-Based Artificial Neural Networks) is a hybrid learning system built on top of connectionist learning techniques. It maps problem-specific “domain theories”, represented in propositional logic, into neural networks and then refines this reformulated knowledge using backpropagation. KBANN is evaluated by extensive empirical tests on two problems from molecular biology. Among other results, these tests show that the networks created by KBANN generalize better than a wide variety of learning systems, as well as several techniques proposed by biologists.
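The core KBANN mapping, in which each propositional rule becomes a unit whose weights (±ω) and bias make it behave as a conjunction, can be sketched as follows; the rule and input layout are toy examples, not the molecular-biology theories from the paper:

```python
import numpy as np

OMEGA = 4.0   # link strength used when mapping rules to weights

def rule_to_unit(pos, neg, n_inputs, omega=OMEGA):
    """Map a conjunctive rule `head :- pos..., not neg...` to a sigmoid unit.
    Positive antecedents get +omega, negated ones -omega; the bias is set so
    the unit fires only when all antecedents are satisfied."""
    w = np.zeros(n_inputs)
    w[pos] = omega
    w[neg] = -omega
    bias = -(len(pos) - 0.5) * omega
    return w, bias

def unit(x, w, bias):
    return 1.0 / (1.0 + np.exp(-(x @ w + bias)))

# Toy rule: head :- antecedent0, antecedent1 (a 2-antecedent conjunction).
w, b = rule_to_unit(pos=[0, 1], neg=[], n_inputs=3)
on = unit(np.array([1.0, 1.0, 0.0]), w, b)    # both antecedents true -> high
off = unit(np.array([1.0, 0.0, 0.0]), w, b)   # one antecedent false -> low
```

After this symbolic initialization, backpropagation refines the weights against the training examples, which is what lets KBANN correct an imperfect domain theory.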

15.

Nowadays, more and more news readers read news online where they have access to millions of news articles from multiple sources. In order to help users find the right and relevant content, news recommender systems (NRS) are developed to relieve the information overload problem and suggest news items that might be of interest for the news readers. In this paper, we highlight the major challenges faced by the NRS and identify the possible solutions from the state-of-the-art. Our discussion is divided into two parts. In the first part, we present an overview of the recommendation solutions, datasets, evaluation criteria beyond accuracy and recommendation platforms being used in the NRS. We also talk about two popular classes of models that have been successfully used in recent years. In the second part, we focus on the deep neural networks as solutions to build the NRS. Different from previous surveys, we study the effects of news recommendations on user behaviors and try to suggest possible remedies to mitigate those effects. By providing the state-of-the-art knowledge, this survey can help researchers and professional practitioners have a better understanding of the recent developments in news recommendation algorithms. In addition, this survey sheds light on the potential new directions.


16.
Learning the semantic and positional information of feature maps is crucial for producing good results in retinal image segmentation. Convolutional neural networks have recently shown a strong ability to extract useful information from feature maps, but convolution and pooling operations filter out some of that information. We propose a novel skip-attention guided network, SAG-Net, to preserve the semantic and positional information of feature maps and guide the expanding path. In SAG-Net, a skip-attention gate (SAtt) module is first introduced and used as a sensitive expanding path that passes on the semantic and positional information of earlier feature maps, which not only helps eliminate noise but also further reduces the negative influence of the background. SAG-Net is then further refined by merging image pyramids to preserve contextual features. Joint optic disc and optic cup segmentation on the Drishti-GS1 dataset demonstrates the effectiveness of SAG-Net: overall, it outperforms the original U-Net as well as other state-of-the-art methods for optic disc and cup segmentation.

17.
Application of fuzzy cognitive maps to stock market prediction
Complex systems involve extensive process dependence and self-organization and are continually evolving, which makes them very difficult to model with traditional methods. Fuzzy cognitive maps, a product of combining fuzzy logic with neural networks, provide an effective tool for modeling complex systems. Exploiting the characteristics of fuzzy cognitive maps, this paper proposes building a system's fuzzy cognitive map with a genetic learning algorithm, offering a solution for analyzing and predicting complex systems. Finally, analysis and prediction are simulated on stock market data, and the results show that the method is effective.
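A minimal sketch of FCM inference for prediction, assuming a hypothetical three-concept market map and a sigmoid threshold function (the paper learns the map with a genetic algorithm; here the weights are simply given):

```python
import numpy as np

def fcm_step(A, W, lam=1.0):
    """One FCM inference step: A(t+1) = sigmoid(lambda * A(t) @ W)."""
    return 1.0 / (1.0 + np.exp(-lam * (A @ W)))

# Hypothetical 3-concept map, e.g. trading volume, sentiment, price level.
W = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.8],
              [0.3, 0.0, 0.0]])
A = np.array([0.9, 0.2, 0.5])     # current concept activations

trajectory = [A]
for _ in range(20):
    A = fcm_step(A, W)
    trajectory.append(A)
# With modest weights the map settles to a fixed point, which is read off
# as the forecast for the concepts of interest.
```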

18.
Knowledge graphs have gained increasing popularity in the past couple of years, thanks to their adoption in everyday search engines. Typically, they consist of fairly static, encyclopedic facts about persons and organizations (e.g. a celebrity's birth date, occupation, and family members) obtained from large repositories such as Freebase or Wikipedia. In this paper, we present a method and tools to automatically build knowledge graphs from news articles. As news articles describe changes in the world through the events they report, we present an approach to create Event-Centric Knowledge Graphs (ECKGs) using state-of-the-art natural language processing and semantic web techniques. Such ECKGs capture long-term developments and histories of hundreds of thousands of entities and are complementary to the static encyclopedic information in traditional knowledge graphs. We describe our event-centric representation schema, the challenges in extracting event information from news, our open-source pipeline, and the knowledge graphs we have extracted from four different news corpora: general news (Wikinews), the FIFA World Cup, the global automotive industry, and Airbus A380 airplanes. Furthermore, we present an assessment of the accuracy of the pipeline in extracting the triples of the knowledge graphs. Finally, through an event-centric browser and visualization tool we show how approaching news information in an event-centric manner increases the user's understanding of the domain, facilitates the reconstruction of news story lines, and enables exploratory investigation of facts hidden in the news.

19.
The widespread fake news in social networks poses threats to social stability, economic development, and political democracy. Numerous studies have explored effective detection approaches for online fake news, while few works study the intrinsic propagation and cognition mechanisms of fake news. Since the development of cognitive science paves a promising way for the prevention of fake news, we present a new research area called Cognition Security (CogSec), which studies the potential impacts of fake news on human cognition, ranging from misperception, untrusted knowledge acquisition, and targeted opinion/attitude formation to biased decision making, and investigates effective ways of debunking fake news. CogSec is a multidisciplinary research field that draws on social science, psychology, cognitive science, neuroscience, AI, and computer science. We first propose definitions that characterize CogSec and review the history of the literature. We then investigate the key research challenges and techniques of CogSec, including human-content cognition mechanisms, social influence and opinion diffusion, fake news detection, and malicious bot detection. Finally, we summarize open issues and future research directions, such as the cognition mechanism of fake news, influence maximization of fact-checking information, early detection of fake news, and fast refutation of fake news.

20.
Large-scale live text commentary of sports matches reflects the real-time progress of a game quickly and promptly, but having sports journalists manually write news reports from it is time-consuming and laborious. This paper therefore proposes a neural network model that automatically generates sports news from live commentary scripts. To a certain extent, the model avoids the traditional reliance on hand-crafted features, while jointly modeling both sentence-level local and global information in the script and the semantic relevance between script sentences and news content, thereby generating sports news under a joint model. Experimental results on a public dataset verify the feasibility and effectiveness of the method. In addition, rule- and template-based generation of sports news headlines is explored to highlight the key content of the news body.
