Similar Documents
 20 similar documents found (search time: 15 ms)
1.
The widespread fake news in social networks poses threats to social stability, economic development, and political democracy. Numerous studies have explored effective approaches for detecting online fake news, while few works study its intrinsic propagation and cognition mechanisms. Since advances in cognitive science pave a promising way toward the prevention of fake news, we present a new research area called Cognition Security (CogSec), which studies the potential impacts of fake news on human cognition, ranging from misperception and untrusted knowledge acquisition to targeted opinion/attitude formation and biased decision making, and investigates effective ways of debunking fake news. CogSec is a multidisciplinary research field that leverages knowledge from social science, psychology, cognitive science, neuroscience, AI and computer science. We first propose related definitions to characterize CogSec and review the literature. We further investigate the key research challenges and techniques of CogSec, including human-content cognition mechanisms, social influence and opinion diffusion, fake news detection, and malicious bot detection. Finally, we summarize open issues and future research directions, such as the cognition mechanism of fake news, influence maximization of fact-checking information, early detection of fake news, and fast refutation of fake news.

2.
The rise of social media has advanced the news industry, but it has also made the spread of fake news far easier. Diverse forms of news presentation have brought many negative effects, such as content that exaggerates facts, maliciously tampered news text or images, and fabricated news events that stir up public opinion, making fake news detection a new challenge in the news domain. To address this task, this work combines news text with image information and, replacing traditional feature fusion with multi-modal bilinear pooling, builds a fake news detection model based on this new fusion method. The model is validated on standard benchmark datasets in the fake news detection field; experimental results show that fused text-image features are indispensable for fake news detection and that the proposed method effectively improves detection performance.
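The bilinear-pooling fusion the abstract describes can be sketched in a few lines. The feature vectors below are hypothetical placeholders for the outputs of text and image encoders (the paper's actual encoders, dimensions, and any compact-pooling variant are not specified here):

```python
import math

# Hypothetical toy feature vectors standing in for encoder outputs.
text_feat = [0.2, -1.0, 0.5, 0.3]   # e.g. from a text encoder
image_feat = [1.5, 0.1, -0.7]       # e.g. from an image encoder

# Bilinear pooling: the outer product captures every pairwise interaction
# between text and image features, unlike plain concatenation.
fused = [t * v for t in text_feat for v in image_feat]

# Signed square root and L2 normalization, a common stabilization step
# applied after bilinear pooling.
fused = [math.copysign(math.sqrt(abs(x)), x) for x in fused]
norm = math.sqrt(sum(x * x for x in fused))
fused = [x / norm for x in fused]
```

The fused vector (dimension 4 × 3 = 12 here) would then be fed to a classifier; real systems usually compress this outer product (e.g. with compact bilinear pooling), since full bilinear dimensions grow multiplicatively.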

3.
Fake news has led to a polarized society, as evidenced by diametrically opposed perceptions of and reactions to global events such as the Coronavirus Disease 2019 (COVID-19) pandemic and presidential campaigns. The popular press has linked individuals' political beliefs and cultural values to the extent to which they believe false content shared on social networking sites (SNS). However, sweeping generalizations risk exacerbating divisiveness in already polarized societies. This study examines the effects of individuals' political beliefs and espoused cultural values on fake news believability using a repeated-measures design that exposes individuals to a variety of fake news scenarios. Results from online questionnaire-based survey data collected from participants in the US and India confirm that conservative individuals tend to exhibit greater fake news believability, and show that collectivists tend to do the same. This study advances knowledge of the characteristics that make individuals more susceptible to lending credence to fake news. In addition, it explores the influence exerted by control variables (i.e., age, sex, and Internet usage). The findings yield implications for theory as well as actionable insights.

4.
In recent years, social media has gradually become the main channel through which people obtain news, but alongside this convenience it has also fueled the spread of fake news. As social media grows richer in media types, fake news is shifting from purely textual to multi-modal forms, so multi-modal fake news detection is attracting increasing attention. Most existing multi-modal detection methods rely on surface-level features highly correlated with a particular dataset; they model the semantic level of news insufficiently, struggle to understand the deep semantics of textual and visual entities, and generalize poorly to new data. This paper proposes a semantics-enhanced multi-modal fake news detection method that better understands the deep semantics of multi-modal news by exploiting the factual knowledge implicit in pre-trained language models together with explicit visual entity extraction. Visual features are extracted at different semantic levels, and a text-guided attention mechanism models the semantic interaction between text and images, fusing the heterogeneous multi-modal features more effectively. Experiments on a real-world dataset of Weibo news show that the method effectively improves multi-modal fake news detection performance.

5.
People are easily duped by fake news and start to share it on their networks. Spreading at high frequency, fake news causes panic and pushes people into unethical behavior such as strikes, roadblocks, and similar actions. Fake news detection is therefore badly needed to protect people from misinformation on social platforms. Filtering fake news manually from social media platforms is nearly impossible, as such an act raises security and privacy concerns for users. As a result, it is critical to assess the quality of news early on and prevent it from spreading. In this article, we propose an automated model to identify fake news at an early stage. Machine learning models such as Random Forest, Logistic Regression, Naïve Bayes, and K-Nearest Neighbor are used as baselines, implemented with features extracted using CountVectorizer and TF-IDF. The baseline and other existing model outcomes are compared with the proposed deep learning-based Long Short-Term Memory (LSTM) network. Experimental results show that the best setting achieved an accuracy of 99.82% and outperformed the baseline and existing models.
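The TF-IDF weighting used for the baseline features can be sketched in plain Python. The toy corpus and the smoothed-IDF formula below are illustrative only; a real pipeline would use a library vectorizer and feed the resulting features to the baseline classifiers or the LSTM:

```python
import math
from collections import Counter

# Toy corpus standing in for news headlines; a real pipeline would tokenize
# and normalize properly before extracting features.
docs = [
    "breaking miracle cure found",
    "officials confirm cure trial results",
    "miracle weight loss trick",
]

def tfidf(corpus):
    n = len(corpus)
    # Document frequency of each term.
    df = Counter(t for d in corpus for t in set(d.split()))
    vectors = []
    for d in corpus:
        tf = Counter(d.split())
        total = sum(tf.values())
        # Term frequency times smoothed inverse document frequency,
        # similar in spirit to common toolkit defaults.
        vectors.append({t: (c / total) * (math.log((1 + n) / (1 + df[t])) + 1)
                        for t, c in tf.items()})
    return vectors

vecs = tfidf(docs)
# "cure" appears in two documents, so within the first document it is
# down-weighted relative to the rarer term "breaking".
```

The per-document dictionaries would be mapped onto a shared vocabulary to form the fixed-length feature vectors the classifiers expect.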

6.
Knowledge graphs are widely used in artificial intelligence, for example in information retrieval, natural language processing, and recommender systems. However, their open nature means they are often incomplete and have inherent defects, so more complete knowledge graphs need to be built to raise their practical utility. Link prediction infers new relations from existing ones and can thus complete large-scale knowledge bases. By comparing knowledge graph link prediction models based on translation models, this paper analyzes the framework of such models from the perspectives of common datasets and evaluation metrics, translation models, and sampling methods, and surveys link prediction models for knowledge graphs.
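A translation-based link prediction model such as TransE scores a triple (h, r, t) by how closely h + r approximates t in embedding space. A minimal sketch with hand-set toy embeddings (a real model learns these from training triples; the entity and relation names are illustrative):

```python
import math

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy, hand-set embeddings; in practice these are trained on the knowledge base.
capital_of = [0.9, -0.2, 0.4, 0.1]       # hypothetical relation vector
entities = {
    "Paris":  [0.1, 0.3, -0.5, 0.2],
    "Berlin": [-0.4, 0.6, 0.0, -0.3],
}
# TransE models facts as h + r ≈ t, so construct consistent tail embeddings.
entities["France"] = add(entities["Paris"], capital_of)
entities["Germany"] = add(entities["Berlin"], capital_of)

def score(h, r, t):
    # Negative distance: higher (closer to zero) means more plausible.
    return -dist(add(entities[h], r), entities[t])

# Link prediction: rank all candidate tails for (Paris, capital_of, ?).
ranked = sorted(entities, key=lambda t: score("Paris", capital_of, t),
                reverse=True)
```

Evaluation metrics mentioned in such surveys (mean rank, Hits@k) are computed from exactly this kind of candidate ranking over all entities.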

7.
Recently, several methods have been proposed for introducing Linked Open Data (LOD) into recommender systems. LOD can be used to enrich the representation of items by leveraging RDF statements and adopting graph-based methods to implement effective recommender systems. However, most of those methods do not exploit embeddings of entities and relations built on knowledge graphs, such as datasets coming from the LOD. In this paper, we propose a novel recommender system based on holographic embeddings of knowledge graphs built from Wikidata, a free and open knowledge base that can be read and edited by both humans and machines. The evaluation performed on three standard datasets (Movielens 1M, Last.fm, and LibraryThing) shows promising results that confirm the effectiveness of the proposed method.
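Holographic embeddings (HolE) score a triple by projecting the circular correlation of the head and tail embeddings onto the relation embedding. A sketch with placeholder vectors (the names and values below are hypothetical, not trained Wikidata embeddings):

```python
# Holographic embeddings score (user, likes, item) via circular correlation,
# which compresses all pairwise interactions into a vector of the same
# dimension, unlike the full outer product of tensor-based models.
DIM = 4

def circular_correlation(a, b):
    # [a ⋆ b]_k = sum_i a_i * b_{(i + k) mod d}
    return [sum(a[i] * b[(i + k) % DIM] for i in range(DIM))
            for k in range(DIM)]

def hole_score(rel, head, tail):
    corr = circular_correlation(head, tail)
    return sum(r * c for r, c in zip(rel, corr))

e_user = [0.5, -0.1, 0.3, 0.2]   # hypothetical user-entity embedding
e_item = [0.4, 0.2, -0.3, 0.1]   # hypothetical item-entity embedding
r_likes = [0.7, 0.0, -0.2, 0.5]  # hypothetical "likes" relation embedding

s = hole_score(r_likes, e_user, e_item)
```

Note that the k = 0 component of the circular correlation is just the dot product of the two embeddings; at scale the correlation is computed with FFTs rather than the O(d²) loop above.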

8.
Knowledge graphs are structured data that represent entities and the relationships between them in the form of a graph, often expressed in the RDF data model. It may be hard for lay users to explore existing knowledge graphs, especially when graphs from different data sources need to be integrated. In this paper, we present an approach to visual knowledge graph exploration based on the concept of shareable and reusable visual configurations. A visual configuration comprises domain-specific views on a knowledge graph which define operations such as node detail or node expansion. These operations are easy for lay users to understand and can be used to explore a graph while complexities unnecessary in a given application context remain hidden. We introduce an ontology that makes it possible to express and publish visual configurations and to reuse their components in other configurations. We also provide an experimental implementation called KGBrowser. We evaluate the proposed approach with real users and compare KGBrowser with other existing tools for knowledge graph visualization and exploration.

9.

The widespread use of social media has enormous consequences for society, culture, and business, with potentially positive and negative effects. As online social networks are increasingly used to disseminate information, they are at the same time becoming a medium for spreading fake news for various commercial and political purposes. Technologies such as Artificial Intelligence (AI) and Natural Language Processing (NLP) offer great promise for researchers building systems that automatically detect fake news. However, detecting fake news is a challenging task, as it requires models to summarize a news item and compare it to the actual news in order to classify it as fake. This project proposes a framework that detects and classifies fake news messages using improved Recurrent Neural Networks and the Deep Structured Semantic Model. The proposed approach intuitively identifies important features associated with fake news without prior domain knowledge, achieving 99% accuracy. The performance analysis of the proposed system is based on accuracy, specificity, and sensitivity.

10.
Raj  Chahat  Meel  Priyanka 《Applied Intelligence》2021,51(11):8132-8148

An upsurge of false information is circulating on the internet. Social media and websites are flooded with unverified news posts comprising text, images, audio, and video, so a system is needed that detects fake content across multiple data modalities. There has been a considerable amount of research on classification techniques for textual fake news detection, while frameworks dedicated to visual fake news detection are very few. We explored state-of-the-art methods using deep networks such as CNNs and RNNs for multi-modal online information credibility analysis; they show rapid improvement in classification tasks without requiring pre-processing. To aid ongoing research on fake news detection with CNN models, we build textual and visual modules and analyze their performance over multi-modal datasets. We exploit latent features inside text and images using layers of convolutions, examine how well these convolutional neural networks classify when provided with only latent features, and analyze what types of images need to be fed in for efficient fake news detection. We propose a multi-modal Coupled ConvNet architecture that fuses both data modules and efficiently classifies online news based on its textual and visual content. We then offer a comparative analysis of the results of all models over three datasets. The proposed architecture outperforms various state-of-the-art methods for fake news detection with considerably high accuracies.


11.
The concept of the knowledge graph was introduced by Google in 2012 and has since become a research hotspot in artificial intelligence, playing a role in applications such as information search, automatic question answering, and decision analysis. Although knowledge graphs have shown great potential in many fields, mature knowledge graph construction platforms are clearly still lacking, and construction methodology must be studied to meet the application needs of different industries. Taking knowledge graph construction as its main thread, this paper first introduces the current mainstream general-purpose and domain-specific knowledge graphs and describes the differences in how the two are built. It then discusses, by category, the problems and challenges in the construction process, and for each of them describes the solutions and strategies at five levels: knowledge extraction, knowledge representation, knowledge fusion, knowledge reasoning, and knowledge storage. Finally, possible future research directions are outlined.

12.
龚胜佳  张琳琳  赵楷  刘军涛  杨涵 《计算机应用》2022,42(11):3458-3464
Fake news not only leads people to form mistaken beliefs and undermines their right to know, but also erodes the credibility of news websites. To address fake news on news websites, this paper proposes a detection method based on blockchain technology. First, a smart contract is invoked to randomly assign reviewers to judge the authenticity of each news item. The number of reviewers is then adjusted to guarantee a sufficient number of valid reviewers and improve the credibility of the verdict. An incentive mechanism distributes rewards according to reviewer behavior, and a game-theoretic analysis of reviewers' behavior and rewards shows that honest behavior maximizes a reviewer's payoff. An auditing mechanism then detects malicious reviewers to improve system security. Finally, a simple blockchain-based fake news detection system was implemented with Ethereum smart contracts; in simulations, the method reached 95% accuracy in news authenticity detection, indicating that it can effectively prevent the publication of fake news.

13.
Peng  Xu  Xintong  Bao 《Multimedia Tools and Applications》2022,81(10):13799-13822

News plays an indispensable role in the development of human society. With the emergence of new media, fake news containing multi-modal content such as text and images causes greater social harm, so identifying multi-modal fake news has become a challenge. Traditional methods of multi-modal fake news detection simply fuse the information from different modalities, for example by concatenation or element-wise product, without considering the different impacts of the modalities, which leads to low detection accuracy. To address this issue, we design a new multi-modal attention adversarial fusion method built on the pre-trained language model BERT, which consists of two important components: an attention mechanism, used to capture the differences between modalities, and an adversarial mechanism, used to capture the correlation between them. Experiments on a Chinese public fake news dataset indicate that our proposed method achieves a 5% higher F1 score.


14.
Classification and Representation of Quantifiers in Knowledge Graphs   (Total citations: 3; self-citations: 0; citations by others: 3)
In the field of knowledge representation today, the knowledge graph has unique strengths as a semantic model for natural language understanding, while in natural language processing the word is generally regarded as the most basic unit. From the perspectives of semantics and natural language processing (chiefly that of knowledge graphs), and following earlier studies of prepositions and logical words, this paper classifies the quantifiers of Chinese according to the structure of quantifier graphs and, following knowledge graph construction principles, gives a word graph for each class.

15.
More and more data in various formats are being integrated into knowledge graphs. However, there is no overview of existing approaches for generating knowledge graphs from heterogeneous (semi-)structured data, making it difficult to select the right one for a given use case. To support better decision making, we study the existing approaches that rely on mapping languages for generating knowledge graphs from heterogeneous (semi-)structured data. We investigate existing mapping languages for schema and data transformations, and the corresponding materialization and virtualization systems that generate knowledge graphs. We gather and unify 52 articles on knowledge graph generation from heterogeneous (semi-)structured data, assessing 15 characteristics of mapping languages for schema transformations, 5 characteristics for data transformations, and 14 characteristics for systems. Our survey provides an overview of the mapping languages and systems proposed over the past two decades. Our work paves the way toward better adoption of knowledge graph generation, as the right mapping language and system can be selected for each use case.

16.
Deep learning has achieved great success in computer vision, surpassing many traditional methods. In recent years, however, the technology has been abused to produce fake videos, and forged videos exemplified by Deepfakes have flooded the internet. Such deep forgery techniques tamper with or replace the facial information of original videos and synthesize fake speech to produce pornographic films, fake news, political rumors, and the like. To eliminate the negative effects of these forgery techniques, many scholars have ...

17.
While the early phase of the Semantic Web put emphasis on conceptual modeling through ontology classes, and the recent years saw the rise of loosely structured, instance-level knowledge graphs (used even for modeling concepts), in this paper, we focus on a third kind of concept modeling: via code lists, primarily those embedded in ontologies and vocabularies. We attempt to characterize the candidate structures for code lists based on our observations in OWL ontologies. Our main contribution is then an approach implemented as a series of SPARQL queries and a lightweight web application that can be used to browse and detect potential code lists in ontologies and vocabularies, in order to extract and enhance them, and to store them in a stand-alone knowledge base. The application allows inspecting query results coming from the Linked Open Vocabularies catalog dataset. In addition, we describe a complementary bottom-up analysis of potential code lists. We also provide in this paper a demonstration of the dominant nature of embedded codes from the aspect of ontological universals and their alternatives for modeling code lists.

18.
Information sources such as relational databases, spreadsheets, XML, JSON, and Web APIs contain a tremendous amount of structured data that can be leveraged to build and augment knowledge graphs. However, they rarely provide a semantic model to describe their contents. Semantic models of data sources represent the implicit meaning of the data by specifying the concepts and the relationships within the data. Such models are the key ingredients to automatically publish the data into knowledge graphs. Manually modeling the semantics of data sources requires significant effort and expertise, and although desirable, building these models automatically is a challenging problem. Most of the related work focuses on semantic annotation of the data fields (source attributes). However, constructing a semantic model that explicitly describes the relationships between the attributes in addition to their semantic types is critical. We present a novel approach that exploits the knowledge from a domain ontology and the semantic models of previously modeled sources to automatically learn a rich semantic model for a new source. This model represents the semantics of the new source in terms of the concepts and relationships defined by the domain ontology. Given some sample data from the new source, we leverage the knowledge in the domain ontology and the known semantic models to construct a weighted graph that represents the space of plausible semantic models for the new source. Then, we compute the top k candidate semantic models and suggest to the user a ranked list of the semantic models for the new source. The approach takes into account user corrections to learn more accurate semantic models on future data sources. Our evaluation shows that our method generates expressive semantic models for data sources and services with minimal user input.
These precise models make it possible to automatically integrate the data across sources and provide rich support for source discovery and service composition. They also make it possible to automatically publish semantic data into knowledge graphs.

19.
The capitalization and analysis of historical information is nowadays a prerequisite for effective risk management and assessment in a wide range of domains. Despite the development of mathematical models, procedures, decision support systems and databases, some engineering disciplines, such as civil engineering, remain resistant to new digital technology due to the gap between the expectations of engineers and the support the tools can really provide. It is essential to propose a tool able to process both cross-disciplinary and interdisciplinary knowledge flows and feedback from experience in a common, convenient, unifying framework. The aim is to assist and support engineering work and to make the task of knowledge modelling easier. The domain of dam systems is no exception to the rule. Dam failures are still commonplace, and they stem from a lack of understanding of the complex relationships between three different factors: random hazards, the limit states of dam structures, and human activities and decisions. No generic and holistic approach is currently available that permits the processing of both knowledge and data, performs inferences, and is easily usable by all types of users. This paper proposes the basic principles of a convenient design methodology for capitalizing, learning and predicting based on the formalism of conceptual graphs. The aim is to provide an easily usable tool able to (1) capitalize heterogeneous knowledge and store a database about dams, (2) issue alerts on current projects, (3) draw lessons from past dam failures, and (4) tackle key issues in forensic civil engineering.

20.
陈烨  周刚  卢记仓 《计算机应用研究》2021,38(12):3535-3543
To summarize prior work and offer ideas to researchers in the field, this paper first discusses the basic concepts of multi-modal knowledge graphs, then introduces their construction from the two perspectives of graph databases and knowledge graphs, summarizing the ideas behind the two main approaches. It also analyzes the key technologies and related work in the construction and application of multi-modal knowledge graphs, such as multi-modal information extraction, representation learning, and entity linking. In addition, it lists applications of multi-modal knowledge graphs in four scenarios: recommender systems, cross-modal retrieval, human-computer interaction, and cross-modal data management. Finally, it looks ahead to the development of multi-modal knowledge graphs from four aspects.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号