Similar Documents
 20 similar documents found (search time: 24 ms)
1.
Raj, Chahat; Meel, Priyanka. Applied Intelligence, 2021, 51(11): 8132-8148

An upsurge of false information circulates on the internet. Social media and websites are flooded with unverified news posts comprising text, images, audio, and video, so a system is needed that detects fake content across multiple data modalities. There is considerable research on classification techniques for textual fake news detection, while frameworks dedicated to visual fake news detection remain few. We explore state-of-the-art methods that use deep networks such as CNNs and RNNs for multi-modal online information credibility analysis; these show rapid improvement on classification tasks without requiring heavy pre-processing. To aid ongoing research on fake news detection with CNN models, we build textual and visual modules and analyze their performance on multi-modal datasets. We exploit latent features in text and images using layers of convolutions, examine how well these convolutional networks classify when provided with only latent features, and analyze what type of images must be fed in for efficient fake news detection. We propose a multi-modal Coupled ConvNet architecture that fuses both data modules and classifies online news based on its textual and visual content. We then offer a comparative analysis of the results of all models over three datasets. The proposed architecture outperforms various state-of-the-art fake news detection methods with considerably high accuracy.
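The paper's exact fusion layers are not given in this abstract; as a minimal illustrative sketch, the coupled idea can be reduced to late fusion of per-modality fake-news scores, where `alpha` is a hypothetical mixing weight rather than a value from the paper.

```python
# Hypothetical late-fusion sketch: each modality (text, image) is assumed to
# have already produced a fake-news probability; the fused decision mixes them.

def fuse_scores(text_score, image_score, alpha=0.6):
    """Combine per-modality fake-news probabilities (alpha weights the text side)."""
    return alpha * text_score + (1 - alpha) * image_score

def classify(text_score, image_score, threshold=0.5):
    return "fake" if fuse_scores(text_score, image_score) >= threshold else "real"

print(classify(0.9, 0.7))  # both modalities suspicious -> fake
print(classify(0.2, 0.3))  # both benign -> real
```

In the actual architecture the fusion happens on learned convolutional features rather than final scores; the sketch only shows why combining modalities can outperform either alone.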


2.
Most existing fake news detection methods simply concatenate visual and textual features, which introduces redundant modality information and ignores the correlations between modalities. To address these problems, a multi-modal fusion fake news detection algorithm based on factorized bilinear pooling is proposed. First, the textual and visual features captured by a multi-modal feature extractor are effectively fused via factorized bilinear pooling, and the fused representation then works with a fake news detector to identify fake news; in addition, an event classifier is added during training to predict event labels and remove event-specific dependencies. Comparative experiments on two multi-modal rumor datasets, Twitter and Weibo, demonstrate the effectiveness of the algorithm. The results show that the proposed model effectively fuses multi-modal data and narrows the heterogeneity gap between modalities, thereby improving fake news detection accuracy.
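The core of factorized bilinear pooling is that a full bilinear interaction x^T W y is approximated by projecting each modality with low-rank factor matrices and taking an elementwise product. A minimal sketch with made-up dimensions (the projection matrices here are random placeholders, not trained values):

```python
import random

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def mfb_fuse(x, y, U, V):
    """Factorized bilinear pooling: the elementwise product of the projected
    modalities approximates the full bilinear interaction x^T W y."""
    px, py = matvec(U, x), matvec(V, y)
    return [a * b for a, b in zip(px, py)]

random.seed(0)
text_feat, img_feat = [0.5, -1.0, 2.0], [1.0, 0.25]   # toy feature vectors
U = [[random.uniform(-1, 1) for _ in text_feat] for _ in range(4)]
V = [[random.uniform(-1, 1) for _ in img_feat] for _ in range(4)]
fused = mfb_fuse(text_feat, img_feat, U, V)
print(len(fused))  # 4: the shared factor dimension
```

Unlike plain concatenation, every fused coordinate depends on both modalities, which is why this fusion can capture cross-modal correlations.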

3.
Journal of Intelligent Information Systems - Nowadays, huge volumes of fake news are continuously posted by malicious users with fraudulent goals, leading to very negative social effects...

4.
Detection of fake news has spurred widespread interest in areas such as healthcare and Internet societies, in order to prevent the propagation of misleading information for commercial and political purposes. However, efforts to study a general framework that exploits knowledge to judge the trustworthiness of given news based on its content have been limited. Indeed, existing works rarely consider incorporating knowledge graphs (KGs), which could provide rich structured knowledge for better language understanding. In this work, we propose a deep triple network (DTN) that leverages knowledge graphs to facilitate fake news detection with triple-enhanced explanations. In the DTN, background knowledge graphs, such as open knowledge graphs and graphs extracted from news bases, are applied for both low-level and high-level feature extraction to classify the input news article and provide explanations for the classification. The performance of the proposed method is evaluated through extensive comparative experiments. The results show that DTN outperforms conventional fake news detection methods in several respects, including the provision of factual evidence supporting its decisions.
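The DTN's networks are not described in this abstract, but the triple-enhanced explanation idea can be sketched simply: extract (subject, relation, object) triples from a claim and check them against a background knowledge graph, returning both a support score and the matching triples as evidence. The graph contents below are illustrative, not from the paper.

```python
# Hypothetical background knowledge graph as a set of triples.
kg = {
    ("earth", "shape", "oblate_spheroid"),
    ("paris", "capital_of", "france"),
}

def triple_support(claim_triples, graph):
    """Fraction of a claim's triples found in the knowledge graph,
    returned with the matching triples as an explanation."""
    hits = [t for t in claim_triples if t in graph]
    score = len(hits) / len(claim_triples) if claim_triples else 0.0
    return score, hits

score, evidence = triple_support(
    [("earth", "shape", "flat"), ("paris", "capital_of", "france")], kg)
print(score)     # 0.5: one of two triples is supported
print(evidence)  # [('paris', 'capital_of', 'france')]
```

The returned `evidence` list is what makes the decision explainable: it names exactly which facts the graph confirms.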

5.
The Journal of Supercomputing - Social media platforms have simplified the sharing of information, which includes news as well, as compared to traditional ways. The ease of access and sharing the...

6.
Neural Computing and Applications - The increasing popularity of social media platforms has simplified the sharing of news articles that have led to the explosion in fake news. With the emergence...

7.
Due to the exponential growth of documents on the Internet and the emergent need to organize them, the automated categorization of documents into predefined labels has received ever-increasing attention in recent years. A wide range of supervised learning algorithms has been introduced to deal with text classification. Among these classifiers, K-Nearest Neighbors (KNN) is widely used in the text categorization community because of its simplicity and efficiency. However, KNN still suffers from inductive biases or model misfits that result from its assumptions, such as the presumption that training data are evenly distributed among all categories. In this paper, we propose a new refinement strategy for the KNN classifier, which we call DragPushing. Experiments on three benchmark evaluation collections show that DragPushing significantly improves the performance of the KNN classifier.
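The abstract does not spell out the refinement, so the sketch below pairs a plain weighted KNN with a DragPushing-style pass that only mimics the idea: training items whose votes help on validation data get "dragged" up in weight, misleading ones get "pushed" down. The weighting scheme and step size are assumptions for illustration.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of [vector, label, weight]. Weighted majority vote of the
    k nearest neighbours by Euclidean distance."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = Counter()
    for vec, label, w in nearest:
        votes[label] += w
    return votes.most_common(1)[0][0]

def drag_push(train, val, k=3, step=0.5):
    """Hypothetical refinement pass: reweight the k nearest training items for
    each validation example according to whether their label matches."""
    for x, y in val:
        for t in sorted(train, key=lambda t: math.dist(t[0], x))[:k]:
            t[2] += step if t[1] == y else -step
    return train

train = [[(0.0, 0.0), "a", 1.0], [(0.1, 0.0), "a", 1.0], [(1.0, 1.0), "b", 1.0]]
print(knn_predict(train, (0.05, 0.0)))  # 'a'
```

This keeps KNN's simplicity while letting the training distribution be corrected after the fact, which is the kind of misfit the paper targets.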

8.
Existing techniques for sharing the processing resources in multiprogrammed shared-memory multiprocessors, such as time-sharing, space-sharing, and gang-scheduling, typically sacrifice the performance of individual parallel applications to improve overall system utilization. We present a new processor allocation technique called Loop-Level Process Control (LLPC) that dynamically adjusts the number of processors an application is allowed to use for the execution of each parallel section of code, based on the current system load. This approach exploits the maximum parallelism possible for each application without overloading the system. We implement our scheme on a Silicon Graphics Challenge multiprocessor system and evaluate its performance using applications from the Perfect Club benchmark suite and synthetic benchmarks. Our approach shows significant improvements over traditional time-sharing and gang-scheduling. It has performance comparable to, or slightly better than, static space-sharing, but our strategy is more robust since, unlike static space-sharing, it does not require a priori knowledge of the applications' parallelism characteristics.
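The allocation rule can be sketched in a few lines: before each parallel loop, grant an application the smaller of its requested parallelism and the processors currently idle. The `reserve` parameter (keeping one processor free) is an illustrative assumption, not a detail from the paper.

```python
# Hypothetical sketch of the LLPC decision made at each parallel section.

def llpc_allocate(requested, total_procs, busy_procs, reserve=1):
    """Return the processor count an application may use for one parallel loop:
    its requested parallelism, capped by what the current load leaves idle."""
    idle = max(total_procs - busy_procs - reserve, 1)  # always grant at least 1
    return min(requested, idle)

print(llpc_allocate(requested=8, total_procs=16, busy_procs=4))   # light load: 8
print(llpc_allocate(requested=8, total_procs=16, busy_procs=14))  # heavy load: 1
```

Because the decision is re-made per loop rather than once per job, the application shrinks under load and expands when the system empties, which is the robustness advantage over static space-sharing.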

9.
With the rapid development of the Internet of Things (IoT) and mobile communication technology, the amount of data related to industrial Internet of Things (IIoT) applications has grown explosively, and the edge-cloud collaborative environment has become one of the most popular paradigms for placing IIoT application data. However, edge servers are often heterogeneous and capacity-limited while offering lower access delay, so there is a tension between capacity and latency when using edge storage. Additionally, when IIoT applications are deployed across edge regions, the impact of data replication and data privacy should not be ignored. These factors pose challenges for designing an effective data placement strategy that takes full advantage of edge storage. To address these challenges, an effective data placement strategy for IIoT applications is designed in this article. We first analyze the data access time and data placement cost in an edge-cloud collaborative environment, taking data replication and data privacy into consideration. Then, we design a data placement strategy based on the ε-constraint method and Lagrangian relaxation, to reduce the data access time while limiting the data placement cost to an ideal level. As a result, our proposed data placement strategy can effectively reduce data access time and control data placement costs. Simulation and comparative analysis results demonstrate the validity of our proposed strategy.
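The abstract names the technique but not the model, so here is a toy Lagrangian-relaxation sketch under assumed numbers: each data block picks edge or cloud, we want minimum total access time subject to a placement-cost budget, and the budget is folded into the objective with a multiplier `lam` that we sweep.

```python
# Hypothetical per-block options: (edge_time, cloud_time, edge_cost, cloud_cost).
blocks = [
    (1.0, 5.0, 4.0, 1.0),
    (2.0, 6.0, 3.0, 1.0),
    (1.5, 4.0, 5.0, 1.0),
]

def place(lam):
    """For a fixed multiplier, each block independently picks the option
    with the lower penalised objective time + lam * cost."""
    choice, time, cost = [], 0.0, 0.0
    for et, ct, ec, cc in blocks:
        if et + lam * ec <= ct + lam * cc:
            choice.append("edge"); time += et; cost += ec
        else:
            choice.append("cloud"); time += ct; cost += cc
    return choice, time, cost

def solve(budget):
    """Sweep the Lagrange multiplier and keep the fastest feasible placement."""
    best = None
    for i in range(101):
        choice, t, c = place(i * 0.1)
        if c <= budget and (best is None or t < best[1]):
            best = (choice, t, c)
    return best

print(solve(budget=8.0))  # fastest placement whose cost fits the budget
```

Raising `lam` prices the scarce edge capacity, pushing low-value blocks to the cloud until the cost constraint is met, which is the essence of relaxing a hard constraint into the objective.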

10.
Multimedia Tools and Applications - The progressive growth of today’s digital world has made news spread exponentially faster on social media platforms like Twitter, Facebook, and Weibo....

11.
The widespread fake news in social networks poses threats to social stability, economic development, and political democracy. Numerous studies have explored effective detection approaches for online fake news, while few works study the intrinsic propagation and cognition mechanisms of fake news. Since the development of cognitive science paves a promising way for the prevention of fake news, we present a new research area called Cognition Security (CogSec), which studies the potential impacts of fake news on human cognition, ranging from misperception, untrusted knowledge acquisition, and targeted opinion/attitude formation to biased decision making, and investigates effective ways for fake news debunking. CogSec is a multidisciplinary research field that leverages knowledge from social science, psychology, cognitive science, neuroscience, AI, and computer science. We first propose related definitions to characterize CogSec and review the literature. We further investigate the key research challenges and techniques of CogSec, including human-content cognition mechanisms, social influence and opinion diffusion, fake news detection, and malicious bot detection. Finally, we summarize open issues and future research directions, such as the cognition mechanism of fake news, influence maximization of fact-checking information, early detection of fake news, and fast refutation of fake news.

12.
Simulated annealing is a naturally serial algorithm, but its behavior can be controlled by the cooling schedule. The genetic algorithm exhibits implicit parallelism and can retain useful redundant information about what is learned from previous searches through its representation in the individuals of the population, but a GA may lose solutions and substructures due to the disruptive effects of genetic operators, and its convergence is not easy to regulate. By reasonably combining these two global probabilistic search algorithms, we develop a general, parallel, and easily implemented hybrid optimization framework and apply it to job-shop scheduling problems. Based on an effective encoding scheme and some specific optimization operators, several benchmark job-shop scheduling problems are well solved by the hybrid optimization strategy, and the results are competitive with the best results in the literature. Besides the effectiveness and robustness of the hybrid strategy, the combination of different search mechanisms and structures relaxes the parameter dependence of GA and SA.

Scope and purpose: The job-shop scheduling problem (JSP) is one of the most well-known machine scheduling problems and one of the strongly NP-hard combinatorial optimization problems. Developing effective search methods is always important and valuable work. The purpose of this paper is to present a parallel and easily implemented hybrid optimization framework that reasonably combines a genetic algorithm with simulated annealing. Based on an effective encoding scheme and some specific optimization operators, job-shop scheduling problems are well solved by the hybrid optimization strategy.
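The paper's encoding and operators are not given here, so this toy sketch only shows the combination idea on a stand-in problem: a GA population of permutations where each mutated child is accepted via the SA Metropolis criterion, so the cooling schedule governs how often worse children survive.

```python
import math, random

def cost(perm):
    """Toy job-shop stand-in: count out-of-order adjacent jobs (0 = sorted)."""
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def mutate(perm, rng):
    p = perm[:]
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def hybrid(n=6, pop_size=8, temp=2.0, cooling=0.95, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        for k, parent in enumerate(pop):
            child = mutate(parent, rng)
            d = cost(child) - cost(parent)
            if d <= 0 or rng.random() < math.exp(-d / temp):
                pop[k] = child          # SA Metropolis acceptance inside the GA
        temp *= cooling                  # cooling schedule controls disruption
    return min(pop, key=cost)

best = hybrid()
print(best, cost(best))
```

Early on, high temperature lets the population escape local structure; as it cools, acceptance becomes greedy, mirroring how the hybrid relaxes the parameter sensitivity of either method alone.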

13.
Traffic detection (including lane detection and traffic sign detection) is one of the key technologies for realizing driver-assistance and autonomous driving systems. However, most existing detection methods are designed for single-modal visible-light data; when lighting changes dramatically in the scene (such as insufficient lighting at night), it is difficult for these methods to obtain good detection results. Since multi-modal data can provide complementary discriminative information, this paper proposes a multi-modal fusion YOLOv5 network built on the YOLOv5 model, which consists of three key components: a dual-stream feature extraction module, a correlation feature extraction module, and a self-attention fusion module. Specifically, the dual-stream feature extraction module extracts the features of each of the two modalities. The features learned by the dual-stream module are then fed into the correlation feature extraction module to learn maximally correlated features. Next, the extracted maximum-correlation features exchange information between modalities through a self-attention mechanism, yielding fused features. Finally, the fused features are input to the detection layer to obtain the final detection result. Experimental results on different traffic detection tasks demonstrate the superiority of the proposed method.
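As an illustrative sketch of the fusion step only (the real module operates on convolutional feature maps), treat the two modality features as two tokens and let scaled dot-product attention mix them; the vectors below are made up.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_fuse(tokens):
    """Each output token is an attention-weighted mix of all modality tokens
    (scaled dot-product attention with queries = keys = values = tokens)."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        weights = softmax([dot(q, k) / math.sqrt(d) for k in tokens])
        out.append([sum(w * t[i] for w, t in zip(weights, tokens))
                    for i in range(d)])
    return out

visible, thermal = [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]  # hypothetical features
fused = attention_fuse([visible, thermal])
print(fused[0])  # the visible token now carries some thermal information
```

After fusion each modality's token contains a weighted share of the other, which is what lets a thermal cue compensate for poor visible-light features at night.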

14.
Nowadays, it is increasingly difficult to find reliable multimedia content on the Web 2.0. Open decentralized networks on the Web are populated with many unauthenticated agents providing fake multimedia. Conventional automatic detection and authentication approaches lack scalability and the ability to capture media semantics altered by forgery, and using them in online scenarios is computationally expensive. Our aim was therefore to develop a trust-aware community approach to facilitate fake media detection. In this paper, we present our approach and highlight four important outcomes. First, a Media Quality Profile (MQP) is proposed for multimedia evaluation and semantic classification, with one substantial part estimating media authenticity based on trust-aware community ratings. Second, we employ the concept of serious gaming in our collaborative fake media detection approach, overcoming the cold-start problem and providing sufficient data to power the Media Quality Profile. Third, we identify the notions of confidence, trust, and distrust and their dynamics as necessary refinements of existing trust models. Finally, we improve the precision of trust-aware aggregated media authenticity ratings by introducing a trust inference algorithm for yet unknown sources that upload and rate media.
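The paper's aggregation formula is not reproduced in this abstract; a minimal sketch of the trust-aware idea weights each community rating by the rater's trust and reports a saturating confidence that grows with the amount of trusted evidence. Both formulas here are illustrative assumptions.

```python
def aggregate(ratings):
    """ratings: list of (authenticity in [0,1], rater_trust in [0,1]).
    Returns (trust-weighted authenticity, confidence in that estimate)."""
    total = sum(t for _, t in ratings)
    if total == 0:
        return 0.5, 0.0                     # no trusted evidence: undecided
    score = sum(r * t for r, t in ratings) / total
    confidence = total / (total + 1.0)      # saturates as trusted mass grows
    return score, confidence

# Two trusted raters say "authentic"; one barely-trusted rater disagrees.
score, conf = aggregate([(0.9, 0.8), (1.0, 0.6), (0.1, 0.1)])
print(round(score, 3), round(conf, 3))
```

Separating the score from the confidence is what lets a system distinguish "probably authentic" from "nobody trustworthy has rated this yet", the cold-start case the serious-gaming component addresses.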

15.
张新钰, 邹镇洪, 李志伟, 刘华平, 李骏. 《智能系统学报》, 2020, 15(4): 758-771
Researchers are increasingly using multiple sensors to improve the accuracy of object detection models in autonomous driving, so studying data fusion methods for object detection has significant academic and practical value. This paper surveys recent data fusion methods used in deep object detection models for autonomous driving. It first reviews the development of deep object detection and data fusion in autonomous driving, along with existing surveys; it then presents the frontier of the field from three perspectives: multi-modal object detection, the levels at which fusion occurs, and the computational methods of fusion. In addition, it analyzes the rationality of data fusion, discussing fusion methods from the angles of methodology, robustness, and redundancy. Finally, it discusses several open problems of fusion methods and summarizes the challenges, strategies, and prospects.

16.
征察, 吉立新, 李邵梅, 高超. 《计算机应用》, 2017, 37(10): 3006-3011
Traditional face annotation methods for news images rely mainly on face-similarity information and are poor at distinguishing noise faces from non-noise faces and at labeling non-noise faces. To address this, a news-image face annotation method based on multi-modal information fusion is proposed. First, based on the co-occurrence of faces and names, an improved K-nearest-neighbor algorithm produces a face-name match score from face-similarity information. Then, face size and position are extracted from the image to characterize face importance, and name position is extracted from the text to characterize name importance. Finally, a back-propagation neural network fuses this information to infer face labels, and a label-correction strategy further improves the annotation results. On the Label Yahoo! News dataset, the proposed method achieves annotation accuracy, precision, and recall of 77.11%, 73.58%, and 78.75%, respectively; compared with an algorithm based only on face similarity, it better distinguishes noise faces from non-noise faces and better labels non-noise faces.
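The fusion network's trained weights are not in the abstract, so this sketch only shows the shape of the final step: a single neuron combining the three cues (match score, face importance, name importance) into an annotation confidence. The weights and bias below are illustrative placeholders, not the paper's values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def annotate_score(match, face_importance, name_importance,
                   w=(2.5, 1.0, 1.0), bias=-2.0):
    """One-unit stand-in for the BP fusion network: weighted sum of the
    three multi-modal cues squashed to a [0,1] annotation confidence."""
    z = w[0] * match + w[1] * face_importance + w[2] * name_importance + bias
    return sigmoid(z)

strong = annotate_score(0.9, 0.8, 0.7)  # large, central face; caption name
weak = annotate_score(0.2, 0.1, 0.1)    # small background face; buried name
print(strong > 0.5 > weak)  # True
```

Even this toy version shows why the fusion helps: a mediocre similarity score can be rescued (or vetoed) by the face- and name-importance cues that similarity-only methods ignore.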

17.
An effective and efficient algorithm for high-dimensional outlier detection
The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are most important for high-dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms have been proposed for outlier detection that use several concepts of proximity in order to find the outliers based on their relationship to the other points in the data. However, in high-dimensional space, the data are sparse and concepts using the notion of proximity fail to retain their effectiveness. In fact, the sparsity of high-dimensional data can be understood in a different way so as to imply that every point is an equally good outlier from the perspective of distance-based definitions. Consequently, for high-dimensional data, the notion of finding meaningful outliers becomes substantially more complex and nonobvious. In this paper, we discuss new techniques for outlier detection that find the outliers by studying the behavior of projections from the data set. Received: 19 November 2002; Accepted: 6 February 2004; Published online: 19 August 2004. Edited by R. Ng.

18.
Multimedia Tools and Applications - In data mining and knowledge discovery applications, outlier detection is a fundamental problem for robust machine learning and anomaly discovery. There are many...

19.
As online auctions continue to grow, so does the incidence of online auction fraud. To avoid discovery, fraudsters often disguise themselves as honest members by imitating normal trading behaviors, so vigilance alone is not sufficient to prevent fraud. Participants in online auctions need a more proactive approach to protect their profits, such as an early fraud detection system. In practice, accuracy and timeliness are equally important when designing an effective detection system: an instant but incorrect message to users is not acceptable, yet a lengthy detection procedure also fails to help traders place timely bids, and the result is most helpful if it reports potential fraudsters as early as possible. This study proposes a new early fraud detection method that considers accuracy and timeliness simultaneously. To determine the attributes that best distinguish normal traders from fraudsters, a modified wrapper procedure is developed to select a subset of attributes from a large candidate attribute pool. Using these attributes, a complement phased modeling procedure is then proposed to extract features from the latest part of traders' transaction histories, reducing the time and resources needed for modeling and data collection. An early fraud detection model can be obtained by constructing decision trees or by instance-based learning. Our experimental results show that the performance of the selected attributes is superior to other attribute sets, while the hybrid complement phased models markedly improve the accuracy of fraud detection.
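The modified wrapper itself is not detailed in the abstract, so this is a generic greedy-forward wrapper sketch: repeatedly add whichever candidate attribute most improves a scoring function standing in for detection accuracy. The attribute names and their toy values are invented for illustration.

```python
def wrapper_select(candidates, score, max_attrs=3):
    """Greedy forward wrapper: grow the selected set by the attribute with the
    largest score gain, stopping when nothing improves or max_attrs is hit."""
    selected = []
    while len(selected) < max_attrs:
        best, best_gain = None, 0.0
        for a in candidates:
            if a in selected:
                continue
            gain = score(selected + [a]) - score(selected)
            if gain > best_gain:
                best, best_gain = a, gain
        if best is None:          # no attribute improves the score: stop early
            break
        selected.append(best)
    return selected

# Hypothetical additive score with 'late_ratings' as the most informative cue.
value = {"late_ratings": 0.30, "price_jump": 0.20, "burst_sales": 0.10, "noise": 0.0}
evaluate = lambda attrs: sum(value[a] for a in attrs)
print(wrapper_select(list(value), evaluate))
```

In the paper the scoring function would be cross-validated detection accuracy of the actual classifier, which is what makes the procedure a wrapper rather than a filter.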

20.
As Online Social Networks (OSNs) have become popular, more and more people want to increase their influence not only in the real world but also in OSNs. However, increasing one's influence in OSNs is a time-consuming job, so some users look for a shortcut to grow their relationships. This demand for quickly gained relationships has led to the growth of fake-follower markets that cater to customers who want to expand their networks rapidly. However, customers of fake-follower markets cannot manipulate legitimate users' relationships perfectly. Existing approaches explore a node's relationships or features to detect such customers, but none of them combines relationships and node features directly. In this article, we propose a model that directly combines relationship and node features to detect customers of fake followers. Specifically, we study the geographical distance of 1-hop directional links using the nodes' geographical locations. Motivated by the difference in distance ratios for 1-hop directional links, the proposed method generates a 1-hop link distance ratio and classifies a node as a customer or not. Experimental results on a Twitter dataset demonstrate that the proposed method achieves higher performance than other baseline methods.
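A minimal sketch of the distance-ratio cue, assuming the intuition that organic followers tend to be geographically closer than purchased ones: compute great-circle distances to each 1-hop follower and flag nodes whose long-distance link ratio is high. The 300 km cutoff and 0.6 threshold are illustrative, not the paper's values.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def long_link_ratio(node, followers, cutoff_km=300.0):
    """Fraction of 1-hop incoming links spanning more than cutoff_km."""
    far = sum(1 for f in followers if haversine_km(node, f) > cutoff_km)
    return far / len(followers)

seoul = (37.57, 126.98)
followers = [(37.45, 126.70),   # Incheon: local
             (35.18, 129.08),   # Busan: distant
             (40.71, -74.01),   # New York: distant
             (51.51, -0.13)]    # London: distant
ratio = long_link_ratio(seoul, followers)
print(ratio, "customer" if ratio > 0.6 else "normal")
```

This is the relationship-plus-feature combination the abstract describes: the link structure supplies who follows whom, and the node feature (location) turns each link into a distance observation.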


Copyright©北京勤云科技发展有限公司  京ICP备09084417号