Scientific literature on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"


Consult the thematic lists of journal articles, books, theses, conference proceedings and other academic sources on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling".


Journal articles on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"

1

Balgi, Sanjana Madhav. "Fake News Detection using Natural Language Processing." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 4790–95. http://dx.doi.org/10.22214/ijraset.2022.45095.

Abstract:
Fake news is information that is false or misleading but is reported as news. The tendency for people to spread false information is influenced by human behaviour; research indicates that people are drawn to unexpected fresh events and information, which increases brain activity. It was also found that motivated reasoning helps spread incorrect information. This ultimately encourages individuals to repost or disseminate deceptive content, which is frequently marked by click-bait and attention-grabbing headlines. The proposed study uses machine learning and natural language processing approaches to identify false news, specifically false news items that come from unreliable sources. The dataset used here is the ISOT dataset, which contains real and fake news collected from various sources. Web scraping is used to extract text from news websites so that current news can be added to the dataset. Data pre-processing and feature extraction are applied to the data, followed by dimensionality reduction and classification using models such as the Rocchio classifier, the Bagging classifier, the Gradient Boosting classifier and the Passive Aggressive classifier. To choose the best-performing model with an accurate prediction for fake news, a number of algorithms were compared.
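The paper does not publish code; as a rough illustration of the kind of scikit-learn pipeline the abstract outlines (TF-IDF features, dimensionality reduction, and a comparison of Rocchio-style, Bagging, Gradient Boosting and Passive Aggressive classifiers), a minimal sketch follows. The toy texts, labels and parameters are placeholders rather than the authors' ISOT setup, and NearestCentroid is used as scikit-learn's closest analogue of a Rocchio classifier.

```python
# Illustrative sketch only: TF-IDF features, truncated-SVD dimensionality
# reduction, and four classifiers compared on toy stand-ins for the ISOT data.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestCentroid            # Rocchio-style centroid classifier
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score

# Toy stand-ins for the ISOT articles; in practice these would be read from the
# ISOT True/Fake CSV files plus the scraped current news.
texts = [
    "The central bank kept interest rates unchanged on Tuesday",
    "Lawmakers passed the revised transport budget after a long debate",
    "The city council opened three new public libraries this spring",
    "Scientists say the moon is a hologram projected by billionaires",
    "Secret memo proves the weather is controlled from an island bunker",
    "Doctors hide the fact that tap water cures every known disease",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = real, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=42)

classifiers = {
    "Rocchio (NearestCentroid)": NearestCentroid(),
    "Bagging": BaggingClassifier(n_estimators=50, random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "Passive Aggressive": PassiveAggressiveClassifier(max_iter=1000, random_state=42),
}

for name, clf in classifiers.items():
    # TF-IDF features, reduced with truncated SVD, then classified
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"),
                         TruncatedSVD(n_components=2, random_state=42),
                         clf)
    pipe.fit(X_train, y_train)
    print(name, accuracy_score(y_test, pipe.predict(X_test)))
```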
2

Reddy, Vookanti Anurag, CH Vamsidhar Reddy, and R. Lakshminarayanan. "Fake News Detection using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 227–30. http://dx.doi.org/10.22214/ijraset.2022.41124.

Abstract:
This project applies NLP (Natural Language Processing) techniques to detect "fake news", that is, misleading news stories that come from non-reputable sources. Building a model based only on a count vectorizer (using word tallies) or a TF-IDF (Term Frequency-Inverse Document Frequency) matrix (word tallies relative to how often they are used in other articles in the dataset) can only get you so far, because such models do not consider important qualities like word ordering and context. It is quite possible for two articles with similar word counts to be completely different in meaning. The data science community has responded by taking action against the problem: there is a Kaggle competition called the "Fake News Challenge", and Facebook is employing AI to filter fake news stories out of users' feeds. Combatting fake news is a classic text classification project with a straightforward proposition: is it possible to build a model that can differentiate between "real" news and "fake" news? The proposed work assembles a dataset of both fake and real news and employs a Naive Bayes classifier to create a model that classifies an article as fake or real based on its words and phrases.
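As a minimal sketch of the bag-of-words / TF-IDF plus Naive Bayes baseline the abstract starts from, the snippet below trains both vectorizer variants with scikit-learn; the tiny corpus and labels are invented for illustration and are not the authors' dataset.

```python
# Count-vector and TF-IDF features fed to a multinomial Naive Bayes classifier;
# toy data only, not the dataset assembled in the paper.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Central bank raises interest rates by a quarter point",
    "Parliament passes the annual budget after a long debate",
    "Celebrity clone spotted voting twice in a secret election",
    "Miracle fruit cures all diseases overnight, doctors stunned",
]
train_labels = ["real", "real", "fake", "fake"]

for vectorizer in (CountVectorizer(), TfidfVectorizer()):
    model = make_pipeline(vectorizer, MultinomialNB())
    model.fit(train_texts, train_labels)
    print(type(vectorizer).__name__, model.predict(["doctors stunned by miracle cure"]))
```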
3

Jagirdar, Srinivas, and Venkata Subba K. Reddy. "Phony News Detection in Reddit Using Natural Language Techniques and Machine Learning Pipelines." International Journal of Natural Computing Research 10, no. 3 (July 2021): 1–11. http://dx.doi.org/10.4018/ijncr.2021070101.

Abstract:
Phony news, or fake news, spreads like wildfire on social media, causing harm to society. Swift detection of fake news is a priority as it reduces that harm. This paper develops a phony news detector for Reddit posts using popular machine learning techniques in conjunction with natural language processing techniques. Popular feature extraction algorithms such as CountVectorizer (CV) and Term Frequency-Inverse Document Frequency (TFIDF) were implemented. These features were fed to Multinomial Naive Bayes (MNB), Random Forest (RF), Support Vector Classifier (SVC), Logistic Regression (LR), AdaBoost, and XGBoost classifiers to label news as either genuine or phony. Finally, coefficient analysis was performed to interpret the most informative coefficients. The study revealed that the pipeline combining MNB and TFIDF achieved the best accuracy, 79.05%, compared with the other pipeline models.
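The pipeline comparison reported above could be organised along the following lines in scikit-learn, pairing each feature extractor with each classifier and scoring by cross-validation. The toy Reddit-style posts are placeholders, and XGBoost is omitted because it lives in the separate xgboost package; this is not the authors' code.

```python
# Feature extractor x classifier grid evaluated with 2-fold cross-validation
# on toy data; illustrative of the pipelines described, not the study itself.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

posts = [
    "Local council approves new bike lanes downtown",
    "Study finds moderate exercise improves sleep quality",
    "City library extends weekend opening hours",
    "Researchers publish climate data for the last decade",
    "Shadow government confirms lizard people run the treasury",
    "Man grows third arm after using public wifi",
    "Moon landing was filmed in a shopping mall basement",
    "Vaccines secretly contain mind control microchips",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = genuine, 1 = phony (toy labels)

vectorizers = {"CV": CountVectorizer(), "TFIDF": TfidfVectorizer()}
classifiers = {
    "MNB": MultinomialNB(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVC": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

for v_name, vec in vectorizers.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(vec, clf)
        scores = cross_val_score(pipe, posts, labels, cv=2)
        print(f"{v_name} + {c_name}: mean accuracy {scores.mean():.2f}")
```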
4

Srivastava, Rahul, and Pawan Singh. "Fake News Detection Using Naive Bayes Classifier." Journal of Management and Service Science (JMSS) 2, no. 1 (February 25, 2022): 1–7. http://dx.doi.org/10.54060/jmss/002.01.005.

Abstract:
Fake news has been on the rise thanks to rapid digitalization across all platforms and media. Many governments throughout the world are attempting to address this issue. The use of Natural Language Processing and Machine Learning techniques to properly identify fake news is the subject of this research. The data is cleaned and feature extraction is performed using pre-processing techniques. Then, employing four distinct strategies, a false news detection model is created. Finally, the research examines and contrasts the accuracy of Naive Bayes, Support Vector Machine (SVM), neural network, and long short-term memory (LSTM) methodologies in order to determine which is the most accurate. The proposed model performs well, with an accuracy of up to 93.6%.
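A compact tf.keras sketch of the LSTM branch of such a comparison is shown below; the layer sizes, vocabulary limit and two-sentence toy corpus are assumptions made for illustration, not the configuration used in the paper.

```python
# Toy LSTM text classifier: raw strings are vectorised, embedded, passed
# through an LSTM and scored with a sigmoid output (probability of "fake").
import numpy as np
import tensorflow as tf

texts = np.array(["government announces new infrastructure spending plan",
                  "secret cabal controls every newspaper on earth"])
labels = np.array([0, 1])  # 0 = real, 1 = fake (toy data)

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=20)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,                                        # string -> integer sequence
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the item is fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=2, verbose=0)
print(model.predict(texts, verbose=0))
```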
5

Alghamdi, Jawaher, Yuqing Lin, and Suhuai Luo. "A Comparative Study of Machine Learning and Deep Learning Techniques for Fake News Detection." Information 13, no. 12 (December 12, 2022): 576. http://dx.doi.org/10.3390/info13120576.

Abstract:
Efforts have been dedicated by researchers in the field of natural language processing (NLP) to detecting and combating fake news using an assortment of machine learning (ML) and deep learning (DL) techniques. In this paper, a review of the existing studies is conducted to understand and curtail the dissemination of fake news. Specifically, we conducted a benchmark study using a wide range of (1) classical ML algorithms such as logistic regression (LR), support vector machines (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), XGBoost (XGB) and an ensemble learning method of such algorithms, (2) advanced ML algorithms such as convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM), bidirectional gated recurrent units (BiGRU), CNN-BiLSTM, CNN-BiGRU and a hybrid approach of such techniques, and (3) DL transformer-based models such as BERT-base and RoBERTa-base. The experiments are carried out using different pretrained word embedding methods across four well-known real-world fake news datasets (LIAR, PolitiFact, GossipCop and COVID-19) to examine the performance of different techniques across various datasets. Furthermore, a comparison is made between context-independent embedding methods (e.g., GloVe) and BERT-base contextualised representations in detecting fake news. Compared with the state-of-the-art results across the datasets used, we achieve better results by relying solely on news text. We hope this study can provide useful insights for researchers working on fake news detection.
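For the transformer-based side of such a benchmark, fine-tuning a BERT-base classifier with the Hugging Face Trainer might look roughly as follows; the in-memory toy dataset and the training arguments are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal BERT-base fine-tuning sketch for binary fake news classification
# using the transformers Trainer API; toy data only.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["officials confirm the bridge will reopen next month",
         "scientists admit the earth has been flat all along"]
labels = [0, 1]  # 0 = real, 1 = fake (toy data)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class NewsDataset(torch.utils.data.Dataset):
    """Wraps tokenised texts and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="bert-fake-news", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=NewsDataset(texts, labels)).train()
```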
6

Ezarfelix, Juandreas, Nathannael Jeffrey, and Novita Sari. "Systematic Literature Review: Instagram Fake Account Detection Based on Machine Learning." Engineering, Mathematics and Computer Science (EMACS) Journal 4, no. 1 (February 5, 2022): 25–31. http://dx.doi.org/10.21512/emacsjournal.v4i1.8076.

Abstract:
The popularity of social media continues to grow, and its dominance of the entire world has become one of the aspects of modern life that cannot be ignored. The rapid growth of social media has resulted in the emergence of ecosystem problems. Hate speech, fraud, fake news, and a slew of other issues are becoming unstoppable. With over 1.7 billion fake accounts on social media, the losses have already been significant, and removing these accounts will take a long time. Due to the growing number of Instagram users, the need for identifying fake accounts on social media, specifically on Instagram, is increasing. Because this process takes a long time if done manually by humans, the rapid development of machine learning now makes it possible to identify fake accounts automatically. Fake accounts on Instagram can be detected using machine learning by combining image detection and natural language processing.
7

Nistor, Andreea, and Eduard Zadobrischi. "The Influence of Fake News on Social Media: Analysis and Verification of Web Content during the COVID-19 Pandemic by Advanced Machine Learning Methods and Natural Language Processing." Sustainability 14, no. 17 (August 23, 2022): 10466. http://dx.doi.org/10.3390/su141710466.

Abstract:
The purpose of this research was to analyze the prevalence of fake news on social networks, and implicitly the economic crisis generated by the COVID-19 pandemic, as well as to identify solutions for filtering and detecting fake news. In this context, we created a series of functions to identify fake content, using information collected from different articles, through advanced machine learning methods with which we could upload and analyze the obtained data. The methodology proposed in this research achieved higher accuracy on fake news collected from Facebook, one of the most powerful social networks for the dissemination of informative content. Thus, the use of advanced machine learning methods and natural language processing led to an improvement in the detection of fake news compared to conventional methods.
8

Nanath, Krishnadas, Supriya Kaitheri, Sonia Malik, and Shahid Mustafa. "Examination of fake news from a viral perspective: an interplay of emotions, resonance, and sentiments." Journal of Systems and Information Technology 24, no. 2 (January 14, 2022): 131–55. http://dx.doi.org/10.1108/jsit-11-2020-0257.

Abstract:
Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news.

Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.

Findings: The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content like illegal activities and crime-related content was associated with fake news. A news title and text exhibiting similar sentiments were found to have lower chances of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.

Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.

Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters like sentimental resonance that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
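Two of the ingredients named above, the sentiment resonance between title and body and the topic proportions, can be sketched as inputs to a logistic regression roughly as follows; the VADER lexicon, the two-topic LDA and the toy articles are stand-ins chosen for illustration, not the authors' coding scheme.

```python
# Title/body sentiment "resonance" plus LDA topic proportions as features for a
# logistic-regression fake news predictor; toy, not the paper's feature set.
# nltk.download("vader_lexicon") may be needed before the first run.
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

titles = ["Markets steady as inflation cools",
          "SHOCKING crime wave hidden by officials"]
bodies = ["Prices rose more slowly last quarter, easing pressure on households.",
          "Anonymous sources claim thousands of crimes are erased from the records daily."]
labels = [0, 1]  # 0 = real, 1 = fake (toy data)

sia = SentimentIntensityAnalyzer()
# Resonance here = absolute gap between title and body compound sentiment
resonance = [abs(sia.polarity_scores(t)["compound"] - sia.polarity_scores(b)["compound"])
             for t, b in zip(titles, bodies)]

counts = CountVectorizer(stop_words="english").fit_transform(bodies)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

X = np.column_stack([resonance, topics])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```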
9

Islam, Noman, Asadullah Shaikh, Asma Qaiser, Yousef Asiri, Sultan Almakdi, Adel Sulaiman, Verdah Moazzam, and Syeda Aiman Babar. "Ternion: An Autonomous Model for Fake News Detection." Applied Sciences 11, no. 19 (October 6, 2021): 9292. http://dx.doi.org/10.3390/app11199292.

Abstract:
In recent years, the consumption of social media content to keep up with global news and to verify its authenticity has become a considerable challenge. Social media enables us to easily access news anywhere, anytime, but it also gives rise to the spread of fake news, thereby delivering false information. This also has a negative impact on society. Therefore, it is necessary to determine whether or not news spreading over social media is real. This avoids confusion among social media users and is important for ensuring positive social development. This paper proposes a novel solution that detects the authenticity of news through natural language processing techniques. Specifically, it proposes a scheme comprising three steps, namely stance detection, author credibility verification, and machine learning-based classification, to verify the authenticity of news. In the last stage of the proposed pipeline, several machine learning techniques are applied, such as decision tree, random forest, logistic regression, and support vector machine (SVM) algorithms. For this study, the fake news dataset was taken from Kaggle. The experimental results show an accuracy of 93.15%, precision of 92.65%, recall of 95.71%, and F1-score of 94.15% for the support vector machine algorithm. The SVM outperforms the second-best classifier, logistic regression, by 6.82%.
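A rough sketch of how the three stages could feed the final SVM is given below; the stance and credibility functions are trivial placeholders standing in for the paper's stance detector and author-credibility check, and the toy headlines are not the Kaggle dataset.

```python
# Text features combined with placeholder stance and credibility scores,
# classified by a linear SVM; illustrative of the three-stage idea only.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

headlines = ["Health agency confirms new vaccine guidance",
             "Insider says a moon base is hidden from the public"]
labels = [0, 1]  # 0 = real, 1 = fake (toy data)

def stance_score(text):        # placeholder for a real stance-detection model
    return float("confirms" in text.lower())

def author_credibility(text):  # placeholder for a real source/author lookup
    return 0.9 if "agency" in text.lower() else 0.2

text_features = TfidfVectorizer().fit_transform(headlines)
extra = csr_matrix([[stance_score(h), author_credibility(h)] for h in headlines])
X = hstack([text_features, extra])

clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```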
10

Ali, Abdullah Marish, Fuad A. Ghaleb, Bander Ali Saleh Al-Rimy, Fawaz Jaber Alsolami, and Asif Irshad Khan. "Deep Ensemble Fake News Detection Model Using Sequential Deep Learning Technique." Sensors 22, no. 18 (September 15, 2022): 6970. http://dx.doi.org/10.3390/s22186970.

Abstract:
Recently, fake news has been widely spread through the Internet due to the increased use of social media for communication. Fake news has become a significant concern due to its harmful impact on individual attitudes and the community's behavior. Researchers and social media service providers have commonly utilized artificial intelligence techniques in recent years to rein in fake news propagation. However, fake news detection is challenging due to the use of political language and the high linguistic similarity between real and fake news. In addition, most news sentences are short, so finding valuable representative features that machine learning classifiers can use to distinguish between fake and authentic news is difficult, because false and legitimate news have comparable language traits. Existing fake news solutions suffer from low detection performance due to improper representation and model design. This study aims at improving detection accuracy by proposing a deep ensemble fake news detection model using a sequential deep learning technique. The proposed model was constructed in three phases. In the first phase, features were extracted from news contents, preprocessed using natural language processing techniques, enriched using n-grams, and represented using the term frequency-inverse document frequency technique. In the second phase, an ensemble model based on deep learning was constructed as follows. Multiple binary classifiers were trained using sequential deep learning networks to extract the representative hidden features that could accurately classify news types. In the third phase, a multi-class classifier was constructed based on a multilayer perceptron (MLP) and trained using the features extracted from the aggregated outputs of the deep learning-based binary classifiers for final classification. Two popular and well-known datasets (LIAR and ISOT) were used with different classifiers to benchmark the proposed model. Compared with the state-of-the-art models, which use deep contextualized representation with convolutional neural networks (CNN), the proposed model shows a significant improvement (2.41%) in overall performance in terms of F1-score on the LIAR dataset, which is more challenging than the other datasets. Meanwhile, the proposed model achieves 100% accuracy on ISOT. The study demonstrates that traditional features extracted from news content, with proper model design, outperform existing models constructed based on text embedding techniques.
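The stacking idea in the second and third phases can be sketched as follows, with small scikit-learn MLPs standing in for the paper's sequential deep learning networks; the toy texts, the three-way labels and the in-sample stacking are illustrative simplifications, not the published model (in practice the meta-classifier would be trained on out-of-fold predictions).

```python
# One binary "is it class c?" model per label, whose output probabilities feed
# an MLP meta-classifier; a compressed, toy version of the described ensemble.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

texts = ["the budget deficit narrowed last year",
         "senator claims his rival eats kittens",
         "report says exports grew by two percent",
         "miracle pill reverses aging in a week"]
labels = np.array([0, 1, 0, 2])  # toy stand-ins for LIAR-style truthfulness classes

X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)  # word n-gram TF-IDF

# Phase 2: one binary classifier per class
meta_features = []
for c in np.unique(labels):
    m = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    m.fit(X, (labels == c).astype(int))
    meta_features.append(m.predict_proba(X)[:, 1])

# Phase 3: MLP meta-classifier over the aggregated binary outputs
meta_X = np.column_stack(meta_features)
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(meta_X, labels)
print(meta.predict(meta_X))
```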

Theses on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"

1

Frimodig, Matilda, and Tom Lanhed Sivertsson. "A Comparative study of Knowledge Graph Embedding Models for use in Fake News Detection." Thesis, Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43228.

Abstract:
During the past few years, online misinformation, generally referred to as fake news, has been identified as an increasingly dangerous threat. As the spread of misinformation online has increased, fake news detection has become an active line of research. One approach is to use knowledge graphs for the purpose of automated fake news detection. While large-scale knowledge graphs are openly available, these are rarely up to date and often miss the relevant information needed for the task of fake news detection. Creating new knowledge graphs from online sources is one way to obtain the missing information. However, extracting information from unstructured text is far from straightforward. Using Natural Language Processing techniques, we developed a pre-processing pipeline for extracting information from text for the purpose of creating knowledge graphs. In order to classify news as fake or not fake with the use of knowledge graphs, these need to be converted into a machine-understandable format, called knowledge graph embeddings. These embeddings also allow new information to be inferred or classified based on the information already existing in the knowledge graph. Only one knowledge graph embedding model has previously been used for the purpose of fake news detection, while several new models have recently been developed. We compare the performance of three embedding models, each relying on a different fundamental architecture, in the specific context of fake news detection. The models used were the geometric model TransE, the tensor decomposition model ComplEx and the deep learning model ConvKB. The results of this study show that, of the three models, ConvKB is the best performing. However, other aspects than performance need to be considered, and as such these results do not necessarily mean that a deep learning approach is the most suitable for real-world fake news detection.
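For orientation, the triple-plausibility scoring functions behind two of the compared models can be written out in a few lines of NumPy (ConvKB, the third model, scores triples with a small convolutional network and is omitted); the random toy vectors are purely illustrative and unrelated to the thesis' experiments.

```python
# Scoring functions of two knowledge graph embedding models on toy vectors;
# higher scores indicate a more plausible (head, relation, tail) triple.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
h, r, t = rng.normal(size=(3, dim))  # toy real-valued head/relation/tail embeddings

def transe_score(h, r, t):
    # TransE: a true triple should satisfy h + r ≈ t, so score by negative distance
    return -np.linalg.norm(h + r - t)

hc, rc, tc = (rng.normal(size=dim) + 1j * rng.normal(size=dim) for _ in range(3))

def complex_score(h, r, t):
    # ComplEx: real part of the trilinear product with the conjugated tail embedding
    return np.real(np.sum(h * r * np.conj(t)))

print(transe_score(h, r, t), complex_score(hc, rc, tc))
```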
2

Bondielli, Alessandro. "Combining natural language processing and machine learning for profiling and fake news detection." Doctoral thesis, 2021. http://hdl.handle.net/2158/1244287.

Abstract:
In recent years, Natural Language Processing (NLP) and Text Mining have become an ever-growing field of research, also due to the advancements of Deep Learning and Language Models that allow tackling several interesting and novel problems in different application domains. Traditional techniques of text mining mostly relied on structured data to design machine learning algorithms. Nonetheless, a growing number of online platforms contain a lot of unstructured information that represents great value for both Industry, especially in the context of Industry 4.0, and Public Administration services, e.g. for smart cities. This holds especially true in the context of social media, where the production of user-generated data is rapidly growing. Such data can be exploited with great benefit for several purposes, including profiling, information extraction, and classification. User-generated texts can in fact provide crucial insight into users' interests, skills and mindset, and can enable the comprehension of wider phenomena such as how information is spread through the internet. The goal of the present work is twofold. Firstly, several case studies are provided to demonstrate how a mixture of NLP and Text Mining approaches, and in particular the notion of distributional semantics, can be successfully exploited to model different kinds of profiles that are purely based on the provided unstructured textual information. First, city areas are profiled exploiting newspaper articles by means of word embeddings and clustering to categorize them based on their tags. Second, experiments are performed using distributional representations (aka embeddings) of entire sequences of texts. Several techniques, including traditional methods and Language Models, aimed at profiling professional figures based on their résumés are proposed and evaluated. Secondly, such key concepts and insights are applied to the challenging and open task of fake news detection and fact-checking, in order to build models capable of distinguishing between trustworthy and untrustworthy information. The proposed method exploits the semantic similarity of texts. An architecture exploiting state-of-the-art language models for semantic textual similarity and classification is proposed to perform fact-checking. The approach is evaluated against real-world data containing fake news. To collect and label the data, a methodology is proposed that is able to include both real/fake news and a ground truth. The framework has been exploited to face the problems of data collection and annotation of fake news, also by exploiting fact-checking techniques. In light of the obtained results, advantages and shortcomings of approaches based on distributional text embeddings are discussed, as is the effectiveness of the proposed system for detecting fake news by exploiting factually correct information. The proposed method is shown to be a viable alternative to a traditional classification-based approach for performing fake news detection.
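The semantic-similarity step used for fact-checking can be sketched with the sentence-transformers library as follows; the pretrained model name, the two fact-checked reference claims and the nearest-neighbour verdict rule are assumptions made for illustration, not the architecture developed in the thesis.

```python
# Compare a new claim against already fact-checked statements and reuse the
# verdict of the most semantically similar one; illustrative sketch only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed generic sentence encoder

fact_checked = [
    ("5G towers spread viruses", "fake"),
    ("The national vaccination campaign started in January", "real"),
]
claim = "Mobile phone masts are making people sick with a virus"

claim_emb = model.encode(claim, convert_to_tensor=True)
ref_embs = model.encode([text for text, _ in fact_checked], convert_to_tensor=True)

scores = util.cos_sim(claim_emb, ref_embs)[0]      # cosine similarity to each reference
best = int(scores.argmax())
print(fact_checked[best][1], float(scores[best]))  # verdict of the closest fact-checked claim
```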

Book chapters on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"

1

Patel, Mansi, Jeel Padiya, and Mangal Singh. "Fake News Detection Using Machine Learning and Natural Language Processing." In Studies in Computational Intelligence, 127–48. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90087-8_6.

2

Ibrishimova, Marina Danchovsky, and Kin Fun Li. "A Machine Learning Approach to Fake News Detection Using Knowledge Verification and Natural Language Processing." In Advances in Intelligent Networking and Collaborative Systems, 223–34. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29035-1_22.

3

Desamsetti, Sankar, Satya Hemalatha Juttuka, Yamini Mahitha Posina, S. Rama Sree, and B. S. Kiruthika Devi. "Artificial Intelligence Based Fake News Detection Techniques." In Advances in Transdisciplinary Engineering. IOS Press, 2023. http://dx.doi.org/10.3233/atde221284.

Abstract:
Fake news on social media platforms is increasing rapidly, and many people become victims of this news without realizing it. Detecting who is spreading fake news is a big challenge, and fake news spreads faster nowadays than in the past due to the widespread use of the internet. This research paper is a study of techniques based on artificial intelligence, such as neural networks, natural language processing, and machine learning algorithms, that work together. The learning models surveyed are Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bidirectional Recurrent Neural Network (RNN) methods. The natural language processing methods include the tokenization model, and the machine learning methods include Term Frequency-Inverse Document Frequency (TFIDF) and unsupervised algorithms. The algorithms are compared and their effectiveness in detecting fake news is investigated, along with the advantages and disadvantages of the respective techniques.
4

Pawar, A. B., M. A. Jawale, and D. N. Kyatanavar. "Analyzing Fake News Based on Machine Learning Algorithms." In Intelligent Systems and Computer Technology. IOS Press, 2020. http://dx.doi.org/10.3233/apc200146.

Abstract:
The use of Natural Language Processing techniques for the detection of fake news is analyzed in this research paper. Fake news consists of misleading claims spread by unreliable sources that can damage individuals and society. To carry out this analysis, a dataset obtained from the web resource OpenSources.co, which is mainly part of Signal Media, is used. TF-IDF features over bi-grams are used in combination with a PCFG (Probabilistic Context Free Grammar) representation on a set of 11,000 documents extracted as news articles. This set is tested on classification algorithms, namely SVM (Support Vector Machines), Stochastic Gradient Descent, Bounded Decision Trees, and Gradient Boosting with Random Forests. The experimental analysis found that the combination of Stochastic Gradient Descent with TF-IDF over bi-grams gives an accuracy of 77.2% in detecting fake content, with the PCFG features showing slight recall deficiencies.
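The best-performing combination reported above, TF-IDF over bi-grams with a stochastic gradient descent classifier, can be sketched in scikit-learn as follows; the toy articles are placeholders and the PCFG features are not reproduced here.

```python
# Bi-gram TF-IDF features fed to an SGD-trained linear classifier; toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

articles = ["city council approves the new transit budget",
            "leaked memo proves birds are government drones"]
labels = [0, 1]  # 0 = credible, 1 = fake (toy data)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 2)),           # bi-gram TF-IDF features
    SGDClassifier(max_iter=1000, random_state=0),  # linear model trained with SGD
)
model.fit(articles, labels)
print(model.predict(["council memo proves the transit budget"]))
```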

Conference papers on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"

1

Mohawesh, Rami, Shuxiang Xu, Matthew Springer, Muna Al-Hawawreh, and Sumbal Maqsood. "Fake or Genuine? Contextualised Text Representation for Fake Review Detection." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112311.

Abstract:
Online reviews have a significant influence on customers' purchasing decisions for any products or services. However, fake reviews can mislead both consumers and companies. Several models have been developed to detect fake reviews using machine learning approaches. Many of these models have some limitations resulting in low accuracy in distinguishing between fake and genuine reviews. These models focused only on linguistic features to detect fake reviews and failed to capture the semantic meaning of the reviews. To deal with this, this paper proposes a new ensemble model that employs transformer architecture to discover the hidden patterns in a sequence of fake reviews and detect them precisely. The proposed approach combines three transformer models to improve the robustness of fake and genuine behaviour profiling and modelling to detect fake reviews. The experimental results using semi-real benchmark datasets showed the superiority of the proposed model over state-of-the-art models.
2

Kumar, Vinit, Arvind Kumar, Abhinav Kumar Singh, and Ansh Pachauri. "Fake News Detection using Machine Learning and Natural Language Processing." In 2021 International Conference on Technological Advancements and Innovations (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai53825.2021.9673378.

3

Shariff, Moosa, Brian Thoms, Jason T. Isaacs, and Vida Vakilian. "Approaches in Fake News Detection: An Evaluation of Natural Language Processing and Machine Learning Techniques on the Reddit Social Network." In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120910.

Abstract:
Classifier algorithms are a subfield of data mining and play an integral role in finding patterns and relationships within large datasets. In recent years, fake news detection has become a popular area of data mining for several important reasons, including its negative impact on decision-making and its virality within social networks. In the past, traditional fake news detection has relied primarily on information context, while modern approaches rely on auxiliary information to classify content. Modelling with machine learning and natural language processing can aid in distinguishing between fake and real news. In this research, we mine data from Reddit, the popular online discussion forum and social news aggregator, and measure machine learning classifiers in order to evaluate each algorithm’s accuracy in detecting fake news using only a minimal subset of data.
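Collecting Reddit posts for such an evaluation might be sketched with the PRAW library as follows; the placeholder credentials, the choice of subreddits and the label-by-subreddit shortcut are assumptions made for illustration, not the authors' data collection or evaluation procedure.

```python
# Pull post titles from two subreddits, label them by origin, and fit a simple
# TF-IDF + logistic regression baseline; illustrative sketch only.
import praw
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="fake-news-study")  # placeholder credentials

titles, labels = [], []
for subreddit, label in [("news", 0), ("TheOnion", 1)]:  # 1 = satirical/fake-style posts
    for post in reddit.subreddit(subreddit).hot(limit=50):
        titles.append(post.title)
        labels.append(label)

X = TfidfVectorizer(stop_words="english").fit_transform(titles)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))  # training accuracy only; a held-out split belongs here
```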
