Journal articles on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling"



Consult the top 31 journal articles for your research on the topic "Natural Language Processing, Machine Learning, Fake News Detection, Profiling".



Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Balgi, Sanjana Madhav. "Fake News Detection using Natural Language Processing". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 4790–95. http://dx.doi.org/10.22214/ijraset.2022.45095.

Abstract: Fake news is information that is false or misleading but is reported as news. The tendency to spread false information is influenced by human behaviour; research indicates that people are drawn to unexpected fresh events and information, which increases brain activity. It was also found that motivated reasoning helps spread incorrect information. This ultimately encourages individuals to repost or disseminate deceptive content, which is frequently identified by click-bait and attention-grabbing titles. The proposed study uses machine learning and natural language processing approaches to identify false news, specifically false news items that come from unreliable sources. The dataset used here is the ISOT dataset, which contains real and fake news collected from various sources. Web scraping is used to extract text from news websites so that current news can be added to the dataset. Data pre-processing and feature extraction are applied to the data, followed by dimensionality reduction and classification using models such as Rocchio classification, the Bagging classifier, the Gradient Boosting classifier and the Passive Aggressive classifier. A number of algorithms were compared in order to choose the best-performing model with an accurate prediction for fake news.
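As an illustration of the classifier comparison this abstract describes, here is a minimal scikit-learn sketch. The file name isot_news.csv and the text/label column names are assumptions, and NearestCentroid stands in for Rocchio classification; this is a sketch of the general approach, not the authors' code.

```python
# Hypothetical sketch: compare the four classifiers named in the abstract on
# TF-IDF features. "isot_news.csv" with "text"/"label" columns is assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid            # Rocchio-style classifier
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("isot_news.csv")                        # assumed file layout
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["label"],
                                          test_size=0.2, random_state=42)
vec = TfidfVectorizer(stop_words="english", max_features=5000)
Xtr, Xte = vec.fit_transform(X_tr), vec.transform(X_te)

for name, model in {"Rocchio": NearestCentroid(),
                    "Bagging": BaggingClassifier(n_estimators=50),
                    "GradientBoosting": GradientBoostingClassifier(),
                    "PassiveAggressive": PassiveAggressiveClassifier()}.items():
    model.fit(Xtr, y_tr)
    print(name, accuracy_score(y_te, model.predict(Xte)))
```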
2

Reddy, Vookanti Anurag, CH Vamsidhar Reddy and Dr. R. Lakshminarayanan. "Fake News Detection using Machine Learning". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 227–30. http://dx.doi.org/10.22214/ijraset.2022.41124.

Abstract: This project applies NLP (Natural Language Processing) techniques to detecting "fake news", that is, misleading news stories that come from non-reputable sources. Building a model based on a count vectorizer (using word tallies) or a TF-IDF (Term Frequency-Inverse Document Frequency) matrix (word tallies relative to how often they are used in other articles in the dataset) can only get you so far, because such models do not consider important qualities like word ordering and context. It is quite possible for two articles with similar word counts to be completely different in meaning. The data science community has responded by taking action against the problem: there is a Kaggle competition called the "Fake News Challenge", and Facebook is employing AI to filter fake news stories out of users' feeds. Combatting fake news is a classic text classification project with a straightforward proposition: is it possible to build a model that can differentiate between "real" news and "fake" news? The proposed work assembles a dataset of both fake and real news and employs a Naive Bayes classifier to create a model that classifies an article as fake or real based on its words and phrases.
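A minimal sketch of the baseline the abstract refers to, count features versus TF-IDF features feeding a Naive Bayes classifier. The variables texts and labels are placeholders assumed to hold the corpus; this is not the authors' code.

```python
# Sketch: compare raw counts vs. TF-IDF weighting with Multinomial Naive
# Bayes. `texts` (list of strings) and `labels` are assumed to be loaded.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

for Vec in (CountVectorizer, TfidfVectorizer):
    X = Vec(stop_words="english").fit_transform(texts)
    print(Vec.__name__, cross_val_score(MultinomialNB(), X, labels, cv=5).mean())
```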
3

Jagirdar, Srinivas and Venkata Subba K. Reddy. "Phony News Detection in Reddit Using Natural Language Techniques and Machine Learning Pipelines". International Journal of Natural Computing Research 10, no. 3 (July 2021): 1–11. http://dx.doi.org/10.4018/ijncr.2021070101.

Phony news, or fake news, spreads like wildfire on social media, causing loss to society, so swift detection of fake news is a priority as it reduces harm. This paper developed a phony news detector for Reddit posts using popular machine learning techniques in conjunction with natural language processing techniques. Popular feature extraction algorithms like CountVectorizer (CV) and Term Frequency-Inverse Document Frequency (TFIDF) were implemented. These features were fed to Multinomial Naive Bayes (MNB), Random Forest (RF), Support Vector Classifier (SVC), Logistic Regression (LR), AdaBoost and XGBoost for classifying news as either genuine or phony. Finally, coefficient analysis was performed in order to interpret the best coefficients. The study revealed that the pipeline model of MNB and TFIDF achieved the best accuracy, 79.05%, among the pipeline models compared.
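The winning MNB+TFIDF pipeline, together with the kind of coefficient analysis mentioned above, could look roughly like the sketch below. The variables train_texts and train_labels are assumed, and the class ordering in the log-probability gap is illustrative.

```python
# Sketch of a TF-IDF + Multinomial Naive Bayes pipeline with a simple
# coefficient inspection; `train_texts` / `train_labels` are assumed.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

pipe = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
pipe.fit(train_texts, train_labels)

vec = pipe.named_steps["tfidfvectorizer"]
mnb = pipe.named_steps["multinomialnb"]
terms = np.array(vec.get_feature_names_out())
gap = mnb.feature_log_prob_[1] - mnb.feature_log_prob_[0]   # class 1 vs. class 0
print("Most class-1-indicative terms:", terms[np.argsort(gap)[-10:]])
```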
4

Srivastava, Rahul and Pawan Singh. "Fake News Detection Using Naive Bayes Classifier". Journal of Management and Service Science (JMSS) 2, no. 1 (February 25, 2022): 1–7. http://dx.doi.org/10.54060/jmss/002.01.005.

Fake news has been on the rise thanks to rapid digitalization across all platforms and mediums, and many governments throughout the world are attempting to address this issue. This research focuses on using Natural Language Processing and Machine Learning techniques to properly identify fake news. The data is cleaned and feature extraction is performed using pre-processing techniques. A fake news detection model is then created employing four distinct strategies. Finally, the research examines and contrasts the accuracy of Naive Bayes, Support Vector Machine (SVM), neural network and long short-term memory (LSTM) methodologies in order to determine which fits the model best. The proposed model works well, with an accuracy of up to 93.6%.
5

Alghamdi, Jawaher, Yuqing Lin and Suhuai Luo. "A Comparative Study of Machine Learning and Deep Learning Techniques for Fake News Detection". Information 13, no. 12 (December 12, 2022): 576. http://dx.doi.org/10.3390/info13120576.

Efforts have been dedicated by researchers in the field of natural language processing (NLP) to detecting and combating fake news using an assortment of machine learning (ML) and deep learning (DL) techniques. In this paper, a review of the existing studies is conducted to understand and curtail the dissemination of fake news. Specifically, we conducted a benchmark study using a wide range of (1) classical ML algorithms such as logistic regression (LR), support vector machines (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), XGBoost (XGB) and an ensemble learning method of such algorithms, (2) advanced ML algorithms such as convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM), bidirectional gated recurrent units (BiGRU), CNN-BiLSTM, CNN-BiGRU and a hybrid approach of such techniques, and (3) DL transformer-based models such as BERT-base and RoBERTa-base. The experiments are carried out using different pretrained word embedding methods across four well-known real-world fake news datasets (LIAR, PolitiFact, GossipCop and COVID-19) to examine the performance of different techniques across various datasets. Furthermore, a comparison is made between context-independent embedding methods (e.g., GloVe) and the effectiveness of BERT-base's contextualised representations in detecting fake news. Compared with state-of-the-art results across the datasets used, we achieve better results by relying solely on news text. We hope this study can provide useful insights for researchers working on fake news detection.
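For the transformer side of such a benchmark, a minimal Hugging Face fine-tuning sketch might look as follows. The dataset wiring (raw_train/raw_eval with text and label columns) is an assumption, not the paper's code, and the hyperparameters are illustrative.

```python
# Hypothetical sketch of fine-tuning BERT-base for binary fake-news detection.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def encode(batch):
    return tok(batch["text"], truncation=True, padding="max_length",
               max_length=256)

# raw_train / raw_eval are assumed to be datasets.Dataset objects with
# "text" and "label" columns (e.g., loaded from LIAR or ISOT).
train_ds = raw_train.map(encode, batched=True)
eval_ds = raw_eval.map(encode, batched=True)

args = TrainingArguments(output_dir="bert-fake-news", num_train_epochs=2,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=train_ds, eval_dataset=eval_ds).train()
```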
6

Ezarfelix, Juandreas, Nathannael Jeffrey and Novita Sari. "Systematic Literature Review: Instagram Fake Account Detection Based on Machine Learning". Engineering, MAthematics and Computer Science (EMACS) Journal 4, no. 1 (February 5, 2022): 25–31. http://dx.doi.org/10.21512/emacsjournal.v4i1.8076.

The popularity of social media continues to grow, and its dominance of the entire world has become one of the aspects of modern life that cannot be ignored. The rapid growth of social media has resulted in the emergence of ecosystem problems: hate speech, fraud, fake news and a slew of other issues are becoming unstoppable. With over 1.7 billion fake accounts on social media, the losses have already been significant, and removing these accounts will take a long time. Due to the growing number of Instagram users, the need for identifying fake accounts on social media, specifically on Instagram, is increasing. Because this process takes a long time when done manually by humans, machine learning, thanks to its rapid development, can now be used to identify fake accounts on Instagram by combining image detection and natural language processing.
7

Nistor, Andreea and Eduard Zadobrischi. "The Influence of Fake News on Social Media: Analysis and Verification of Web Content during the COVID-19 Pandemic by Advanced Machine Learning Methods and Natural Language Processing". Sustainability 14, no. 17 (August 23, 2022): 10466. http://dx.doi.org/10.3390/su141710466.

The purpose of this research was to analyze the prevalence of fake news on social networks, and implicitly the economic crisis generated by the COVID-19 pandemic, as well as to identify solutions for filtering and detecting fake news. In this context, we created a series of functions to identify fake content, using information collected from different articles, through advanced machine learning methods with which we could upload and analyze the obtained data. The methodology proposed in this research achieved higher accuracy on fake news collected from Facebook, one of the most powerful social networks for the dissemination of informative content. Thus, the use of advanced machine learning methods and natural language processing code led to an improvement in the detection of fake news compared to conventional methods.
8

Nanath, Krishnadas, Supriya Kaitheri, Sonia Malik and Shahid Mustafa. "Examination of fake news from a viral perspective: an interplay of emotions, resonance, and sentiments". Journal of Systems and Information Technology 24, no. 2 (January 14, 2022): 131–55. http://dx.doi.org/10.1108/jsit-11-2020-0257.

Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news.
Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data with efficiency. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.
Findings: The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content like illegal activities and crime-related content was associated with fake news. News whose title and text exhibit similar sentiments was found to have lower chances of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.
Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.
Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters like sentimental resonance that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
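The "sentimental resonance" between title and body can be approximated with any sentiment lexicon. The sketch below uses NLTK's VADER as a stand-in; the paper's exact lexicons and scaling are not specified, so the scoring function is an assumption.

```python
# Illustrative only: resonance = closeness of title vs. body sentiment.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def resonance(title: str, body: str) -> float:
    """Returns 1.0 for identical compound sentiment, 0.0 for opposite."""
    t = sia.polarity_scores(title)["compound"]   # compound lies in [-1, 1]
    b = sia.polarity_scores(body)["compound"]
    return 1.0 - abs(t - b) / 2.0

print(resonance("Shocking scandal rocks city hall",
                "Officials praised a smooth, transparent budget process."))
```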
9

Islam, Noman, Asadullah Shaikh, Asma Qaiser, Yousef Asiri, Sultan Almakdi, Adel Sulaiman, Verdah Moazzam and Syeda Aiman Babar. "Ternion: An Autonomous Model for Fake News Detection". Applied Sciences 11, no. 19 (October 6, 2021): 9292. http://dx.doi.org/10.3390/app11199292.

In recent years, the consumption of social media content to keep up with global news and to verify its authenticity has become a considerable challenge. Social media enables us to easily access news anywhere, anytime, but it also gives rise to the spread of fake news, thereby delivering false information, which has a negative impact on society. Therefore, it is necessary to determine whether or not news spreading over social media is real; this avoids confusion among social media users and is important in ensuring positive social development. This paper proposes a novel solution that detects the authenticity of news through natural language processing techniques. Specifically, it proposes a scheme comprising three steps, namely stance detection, author credibility verification and machine learning-based classification, to verify the authenticity of news. In the last stage of the proposed pipeline, several machine learning techniques are applied, such as decision tree, random forest, logistic regression and support vector machine (SVM) algorithms. For this study, the fake news dataset was taken from Kaggle. The experimental results show an accuracy of 93.15%, precision of 92.65%, recall of 95.71% and F1-score of 94.15% for the support vector machine algorithm, which outperforms the second-best classifier, logistic regression, by 6.82%.
10

Ali, Abdullah Marish, Fuad A. Ghaleb, Bander Ali Saleh Al-Rimy, Fawaz Jaber Alsolami and Asif Irshad Khan. "Deep Ensemble Fake News Detection Model Using Sequential Deep Learning Technique". Sensors 22, no. 18 (September 15, 2022): 6970. http://dx.doi.org/10.3390/s22186970.

Recently, fake news has been widely spread through the Internet due to the increased use of social media for communication, and it has become a significant concern due to its harmful impact on individual attitudes and community behavior. Researchers and social media service providers have commonly utilized artificial intelligence techniques in recent years to rein in fake news propagation. However, fake news detection is challenging due to the use of political language and the high linguistic similarity between real and fake news. In addition, most news sentences are short; therefore, finding valuable representative features that machine learning classifiers can use to distinguish between fake and authentic news is difficult, because both false and legitimate news have comparable language traits. Existing fake news solutions suffer from low detection performance due to improper representation and model design. This study aims at improving detection accuracy by proposing a deep ensemble fake news detection model using a sequential deep learning technique. The proposed model was constructed in three phases. In the first phase, features were extracted from news contents, preprocessed using natural language processing techniques, enriched using n-grams, and represented using the term frequency-inverse document frequency technique. In the second phase, an ensemble model based on deep learning was constructed: multiple binary classifiers were trained using sequential deep learning networks to extract the representative hidden features that could accurately classify news types. In the third phase, a multi-class classifier based on a multilayer perceptron (MLP) was trained on the features extracted from the aggregated outputs of the deep learning-based binary classifiers for final classification. Two popular and well-known datasets (LIAR and ISOT) were used with different classifiers to benchmark the proposed model. Compared with state-of-the-art models that use deep contextualized representation with convolutional neural networks (CNN), the proposed model shows a significant improvement (2.41%) in overall performance in terms of F1-score for the LIAR dataset, which is more challenging than other datasets. Meanwhile, the proposed model achieves 100% accuracy with ISOT. The study demonstrates that traditional features extracted from news content, with proper model design, outperform existing models constructed on text embedding techniques.
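The three-phase design, binary classifiers whose aggregated outputs feed a multilayer-perceptron meta-classifier, can be sketched with shallow stand-ins. The paper trains sequential deep networks; logistic models are used below only to show the data flow, and X_train/y_train/X_test are assumed.

```python
# Simplified one-vs-rest stacking sketch; not the paper's deep networks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# X_train / y_train / X_test assumed: n-gram TF-IDF features, class labels.
binaries = [LogisticRegression(max_iter=1000).fit(X_train, y_train == c)
            for c in np.unique(y_train)]

def meta_features(X):
    # One positive-class probability per binary classifier.
    return np.column_stack([m.predict_proba(X)[:, 1] for m in binaries])

# In practice the meta-features should come from held-out folds to avoid leakage.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
mlp.fit(meta_features(X_train), y_train)
predictions = mlp.predict(meta_features(X_test))
```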
11

Naik, Samrudhi. "Fake News Detection Using NLP". International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (December 31, 2021): 2022–31. http://dx.doi.org/10.22214/ijraset.2021.39582.

Abstract: The spreading of fake news has given rise to many problems in society, due to its ability to cause a great deal of social and national damage with destructive impacts. It can be very difficult to know whether news is genuine or fake, so detecting fake news is very important. "Fake news" is a term used to represent fabricated news or propaganda comprising misinformation communicated through traditional media channels, like print and television, as well as non-traditional media channels like social media. Techniques from NLP and machine learning can be used to create models that help detect fake news. In this paper we present six LSTM models built using NLP and ML techniques. Datasets in comma-separated values format, pertaining to the political domain, were used in the project, and attributes like the title and text of the news headline/article were used to perform fake news detection. The results show that the proposed solution performs well in terms of accuracy, precision and recall. The performance analysis made across all the models showed that the models using GloVe and Word2Vec work better than the models using TF-IDF. Further, a larger dataset can be used for better output, and other factors, such as the author and publisher of the news, can be used to determine its credibility. Future research on images, videos and images containing text can also help improve the models. Keywords: Fake news detection, LSTM (long short-term memory), Word2Vec, TF-IDF, Natural Language Processing.
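One of the LSTM variants described could be sketched in Keras as below. Vocabulary size, sequence length and hyperparameters are illustrative; loading GloVe/Word2Vec weights into the Embedding layer is elided, and train_texts (strings) and train_labels (0/1) are assumed.

```python
# Illustrative Keras LSTM for binary fake/real classification.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, DIM = 20_000, 300, 100
vectorize = layers.TextVectorization(max_tokens=VOCAB,
                                     output_sequence_length=MAXLEN)
vectorize.adapt(train_texts)            # train_texts: list of strings (assumed)

model = tf.keras.Sequential([
    vectorize,
    layers.Embedding(VOCAB, DIM),       # a GloVe/Word2Vec matrix could go here
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(train_texts), tf.constant(train_labels), epochs=3)
```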
12

De, Arkadipta, Dibyanayan Bandyopadhyay, Baban Gain and Asif Ekbal. "A Transformer-Based Approach to Multilingual Fake News Detection in Low-Resource Languages". ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 1 (January 31, 2022): 1–20. http://dx.doi.org/10.1145/3472619.

Fake news classification is one of the most interesting problems and has attracted huge attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most current work on fake news detection is in the English language, which has limited its widespread usability, especially outside the English-literate population. Although there has been growth in multilingual web content, fake news classification in low-resource languages is still a challenge due to the non-availability of annotated corpora and tools. This article proposes an effective neural model based on the multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A large variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in domain-specific and domain-agnostic experiments, and it also outperforms the current state-of-the-art models. We perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, showing encouraging results. Cross-domain transfer experiments are also performed to assess the language-independent feature transfer of the model. We also offer a multilingual multidomain fake news detection dataset of five languages and seven different domains that could be useful for research and development in resource-scarce scenarios.
13

Balouchzahi, Fazlourrahman, Grigori Sidorov and Hosahalli Lakshmaiah Shashirekha. "Fake news spreaders profiling using N-grams of various types and SHAP-based feature selection". Journal of Intelligent & Fuzzy Systems 42, no. 5 (March 31, 2022): 4437–48. http://dx.doi.org/10.3233/jifs-219233.

Complex learning approaches along with complicated and expensive features are not always the best or the only solution for Natural Language Processing (NLP) tasks. Despite huge progress and advancements in learning approaches such as Deep Learning (DL) and Transfer Learning (TL), there are many NLP tasks, such as Text Classification (TC), for which basic Machine Learning (ML) classifiers perform better than DL or TL approaches. In addition, an efficient feature engineering step can significantly improve the performance of ML-based systems. To check the efficacy of ML-based systems and feature engineering on TC, this paper explores characters, character sequences, syllables and word n-grams, as well as syntactic n-grams, as features, and uses SHapley Additive exPlanations (SHAP) values to select the important features from the collection of extracted features. Voting Classifiers (VC) with soft and hard voting over four ML classifiers, namely Support Vector Machine (SVM) with linear and Radial Basis Function (RBF) kernels, Logistic Regression (LR) and Random Forest (RF), were trained and evaluated on the Fake News Spreaders Profiling (FNSP) shared task dataset at PAN 2020. This shared task consists of profiling fake news spreaders in the English and Spanish languages. The proposed models exhibited an average accuracy of 0.785 across both languages and outperformed the best models submitted to this task.
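The ensemble itself maps naturally onto scikit-learn. The sketch below combines word and character n-grams and the four voters named in the abstract, while the SHAP-based feature selection step is elided; train_texts and train_labels (per-author tweet feeds) are assumed.

```python
# Sketch of the soft-voting ensemble over mixed n-gram features.
from sklearn.pipeline import make_pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

features = FeatureUnion([
    ("word", TfidfVectorizer(ngram_range=(1, 2))),
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
])
voter = VotingClassifier(
    estimators=[("svm_lin", SVC(kernel="linear", probability=True)),
                ("svm_rbf", SVC(kernel="rbf", probability=True)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier())],
    voting="soft")                       # voting="hard" is the other variant
pipe = make_pipeline(features, voter)
pipe.fit(train_texts, train_labels)      # assumed corpus
```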
14

Giussani, Andrea. "Machine Learning for Dissimulating Reality". Proceedings 77, no. 1 (April 27, 2021): 17. http://dx.doi.org/10.3390/proceedings2021077017.

In the last decade, advances in statistical modeling and computer science have boosted the production of machine-generated content in different fields: from language to image generation, the quality of the generated outputs is remarkably high, sometimes better than what a human being produces. Modern technological advances such as OpenAI's GPT-2 (and recently GPT-3) permit automated systems to dramatically alter reality with synthetic outputs, so that humans are not able to distinguish the real copy from its synthetic counterpart. One example is an article entirely written by GPT-2, but many others exist. In the field of computer vision, Nvidia's Generative Adversarial Network, commonly known as StyleGAN (Karras et al. 2018), has become the de facto reference point for the production of a huge number of fake human face portraits; additionally, recent algorithms have been developed to create both musical scores and mathematical formulas. This presentation aims to bring participants up to date on the state-of-the-art results in this field: we cover both GANs and language modeling with recent applications. The novelty here is that we apply a transformer-based machine learning technique, namely RoBERTa (Liu et al. 2019), to the detection of human-produced versus machine-produced text in the context of fake news detection. RoBERTa is a recent algorithm based on the well-known Bidirectional Encoder Representations from Transformers algorithm, known as BERT (Devlin et al. 2018); this is a bidirectional transformer used for natural language processing, developed by Google and pre-trained over a huge amount of unlabeled textual data to learn embeddings. We then use these representations as the input of our classifier to detect real versus machine-produced text. The application is demonstrated in the presentation.
15

Truică, Ciprian-Octavian and Elena-Simona Apostol. "It's All in the Embedding! Fake News Detection Using Document Embeddings". Mathematics 11, no. 3 (January 18, 2023): 508. http://dx.doi.org/10.3390/math11030508.

With the current shift in the mass media landscape from journalistic rigor to social media, personalized social media is becoming the new norm. Although the digitalization progress of the media brings many advantages, it also increases the risk of spreading disinformation, misinformation and mal-information through the use of fake news. The emergence of this harmful phenomenon has managed to polarize society and manipulate public opinion on particular topics, e.g., elections, vaccinations, etc. Such information propagated on social media can distort public perceptions and generate social unrest while lacking the rigor of traditional journalism. Natural Language Processing and Machine Learning techniques are essential for developing efficient tools that can detect fake news. Models that use the context of textual data are essential for resolving the fake news detection problem, as they manage to encode linguistic features within the vector representation of words. In this paper, we propose a new approach that uses document embeddings to build multiple models that accurately label news articles as reliable or fake. We also present a benchmark of different architectures that detect fake news using binary or multi-labeled classification. We evaluated the models on five large news corpora using accuracy, precision and recall, obtaining better results than more complex state-of-the-art Deep Neural Network models. We observe that the most important factor in obtaining high accuracy is the document encoding, not the complexity of the classification model.
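As one concrete instance of "it's all in the embedding", gensim's Doc2Vec can supply document vectors for a simple classifier. This is a sketch under the assumption that train_texts and train_labels hold the corpus; the paper evaluates several embedding families, not only Doc2Vec.

```python
# Sketch: Doc2Vec document embeddings feeding a linear classifier.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

corpus = [TaggedDocument(words=text.lower().split(), tags=[i])
          for i, text in enumerate(train_texts)]      # train_texts assumed
d2v = Doc2Vec(corpus, vector_size=200, epochs=20, min_count=2)

X = [d2v.dv[i] for i in range(len(corpus))]
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Unseen articles are embedded with infer_vector before classification.
new_vec = d2v.infer_vector("breaking exclusive miracle cure revealed".split())
print(clf.predict([new_vec]))
```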
16

Bonsu, Kwadwo Osei. "Weighted Accuracy Algorithmic Approach in Counteracting Fake News and Disinformation". Economic and Regional Studies / Studia Ekonomiczne i Regionalne 14, no. 1 (March 1, 2021): 99–107. http://dx.doi.org/10.2478/ers-2021-0007.

Subject and purpose of work: Fake news and disinformation are polluting the information environment. Hence, this paper proposes a methodology for fake news detection through the combined weighted accuracies of seven machine learning algorithms.
Materials and methods: This paper uses natural language processing to analyze the text content of a list of news samples and then predicts whether they are FAKE or REAL.
Results: The weighted accuracy algorithmic approach has been shown to reduce overfitting. It was revealed that the individual performance of the different algorithms improved after the data was extracted from the news outlet websites and "quality" data was filtered by the constraint mechanism developed in the experiment.
Conclusions: This model differs from existing mechanisms in that it automates the algorithm selection process while taking into account the performance of all the algorithms used, including the less performing ones, thereby increasing the mean accuracy of all the algorithm accuracies.
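The core idea, each model voting in proportion to its measured accuracy, reduces to a few lines of NumPy. The model list and accuracy values below are hypothetical; this is an illustration of the weighting scheme, not the paper's implementation.

```python
# Sketch of accuracy-weighted probability blending across classifiers.
import numpy as np

def weighted_vote(probas, accuracies):
    """probas: (n_models, n_samples, n_classes); accuracies: (n_models,)."""
    w = np.asarray(accuracies, dtype=float)
    w /= w.sum()                                  # normalise the weights
    blended = np.tensordot(w, probas, axes=1)     # -> (n_samples, n_classes)
    return blended.argmax(axis=1)

# e.g. preds = weighted_vote(np.stack([m.predict_proba(X) for m in models]),
#                            [0.91, 0.88, 0.84])   # validation accuracies
```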
17

Thaher, Thaer, Mahmoud Saheb, Hamza Turabieh and Hamouda Chantar. "Intelligent Detection of False Information in Arabic Tweets Utilizing Hybrid Harris Hawks Based Feature Selection and Machine Learning Models". Symmetry 13, no. 4 (March 27, 2021): 556. http://dx.doi.org/10.3390/sym13040556.

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda or deceptive information about a person, organization or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn the attention of researchers to providing a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was utilized to assess the efficiency of the proposed model. The Bag of Words (BoW) model is applied using different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based and word features. Reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) scores the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model's performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% compared with previous works on the same dataset.
18

Surya, Chennam Chandrika, Karunakar K, Murali Mohan T and R. Prasanthi Kumari. "Language Variety Prediction using Word Embeddings and Machine Leaning Algorithms". International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 1616–23. http://dx.doi.org/10.22214/ijraset.2022.48280.

Abstract: Author profiling is a technique for predicting demographic characteristics of an author, such as gender, age, location, native language and educational background, by analysing their written texts. It is used in several text processing applications, like forensic analysis, marketing and security. Author profiling techniques identify the stylistic differences among authors' writing styles to infer their demographics. Researchers have experimented with various stylistic features, such as lexical, content-based, syntactic, semantic, domain-specific, structural and readability features, to identify the stylistic differences among different authors' texts, and the dataset plays an important role in analysing these differences. PAN is a competition that organizes different types of tasks every year to encourage participants around the globe to provide solutions to text classification problems like plagiarism detection, authorship attribution, authorship verification, author profiling, celebrity profiling, style change detection, fake news spreaders detection and hate speech spreaders detection. The author profiling task was introduced in 2013 by the organizers of the PAN competition, who carefully gather the datasets and make them available to researchers. Every year the organizers conduct competitions on different sub-tasks of author profiling and provide datasets in different languages and genres. In the 2017 competition, PAN introduced a task of predicting the language variety of an author and released the dataset in four languages. In this work, we propose an approach for the English-language dataset of the language variety prediction task. The proposed approach uses word embeddings generated by the Word2Vec model and the BERT (Bidirectional Encoder Representations from Transformers) model. The word embeddings are used to generate document vectors by combining the embeddings of the words contained in each document. The document vectors are trained with two machine learning algorithms, support vector machine and random forest. Random Forest attained the best accuracy of 96.87% for language variety prediction when the experiment was conducted with BERT embeddings.
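Combining word embeddings into document vectors, as the abstract describes, is straightforward with gensim and scikit-learn. Below is a sketch using mean pooling, under the assumption that train_texts and train_labels hold the PAN data; the paper also uses BERT embeddings, which are omitted here.

```python
# Sketch: mean-pooled Word2Vec document vectors + Random Forest.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

tokenized = [t.lower().split() for t in train_texts]   # train_texts assumed
w2v = Word2Vec(tokenized, vector_size=100, window=5, min_count=2)

def doc_vector(tokens):
    vecs = [w2v.wv[w] for w in tokens if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.array([doc_vector(t) for t in tokenized])
rf = RandomForestClassifier(n_estimators=300).fit(X, train_labels)
```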
19

De Magistris, Giorgio, Samuele Russo, Paolo Roma, Janusz T. Starczewski and Christian Napoli. "An Explainable Fake News Detector Based on Named Entity Recognition and Stance Classification Applied to COVID-19". Information 13, no. 3 (March 7, 2022): 137. http://dx.doi.org/10.3390/info13030137.

Over the last few years, the phenomenon of fake news has become an important issue, especially during the worldwide COVID-19 pandemic, and a serious risk to public health. Due to the huge amount of information produced by social media platforms such as Facebook and Twitter, it is becoming difficult to check the produced content manually. This study proposes an automatic fake news detection system that supports or disproves dubious claims while returning a set of documents from verified sources. The system is composed of multiple modules and makes use of different techniques from machine learning, deep learning and natural language processing. Such techniques are used to select relevant documents, to find among those the ones that are similar to the tested claim, and to determine their stances. The proposed system will be used to check medical news and, in particular, the trustworthiness of posts related to the COVID-19 pandemic, vaccines and cures.
20

Verma, Gaurav, Rohit Mujumdar, Zijie J. Wang, Munmun De Choudhury and Srijan Kumar. "Overcoming Language Disparity in Online Content Classification with Multimodal Learning". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 1040–51. http://dx.doi.org/10.1609/icwsm.v16i1.19356.

Advances in Natural Language Processing (NLP) have revolutionized the way researchers and practitioners address crucial societal problems. Large language models are now the standard to develop state-of-the-art solutions for text detection and classification tasks. However, the development of advanced computational techniques and resources is disproportionately focused on the English language, sidelining a majority of the languages spoken globally. While existing research has developed better multilingual and monolingual language models to bridge this language disparity between English and non-English languages, we explore the promise of incorporating the information contained in images via multimodal machine learning. Our comparative analyses on three detection tasks focusing on crisis information, fake news, and emotion recognition, as well as five high-resource non-English languages, demonstrate that: (a) detection frameworks based on pre-trained large language models like BERT and multilingual-BERT systematically perform better on the English language compared against non-English languages, and (b) including images via multimodal learning bridges this performance gap. We situate our findings with respect to existing work on the pitfalls of large language models, and discuss their theoretical and practical implications.
21

Bogdanchikov, Andrey, Dauren Ayazbayev and Iraklis Varlamis. "Classification of Scientific Documents in the Kazakh Language Using Deep Neural Networks and a Fusion of Images and Text". Big Data and Cognitive Computing 6, no. 4 (October 24, 2022): 123. http://dx.doi.org/10.3390/bdcc6040123.

The rapid development of natural language processing and deep learning techniques has boosted the performance of related algorithms in several linguistic and text mining tasks. Consequently, applications such as opinion mining, fake news detection or document classification that assign documents to predefined categories have significantly benefited from pre-trained language models, word or sentence embeddings, linguistic corpora, knowledge graphs and other resources that are in abundance for the more popular languages (e.g., English, Chinese, etc.). Less represented languages, such as the Kazakh language, Balkan languages, etc., still lack the necessary linguistic resources, and thus the performance of the respective methods remains low. In this work, we develop a model that classifies scientific papers written in the Kazakh language using both text and image information, and we demonstrate that this fusion of information can be beneficial for languages that have limited resources for training machine learning models. With this fusion, we improve the classification accuracy by 4.4499% compared to models that use only text or only image information. The successful use of the proposed method in scientific document classification paves the way for more complex classification models and more applications in other domains, such as news classification and sentiment analysis, in the Kazakh language.
22

"Fake News Detection with Machine Learning". Regular 10, n.º 1 (10 de noviembre de 2020): 124–27. http://dx.doi.org/10.35940/ijitee.a8090.1110120.

As the internet becomes part of our daily routine, there has been sudden growth in the popularity of online news reading. This news can become a major issue for the public and government bodies (especially politically) if it is fake; hence authentication is necessary. It is essential to flag fake news before it goes viral and misleads society. In this paper, various Natural Language Processing techniques, along with a number of classifiers, are used to assess news content for its credibility. The technique can further be used for applications like plagiarism checks and checking for criminal records.
23

"An Effecient Fake News Detection System Using Machine Learning". VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE 8, n.º 10 (10 de agosto de 2019): 3125–29. http://dx.doi.org/10.35940/ijitee.j9453.0881019.

Social media plays a major role in several aspects of our life. It helps all of us find important news at low cost and provides easy access in less time, but it also enables the fast spreading of fake news. There is thus a possibility that low-quality news with false information is spread through social media, which has a negative impact on a number of people and sometimes on society as a whole. Detection of fake news is therefore vastly important. Machine learning algorithms play a vital role in fake news detection, and NLP (Natural Language Processing) algorithms are especially useful. In this paper, we employed the machine learning classifiers SVM, K-Nearest Neighbors, Decision Tree and Random Forest. Using these classifiers, we successfully built a model to detect fake news from the given dataset. The Python language was used for the experiments.
24

Prachi, Noshin Nirvana, Md Habibullah, Md Emanul Haque Rafi, Evan Alam and Riasat Khan. "Detection of Fake News Using Machine Learning and Natural Language Processing Algorithms". Journal of Advances in Information Technology 13, no. 6 (2022). http://dx.doi.org/10.12720/jait.13.6.652-661.

25

Himdi, Hanen, George Weir, Fatmah Assiri and Hassanin Al-Barhamtoshy. "Arabic Fake News Detection Based on Textual Analysis". Arabian Journal for Science and Engineering, February 11, 2022. http://dx.doi.org/10.1007/s13369-021-06449-y.

Abstract: Over the years, social media has had a considerable impact on the way we share information and send messages, and with this comes the problem of the rapid distribution of fake news, which can have negative impacts on both individuals and society. Given this potential negative influence, detecting unmonitored "fake news" has become a critical issue in mainstream media. While there are recent studies that built machine learning models to detect fake news in several languages, studies detecting fake news in the Arabic language are scarce. Hence, in this paper, we study the issue of fake news detection in the Arabic language based on textual analysis. In an attempt to address the challenges of authenticating news, we introduce a supervised machine learning model that classifies Arabic news articles based on the credibility of their content. We also introduce the first dataset of Arabic fake news articles composed through crowdsourcing. Subsequently, to extract textual features from the articles, we create a unique approach of forming Arabic lexical wordlists and design an Arabic Natural Language Processing tool to perform textual feature extraction. The findings of this study are promising and outperformed human performance on the same task.
26

Meesad, Phayung. "Thai Fake News Detection Based on Information Retrieval, Natural Language Processing and Machine Learning". SN Computer Science 2, no. 6 (August 23, 2021). http://dx.doi.org/10.1007/s42979-021-00775-6.

27

Vaishnavi Kesharwani, Vaishnavi Ladole, Shraddha Tak, Vaishnavi Gaigol, Prof. V. B. Bhagat and Dr. V. R. Thakare. "Real and Fake News Detection Smart System Using Passive Aggressive Algorithm (Supervised Machine Learning)". International Journal of Advanced Research in Science, Communication and Technology, May 11, 2022, 137–41. http://dx.doi.org/10.48175/ijarsct-3629.

Today, where the internet is ubiquitous, everyone takes in news from various online platforms. With the increase in the use of social media platforms like Facebook and Twitter, news spreads rapidly among millions of users within a very short span of time. The spread of fake news has far-reaching consequences, from the creation of biased opinions to swaying election outcomes for the benefit of certain candidates. Moreover, spammers use appealing news headlines to generate revenue from advertisements via clickbait. In this paper, we aim to perform binary classification of various news articles available online with the help of concepts pertaining to Artificial Intelligence, Natural Language Processing and supervised Machine Learning. Our aim is to determine whether a given piece of news is authentic or fake.
28

"Fake News Detection of Indian and United States Election Data using Machine Learning Algorithm". International Journal of Innovative Technology and Exploring Engineering 8, n.º 11 (10 de septiembre de 2019): 1559–63. http://dx.doi.org/10.35940/ijitee.k1829.0981119.

The world of digital media is thriving by the day, and hence businesses have the urge to magnify it further to gain maximum financial benefit. This urge calls for more and more expansion in creating and developing new content, whether in the form of websites that aim at branding businesses or in the form of online newspapers and magazines. The medium of communication has changed over the last few decades: nowadays, people use social networks extensively for news updates. These networks aim to make social lives better, and today everyone uses social media, which contains unverified articles, posts, messages and news. Fake news now causes various issues, from satirical articles to fabricated news and planned government propaganda in certain outlets. Fake news and the absence of trust in the media are growing issues with immense consequences for our society. It is necessary to examine how techniques from computer science, machine learning and natural language processing help us detect fake news, which is now seen as one of the major threats to freedom of expression, journalism and the democracy of a country. In this research, a comprehensive way of detecting fake news using a machine learning model is presented; the model is trained on fake news data from the US election as well as recent Indian political fake news.
29

Katariya, Piyush, Vedika Gupta, Rohan Arora, Adarsh Kumar, Shreya Dhingra, Qin Xin and Jude Hemanth. "A deep neural network-based approach for fake news detection in regional language". International Journal of Web Information Systems, July 27, 2022. http://dx.doi.org/10.1108/ijwis-02-2022-0036.

Purpose: Current natural language processing algorithms are still lacking in judgment criteria, and these approaches often require deep knowledge of political or social contexts. The damage done by the spread of fake news in various sectors has attracted the attention of several low-level regional communities. However, such methods are widely developed for the English language, while low-resource languages remain unaddressed. This study aims to provide an analysis of Hindi fake news and to develop a referral system with advanced techniques to identify fake news in Hindi.
Design/methodology/approach: The model uses bidirectional long short-term memory (B-LSTM), compared with other models such as naïve Bayes, logistic regression, random forest, support vector machine, decision tree classifier, k-nearest neighbor, gated recurrent unit and long short-term memory.
Findings: The B-LSTM deep learning model yields an accuracy of 95.01%.
Originality/value: This study anticipates that the model will be a beneficial resource for building technologies that prevent the spread of fake news and will contribute to research on low-resource languages.
30

Sharma, Srishti, Mala Saraswat and Anil Kumar Dubey. "Fake news detection on Twitter". International Journal of Web Information Systems, September 19, 2022. http://dx.doi.org/10.1108/ijwis-02-2022-0044.

Purpose: Owing to the increased accessibility of the internet and related technologies, more and more individuals across the globe now turn to social media for their daily dose of news rather than traditional news outlets. With the global nature of social media and hardly any checks on the posting of content, the spread of fake news can increase exponentially. Businesses propagate fake news to improve their economic standing by influencing consumers and demand, and individuals spread fake news for personal gains like popularity and life goals. The content of fake news is diverse in terms of topics, styles and media platforms, and fake news attempts to distort truth with diverse linguistic styles while simultaneously mocking true news. All these factors together make fake news detection an arduous task. This work tries to check the spread of disinformation on Twitter.
Design/methodology/approach: This study carries out fake news detection using user characteristics and tweet textual content as features. For categorizing user characteristics, this study uses the XGBoost algorithm. To classify the tweet text, this study uses various natural language processing techniques to pre-process the tweets and then applies a hybrid convolutional neural network-recurrent neural network (CNN-RNN) and a state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) model.
Findings: This study uses a combination of machine learning and deep learning approaches for fake news detection, namely XGBoost, hybrid CNN-RNN and BERT. The models have been evaluated and compared with various baseline models to show that this approach effectively tackles the problem.
Originality/value: This study proposes a novel framework that exploits news content and social contexts to learn useful representations for predicting fake news. The model is based on a transformer architecture, which facilitates representation learning from fake news data and helps detect fake news easily. The study also investigates the relative importance of content and social context features for the task of detecting false news, and whether the absence of one of these categories of features hampers the effectiveness of the resultant system. This investigation can go a long way in aiding further research on the subject and fake news detection in the presence of extremely noisy or unusable data.
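The user-characteristics branch of such a system might be sketched with XGBoost as below. The feature set (followers, account age, verified flag) and the toy values are illustrative assumptions, not the paper's data.

```python
# Hypothetical sketch of classifying users by profile features with XGBoost.
import pandas as pd
from xgboost import XGBClassifier

users = pd.DataFrame({
    "followers":        [120, 98_000, 45],
    "account_age_days": [30, 2_900, 12],
    "verified":         [0, 1, 0],
})
labels = [1, 0, 1]                       # 1 = account spread fake news (toy)

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(users, labels)
print(clf.predict(users))
```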
31

"Fake News Detection Models and Performances". International Journal of Engineering and Advanced Technology 9, n.º 2 (30 de diciembre de 2019): 3754–57. http://dx.doi.org/10.35940/ijeat.b2928.129219.

Fake news detection has been a hard problem in the decades since the advent of social media. As misinformation, so-called fake news, continues to be distributed rapidly on the internet, reality has become increasingly shaped by false information; time after time, we have consumed or been exposed to inaccurate information. The last few years have seen much talk about guarding against misinformation, but not much progress in this direction. Social media is one medium where fake news spreads rapidly and impacts many people in a short span of time. Machine Learning and Natural Language Processing are the core techniques for detecting fake news and stopping it from spreading on social media, and many researchers are putting effort into this new challenge. This paper provides an insight into feature extraction techniques used for fake news detection on social media. Text feature extraction works by extracting information that represents the whole document without loss of essential content, while words considered irrelevant are ignored in order to improve accuracy. Term Frequency-Inverse Document Frequency (TF-IDF) and BoW (Bag of Words) are some of the important techniques used in text feature extraction, and they are discussed with their significance in this paper. One important approach, the Automated Readability Index, used to test the readability of the text when building the model, is also discussed. This paper will play a significant role for researchers interested in the area of fake news identification.
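The Automated Readability Index mentioned here has a simple closed form, 4.71 * (characters/words) + 0.5 * (words/sentences) - 21.43. A small implementation with naive word and sentence splitting (the tokenization heuristics are assumptions):

```python
# Automated Readability Index with naive word/sentence splitting.
import re

def automated_readability_index(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = text.split()
    chars = sum(len(w.strip(".,!?;:\"'()")) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / sentences) - 21.43

print(automated_readability_index(
    "Fake news spreads quickly. Detection models must therefore be robust."))
```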