Academic literature on the topic "AI Generated Text Detection"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "AI Generated Text Detection".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "AI Generated Text Detection"

1

Bhattacharjee, Amrita, and Huan Liu. "Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text?" ACM SIGKDD Explorations Newsletter 25, no. 2 (2024): 14–21. http://dx.doi.org/10.1145/3655103.3655106.

Full text
Abstract
Large language models (LLMs) such as ChatGPT are increasingly being used for various use cases, including text content generation at scale. Although detection methods for such AI-generated text exist already, we investigate ChatGPT's performance as a detector on such AI-generated text, inspired by works that use ChatGPT as a data labeler or annotator. We evaluate the zero-shot performance of ChatGPT on the task of human-written vs. AI-generated text detection, and perform experiments on publicly available datasets. We empirically investigate whether ChatGPT is symmetrically effective in detecting AI-generated and human-written text. Our findings provide insight into how ChatGPT and similar LLMs may be leveraged in automated detection pipelines by simply focusing on solving a specific aspect of the problem and deriving the rest from that solution. All code and data are available at https://github.com/AmritaBh/ChatGPT-as-Detector.
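The zero-shot setup described in this abstract boils down to prompting an LLM to classify a passage and parsing its one-word verdict. The sketch below is illustrative only: the prompt wording and the `parse_verdict` helper are assumptions, not the authors' exact implementation.

```python
def build_detection_prompt(passage: str) -> str:
    """Build a zero-shot classification prompt asking an LLM whether a
    passage is human-written or AI-generated (illustrative wording)."""
    return (
        "Decide whether the following text was written by a human or "
        "generated by an AI model. Answer with exactly one word: "
        "'human' or 'ai'.\n\n"
        f"Text:\n{passage}"
    )


def parse_verdict(reply: str) -> str:
    """Map a free-form model reply to one of the two labels."""
    return "ai" if "ai" in reply.strip().lower().split() else "human"


prompt = build_detection_prompt("The quick brown fox jumps over the lazy dog.")
# `prompt` would then be sent to the LLM; its reply goes through parse_verdict.
```

In practice the prompt string would be sent through whichever chat-completion API is in use; only the prompt construction and verdict parsing are shown here.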
2

Wang, Yu. "Survey for Detecting AI-generated Content." Advances in Engineering Technology Research 11, no. 1 (2024): 643. http://dx.doi.org/10.56028/aetr.11.1.643.2024.

Full text
Abstract
In the field of large language models (LLMs), rapid advancements have significantly improved text generation, which has blurred the distinction between AI-generated and human-written texts. These developments have sparked concerns about potential risks, such as disseminating fake information or engaging in academic cheating. As the responsible use of LLMs becomes imperative, the detection of AI-generated content has become a crucial task. Most existing surveys on AI-generated text (AIGT) detection have analysed the detection approaches from a computational perspective, with less attention to linguistic aspects. This survey seeks to provide a fresh perspective to drive progress in the area of LLM-generated text detection. Furthermore, in order to make the assessment more explainable, we emphasize the great importance of leveraging specific parameters or metrics to linguistically evaluate the candidate text.
3

Nykonenko, A. "How Text Transformations Affect AI Detection." Artificial Intelligence 29, no. 4 (2024): 233–41. https://doi.org/10.15407/jai2024.04.233.

Full text
Abstract
This study addresses the critical issue of AI writing detection, which currently plays a key role in deterring technology misuse, and proposes a foundation for the controllable and conscious use of AI. The ability to differentiate between human-written and AI-generated text is crucial for the practical application of any policies or guidelines. Current detection tools are unable to interpret their decisions in a way that is understandable to humans or provide any human-readable evidence or proof for their decisions. We assume that there should be a traceable footprint in LLM-generated texts that is invisible to the human eye but can be detected by AI detection tools, referred to as the AI footprint. Understanding its nature will help shed more light on the guiding principles at the core of AI detection technology and help build more trust in the technology in general. The main goal of this paper is to examine the AI footprint in text data generated by large language models (LLMs). To achieve this, we propose a new method for text transformation that should measurably decrease the AI footprint in the text data, impacting AI writing scores. We applied a set of stage-by-stage text transformations focused on decreasing meaningfulness by masking or removing words. Using a set of AI detectors, we measured the AI writing score as a proxy metric for assessing the impact of the proposed method. The results demonstrate a significant correlation between the severity of changes and the resulting impact on AI writing scores, highlighting the need for developing more reliable AI writing identification methods that are immune to attempts to hide the AI footprint through subtle changes.
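The stage-by-stage transformations this abstract describes, masking or removing words at increasing severity, can be sketched in a few lines. The `[MASK]` token and the severity schedule below are illustrative assumptions, not the paper's actual procedure.

```python
import random


def mask_words(text: str, severity: float, seed: int = 0) -> str:
    """Replace a fraction `severity` of the words with a mask token,
    mimicking a transformation stage that degrades meaningfulness."""
    rng = random.Random(seed)  # fixed seed keeps stages reproducible
    words = text.split()
    n_mask = int(len(words) * severity)
    for i in rng.sample(range(len(words)), n_mask):
        words[i] = "[MASK]"
    return " ".join(words)


# Produce progressively degraded versions of the same passage; each stage
# would then be scored by a set of AI detectors to observe the score shift.
passage = "Large language models generate fluent text at remarkable scale today"
stages = [mask_words(passage, s) for s in (0.1, 0.3, 0.5)]
```

Each stage's output would be fed to the detectors, and the AI writing score tracked against the severity parameter, as the abstract outlines.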
4

Singh, Viomesh, Bhavesh Agone, Aryan More, Aryan Mengawade, Atharva Deshmukh, and Atharva Badgujar. "SAVANA: A Robust Framework for Deepfake Video Detection and Hybrid Double Paraphrasing with Probabilistic Analysis Approach for AI Text Detection." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 2074–83. http://dx.doi.org/10.22214/ijraset.2024.65526.

Full text
Abstract
As generative AI has advanced at great speed, the need to detect AI-generated content, including text and deepfake media, has also increased. This research work proposes a hybrid detection method that combines double-paraphrasing-based consistency checks with probabilistic content analysis through natural language processing and machine learning algorithms for text, and advanced deepfake detection techniques for media. Our system hybridizes the double paraphrasing framework of SAVANA with probabilistic analysis toward high accuracy in AI-text detection in formats such as DOCX or PDF from diverse domains: academic text, business text, reviews, and media. Specifically, for detecting visual artifacts and spatiotemporal inconsistencies attributed to deepfakes within media applications, we exploit BlazeFace and EfficientNetB4 for feature extraction, classification, and detection of the respective deepfakes. Experimental results indicate that the hybrid model achieves up to 95% accuracy for AI-generated text detection and up to 96% accuracy for deepfake detection, compared with traditional models and standalone SAVANA-based methods. This approach positions our framework as an adaptive and reliable tool to detect AI-generated content in various contexts, thereby enriching content integrity in digital environments.
5

Vora, Vismay, et al. "A Multimodal Approach for Detecting AI Generated Content using BERT and CNN." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 691–701. http://dx.doi.org/10.17762/ijritcc.v11i9.8861.

Full text
Abstract
With the advent of generative AI technologies like LLMs and image generators, there will be an unprecedented rise in synthetic information that requires detection. While deepfake content can be identified by considering biological cues, this article proposes a technique for the detection of AI-generated text using vocabulary, syntactic, semantic, and stylistic features of the input data, and detects AI-generated images through the use of a CNN model. The performance of these models is also evaluated and benchmarked against other comparative models. The ML Olympiad Competition dataset from Kaggle is used in a BERT model for text detection, and the CNN model is trained on the CIFAKE dataset to detect AI-generated images. It can be concluded that in the upcoming era, AI-generated content will be omnipresent, and no single model will truly be able to detect all AI-generated content, especially as these technologies keep improving.
6

Subramaniam, Raghav. "Identifying Text Classification Failures in Multilingual AI-Generated Content." International Journal of Artificial Intelligence & Applications 14, no. 5 (2023): 57–63. http://dx.doi.org/10.5121/ijaia.2023.14505.

Full text
Abstract
With the rising popularity of generative AI tools, the nature of apparent classification failures by AI content detection software, especially across different languages, must be further observed. This paper aims to do this by testing OpenAI's "AI Text Classifier" on a set of human- and AI-generated texts in English, German, Arabic, Hindi, Chinese, and Swahili. Given the unreliability of existing tools for detection of AI-generated text, it is notable that specific types of classification failures often persist in slightly different ways across languages: misclassification of human-written content as "AI-generated" and vice versa may occur more frequently in some languages than in others. Our findings indicate that false negative labels are more likely to occur in English, whereas false positives are more likely to occur in Hindi and Arabic. There was an observed tendency for other languages to not be confidently labeled at all.
7

Sushma D S, Pooja C N, Varsha H S, Yasir Hussain, and P Yashash. "Detection and Classification of ChatGPT Generated Contents Using Deep Transformer Models." International Research Journal on Advanced Engineering Hub (IRJAEH) 2, no. 05 (2024): 1404–7. http://dx.doi.org/10.47392/irjaeh.2024.0193.

Full text
Abstract
AI advancements, particularly in neural networks, have brought about groundbreaking tools like text generators and chatbots. While these technologies offer tremendous benefits, they also pose serious risks such as privacy breaches, the spread of misinformation, and challenges to academic integrity. Previous efforts to distinguish between human and AI-generated text have been limited, especially with models like ChatGPT. To tackle this, we created a dataset containing both human and ChatGPT-generated text, using it to train and test various machine and deep learning models. Our results, particularly the high F1-score and accuracy achieved by the RoBERTa-based custom deep learning model and DistilBERT, indicate promising progress in this area. By establishing a robust baseline for detecting and classifying AI-generated content, our work contributes significantly to mitigating potential misuse of AI-powered text generation tools.
8

Alshammari, Hamed, and Khaled Elleithy. "Toward Robust Arabic AI-Generated Text Detection: Tackling Diacritics Challenges." Information 15, no. 7 (2024): 419. http://dx.doi.org/10.3390/info15070419.

Full text
Abstract
Current AI detection systems often struggle to distinguish between Arabic human-written text (HWT) and AI-generated text (AIGT) due to the small marks present above and below Arabic text called diacritics. This study introduces robust Arabic text detection models using Transformer-based pre-trained models, specifically AraELECTRA, AraBERT, XLM-R, and mBERT. Our primary goal is to detect AIGTs in essays and overcome the challenges posed by the diacritics that usually appear in Arabic religious texts. We created several novel datasets with diacritized and non-diacritized texts comprising up to 9666 HWT and AIGT training examples. We aimed to assess the robustness and effectiveness of the detection models on out-of-domain (OOD) datasets to gauge their generalizability. Our detection models trained on diacritized examples achieved up to 98.4% accuracy compared to GPTZero's 62.7% on the AIRABIC benchmark dataset. Our experiments reveal that, while including diacritics in training enhances the recognition of diacritized HWTs, duplicating examples with and without diacritics is inefficient despite the high accuracy achieved. Applying a dediacritization filter during evaluation significantly improved model performance, achieving optimal performance compared to both GPTZero and the detection models trained on diacritized examples but evaluated without dediacritization. Although our focus was on Arabic due to its writing challenges, our detector architecture is adaptable to any language.
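The dediacritization filter this abstract mentions is, at its core, the removal of Arabic combining marks (fatha, damma, kasra, shadda, sukun, and so on) while keeping the base letters. A minimal sketch using only the standard library, not the authors' actual filter:

```python
import unicodedata


def dediacritize(text: str) -> str:
    """Strip Arabic diacritics, which are Unicode combining marks,
    while leaving the base letters intact."""
    # NFD separates any precomposed characters into base + combining marks;
    # Arabic harakat are already standalone combining code points.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


# Example: a fully vocalized word loses its harakat but keeps its letters.
diacritized = "\u0645\u064f\u062d\u064e\u0645\u0651\u064e\u062f"  # muhammad, vocalized
plain = dediacritize(diacritized)
```

Because `unicodedata.combining` is language-agnostic, the same filter drops combining marks in any script, which mirrors the abstract's note that the approach is adaptable beyond Arabic.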
9

Legaspi, Jeremie Busio, Roan Joyce Ohoy Licuben, Emmanuel Alegado Legaspi, and Joven Aguinaldo Tolentino. "Comparing AI Detectors: Evaluating Performance and Efficiency." International Journal of Science and Research Archive 12, no. 2 (2024): 833–38. http://dx.doi.org/10.30574/ijsra.2024.12.2.1276.

Full text
Abstract
The widespread utilization of AI tools such as ChatGPT has become increasingly prevalent among learners, posing a threat to academic integrity. This study seeks to evaluate the capability and efficiency of AI detection tools in distinguishing between human-authored and AI-generated works. Three paragraph-length works on "AutoCAD and Architecture" were generated through ChatGPT, and three human-written works were subjected to evaluation. AI detection tools such as GPTZero, Copyleaks, and Writer AI were used to evaluate these paragraphs. Labels such as "Human/Human Text/Human Generated Text" and "AI/AI Content Detected" were used to assess the performance of the three AI detection tools. Findings indicate that GPTZero and Copyleaks have higher reliability in identifying human-authored and AI-generated work, while Writer AI classified all tested outputs as "Human Generated Content", showing less sensitivity in distinguishing human-authored from AI-generated work. Findings indicate that the use of AI detection tools should be accompanied by thorough validation and cross-referencing of results.
10

Kim, Min-Gyu, and Heather Desaire. "Detecting the Use of ChatGPT in University Newspapers by Analyzing Stylistic Differences with Machine Learning." Information 15, no. 6 (2024): 307. http://dx.doi.org/10.3390/info15060307.

Full text
Abstract
Large language models (LLMs) have the ability to generate text by stringing together words from their extensive training data. The leading AI text generation tool built on LLMs, ChatGPT, has quickly grown a vast user base since its release, but the domains in which it is being heavily leveraged are not yet known to the public. To understand how generative AI is reshaping print media and the extent to which it has already been adopted, methods to distinguish human-generated text from that generated by AI are required. Since college students have been early adopters of ChatGPT, we sought to study the presence of generative AI in newspaper articles written by collegiate journalists. To achieve this objective, an accurate AI detection model is needed. Herein, we analyzed university newspaper articles from different universities to determine whether ChatGPT was used to write or edit the news articles. We developed a detection model using classical machine learning and used the model to detect AI usage in the news articles. The detection model showed 93% accuracy on the training data and similar performance on the test set, demonstrating effectiveness in AI detection above existing state-of-the-art detection tools. Finally, the model was applied to the task of searching for generative AI usage in 2023, and we found that ChatGPT was not used to any appreciable extent to write or revise university news articles at the schools we studied.
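Classical-ML detectors of the kind this abstract describes typically feed hand-crafted stylistic features into a standard classifier. The two features below (average sentence length and type-token ratio) are common stylometric choices used here purely as an illustration; they are assumptions, not the authors' published feature set.

```python
import re


def stylistic_features(text: str) -> dict:
    """Compute two simple stylometric features often fed to classical
    ML classifiers (illustrative; not the paper's exact feature set)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # longer, more uniform sentences are one signal detectors examine
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # vocabulary diversity: unique words divided by total words
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }


feats = stylistic_features("The cat sat. The cat ran.")
```

A feature dictionary like this would be vectorized per article and passed to any off-the-shelf classical classifier (e.g. logistic regression) for training and prediction.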
More sources
