Journal articles on the topic 'ChatGPT'

Consult the top 50 journal articles for your research on the topic 'ChatGPT.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Li, Runtian. "The Investigation Related to Application of ChatGPT in Language Teaching." Lecture Notes in Education Psychology and Public Media 35, no. 1 (January 3, 2024): 295–99. http://dx.doi.org/10.54254/2753-7048/35/20232152.

Full text
Abstract:
As one of the foremost commercial artificial intelligence models, ChatGPT has garnered extensive attention and use within the education sector, while controversies have also arisen in scholarly circles regarding its potential adverse impacts. This study examines the operational principles of ChatGPT and illustrates its applications across various language teaching scenarios. In the investigation of ChatGPT-assisted grammar instruction, the role and constraints of ChatGPT's grammar correction for Chinese learners are analyzed. Despite ChatGPT's accuracy in educational settings, it falls short in comprehending the requirements of novice learners, and using it as a self-study tool can leave students confused by difficult material; however, it can significantly improve teachers' teaching efficiency. For ChatGPT-assisted pronunciation teaching, this paper collected survey questionnaires from volunteers and summarized the conclusions: most volunteers support the effectiveness of ChatGPT in oral teaching, and compared with traditional voice assistants such as Siri, ChatGPT's pronunciation is more accurate and its sentences are closer to those of real humans. The paper summarizes ChatGPT's application in the field of language teaching with a focus on these two scenarios and discusses its shortcomings at the current stage. The future prospects of ChatGPT in language teaching remain broad, and as the technology develops, the combination of ChatGPT and language teaching will become even closer.
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Chen. "The investigation of application related to ChatGPT in foreign language learning." Applied and Computational Engineering 35, no. 1 (January 22, 2024): 110–15. http://dx.doi.org/10.54254/2755-2721/35/20230376.

Full text
Abstract:
The integration of Chat Generative Pre-trained Transformer (ChatGPT) into education has garnered significant attention recently. Because of its capability to answer inquiries, the feasibility of ChatGPT's implementation in education has been examined. This review explores the potential benefits of ChatGPT in foreign language learning. First, different methods used to investigate ChatGPT's potential for improving language skills are presented. The review then discusses several application scenarios of ChatGPT, along with its drawbacks and limitations. Despite recognized concerns, the viability of ChatGPT as a valuable instrument for foreign language education remains substantiated, and the technology shows promise in offering personalized and interactive language practice. While acknowledging potential pitfalls, the study underscores the need for further investigation to harness ChatGPT's potential optimally. By emphasizing ongoing research and development, this review envisions a future in which ChatGPT contributes significantly to language education, underscoring the necessity of carefully exploring its evolving capabilities.
APA, Harvard, Vancouver, ISO, and other styles
3

Ponzo, Valentina, Ilaria Goitre, Enrica Favaro, Fabio Dario Merlo, Maria Vittoria Mancino, Sergio Riso, and Simona Bo. "Is ChatGPT an Effective Tool for Providing Dietary Advice?" Nutrients 16, no. 4 (February 6, 2024): 469. http://dx.doi.org/10.3390/nu16040469.

Full text
Abstract:
The chatbot Chat Generative Pretrained Transformer (ChatGPT) is becoming increasingly popular among patients for searching health-related information. Prior studies have raised concerns regarding accuracy in offering nutritional advice. We investigated in November 2023 ChatGPT’s potential as a tool for providing nutritional guidance in relation to different non-communicable diseases (NCDs). First, the dietary advice given by ChatGPT (version 3.5) for various NCDs was compared with guidelines; then, the chatbot’s capacity to manage a complex case with several diseases was investigated. A panel of nutrition experts assessed ChatGPT’s responses. Overall, ChatGPT offered clear advice, with appropriateness of responses ranging from 55.5% (sarcopenia) to 73.3% (NAFLD). Only two recommendations (one for obesity, one for non-alcoholic-fatty-liver disease) contradicted guidelines. A single suggestion for T2DM was found to be “unsupported”, while many recommendations for various NCDs were deemed to be “not fully matched” to the guidelines despite not directly contradicting them. However, when the chatbot handled overlapping conditions, limitations emerged, resulting in some contradictory or inappropriate advice. In conclusion, although ChatGPT exhibited a reasonable accuracy in providing general dietary advice for NCDs, its efficacy decreased in complex situations necessitating customized strategies; therefore, the chatbot is currently unable to replace a healthcare professional’s consultation.
APA, Harvard, Vancouver, ISO, and other styles
4

Tangadulrat, Pasin, Supinya Sono, and Boonsin Tangtrakulwanich. "Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions." JMIR Medical Education 9 (December 22, 2023): e50658. http://dx.doi.org/10.2196/50658.

Full text
Abstract:
Background ChatGPT is a well-known large language model–based chatbot. It could be used in the medical field in many aspects. However, some physicians are still unfamiliar with ChatGPT and are concerned about its benefits and risks. Objective We aim to evaluate the perception of physicians and medical students toward using ChatGPT in the medical field. Methods A web-based questionnaire was sent to medical students, interns, residents, and attending staff with questions regarding their perception toward using ChatGPT in clinical practice and medical education. Participants were also asked to rate their perception of ChatGPT’s generated response about knee osteoarthritis. Results Participants included 124 medical students, 46 interns, 37 residents, and 32 attending staff. After reading ChatGPT’s response, 132 of the 239 (55.2%) participants had a positive rating about using ChatGPT for clinical practice. The proportion of positive answers was significantly lower in graduated physicians (48/115, 42%) compared with medical students (84/124, 68%; P<.001). Participants listed a lack of a patient-specific treatment plan, updated evidence, and a language barrier as ChatGPT’s pitfalls. Regarding using ChatGPT for medical education, the proportion of positive responses was also significantly lower in graduate physicians (71/115, 62%) compared to medical students (103/124, 83.1%; P<.001). Participants were concerned that ChatGPT’s response was too superficial, might lack scientific evidence, and might need expert verification. Conclusions Medical students generally had a positive perception of using ChatGPT for guiding treatment and medical education, whereas graduated doctors were more cautious in this regard. Nonetheless, both medical students and graduated doctors positively perceived using ChatGPT for creating patient educational materials.
APA, Harvard, Vancouver, ISO, and other styles
5

Vasudevan, Asokan, Alma Vorfi Lama, and Zohaib Hassan Sain. "The Game-Changing Impact of AI Chatbots on Education ChatGPT and Beyond." Journal of Information Systems and Technology Research 3, no. 1 (January 31, 2024): 38–44. http://dx.doi.org/10.55537/jistr.v3i1.770.

Full text
Abstract:
ChatGPT, an AI-driven chatbot, delivers coherent and valuable responses by analysing extensive data sets. This article explores the profound impact of ChatGPT on contemporary education, as discussed by prominent academics, scientists, distinguished researchers, and engineers. The study delves into ChatGPT's capabilities, its application in the education sector, and the identification of potential concerns and challenges. Initial assessments reveal variations in ChatGPT's performance across diverse subjects, including finance, coding, mathematics, and general public queries. While ChatGPT can assist educators by generating instructional content, providing suggestions, serving as an online educator for learners, answering questions and revolutionising education through smartphones and IoT devices, it has shortcomings. These include the risk of generating inaccurate or false information and evading plagiarism detection systems, a critical concern for maintaining originality. The commonly reported "hallucinations" in Generative AI, which applies to ChatGPT, also limit its utility in situations where precision is paramount. A notable deficiency in ChatGPT is the absence of a stochastic measure to facilitate authentic and empathetic communication with users. The article suggests that educational institutions' academic regulations and evaluation practices should be updated if ChatGPT is integrated into the educational toolkit. To effectively navigate the transformative effects of ChatGPT in the learning environment, it is imperative to educate teachers and students about its capabilities and limitations.
APA, Harvard, Vancouver, ISO, and other styles
6

Hofert, Marius. "Correlation Pitfalls with ChatGPT: Would You Fall for Them?" Risks 11, no. 7 (June 21, 2023): 115. http://dx.doi.org/10.3390/risks11070115.

Full text
Abstract:
This paper presents an intellectual exchange with ChatGPT, an artificial intelligence chatbot, about correlation pitfalls in risk management. The exchange takes place in the form of a conversation that provides ChatGPT with context. The purpose of this conversation is to evaluate ChatGPT’s understanding of correlation pitfalls, to offer readers an engaging alternative for learning about them, but also to identify related risks. Our findings indicate that ChatGPT possesses solid knowledge of basic and mostly non-technical aspects of the topic, but falls short in terms of the mathematical rigor needed to avoid certain pitfalls or completely comprehend the underlying concepts. Nonetheless, we suggest ways in which ChatGPT can be utilized to enhance one’s own learning process.
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Rachel, Alex Margolis, Joe Barile, Kyle Han, Saia Kalash, Helen Papaioannou, Anna Krevskaya, and Ruth Milanaik. "Challenging the Chatbot: An Assessment of ChatGPT's Diagnoses and Recommendations for DBP Case Studies." Journal of Developmental & Behavioral Pediatrics 45, no. 1 (January 2024): e8-e13. http://dx.doi.org/10.1097/dbp.0000000000001255.

Full text
Abstract:
Objective: Chat Generative Pretrained Transformer-3.5 (ChatGPT) is a publicly available and free artificial intelligence chatbot that logs billions of visits per day; parents may rely on such tools for developmental and behavioral medical consultations. The objective of this study was to determine how ChatGPT evaluates developmental and behavioral pediatrics (DBP) case studies and makes recommendations and diagnoses. Methods: ChatGPT was asked to list treatment recommendations and a diagnosis for each of 97 DBP case studies. A panel of 3 DBP physicians evaluated ChatGPT's diagnostic accuracy and scored treatment recommendations on accuracy (5-point Likert scale) and completeness (3-point Likert scale). Physicians also assessed whether ChatGPT's treatment plan correctly addressed cultural and ethical issues for relevant cases. Scores were analyzed using Python, and descriptive statistics were computed. Results: The DBP panel agreed with ChatGPT's diagnosis for 66.2% of the case reports. The mean accuracy score of ChatGPT's treatment plan was deemed by physicians to be 4.6 (between entirely correct and more correct than incorrect), and the mean completeness was 2.6 (between complete and adequate). Physicians agreed that ChatGPT addressed relevant cultural issues in 10 out of the 11 appropriate cases and the ethical issues in the single ethical case. Conclusion: While ChatGPT can generate a comprehensive and adequate list of recommendations, the diagnosis accuracy rate is still low. Physicians must advise caution to patients when using such online sources.
APA, Harvard, Vancouver, ISO, and other styles
8

Verma, Aditya A. "Augmenting AI’s Prospects: ChatGPT Future Insights." International Journal of Scientific Research in Engineering and Management 08, no. 04 (April 11, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem30633.

Full text
Abstract:
Building on GPT-3, the large language model ChatGPT was released by OpenAI on November 30, 2022. The AI chatbot responds to user requests conversationally in real time, and the naturalness of its responses signals a significant change in how we will use AI-generated data in our daily lives. For a student studying software engineering, ChatGPT has several applications, including assessment preparation, translation, and writing specific source code; it can even handle more difficult parts of scientific writing, such as paraphrasing and literature summaries. The purpose of this position paper is therefore to examine possible methods for incorporating ChatGPT into higher education, concentrating on papers that discuss ChatGPT's impact on our day-to-day lives. Keywords: Artificial Intelligence (AI); ChatGPT; educational technology; university education.
APA, Harvard, Vancouver, ISO, and other styles
9

Kashish, Hitakshi, Neha Sharma, and Neeru Jindal. "Still, Man Is The Most Remarkable Being." International Journal of Engineering Technology and Management Sciences 7, no. 6 (2023): 178–85. http://dx.doi.org/10.46647/ijetms.2023.v07i06.028.

Full text
Abstract:
ChatGPT and Google, two natural language processing systems, have attracted media attention. Google is a search engine that finds specific information online, while ChatGPT is an AI chatbot that understands and responds to natural language. The study in this paper looks at how users of Google and ChatGPT behave differently when using search engines and chatbots to find information. Both ChatGPT and Google have advantages and disadvantages. Because data inaccuracies can cause bias, accuracy concerns were especially important. ChatGPT's readability scores suggest that its answers are frequently unsuitable for people with limited literacy. Google occasionally returns results driven by for-profit organizations, whereas ChatGPT responds to inquiries with more pertinent information. Google does share the source, but one may need to use one's own imagination to improve on it; ChatGPT does not permit this because it presents the result immediately. Although this study carefully investigated ChatGPT user reactions, it has a number of limitations that should be taken into account to improve future research.
APA, Harvard, Vancouver, ISO, and other styles
10

Sirohi, Dr Sanjula. "Utilization of ChatGPT in Dental Healthcare." Academia Journal of Medicine 7, no. 1 (May 21, 2024): 61–64. http://dx.doi.org/10.62245/ajm.v7.i1.10.

Full text
Abstract:
The integration of artificial intelligence (AI) in the field of dentistry has been gaining prominence, with the Chatbot Generative Pretrained Transformer (ChatGPT) emerging as a pivotal tool. ChatGPT's applications span a myriad of functionalities, including diagnostic aid, educational support, and patient interaction.
APA, Harvard, Vancouver, ISO, and other styles
11

Li, Changhao, Weihong Yan, and Xiyan Zhang. "Competitive Analysis and Valuation Application of ChatGPT in the Artificial Intelligence Industry." Highlights in Business, Economics and Management 19 (November 2, 2023): 434–39. http://dx.doi.org/10.54097/hbem.v19i.11979.

Full text
Abstract:
In recent years, with the rapid development of GPU-based computing power and advances in artificial intelligence, a well-known large natural language processing model named ChatGPT has been developed. It is a chatbot in the artificial intelligence industry, built on artificial intelligence technology, with a wide range of applications. This study aims to assess the competitive position and potential value of ChatGPT in the artificial intelligence market through market competition analysis and a valuation application. We first introduce the technical features and background of ChatGPT. Then, by analyzing its competitors, we assess ChatGPT's competitive advantage in the market. Finally, we apply the relative valuation method to estimate ChatGPT's market potential. Based on this evaluation, the competitive advantage of ChatGPT in the industry is revealed. Overall, these results shed light on further exploration of valuation for the artificial intelligence industry.
APA, Harvard, Vancouver, ISO, and other styles
12

Lee, Yongjik, and Hyoung-Sook Cho. "l." Educational Research Institute 43, no. 3 (February 29, 2024): 877–94. http://dx.doi.org/10.34245/jed.43.3.877.

Full text
Abstract:
Since the introduction of generative artificial intelligence, teacher-training institutions have paid close attention to pre-service teachers' use of ChatGPT. This study explored pre-service English teachers' perceptions of using ChatGPT for class assignments. An online survey was administered to a total of 75 participants. The results showed that more than half of the pre-service teachers perceived ChatGPT as a useful tool for saving writing time and completing assignments efficiently. They used ChatGPT most often in the pre-writing stage to generate ideas and gather information, and in the post-writing stage for revising and checking their work. However, the pre-service teachers also raised concerns about the quality, sources, and reliability of the information generated by ChatGPT. In addition, participants with prior experience of using ChatGPT were significantly more likely to say they would continue to use it in the future. Teacher-training institutions therefore need to provide diverse learning experiences by designing assignments that use AI programs such as ChatGPT and presenting them to pre-service teachers, and they need to offer clear guidelines for using ChatGPT efficiently within their education programs.
APA, Harvard, Vancouver, ISO, and other styles
13

Koh, Sungran. "An Analysis of ChatGPT’s Language Translation Based on the Korean Film Minari." STEM Journal 24, no. 4 (November 30, 2023): 1–14. http://dx.doi.org/10.16875/stem.2023.24.4.1.

Full text
Abstract:
The emergence of ChatGPT as an AI chatbot has marked significant progress in multiple-language translation. However, concerns persist regarding its capability to precisely translate various languages. This research aims to find a more effective method for improving the quality of Korean-to-English translation by using ChatGPT for EFL learners. To achieve this, EFL learners were asked to make questionnaires to assess the overall awareness and understanding of ChatGPT among their peers. Subsequently, they were instructed to select among Korean-to-English translations generated by ChatGPT and Korean-to-English translations done by human translators, providing reasons for their choices. The script of the film Minari (Lee, 2021) was collected and ChatGPT was used to translate it from Korean to English. These translations, both by the human translator and ChatGPT, were manually evaluated and a comparative analysis of the two translations was conducted. Finally, any errors in ChatGPT's Korean-to-English translations were addressed by providing additional prompts to achieve the best possible translations. The result showed that overall translations were significantly enhanced by the prompts and demonstrated accuracy in translation. This finding demonstrates that EFL learners should make the most use of ChatGPT as a language learning and translation tool to improve their language communication skills.
APA, Harvard, Vancouver, ISO, and other styles
14

Tsang, Ricky. "Practical Applications of ChatGPT in Undergraduate Medical Education." Journal of Medical Education and Curricular Development 10 (January 2023): 238212052311784. http://dx.doi.org/10.1177/23821205231178449.

Full text
Abstract:
ChatGPT is a chatbot developed by OpenAI that has garnered significant attention for achieving at or near a passing standard on the United States Medical Licensing Exam (USMLE). Currently, researchers and users are exploring ChatGPT's broad range of potential applications in academia, business, programming, and beyond. We attempt to outline how ChatGPT may be applied to support undergraduate medical education during the preclinical and clinical years, and highlight possible concerns regarding its use that necessitate the creation of formal policies and training by medical schools.
APA, Harvard, Vancouver, ISO, and other styles
15

B, Ravisankar, Shruti K.S, Rajeswary K, and Kalaivani S. "Assessment of Knowledge, Attitude, and Practices among General Population Toward Utilizing ChatGPT: A Cross-Sectional Study." International Journal of Contemporary Dental Research 1, no. 2 (April 27, 2023): 19–26. http://dx.doi.org/10.62175/apdch2310.

Full text
Abstract:
ChatGPT is a chatbot developed by OpenAI and launched on November 30, 2022. Based on a large language model, it enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Customers in a variety of fields have utilised ChatGPT, an artificial intelligence-based programme. The purpose of this study was to ascertain the general public's awareness, attitudes, and usage habits regarding ChatGPT. Materials and Methods: Residents of Chennai, Tamil Nadu, were invited to participate between March and June 2023, and the survey evaluated the general public's knowledge, attitudes, and habits regarding ChatGPT. The participants' average age was 27.7 ± 0.46 years, and 76.2% of them reported experience using ChatGPT. Results: Fifty-one percent of the participants thought that using ChatGPT could potentially impair their cognitive abilities, even though 51.4% of them did not use it frequently. However, a sizable portion of users (40.0%) expressed no worries about running into privacy or security problems when utilising ChatGPT. While gender and education level were statistically significant indicators, age and occupation had no significant impact on ChatGPT behaviors. Conclusion: The survey concludes that the majority of participants were aware of ChatGPT and trusted it to understand and respond to user inquiries. They also showed a moderate degree of trust, saying they were confident in the veracity of ChatGPT's material. It is noteworthy that several users expressed concerns about ChatGPT usage potentially impairing cognitive abilities.
APA, Harvard, Vancouver, ISO, and other styles
16

Ali, Hassnian, and Ahmet Faruk Aysan. "What will ChatGPT revolutionize in the financial industry?" Modern Finance 1, no. 1 (November 17, 2023): 116–30. http://dx.doi.org/10.61351/mf.v1i1.67.

Full text
Abstract:
The launch of the OpenAI chatbot ChatGPT in November 2022 has generated widespread excitement around Generative Artificial Intelligence (AI). While researchers have explored ChatGPT's ability to produce content and respond to input, our study takes a different approach and examines its use cases in the financial industry. We aim to understand what ChatGPT offers the financial industry and how it differs from existing banking and financial chatbots. Financial institutions can use ChatGPT for a variety of purposes, including customer engagement, personalization, up-selling and cross-selling, stock forecasting, product development, and financial education. By focusing on the potential of ChatGPT in finance, we hope to spark discussions about its applications in other domains and explore the possibilities of a larger revolution in the future. Finally, this study identifies the challenges associated with the use of generative AI and LLM-based chatbots in the financial industry and provides recommendations for addressing these challenges.
APA, Harvard, Vancouver, ISO, and other styles
17

Das, Soumya Ranjan, and Madhusudan J.V. "Perceptions of Higher Education Students towards ChatGPT Usage." International Journal of Technology in Education 7, no. 1 (February 4, 2024): 86–106. http://dx.doi.org/10.46328/ijte.583.

Full text
Abstract:
In the context of contemporary technological advancements, Artificial Intelligence (AI) has gained considerable significance in the field of education. In light of ChatGPT's growing popularity, this research aims to explore how higher education students perceive the use of ChatGPT in academics, examining factors influencing its acceptance as well as its benefits, limitations, and ethical concerns. The study applied a survey design, collecting data through Google Forms from undergraduate, postgraduate, and doctoral students. A total of 162 participants who were using ChatGPT were selected through convenience sampling. The findings indicate a positive perception among respondents regarding ChatGPT's academic applications, its benefits, limitations, acceptance factors, and ethical concerns. The study also reveals that higher education students' perception of ChatGPT usage is not significantly influenced by gender, academic program, or stream. The insights gained from this study hold significant implications for the responsible and effective integration of ChatGPT in higher education environments, taking into account its perceived benefits and ethical concerns.
APA, Harvard, Vancouver, ISO, and other styles
18

Padhiyar, Radhika, and Sandhya Modha. "Impact of the Usage of ChatGPT on Creativity Among Postgraduate Student." International Journal of Sustainable Social Science (IJSSS) 2, no. 1 (March 1, 2024): 83–92. http://dx.doi.org/10.59890/ijsss.v2i1.1376.

Full text
Abstract:
This study looks into how postgraduate students' levels of creativity are affected by their use of ChatGPT. We investigate the possible relationship between regular ChatGPT use and the promotion or inhibition of creative thinking in postgraduate students using surveys, interviews, and creative assessments. By taking into account variables such as the frequency and type of ChatGPT interactions, the study aims to shed light on the consequences for teaching strategies and for encouraging creativity in higher education. It also considers ChatGPT's ability to be fine-tuned for particular tasks or domains by examining its customization options. Institutional regulations and assessment procedures in colleges and universities need to be updated promptly, and instructor training and student education are equally crucial for addressing ChatGPT's effects on the educational environment.
APA, Harvard, Vancouver, ISO, and other styles
19

Pieterse, Heloise. "Friend or Foe – The Impact of ChatGPT on Capture the Flag Competitions." International Conference on Cyber Warfare and Security 19, no. 1 (March 21, 2024): 268–76. http://dx.doi.org/10.34190/iccws.19.1.1992.

Full text
Abstract:
ChatGPT, an artificial intelligence (AI)-based chatbot, has taken the world by storm since the technology’s release to the public in November 2022. The first reactions were awe and amazement as ChatGPT presented the capability to instantly respond to various text-based questions following a conversational approach. However, it is ChatGPT’s ability to complete more advanced tasks, such as supplying source code to programming-related questions or generating complete articles focusing on a specific topic, which has caused eyebrows to be raised. The capabilities offered by ChatGPT, fuelled by popularity and easy accessibility, have introduced several new challenges for the academic sector. One such challenge is the concept of AI-assisted cheating, where students utilise chatbots, such as ChatGPT, to answer specific questions or complete assignments. Although various research studies have explored the impact of ChatGPT on university education, few studies have discussed the influence of ChatGPT on Capture the Flag (CTF) competitions. CTF competitions offer a popular platform to promote cybersecurity education, allowing students to gain hands-on experience solving cybersecurity challenges in a fun but controlled environment. The typical style of CTF challenges usually follows a question-answer format, which offers students the ideal opportunity to enlist the assistance of ChatGPT. This paper investigates the ability of ChatGPT to assist and aid students in solving CTF challenges. The exploratory study involves past CTF challenges across various categories and the questioning of ChatGPT in an attempt to solve the challenges. The outcome of the study reveals that although ChatGPT can assist students with challenges during CTF competitions, the assistance that can be offered is minimal. Instead of producing answers to CTF challenges, ChatGPT can merely offer insight or guidance regarding the questions asked.
APA, Harvard, Vancouver, ISO, and other styles
20

Tiara, Tiara, and Fandi Yulian Pamuji. "KOMPARASI USABILITY CHATGPT VS GEMINI AI BERDASARKAN ISO/IEC 9126 DAN NIELSEN MODEL MENGGUNAKAN METODE USABILITY TESTING." JUSIM (Jurnal Sistem Informasi Musirawas) 9, no. 1 (June 22, 2024): 89–100. http://dx.doi.org/10.32767/jusim.v9i1.2285.

Full text
Abstract:
Artificial Intelligence (AI), especially chatbot-based AI, is currently trending and developing very rapidly, giving rise to debate over which AI chatbot is best. The researchers therefore compared the two most widely used chatbots according to data from Visual Capitalist, namely ChatGPT and Gemini AI, to determine which is better, particularly in terms of usability, based on scores and percentages. The method used in this research is usability testing, carried out by running task scenarios and distributing a questionnaire whose questions combine parameters from the Nielsen Model and ISO/IEC 9126. The comparison found that ChatGPT users completed the task scenarios in only 0.07 per second, while Gemini AI users took 0.10 per second, showing that ChatGPT is superior to Gemini AI in terms of efficiency. For the learnability variable, Gemini AI was superior with a score of 92% compared with ChatGPT's 90%, both in the above-average category. For the satisfaction variable, Gemini AI was again superior with a score of 85.52%, in the very good category, while ChatGPT obtained 81.86%, in the good category. Overall, therefore, Gemini AI outperforms ChatGPT: Gemini AI leads on the learnability and satisfaction variables, while ChatGPT leads on efficiency. This research contributes to the field of software quality control, particularly for the objects studied, as a comparative study, and is expected to provide insight for decision-making when choosing between ChatGPT and Gemini AI. Keywords: Usability; Usability Testing; ChatGPT; Gemini AI
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Zeyu. "ChatGPT - A new milestone in the field of education." Applied and Computational Engineering 35, no. 1 (January 22, 2024): 129–33. http://dx.doi.org/10.54254/2755-2721/35/20230380.

Full text
Abstract:
With the rapid development of artificial intelligence, natural language processing models like ChatGPT hold creative potential for applications in education. This study focuses on ChatGPT and explores its applications in the field of education. This research employs literature review and case analysis methods to systematically investigate the applications of ChatGPT in instructional assistance, personalized education, and writing support. By analyzing three cases involving ChatGPT's professional competence in medical exams, the construction of personalized learning systems, and writing support, the paper emphasizes ChatGPT's advantages in education, future development directions, and ethical and privacy concerns. The study finds that ChatGPT possesses strong semantic understanding and analytical capabilities. Within the context of general applicability, it can provide accurate answers to questions posed by users with certain specialized backgrounds. The paper not only addresses ChatGPT's role as a student's auxiliary tool but also explores collaborative models between ChatGPT and teachers. However, it is important to note that ChatGPT is currently limited by incomplete knowledge and inadequate emotional understanding. Additionally, in terms of ethics and privacy, considerations regarding information protection and moral aspects within the educational application of ChatGPT are crucial. In the future, it is worth further exploring specialized models for ChatGPT's tasks in both teacher and student roles, along with enhancing ChatGPT's emotional support capabilities.
APA, Harvard, Vancouver, ISO, and other styles
22

Hofert, Marius. "Assessing ChatGPT’s Proficiency in Quantitative Risk Management." Risks 11, no. 9 (September 19, 2023): 166. http://dx.doi.org/10.3390/risks11090166.

Full text
Abstract:
The purpose and novelty of this article is to investigate the extent to which artificial intelligence chatbot ChatGPT can grasp concepts from quantitative risk management. To this end, we enter a scholarly discussion with ChatGPT in the form of questions and answers, and analyze the responses. The questions are classics from undergraduate and graduate courses on quantitative risk management, and address risk in general, risk measures, time series, extremes and dependence. As a result, the non-technical aspects of risk (such as explanations of various types of financial risk, the driving factors underlying the financial crisis of 2007 to 2009, or a basic introduction to the Basel Framework) are well understood by ChatGPT. More technical aspects (such as mathematical facts), however, are often inaccurate or wrong, partly in rather subtle ways not obvious without expert knowledge, which we point out. The article concludes by providing guidance on the types of applications for which consulting ChatGPT can be useful in order to enhance one’s own knowledge of quantitative risk management (e.g., using ChatGPT as an educational tool to test one’s own understanding of an already grasped concept, or using ChatGPT as a practical tool for identifying risks just not on one’s own radar), and points out those applications for which the current version of ChatGPT should not be invoked (e.g., for learning mathematical concepts, or for learning entirely new concepts for which one has no basis of comparison to assess ChatGPT’s capabilities).
APA, Harvard, Vancouver, ISO, and other styles
23

Huh, Sun. "Can we trust AI chatbots’ answers about disease diagnosis and patient care?" Journal of the Korean Medical Association 66, no. 4 (April 10, 2023): 218–22. http://dx.doi.org/10.5124/jkma.2023.66.4.218.

Full text
Abstract:
Background: Several chatbots that utilize large language models now exist. As a particularly well-known example, ChatGPT employs an autoregressive modeling process to generate responses, predicting the next word based on previously derived words. Consequently, instead of deducing a correct answer, it arranges the most frequently appearing words in the learned data in order. Optimized for interactivity and content generation, it presents a smooth and plausible context, regardless of whether the content it presents is true. This report aimed to examine the reliability of ChatGPT, an artificial intelligence (AI) chatbot, in diagnosing diseases and treating patients, how to interpret its responses, and directions for future development. Current Concepts: Ten published case reports from Korea were analyzed to evaluate the efficacy of ChatGPT, which was asked to describe the correct diagnosis and treatment. ChatGPT answered 3 cases correctly after being provided with the patient's symptoms, findings, and medical history. The accuracy rate increased to 7 out of 10 after adding laboratory, pathological, and radiological results. In one case, ChatGPT did not provide appropriate information about suitable treatment, and its response contained inappropriate content in 4 cases. In contrast, ChatGPT recommended appropriate measures in 4 cases. Discussion and Conclusion: ChatGPT's responses to the 10 case reports could have been better. To utilize ChatGPT efficiently and appropriately, users should possess sufficient knowledge and skills to determine the validity of its responses. AI chatbots based on large language models will progress significantly, but physicians must be vigilant in using these tools in practice.
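As background for the autoregressive generation described in this abstract (a standard formulation, not taken from the cited article), the next-word prediction it refers to can be written as the factorization

P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1}),

so each token is drawn from a distribution conditioned only on the tokens produced so far; this is why fluent, plausible-sounding answers are not guaranteed to be factually correct.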
APA, Harvard, Vancouver, ISO, and other styles
24

Lee, Hyun, Zahra Hamedi, David Oliver, Hui Lin, Uzma Athar, Hiba Zainab Vohra, Peter Daniels, and Sejal Kothadia. "Assessing ChatGPT's potential as a clinical resource for medical oncologists: An evaluation with board-style questions and real-world patient cases." Journal of Clinical Oncology 42, no. 16_suppl (June 1, 2024): e13628-e13628. http://dx.doi.org/10.1200/jco.2024.42.16_suppl.e13628.

Full text
Abstract:
e13628 Background: ChatGPT is a conversational artificial intelligence (AI) model that learns from massive text-based datasets and then responds to user input, which often involves completing tasks or answering questions. Recent studies showed ChatGPT’s success in passing multiple specialty medical licensing and board examinations, showcasing its promising capabilities in the medical domain. Here, we investigated ChatGPT's potential as a swift and reliable information source for medical oncologists using board examination style questions and real patient cases. Methods: We randomly selected 121 board-style questions from the American Society of Clinical Oncology Self-Evaluation Program (ASCO SEP). The questions were entered into ChatGPT in both multiple-choice (MC) and open-ended (OE) prompts. ChatGPT’s answers and explanations were evaluated for accuracy and concordance. Non-inferiority analysis was performed with power of 80% at α = 0.05 and non-inferiority margin set at 70% correct answers given the historical board exam pass rate of about 65% correct answers. For subgroup analysis, the questions were categorized by tested competency and primary tumor pathology. ChatGPT was also given 10 questions derived from real patient cases. We compared its responses to the answers provided by experienced oncologists to determine accuracy and practical applicability. Results: ChatGPT answered 75 (62.0%) MC queries correctly. Among the correctly answered queries, 2 responses contained faulty explanations. Such inaccurate or discordant explanations were found in 26 of the 46 incorrectly answered queries. In OE prompts, ChatGPT answered 53 (43.8%) questions correctly with correct explanations for all. Of the 68 incorrect responses, 32 of them contained inaccurate or discordant explanations. Subgroup analysis suggested varying performance across the categories. The best performance was seen with malignant hematology (81.8% of MC and 72.8% of OE prompts answered correctly) while the weakest performance was seen with genitourinary malignancies (60% of MC and 20% of OE prompts answered correctly). As for the real-world patient case questions, responses from ChatGPT and the clinicians were concordant in 5 questions. None of the discordant responses contained inaccurate information while 80% of the concordant responses contained sufficient details to assist with patient management decisions. Conclusions: ChatGPT's performance fell short of the non-inferiority margin, highlighting the challenges with incorporating AI in the rapidly evolving field of medical oncology. Despite the limitations, ChatGPT’s partial success, in both board-style and real-world patient care questions, affirms its potential for clinical utility in future.
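To make the non-inferiority setup in the abstract above concrete, here is a minimal sketch of a one-sided exact binomial test against the stated 70% margin, using the reported 75 correct answers out of 121 multiple-choice questions. The choice of test is an assumption for illustration; the authors' exact procedure is not described in the abstract.

# Minimal sketch: one-sided exact binomial non-inferiority check (assumed test).
# H0: p <= 0.70 (the stated margin) vs H1: p > 0.70, applied to the
# 75/121 correct multiple-choice answers reported in the abstract.
from scipy.stats import binomtest

correct, total, margin = 75, 121, 0.70
result = binomtest(correct, total, p=margin, alternative="greater")
print(f"observed accuracy = {correct / total:.3f}")
print(f"one-sided p-value against the {margin:.0%} margin = {result.pvalue:.3f}")
# A large p-value here is consistent with the abstract's conclusion that
# performance fell short of the non-inferiority margin.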
APA, Harvard, Vancouver, ISO, and other styles
25

Park, Janghee. "Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study." Journal of Educational Evaluation for Health Professions 20 (November 10, 2023): 29. http://dx.doi.org/10.3352/jeehp.2023.20.29.

Full text
Abstract:
Purpose: This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom. Methods: The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023. Results: The students responded by indicating that ChatGPT’s feedback was helpful, and revised and resubmitted their group answers in various ways after receiving feedback. The majority of respondents expressed agreement with the use of ChatGPT during class. The most common response concerning the appropriate context of using ChatGPT’s feedback was “after the first round of discussion, for revisions.” There was a significant difference in satisfaction with ChatGPT’s feedback, including correctness, usefulness, and ethics, depending on whether or not ChatGPT was used during class, but there was no significant difference according to gender or whether students had previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the worst disadvantage was “producing information without supporting evidence.” Conclusion: The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Xingxing, Juveria Shah, and Mengjie Han. "ChatGPT for Fast Learning of Positive Energy District (PED): A Trial Testing and Comparison with Expert Discussion Results." Buildings 13, no. 6 (May 27, 2023): 1392. http://dx.doi.org/10.3390/buildings13061392.

Full text
Abstract:
Positive energy districts (PEDs) are urban areas that seek to take an integral approach to climate neutrality by including technological, spatial, regulatory, financial, legal, social, and economic perspectives. It is still a new concept and approach for many stakeholders. ChatGPT, a generative pre-trained transformer, is an advanced artificial intelligence (AI) chatbot based on a complex network structure and trained by the company OpenAI. It has the potential to support fast learning about PEDs. This paper reports a trial test in which ChatGPT is used to provide written formulations of PEDs within three frameworks: challenge, impact, and communication and dissemination. The results are compared with the formulations derived from over 80 PED experts who took part in a two-day workshop discussing many aspects of PED research and development. The proposed methodology involves querying ChatGPT with specific questions and recording its responses. Subsequently, expert opinions on the same questions are provided to ChatGPT, aiming to elicit a comparison between the two sources of information. This approach enables an evaluation of ChatGPT’s answers in relation to the insights shared by domain experts. By juxtaposing the outputs, a comprehensive assessment can be made regarding the reliability, accuracy, and alignment of ChatGPT’s responses with expert viewpoints. It is found that ChatGPT can be a useful tool for the rapid formulation of basic information about PEDs that could be used for its wider dissemination amongst the general public. The model is also noted as having a number of limitations, such as providing pre-set single answers, a sensitivity to the phrasing of questions, a tendency to repeat non-important (or general) information, and an inability to assess inputs negatively or provide diverse answers to context-based questions. Its answers were not always based on up-to-date information. Other limitations and some of the ethical–social issues related to the use of ChatGPT are also discussed. This study not only validated the possibility of using ChatGPT to rapidly study PEDs but also trained ChatGPT by feeding back the experts’ discussion into the tool. It is recommended that ChatGPT be involved in real-time PED meetings or workshops so that it can be trained both iteratively and dynamically.
APA, Harvard, Vancouver, ISO, and other styles
27

Puri, Anindita Dewangga, and FX Risang Baskara. "ENHANCING PRAGMATIC KNOWLEDGE WITH CHATGPT: BENEFITS AND CONSIDERATIONS." Prosiding Konferensi Linguistik Tahunan Atma Jaya (KOLITA) 21, no. 21 (October 30, 2023): 18–23. http://dx.doi.org/10.25170/kolita.21.4828.

Full text
Abstract:
Pragmatic knowledge, or the ability to use language appropriately in social situations, is a crucial aspect of effective communication. ChatGPT, a chatbot based on the GPT-3 language model, has the potential to serve as a tool for improving pragmatic knowledge. ChatGPT is trained on a large dataset of human conversation and can generate responses that are coherent, diverse, and context-specific. By providing opportunities for users to practice their language use in a variety of contexts, together with real-time feedback, ChatGPT can help language learners to develop their pragmatic skills. However, the use of ChatGPT also raises several challenges and ethical considerations. Understanding the benefits and challenges of using ChatGPT for pragmatic language learning will be crucial in determining its potential as a tool for language learning. The purpose of this paper is to evaluate the benefits and challenges of using ChatGPT for pragmatic language learning. To understand the potential of ChatGPT for pragmatic language learning, the researchers review the main features and capabilities of ChatGPT. This includes its ability to generate a response in different styles and registers and how it can be used for pragmatic language acquisition. The challenges and limitations of ChatGPT are also examined, including its imperfect simulation of human conversation, reliance on large amounts of data, and potential to reinforce stereotypes or biases. The researchers discuss the potential benefits of ChatGPT, including its ability to simulate natural conversation, generate diverse and context-specific responses, and adapt to the user's language level and learning goals. In addition to providing valuable practice in simulating natural conversation, ChatGPT’s ability to generate coherent and context-specific responses can also help the user to develop their pragmatic skills, such as understanding and interpreting the intended meanings of others. The researchers consider the ethical implications of using ChatGPT for pragmatic language learning and the need for further research in this area. Overall, this paper provides a nuanced perspective on the potential and limitations of ChatGPT as a tool for enhancing pragmatic knowledge.
APA, Harvard, Vancouver, ISO, and other styles
28

Fiialka, Svitlana, Zoia Kornieva, and Tamara Honcharuk. "The use of ChatGPT in creative writing assistance." XLinguae 17, no. 1 (January 2024): 3–19. http://dx.doi.org/10.18355/xl.2024.17.01.01.

Full text
Abstract:
This paper explores the integration of ChatGPT, an advanced AI language model, into creative writing. The paper investigates the capabilities of ChatGPT in generating novel story ideas, characters, plots, and stylistic elements such as metaphors and dialogue within various genres, including narrative, poetry, and drama. With its generative potential, ChatGPT is a valuable tool to simplify the creative writing process, providing authors with innovative concepts and supporting material that can be further developed and refined. Key findings of the study include ChatGPT's ability to craft detailed character portraits, engage in realistic dialogue, and produce atmospheric descriptions. While the chatbot can occasionally produce repetitive or biased results, careful human curation and interaction with the model can mitigate these issues and improve the writing process. The paper concludes that, despite limitations, AI agents like ChatGPT can significantly reinforce human creativity in writing, provided they are used as inspiration rather than a replacement for human ingenuity. The research underlines the importance of research on AI and creativity, especially regarding the ethical implications and the balance between human and machine contributions to art.
APA, Harvard, Vancouver, ISO, and other styles
29

Cadiente, Angelo, Puja Patel, and Antonia F. Oladipo. "Navigating Intimate Partner Violence: A Comparative Analysis Between ACOG and ChatGPT on Patient Education [ID 2683552]." Obstetrics & Gynecology 143, no. 5S (May 2024): 75S. http://dx.doi.org/10.1097/01.aog.0001013984.40432.dc.

Full text
Abstract:
INTRODUCTION: With the surge in artificial intelligence usage, there is an increasing need to assess its applicability in conveying public health information. We compare the readability and quality of responses between ChatGPT and the American College of Obstetricians and Gynecologists (ACOG) in answering frequently asked questions (FAQs) on intimate partner violence (IPV). METHODS: Twelve questions from ACOG's “Intimate Partner Violence” FAQs were posed to ChatGPT-3.5 (July 19 Version). Readability and grade-level scores were determined. The quality of responses was also graded by two obstetrician–gynecologists using a 1–4 scale, where 1 represents a comprehensive response and 4 indicates an incorrect response. Statistical analysis utilized a two-tailed t-test. A weighted Cohen’s kappa coefficient evaluated interrater reliability. RESULTS: Mean readability favored ACOG over ChatGPT, but only the Coleman-Liau Index was statistically significant (ACOG: 12.71; ChatGPT: 15.76; P=.003). Other readability measures demonstrated no significant differences. The ACOG responses were graded with an average of 1.29, alongside a Cohen’s kappa coefficient of 0.375, implying fair agreement between graders. All of ChatGPT’s responses were graded 1, with a Cohen’s kappa coefficient of 1, indicating perfect agreement. The difference in grades between ACOG and ChatGPT was statistically significant (P=.013). CONCLUSION: The ACOG FAQ responses are relatively equivalent in readability when compared to ChatGPT-generated responses, with only one statistically significant difference across the indices. However, ChatGPT's responses were more comprehensive and accurate. Although patients can use ACOG as a source, ChatGPT provides equivalently clear and more comprehensive information on IPV.
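For context on the readability measure named in this abstract, here is a minimal sketch of the Coleman-Liau index; the formula is the textbook one, while the tokenization and the sample sentence are illustrative assumptions, not material from the study.

# Minimal sketch of the Coleman-Liau readability index.
# Textbook formula: CLI = 0.0588*L - 0.296*S - 15.8, where L is letters per
# 100 words and S is sentences per 100 words. The sample text is illustrative.
import re

def coleman_liau_index(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(ch.isalpha() for ch in text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    letters_per_100 = letters / len(words) * 100
    sentences_per_100 = sentences / len(words) * 100
    return 0.0588 * letters_per_100 - 0.296 * sentences_per_100 - 15.8

sample = "Intimate partner violence can affect anyone. Confidential help is available."
print(round(coleman_liau_index(sample), 1))  # approximate U.S. grade level

Higher values correspond to a higher required reading grade level, which is why the significantly higher ChatGPT score on this index is read as lower readability than the ACOG text.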
APA, Harvard, Vancouver, ISO, and other styles
30

Imran, Muhammad, and Norah Almusharraf. "Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature." Contemporary Educational Technology 15, no. 4 (October 1, 2023): ep464. http://dx.doi.org/10.30935/cedtech/13605.

Full text
Abstract:
This study examines the role of ChatGPT as a writing assistant in academia through a systematic literature review of the 30 most relevant articles. Since its release in November 2022, ChatGPT has become the most debated topic among scholars and is also being used by many users from different fields. Many articles, reviews, blogs, and opinion essays have been published in which the potential role of ChatGPT as a writing assistant is discussed. For this systematic review, 550 articles published in the six months after ChatGPT’s release (December 2022 to May 2023) were collected based on specific keywords, and the final 30 most relevant articles were selected through a PRISMA flowchart. The analyzed literature identifies different opinions and scenarios associated with using ChatGPT as a writing assistant and how to interact with it. Findings show that artificial intelligence (AI) in education is part of an ongoing development process, and its latest chatbot, ChatGPT, is part of it. Therefore, the education process, particularly academic writing, faces both opportunities and challenges in adopting ChatGPT as a writing assistant. The need is to understand its role as an aid and facilitator for both learners and instructors, as chatbots are relatively beneficial tools that facilitate, ease, and support the academic process. However, academia should revisit and update students’ and teachers’ training, policies, and assessment methods in writing courses to safeguard academic integrity and originality, addressing issues such as plagiarism, AI-generated assignments, online/home-based exams, and auto-correction challenges.
APA, Harvard, Vancouver, ISO, and other styles
31

Szanto, David. "with ChatGPT." Canadian Food Studies / La Revue canadienne des études sur l'alimentation 11, no. 1 (March 29, 2024): 243–47. http://dx.doi.org/10.15353/cfs-rcea.v11i1.688.

Full text
Abstract:
For this Choux Questionnaire, we turned to ChatGPT, the generative AI chatbot. Given the challenges and opportunities that AI presents to academic practice, teaching, and writing, we thought it might be intriguing to use these responses as a means to interpret ChatGPT’s ‘perspectives’ on food through our own. Both the process and outcomes of conducting the questionnaire provided occasions to reflect on the underlying technology, its sources of ‘knowledge’, and its apparent biases. In reading the bot’s words below, a fairly distinct character profile might emerge, as well as a kind of positionality that seems connected to both no place and every place at once. Beyond social and physical geographies, a set of privileges also tends to emerge, one that points to a lack of actual, lived experience. Where are the preferences, quirks, and affect that non-artificial intelligence comprises? Where are the outlier and emotional responses that would make one want to share food or ideas with this being? From your perspective as food scholar, practitioner, eater, or activist, what else do you extrapolate from ChatGPT’s ‘voice’?
APA, Harvard, Vancouver, ISO, and other styles
32

Hartley, Kendall, Merav Hayak, and Un Hyeok Ko. "Artificial Intelligence Supporting Independent Student Learning: An Evaluative Case Study of ChatGPT and Learning to Code." Education Sciences 14, no. 2 (January 24, 2024): 120. http://dx.doi.org/10.3390/educsci14020120.

Full text
Abstract:
Artificial intelligence (AI) tools like ChatGPT demonstrate the potential to support personalized and adaptive learning experiences. This study explores how ChatGPT can facilitate self-regulated learning processes and learning computer programming. An evaluative case study design guided the investigation of ChatGPT’s capabilities to aid independent learning. Prompts mapped to self-regulated learning processes elicited ChatGPT’s support across learning tools: instructional materials, content tools, assessments, and planning. Overall, ChatGPT provided comprehensive, tailored guidance on programming concepts and practices. It consolidated multimodal information sources into integrated explanations with examples. ChatGPT also effectively assisted planning by generating detailed schedules. However, its interactivity and assessment functionality demonstrated shortcomings. ChatGPT’s effectiveness relies on learners’ metacognitive skills to seek help and assess its limitations. The implications include ChatGPT’s potential to provide Bloom’s two-sigma tutoring benefit at scale.
APA, Harvard, Vancouver, ISO, and other styles
33

Sonawane, Keshav, and Sanchit Khandalkar. "Can ChatGpt Replace Humans?" International Journal of Scientific Research in Engineering and Management 07, no. 12 (December 23, 2023): 1–6. http://dx.doi.org/10.55041/ijsrem27743.

Full text
Abstract:
This study investigates the viability of ChatGPT as a replacement for human interaction. Through empirical analysis, the paper assesses ChatGPT's performance in comparison to human communication, considering empathy, context comprehension, and nuanced dialogue. Ethical implications, including privacy and bias concerns, are explored. Findings reveal ChatGPT's strengths and underscore the irreplaceable aspects of human interaction. The paper advocates for a balanced integration where ChatGPT enhances, rather than replaces, human communication. Keywords: ChatGPT, artificial intelligence, human-computer interaction, empathy, ethical considerations.
APA, Harvard, Vancouver, ISO, and other styles
34

Jeong, Raepil. "Research on Problem-Based Learning (PBL) Literature Class Method Using ChatGPT." Korean Society of Culture and Convergence 45, no. 10 (October 31, 2023): 219–29. http://dx.doi.org/10.33645/cnc.2023.10.45.10.219.

Full text
Abstract:
The paper analyzed the results of learners' literary exploration activities performed with ChatGPT and proposed a problem-based learning (PBL) method using ChatGPT. Considering that ChatGPT can clarify and deepen problem awareness during communication, the paper explored how to utilize ChatGPT in the “problem discovery, problem exploration, and problem-solving” process. As a result, the learner interacted with ChatGPT and proceeded with the task by verifying ChatGPT's answers with intertextuality, intermediality, and metacognition in mind. This paper proposed a PBL model using ChatGPT based on the learner's problem discovery and solution process shown in literature reading.
APA, Harvard, Vancouver, ISO, and other styles
35

Ngo, Thi Thuy An. "The Perception by University Students of the Use of ChatGPT in Education." International Journal of Emerging Technologies in Learning (iJET) 18, no. 17 (September 14, 2023): 4–19. http://dx.doi.org/10.3991/ijet.v18i17.39019.

Full text
Abstract:
ChatGPT, a generative language model recently created by OpenAI, has drawn a lot of criticism from people all around the world. ChatGPT illustrates both potential opportunities and challenges in education. This study aims to investigate how university students perceive using ChatGPT for learning, including benefits, barriers, and potential solutions. To determine how students felt about using ChatGPT in their learning, a questionnaire was distributed to 200 students via an online survey, and 30 students participated in semi-structured interviews. The research results showed that, in general, students had a favorable opinion of ChatGPT’s application. The benefits of ChatGPT, according to students, included saving time, providing information in various areas, providing personalized tutoring and feedback, and illuminating ideas in writing. Also, several barriers to using ChatGPT were recognized, and some solutions were suggested for improvement of using ChatGPT in education. The most concerning issues for students while using ChatGPT were inability to assess the quality and reliability of sources, inability to cite sources accurately, and inability to replace words and use idioms accurately. To address these concerns, some potential solutions can be implemented; for example, verifying ChatGPT’s responses with reliable sources; using ChatGPT as a reference source or a consultant tool; providing guidelines for use; and promoting academic integrity to ensure ethical uses of ChatGPT in an academic context.
APA, Harvard, Vancouver, ISO, and other styles
36

Firdausa Nuzula, Ifta, and Muhammad Miftahul Amri. "Will ChatGPT bring a New Paradigm to HR World? A Critical Opinion Article." Journal of Management Studies and Development 2, no. 02 (April 24, 2023): 142–61. http://dx.doi.org/10.56741/jmsd.v2i02.316.

Full text
Abstract:
Since ChatGPT was first made publicly available in November 2022, it has sparked attention worldwide. Some believe that ChatGPT might revolutionize our lives, and there have been interesting discussions surrounding it. Here, we explore the potential use of ChatGPT in the HR industry. For HR managers, the use of AI in HR has traditionally been limited to simple tasks, but ChatGPT's ability to understand and respond to natural language input opens up new possibilities for automating more complex HR tasks such as interviewing and onboarding. On the other hand, ChatGPT can also help job seekers land their dream jobs. To begin with, ChatGPT can be employed to find out about the benefits and working environment at a specific company, although it should be noted that ChatGPT's answers cannot be blindly trusted. ChatGPT can also prepare a cover letter, a motivation letter, an application email, and so on, and job seekers can even ask ChatGPT to act as the interviewer in a mock interview. We present examples of ChatGPT utilization in the HR world. In this article, we also conducted a mini experiment to investigate whether, in its current state, ChatGPT can generate a convincing cover letter that is able to fool HR specialists. The results showed that none of the 5 study participants was able to correctly classify all of the letters, with an average of only 2 of the 4 cover letters identified correctly. Lastly, we also discuss the potential benefits and challenges of using ChatGPT in the HR industry. Nevertheless, we argue that ChatGPT introduces a paradigm shift in the HR world.
APA, Harvard, Vancouver, ISO, and other styles
37

Song, Zheyu. "A Review of How the Use of ChatGPT in Learning Affects Student Well-Being." Modern Psychology 6, no. 2(13) (November 27, 2023): 89–96. http://dx.doi.org/10.46991/sbmp/2023.6.2.089.

Full text
Abstract:
This article provides a review of the current relations between using ChatGPT for learning and student well-being. ChatGPT is an AI-based chatbot that can be used to assist students in their learning. It can provide personalized learning recommendations, answer questions, and offer feedback on assignments. However, the impact of using ChatGPT on student well-being can vary depending on several factors. Firstly, the design and implementation of ChatGPT can affect the student's experience. If the chatbot is poorly designed or implemented, it can cause frustration or confusion, which can negatively impact the student's well-being. On the other hand, a well-designed chatbot that is easy to use and provides helpful feedback can enhance the student's learning experience and overall well-being. Secondly, the student's perception of ChatGPT can also affect their well-being. If the student views ChatGPT as a helpful tool that enhances their learning experience, it can lead to positive emotions and a sense of accomplishment. However, if the student sees ChatGPT as a replacement for human interaction or as a means of surveillance, it can lead to negative emotions such as anxiety or fear. Lastly, the role of ChatGPT in the overall learning environment can also impact student well-being. If ChatGPT is used as a supplement to traditional teaching methods and provides additional support, it can enhance the student's learning experience and improve their well-being. However, if ChatGPT is used as a replacement for human teachers or as the sole means of learning, it can lead to a sense of isolation and disconnection from the learning experience. This article analyzes the impact of using ChatGPT on student well-being depending on several factors such as design, implementation, perception, and role in the overall learning environment. When designed and implemented well, and used as a supplement to traditional teaching methods, ChatGPT can enhance the student's learning experience and improve their well-being.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Xiaoni, Ying Song, Hai Lin, Yiying Xu, Chao Chen, Changjian Yan, Xiaoliang Yuan, et al. "Evaluating Chatgpt As an Adjunct for Analyzing Challenging Case." Blood 142, Supplement 1 (November 28, 2023): 7273. http://dx.doi.org/10.1182/blood-2023-181518.

Full text
Abstract:
Introduction: Despite the rapid development of medicine, clinicians are still faced with many challenging cases in practice. For challenging cases, most inexperienced doctors are unable to make a clear diagnosis at first, resulting in misdiagnosis and delayed treatment. Recently, OpenAI released a new chatbot model called ChatGPT (Chat Generative Pre-trained Transformer), an artificial intelligence (AI) system trained with reinforcement learning from human feedback. There have been many research reports on the application of ChatGPT in the medical field, and it has been shown to have a wide range of clinical applications. In our study, we aimed to assess the accuracy and reliability of ChatGPT in diagnosing and treating challenging cases. Methods: We provided 2 challenging cases. Case 1 provided a medical history and auxiliary test results to ChatGPT, and ChatGPT then answered with the possible diagnosis, diagnostic basis, further auxiliary examinations, and treatment. Case 2 provided negative symptoms and auxiliary test results to ChatGPT, which answered with all possible diagnoses. The results of ChatGPT's responses were comprehensively evaluated and scored by 5 senior physicians, with 10 being the perfect score. Results: The analyses of the 2 challenging cases by ChatGPT were comprehensively evaluated by senior, experienced physicians and scored 9 points. We found that ChatGPT's analysis of challenging cases is comprehensive. In particular, for the diagnosis and treatment of diseases, ChatGPT analyzes all possible diagnoses and provides accurate treatment plans based on the provided medical history, auxiliary examination results, and treatment course. ChatGPT can directly interpret auxiliary examination results, such as a high level of procalcitonin (PCT) or abnormal kidney function, and can rule out certain diseases based on negative results, for example ruling out autoimmune diseases when ANA, ENA, and ANCA are negative. ChatGPT can summarize all possible diagnoses, including rare diseases, based on the results of an auxiliary test, and explain the cause and pathogenesis of the disease in detail. Conclusions: Our study shows that ChatGPT can provide important reference value as an adjunct to clinicians in challenging case analysis. In particular, it can assist inexperienced doctors in remote areas with underdeveloped medical conditions to reach a clear diagnosis and treatment plan as soon as possible, avoid unnecessary referrals to higher-level hospitals, reduce the medical burden, and reduce the waste of medical resources. In the medical field, more research is needed to evaluate ChatGPT in clinical applications so that it can become a better adjunct tool in clinical work.
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Tae Won. "Application of artificial intelligence chatbot, including ChatGPT in education, scholarly work, programming, and content generation and its prospects: a narrative review." Journal of Educational Evaluation for Health Professions 20 (December 27, 2023): 38. http://dx.doi.org/10.3352/jeehp.2023.20.38.

Full text
Abstract:
This narrative review explores the functionalities of ChatGPT (GPT-3.5 version), including reinforcement learning, diverse applications, and limitations. ChatGPT is an AI chatbot powered by OpenAI's Generative Pre-trained Transformer (GPT) model. The chatbot's applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can enhance education by creating assignments and offering personalized feedback, as shown by its notable performance in medical exams, including the USMLE. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. ChatGPT, unlike search engines, provides interactive, generative responses and understands context, making it more akin to human conversation. These characteristics are contrasted with conventional search engines' keyword-based, non-interactive nature. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Despite these issues, ChatGPT is seen as a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, like ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various aspects of life.
APA, Harvard, Vancouver, ISO, and other styles
40

McInnis-Dominguez, Meghan. "An AI Solution: Using ChatGPT to Counter Plagiarism and Boost Enrollments in Hispanic Literature Courses." Middle Atlantic Review of Latin American Studies 7, no. 2 (December 28, 2023): 127–82. http://dx.doi.org/10.23870/marlas.444.

Full text
Abstract:
This study presents an exploration into the application of the artificial intelligence chatbot ChatGPT 3.5 in the context of two Hispanic literature survey courses at the University of Delaware during the Spring 2023 semester. The introduction of ChatGPT into these learning environments aimed to address two key challenges in teaching Spanish and Latin American literature: falling enrollment rates and rising instances of plagiarism. ChatGPT's role was two-pronged: it served as a component of in-class discussions and a resource for final papers. A total of twenty-one students participated in a follow-up survey designed to gauge their perceptions of ChatGPT as an innovative learning tool. The outcome revealed mixed, yet promising, responses to the use of AI technology in a literature classroom. The data suggest that the ongoing incorporation of cutting-edge tools like ChatGPT into literature courses holds the potential to captivate a diverse group of students and potentially increase enrollment. In addition, the data collected, along with the professor's analysis of final papers, indicate that ChatGPT can be effective in curbing plagiarism. Responsible AI utilization is becoming increasingly vital in various job markets, and integrating such skills into curricula from STEM to Humanities is crucial for our students' future career success.
APA, Harvard, Vancouver, ISO, and other styles
41

Gül, Şanser, İsmail Erdemir, Volkan Hanci, Evren Aydoğmuş, and Yavuz Selim Erkoç. "How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses." Medicine 103, no. 18 (May 3, 2024): e38009. http://dx.doi.org/10.1097/md.0000000000038009.

Full text
Abstract:
Subdural hematoma is defined as blood collection in the subdural space between the dura mater and arachnoid. Subdural hematoma is a condition that neurosurgeons frequently encounter and has acute, subacute and chronic forms. The incidence in adults is reported to be 1.72–20.60/100,000 people annually. Our study aimed to evaluate the quality, reliability and readability of the answers to questions asked of ChatGPT, Bard, and Perplexity about “Subdural Hematoma.” In this observational and cross-sectional study, we asked ChatGPT, Bard, and Perplexity to each provide the 100 most frequently asked questions about “Subdural Hematoma.” Responses from all three chatbots were analyzed separately for readability, quality, reliability and adequacy. When the median readability scores of ChatGPT, Bard, and Perplexity answers were compared with the sixth-grade reading level, a statistically significant difference was observed in all formulas (P < .001). The responses of all 3 chatbots were found to be difficult to read. Bard responses were more readable than ChatGPT's (P < .001) and Perplexity's (P < .001) responses for all scores evaluated. Although there were differences between the results of the evaluated calculators, Perplexity's answers were determined to be more readable than ChatGPT's answers (P < .05). Bard answers were determined to have the best GQS scores (P < .001). Perplexity responses had the best Journal of American Medical Association and modified DISCERN scores (P < .001). ChatGPT, Bard, and Perplexity's current capabilities are inadequate in terms of quality and readability of “Subdural Hematoma” related text content. The readability standard for patient education materials as determined by the American Medical Association, National Institutes of Health, and the United States Department of Health and Human Services is at or below grade 6. The readability levels of the responses of artificial intelligence applications such as ChatGPT, Bard, and Perplexity are significantly higher than the recommended 6th grade level.
APA, Harvard, Vancouver, ISO, and other styles
42

Lo, Chung Kwan. "What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature." Education Sciences 13, no. 4 (April 18, 2023): 410. http://dx.doi.org/10.3390/educsci13040410.

Full text
Abstract:
An artificial intelligence-based chatbot, ChatGPT, was launched in November 2022 and is capable of generating cohesive and informative human-like responses to user input. This rapid review of the literature aims to enrich our understanding of ChatGPT’s capabilities across subject domains, how it can be used in education, and potential issues raised by researchers during the first three months of its release (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding). The findings of this review suggest that ChatGPT’s performance varied across subject domains, ranging from outstanding (e.g., economics) and satisfactory (e.g., programming) to unsatisfactory (e.g., mathematics). Although ChatGPT has the potential to serve as an assistant for instructors (e.g., to generate course materials and provide suggestions) and a virtual tutor for students (e.g., to answer questions and facilitate collaboration), there were challenges associated with its use (e.g., generating incorrect or fake information and bypassing plagiarism detectors). Immediate action should be taken to update the assessment methods and institutional policies in schools and universities. Instructor training and student education are also essential to respond to the impact of ChatGPT on the educational environment.
APA, Harvard, Vancouver, ISO, and other styles
43

Dergaa, Ismail, Helmi Ben Saad, Hatem Ghouili, Jordan M Glenn, Abdelfatteh El Omri, Ines Slim, Yosra Hasni, et al. "Evaluating the Applicability and Appropriateness of ChatGPT as a Source for Tailored Nutrition Advice: A Multi-Scenario Study." New Asian Journal of Medicine 2, no. 1 (2024): 1–16. http://dx.doi.org/10.61838/kman.najm.2.1.1.

Full text
Abstract:
Background: In the rapidly evolving domain of healthcare technology, the integration of advanced computational models has opened up new possibilities for personalized nutrition guidance. The emergence of sophisticated language models, such as Chat Generative Pre-trained Transformer (ChatGPT), offers potential in providing interactive and tailored dietary advice. However, concerns remain about the applicability and appropriateness of ChatGPT's recommendations, especially for those with distinct health conditions. Objectives: This study aimed to evaluate the reliability of ChatGPT as a source of nutritional advice. Methods: Three hypothetical scenarios representing various health conditions were presented alongside precise dietary requirements. ChatGPT was tasked to generate personalized dietary programs, encompassing meal timing, specific caloric portions (measured in grams and spoons), as well as alternative meal options for each scenario. Following this, ChatGPT's generated dietary programs underwent a thorough review by a multidisciplinary team of nutritionists, specialist physicians, and clinical researchers. The evaluation focused on the programs' suitability, alignment with dietary standards, consideration of individual health factors, and the safety of its additional guidance. Results: ChatGPT demonstrated its ability to generate various meal-plan options in accordance with basic nutrition principles. However, there are apparent issues with the recommended macronutrient distribution for individuals, the handling of health conditions and drug interactions, and the setting of realistic weight-loss goals. Conclusions: While ChatGPT exhibits promise as a dietary program generator, its application for intervention should be restricted to certified nutrition professionals. As of July 2023, it is not advisable for patients to engage in self-prescription using ChatGPT version 3.5, owing to its inability to provide professional knowledge and acceptable guidance, particularly for individuals with co-existing conditions. This absence of clinical reasoning highlights the importance of employing ChatGPT solely as a tool rather than relying on it as an autonomous decision-maker, and underscores the need for human intervention and expert collaboration for precise, personalized evaluations.
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Wanyu, Maulik Chhabilkumar Kotecha, and Daniel A. McAdams. "How good is ChatGPT? An exploratory study on ChatGPT's performance in engineering design tasks and subjective decision-making." Proceedings of the Design Society 4 (May 2024): 2307–16. http://dx.doi.org/10.1017/pds.2024.233.

Full text
Abstract:
This study explores how large language models like ChatGPT comprehend language and assess information. Through two experiments, we compare ChatGPT's performance with humans', addressing two key questions: 1) How does ChatGPT compare with human raters in evaluating judgment-based tasks like speculative technology realization? 2) How well does ChatGPT extract technical knowledge from non-technical content, such as mining speculative technologies from text, compared to humans? Results suggest ChatGPT's promise in knowledge extraction but also reveal a disparity with humans in decision-making.
APA, Harvard, Vancouver, ISO, and other styles
45

Sihite, Putri, Anastasya Simorangkir, Nova Noor Kamala Sari, and Viktor Handrianus Pranatawijaya. "INTEGRASI CHATBOT CUSTOM CHATGPT DENGAN CHATBASE DALAM MENINGKATKAN PENGALAMAN PENGGUNA DAN EFISIENSI LAYANAN DALAM WEBSITE E-COMMERCE." JATI (Jurnal Mahasiswa Teknik Informatika) 8, no. 3 (May 9, 2024): 3532–36. http://dx.doi.org/10.36040/jati.v8i3.9733.

Full text
Abstract:
In today's fast-paced digital era, e-commerce websites face the challenge of providing responsive, high-quality service to customers. One emerging solution is the use of chatbots, which enable automated interaction with customers. Although chatbots are increasingly popular, many companies still face challenges in optimizing their performance to ensure a consistent user experience and optimal service efficiency. Many chatbots on e-commerce websites fail to provide accurate and fast responses, resulting in lower user satisfaction. This problem arises from a lack of understanding of how to use chatbot technology effectively. This study aims to integrate a custom ChatGPT-based chatbot with Google Chatbase to improve the user experience and service efficiency of an e-commerce website. The chatbot was developed using ChatGPT technology and the Google Chatbase analytics platform, taking into account common e-commerce scenarios such as questions about products, order status, and returns. User interactions with the chatbot were analyzed using data collected by Google Chatbase, including usage frequency, question types, and response times. The results show that a custom ChatGPT chatbot integrated with Google Chatbase can improve the user experience, providing tangible benefits for e-commerce websites in terms of both user experience and service efficiency.
APA, Harvard, Vancouver, ISO, and other styles
46

Uludag, Kadir. "Exploring the hidden aspects of ChatGPT: A study on concerns regarding plagiarism levels." SCIENTIFIC STUDIOS ON SOCIAL AND POLITICAL PSYCHOLOGY 29, no. 1 (June 6, 2023): 43–48. http://dx.doi.org/10.61727/sssppj/1.2023.43.

Full text
Abstract:
ChatGPT is a chatbot model that has gained widespread popularity. Concerns have been raised that ChatGPT may facilitate plagiarism. Therefore, it is necessary to determine whether ChatGPT can distinguish between plagiarised and non-plagiarised texts. The purpose of the study was to investigate the potential of ChatGPT in generating plagiarism. The sample included various types of texts, such as manuscripts. Several questions about plagiarism were asked. The study found that the first version of ChatGPT cannot successfully detect plagiarism, although it can distinguish sentences written in academic sources from ordinary sentences. ChatGPT assumes that providing a reference to a previous source is sufficient. However, this does not mean that it is free from plagiarism. The findings of this study indicate that ChatGPT cannot fully recognise plagiarism.
APA, Harvard, Vancouver, ISO, and other styles
47

King, Ryan C., Jamil S. Samaan, Yee Hui Yeo, Yuxin Peng, David C. Kunkel, Ali A. Habib, and Roxana Ghashghaei. "A Multidisciplinary Assessment of ChatGPT’s Knowledge of Amyloidosis: Observational Study." JMIR Cardio 8 (April 19, 2024): e53421. http://dx.doi.org/10.2196/53421.

Full text
Abstract:
Background: Amyloidosis, a rare multisystem condition, often requires complex, multidisciplinary care. Its low prevalence underscores the importance of efforts to ensure the availability of high-quality patient education materials for better outcomes. ChatGPT (OpenAI) is a large language model powered by artificial intelligence that offers a potential avenue for disseminating accurate, reliable, and accessible educational resources for both patients and providers. Its user-friendly interface, engaging conversational responses, and the capability for users to ask follow-up questions make it a promising future tool in delivering accurate and tailored information to patients. Objective: We performed a multidisciplinary assessment of the accuracy, reproducibility, and readability of ChatGPT in answering questions related to amyloidosis. Methods: In total, 98 amyloidosis questions related to cardiology, gastroenterology, and neurology were curated from medical societies, institutions, and amyloidosis Facebook support groups and inputted into ChatGPT-3.5 and ChatGPT-4. Cardiology- and gastroenterology-related responses were independently graded by a board-certified cardiologist and gastroenterologist, respectively, who specialize in amyloidosis. These 2 reviewers (RG and DCK) also graded general questions, with disagreements resolved through discussion. Neurology-related responses were graded by a board-certified neurologist (AAH) who specializes in amyloidosis. Reviewers used the following grading scale: (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, and (4) completely incorrect. Questions were stratified by categories for further analysis. Reproducibility was assessed by inputting each question twice into each model. The readability of ChatGPT-4 responses was also evaluated using the Textstat library in Python (Python Software Foundation) and the Textstat readability package in R software (R Foundation for Statistical Computing). Results: ChatGPT-4 (n=98) provided 93 (95%) responses with accurate information, and 82 (84%) were comprehensive. ChatGPT-3.5 (n=83) provided 74 (89%) responses with accurate information, and 66 (79%) were comprehensive. When examined by question category, ChatGPT-4 and ChatGPT-3.5 provided 53 (95%) and 48 (86%) comprehensive responses, respectively, to “general questions” (n=56). When examined by subject, ChatGPT-4 and ChatGPT-3.5 performed best in response to cardiology questions (n=12) with both models producing 10 (83%) comprehensive responses. For gastroenterology (n=15), ChatGPT-4 received comprehensive grades for 9 (60%) responses, and ChatGPT-3.5 provided 8 (53%) responses. Overall, 96 of 98 (98%) responses for ChatGPT-4 and 73 of 83 (88%) for ChatGPT-3.5 were reproducible. The readability of ChatGPT-4's responses ranged from 10th to beyond graduate US grade levels with an average of 15.5 (SD 1.9). Conclusions: Large language models are a promising tool for accurate and reliable health information for patients living with amyloidosis. However, ChatGPT's responses exceeded the American Medical Association's recommended fifth- to sixth-grade reading level. Future studies focusing on improving response accuracy and readability are warranted. Prior to widespread implementation, the technology's limitations and ethical implications must be further explored to ensure patient safety and equitable implementation.
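The readability analysis reported above relies on the Textstat library in Python. A minimal sketch of how such grade-level scoring might look is given here; the sample response text and the particular formulas selected are illustrative assumptions, not the authors' actual pipeline.

import textstat  # pip install textstat

# Hypothetical chatbot response; the study's actual responses are not reproduced here.
response = (
    "Amyloidosis is a group of rare diseases in which abnormal proteins, "
    "called amyloid fibrils, build up in organs and tissues."
)

# Common readability formulas exposed by textstat; grade-based scores above
# roughly 6 exceed the reading level usually recommended for patient materials.
scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(response),
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(response),
    "SMOG Index": textstat.smog_index(response),
    "Gunning Fog": textstat.gunning_fog(response),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")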
APA, Harvard, Vancouver, ISO, and other styles
48

Fang, Changchang, Yuting Wu, Wanying Fu, Jitao Ling, Yue Wang, Xiaolin Liu, Yuan Jiang, et al. "How does ChatGPT-4 perform on non-English national medical licensing examination? An evaluation in Chinese language." PLOS Digital Health 2, no. 12 (December 1, 2023): e0000397. http://dx.doi.org/10.1371/journal.pdig.0000397.

Full text
Abstract:
ChatGPT, an artificial intelligence (AI) system powered by large-scale language models, has garnered significant interest in healthcare. Its performance depends on the quality and quantity of training data available for a specific language, with the majority of it being in English. Therefore, its effectiveness in processing the Chinese language, which has fewer data available, warrants further investigation. This study aims to assess ChatGPT's ability in medical education and clinical decision-making within the Chinese context. We utilized a dataset from the Chinese National Medical Licensing Examination (NMLE) to assess ChatGPT-4's proficiency in medical knowledge in Chinese. Performance indicators, including score, accuracy, and concordance (confirmation of answers through explanation), were employed to evaluate ChatGPT's effectiveness in both original and encoded medical questions. Additionally, we translated the original Chinese questions into English to explore potential avenues for improvement. ChatGPT scored 442/600 for original questions in Chinese, surpassing the passing threshold of 360/600. However, ChatGPT demonstrated reduced accuracy in addressing open-ended questions, with an overall accuracy rate of 47.7%. Despite this, ChatGPT displayed commendable consistency, achieving a 75% concordance rate across all case analysis questions. Moreover, translating Chinese case analysis questions into English yielded only marginal improvements in ChatGPT's performance (p = 0.728). ChatGPT exhibits remarkable precision and reliability when handling the NMLE in Chinese. Translation of NMLE questions from Chinese to English does not yield an improvement in ChatGPT's performance.
APA, Harvard, Vancouver, ISO, and other styles
49

Arviani, Heidy, Ririn Puspita Tutiasri, Latif Ahmad Fauzan, and Ade Kusuma. "ChatGPT For Marketing Communications: Friend or Foe?" Kanal: Jurnal Ilmu Komunikasi 12, no. 1 (August 29, 2023): 1–7. http://dx.doi.org/10.21070/kanal.v12i1.1729.

Full text
Abstract:
The release of the ChatGPT chatbot in November 2022 received significant public attention. ChatGPT is an Artificial Intelligence (AI) powered chatbot that allows users to simulate human-like conversations with AI. GPT stands for Generative Pre-trained Transformer, a language processing model developed by the American artificial intelligence company OpenAI. These innovations and technologies are changing business interests, revolutionizing marketing communications strategies, and enhancing the consumer experience. ChatGPT is a powerful tool for marketers, but we need to understand the risks and set realistic expectations for the moment. The authors collected data through literature studies and observations of ChatGPT, focusing on its potential and impact on marketing communications. The analysis covers brand information, data search, reference services, cataloging, content creation, and ethical considerations such as privacy and bias. The results of this study show that ChatGPT can provide and support creative content creation and copywriting, improve customer service, automate repetitive tasks, and support data analysis. However, humans are irreplaceable for examining outputs and creating marketing messages consistent with a company's strategy and brand vision. With good marketing strategies, ChatGPT can effectively enhance and support marketing processes.
APA, Harvard, Vancouver, ISO, and other styles
50

Saarna, Christopher. "Identifying Whether a Short Essay was written by a University Student or ChatGPT." International Journal of Technology in Education 7, no. 3 (May 30, 2024): 611–33. http://dx.doi.org/10.46328/ijte.773.

Full text
Abstract:
This study seeks to clarify whether teachers are able to distinguish between essays written by English L2 students and essays generated by ChatGPT. 47 instructors with experience teaching English to native speakers of Japanese in universities or other higher education institutions were tested on whether they could tell human-written essays from ChatGPT-generated essays. The ICNALE written corpus (Ishikawa, 2013) was used to randomly select the written work of four Japanese university students who studied English at roughly CEFR A2 level. The AI chatbot ChatGPT was used to generate four essays using prompts that directed the chatbot to mimic grammar mistakes common to nonnative speakers of English. Teachers were asked to identify which of the eight essays they believed to be human-written or ChatGPT-generated. On average, the teachers identified 54.25% of the items accurately. This result is only slightly better than random chance and implies that most teachers cannot accurately identify a ChatGPT-generated essay when ChatGPT is prompted to make grammar mistakes.
APA, Harvard, Vancouver, ISO, and other styles