Journal articles on the topic 'ChatGPT Login'

To see the other types of publications on this topic, follow the link: ChatGPT Login.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'ChatGPT Login.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Fitria, Tira Nur. "Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay." ELT Forum: Journal of English Language Teaching 12, no. 1 (March 31, 2023): 44–58. http://dx.doi.org/10.15294/elt.v12i1.64069.

Full text
Abstract:
ChatGPT is a product of AI that is currently being widely discussed on Twitter. This research reviews how ChatGPT writes English essays. The research is descriptive qualitative. The analysis shows that we can access ChatGPT at openai.com or chat.openai.com in the browser. If we do not have an account, we register via email or a Google or Microsoft account. After logging in, we enter a question or statement in the conversation column provided, send it, and ChatGPT responds; the answer appears quickly. The researcher asked ChatGPT, "Can you help me in doing my English assignment?", and the chatbot replied, "Of course! I'd be happy to help you with your English assignment. What do you need help with? Do you have a specific question or task that you're working on, or is there a broader topic that you'd like help with? It would be helpful to have some more information so that I can better understand how I can assist you." Based on several tries, ChatGPT can answer questions on various topics and produce English essays, including descriptive texts about Solo and My Family, recount texts about personal experiences and unforgettable moments, resolutions for 2023, and future careers. ChatGPT attends to event order and writing order, including the use of main sentences, explanatory sentences, and a conclusion. It uses both active and passive voice, and it chooses tenses appropriate to the given essay topic. However, judging from the example essays ChatGPT produced, further research is needed to determine whether the resulting essays are grammatically accurate.
APA, Harvard, Vancouver, ISO, and other styles
2

Idham, Arif Rahman, and M. Rizkillah. "ANALISIS KEEFEKTIFAN CHATGPT DALAM PERANCANGAN APLIKASI [An analysis of the effectiveness of ChatGPT in application design]." Jurnal Informatika Teknologi dan Sains (Jinteks) 6, no. 2 (May 25, 2024): 115–21. http://dx.doi.org/10.51401/jinteks.v6i2.4050.

Full text
Abstract:
Based on the studies reviewed, ChatGPT is able to produce an application that meets predefined business requirements. The process runs from designing a Use Case Diagram and creating the database design with the PDM technique to generating program code according to the user's request. However, a shortcoming of those studies lies in the justification of the results obtained, because most research on ChatGPT in IT focuses on analyzing the effectiveness of code generation; few studies examine the SDLC stages in detail using ChatGPT. The research process began with creating a Use Case Diagram, designing the database using the PDM technique, generating program code, and building test scenarios with ChatGPT. The results show that ChatGPT is effective at creating Use Case Diagrams, database designs with the PDM technique, and test scenarios. However, its effectiveness depends on detailed requirement specifications and a complete description of the features of the application to be designed. In program generation, ChatGPT is effective for simple programs such as a login, provided the instructions include the database used and the required program variables. For complex programs, however, ChatGPT is not yet fully effective: although it can produce program code without issue, the programmer must provide instructions systematically.
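The abstract finds ChatGPT effective at generating simple programs such as a login routine. A minimal sketch of that kind of program is shown below; everything in it (the `USERS` table, `verify_login`, the sample credentials) is a hypothetical illustration of the sort of code discussed, not code from the study.

```python
import hashlib

# Hypothetical illustration of a simple login routine of the kind the
# study says ChatGPT generates well. All names are invented for this sketch.
USERS = {
    # username -> SHA-256 hash of the password
    "alice": hashlib.sha256(b"s3cret").hexdigest(),
}

def verify_login(username: str, password: str) -> bool:
    """Return True if the username exists and the password matches."""
    stored = USERS.get(username)
    if stored is None:
        return False
    return hashlib.sha256(password.encode()).hexdigest() == stored
```

As the abstract notes, even for a program this simple the prompt would need to specify the database (here stubbed as a dictionary) and the variables involved.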
APA, Harvard, Vancouver, ISO, and other styles
3

Nagar, Lavisha. "What Is the Impact of ChatGPT on Education?" International Journal of Scientific Research in Engineering and Management 08, no. 04 (April 28, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32175.

Full text
Abstract:
ChatGPT is an artificial intelligence-based chatbot invented by OpenAI. It is a fascinating application, free of cost; anyone can access it simply by logging in with their details. It was launched on November 30, 2022, and is based on the GPT-3 language model, which is trained on a massive dataset of text and code. ChatGPT can generate human-like text in response to a wide range of prompts and questions, such as writing poems, answering questions, and creating stories. Its development began in 2018. The AI is informative and comprehensive; although humans designed ChatGPT, its output capacity exceeds manual work. It generates significant information and gives human-like responses: anyone can provide an input and it generates an output. This study asks how ChatGPT can be used in education and what kinds of issues arose in its initial months (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding). ChatGPT has been used for a variety of purposes, including customer service, education, and entertainment, and to generate creative content such as poems, stories, and scripts. ChatGPT is still under development, but it has the potential to revolutionize the way we interact with computers. The service is available 24x7 and helps with assignments, academic exams, and more.
APA, Harvard, Vancouver, ISO, and other styles
4

Nguyen, Hong Nhung, Duy Nguyen, Luu Phuc Thinh Tran, and Thi Hoang Nguyen Tran. "Exploring English Vocabulary Learning of Vietnamese Secondary School Students with VoiceGPT Assistance." AsiaCALL Online Journal 15, no. 1 (February 28, 2024): 55–70. http://dx.doi.org/10.54855/acoj.241514.

Full text
Abstract:
With the advent of AI chatbots, many teachers’ teaching practices of English as a foreign language have undergone many changes. Many of them have become accustomed to employing ChatGPT to assist their work, bringing many benefits and potential challenges that, to date, have yet to be fully tested in any aspect. Particularly, two notable research gaps involve how Vietnamese secondary school students use VoiceGPT, the Vietnamese version of ChatGPT, to assist them in learning new English words and how they perceive this support. The current case study aimed to address these gaps by employing a quasi-experimental design at Lam Son Secondary School in Ho Chi Minh City with the participation of ten sixth-grade students in two English-intensive classes. In this investigation, the teacher used the Presentation-Practice-Production teaching method to teach vocabulary to her students, who were randomly assigned into two groups with the same number of members in each group, and the data for analysis was collected from their writing samples and semi-structured interviews. The findings indicate that sixth-grade students had different ways of using VoiceGPT to help them learn English words. The participants with VoiceGPT assistance outperformed those without this A.I. support in terms of lexical performance in the writing productions on five topics surveyed. In addition, they expressed favorable attitudes toward VoiceGPT’s benefits, but some concerns were raised about login difficulties, vocabulary range, and long response time.
APA, Harvard, Vancouver, ISO, and other styles
5

Gilson, Aidan, Conrad W. Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. "How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment." JMIR Medical Education 9 (February 8, 2023): e45312. http://dx.doi.org/10.2196/45312.

Full text
Abstract:
Background Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that can generate conversation-style responses to user input. Objective This study aimed to evaluate the performance of ChatGPT on questions within the scope of the United States Medical Licensing Examination Step 1 and Step 2 exams, as well as to analyze responses for user interpretability. Methods We used 2 sets of multiple-choice questions to evaluate ChatGPT’s performance, each with questions pertaining to Step 1 and Step 2. The first set was derived from AMBOSS, a commonly used question bank for medical students, which also provides statistics on question difficulty and the performance on an exam relative to the user base. The second set was the National Board of Medical Examiners (NBME) free 120 questions. ChatGPT’s performance was compared to 2 other large language models, GPT-3 and InstructGPT. The text output of each ChatGPT response was evaluated across 3 qualitative metrics: logical justification of the answer selected, presence of information internal to the question, and presence of information external to the question. Results Of the 4 data sets, AMBOSS-Step1, AMBOSS-Step2, NBME-Free-Step1, and NBME-Free-Step2, ChatGPT achieved accuracies of 44% (44/100), 42% (42/100), 64.4% (56/87), and 57.8% (59/102), respectively. ChatGPT outperformed InstructGPT by 8.15% on average across all data sets, and GPT-3 performed similarly to random chance. The model demonstrated a significant decrease in performance as question difficulty increased (P=.01) within the AMBOSS-Step1 data set. We found that logical justification for ChatGPT’s answer selection was present in 100% of outputs of the NBME data sets. Internal information to the question was present in 96.8% (183/189) of all questions. 
The presence of information external to the question was 44.5% and 27% lower for incorrect answers relative to correct answers on the NBME-Free-Step1 (P<.001) and NBME-Free-Step2 (P=.001) data sets, respectively. Conclusions ChatGPT marks a significant improvement in natural language processing models on the tasks of medical question answering. By performing at a greater than 60% threshold on the NBME-Free-Step-1 data set, we show that the model achieves the equivalent of a passing score for a third-year medical student. Additionally, we highlight ChatGPT’s capacity to provide logic and informational context across the majority of answers. These facts taken together make a compelling case for the potential applications of ChatGPT as an interactive medical education tool to support learning.
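The reported percentages follow directly from the correct/total counts given in the abstract; a quick arithmetic check (counts taken from the abstract, function name invented here):

```python
# Recompute the reported accuracies from the correct/total counts.
scores = {
    "AMBOSS-Step1": (44, 100),
    "AMBOSS-Step2": (42, 100),
    "NBME-Free-Step1": (56, 87),
    "NBME-Free-Step2": (59, 102),
}

def accuracy_pct(correct: int, total: int) -> float:
    """Percentage of correct answers, rounded to one decimal place."""
    return round(100 * correct / total, 1)

for name, (correct, total) in scores.items():
    print(f"{name}: {accuracy_pct(correct, total)}%")
```

This reproduces the 44%, 42%, 64.4%, and 57.8% figures, confirming the NBME-Free-Step1 result clears the roughly 60% passing threshold discussed in the conclusions.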
APA, Harvard, Vancouver, ISO, and other styles
6

Hines, Jasara. "Review of "Writing in the Clouds: Inventing and Composing in Internetworked Writing Spaces by John Logie," Logie, J. (2021). Writing in the clouds: Inventing and composing in internetworked writing spaces. Parlor Press." Communication Design Quarterly 11, no. 3 (September 2023): 80–81. http://dx.doi.org/10.1145/3592367.3617935.

Full text
Abstract:
In the wake of the controversy surrounding the new AI chatbot application, ChatGPT, I wonder how Logie would seek to include this new technology in his work. I ponder this because, throughout the book, Logie presents compelling evidence for why the concepts of invention, composition, and internetworked writing should be embraced and not feared. While some denounce the application and take to social media to disparage its possible negative impact on students, creativity, and composition, I believe Logie would argue that ChatGPT is a powerful tool we can implement to become "composers." He believes that through cloud computing services we are now more apt to collaborate, use, remix, and create rhetorical modes that extend far beyond the formulaic argument; therefore, we are composers. Logie thus frames the composer as someone who is a "prosumer" (Toffler). This composer is media literate and transforms traditional rhetorical canons into multimodal compositions such as memes, Google Docs, and digital collages. His overarching argument, however, is that internetworked writing tools have democratized writing through that same offering of innovative outlets. His book is arranged in a way that walks the reader through this argument.
APA, Harvard, Vancouver, ISO, and other styles
7

Plevris, Vagelis, George Papazafeiropoulos, and Alejandro Jiménez Rios. "Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard." AI 4, no. 4 (October 24, 2023): 949–69. http://dx.doi.org/10.3390/ai4040048.

Full text
Abstract:
In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of “Original” problems that cannot be found online, while Set B includes “Published” problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots’ answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. 
This is likely due to Bard’s direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities.
APA, Harvard, Vancouver, ISO, and other styles
8

Spennemann, Dirk H. R. "ChatGPT and the Generation of Digitally Born “Knowledge”: How Does a Generative AI Language Model Interpret Cultural Heritage Values?" Knowledge 3, no. 3 (September 18, 2023): 480–512. http://dx.doi.org/10.3390/knowledge3030032.

Full text
Abstract:
The public release of ChatGPT, a generative artificial intelligence language model, caused widespread public interest in its abilities but also concern about the implications of the application for academia, depending on whether it was deemed benevolent (e.g., supporting analysis and simplification of tasks) or malevolent (e.g., assignment writing and academic misconduct). While ChatGPT has been shown to provide answers of sufficient quality to pass some university exams, its capacity to write essays that require an exploration of value concepts is unknown. This paper presents the results of a study where ChatGPT-4 (released May 2023) was tasked with writing a 1500-word essay to discuss the nature of values used in the assessment of cultural heritage significance. Based on an analysis of 36 iterations, ChatGPT wrote essays of limited length, with about 50% of the stipulated word count, that were primarily descriptive and without any depth or complexity. The concepts, which are often flawed and suffer from inverted logic, are presented in an arbitrary sequence with limited coherence and without any defined line of argument. Given that it is a generative language model, ChatGPT often splits concepts and uses one or more words to develop tangential arguments. While ChatGPT provides references as tasked, many are fictitious, albeit with plausible authors and titles. At present, ChatGPT has the ability to critique its own work but seems unable to incorporate that critique in a meaningful way to improve a previous draft. Setting aside conceptual flaws such as inverted logic, several of the essays could possibly pass as a junior high school assignment but fall short of what would be expected in senior school, let alone at a college or university level.
APA, Harvard, Vancouver, ISO, and other styles
9

Trepczyński, Marcin. "Religija, teologija i filozofske vještine automatiziranih programa za čavrljanje (chatbotova) pogonjenima velikim jezičnim modelima (LLM) [Religion, theology, and the philosophical skills of chatbots powered by large language models (LLMs)]." Disputatio philosophica 25, no. 1 (February 7, 2024): 19–36. http://dx.doi.org/10.32701/dp.25.1.2.

Full text
Abstract:
In this study, I demonstrate how religion and theology can be useful for testing the performance of LLMs or LLM–powered chatbots, focusing on the measurement of philosophical skills. I present the results of testing four selected chatbots: ChatGPT, Bing, Bard, and Llama2. I utilize three examples of possible sources of inspiration from religion or theology: 1) the theory of the four senses of Scripture; 2) abstract theological statements; 3) an abstract logic formula derived from a religious text, to show that these sources are good materials for tasks that can effectively measure philosophical skills such as interpretation of a given fragment, creative deductive reasoning, and identification of ontological limitations. This approach enabled sensitive testing, revealing differences among the performances of the four chatbots. I also provide an example showing how we can create a benchmark to rate and compare such skills, using the assessment criteria and simplified scales to rate each chatbot with respect to each criterion.
APA, Harvard, Vancouver, ISO, and other styles
10

McKee, Forrest, and David Noever. "The Evolving Landscape of Cybersecurity: Red Teams, Large Language Models, and the Emergence of New AI Attack Surfaces." International Journal on Cryptography and Information Security 13, no. 1 (March 30, 2023): 1–34. http://dx.doi.org/10.5121/ijcis.2023.13101.

Full text
Abstract:
This study explores cybersecurity questions using a question-and-answer format with the advanced ChatGPT model from OpenAI. Unlike previous chatbots, ChatGPT demonstrates an enhanced understanding of complex coding questions. We present thirteen coding tasks aligned with various stages of the MITRE ATT&CK framework, covering areas such as credential access and defense evasion. The experimental prompts generate keyloggers, logic bombs, obfuscated worms, and ransomware with payment fulfillment, showcasing an impressive range of functionality, including self-replication, self-modification, and evasion. Despite being a language-only model, a notable feature of ChatGPT showcases its coding approaches to produce images with obfuscated or embedded executable programming steps or links.
APA, Harvard, Vancouver, ISO, and other styles
11

Zashikhina, I. M. "Scientific Article Writing: Will ChatGPT Help?" Vysshee Obrazovanie v Rossii = Higher Education in Russia 32, no. 8-9 (September 13, 2023): 24–47. http://dx.doi.org/10.31992/0869-3617-2023-32-8-9-24-47.

Full text
Abstract:
The emergence of artificial intelligence language services has raised hopes of facilitating publication activity. Members of the academic community have wondered whether chatbots could optimize the process of scientific writing. ChatGPT, a language model capable of, among other things, generating scholarly texts, has received particular attention. Cases of academic papers written using ChatGPT have led to a number of publications analyzing the pros and cons of using this neural network. In this paper, we investigate the possibility of using ChatGPT to write an introduction to a scientific paper on a topical issue of Arctic governance. A set of queries to ChatGPT, based on the logic of the IMRAD publication format commonly accepted in academia, has been developed. This format is characterized by structural and functional elements, which served as a logical basis for the queries. The responses received from ChatGPT were analyzed for compliance with the requirements for a scientific article under the IMRAD publication format. The analysis showed that ChatGPT is not able to meet the requirements for publishing a scientific article in modern scientific publication discourse.
APA, Harvard, Vancouver, ISO, and other styles
12

Liao, Wenxiong, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu, Yiyang Zhang, Xiaoke Huang, et al. "Differentiating ChatGPT-Generated and Human-Written Medical Texts: Quantitative Study." JMIR Medical Education 9 (December 28, 2023): e48904. http://dx.doi.org/10.2196/48904.

Full text
Abstract:
Background Large language models, such as ChatGPT, are capable of generating grammatically perfect and human-like text content, and a large number of ChatGPT-generated texts have appeared on the internet. However, medical texts, such as clinical notes and diagnoses, require rigorous validation, and erroneous medical content generated by ChatGPT could potentially lead to disinformation that poses significant harm to health care and the general public. Objective This study is among the first on responsible artificial intelligence–generated content in medicine. We focus on analyzing the differences between medical texts written by human experts and those generated by ChatGPT and designing machine learning workflows to effectively detect and differentiate medical texts generated by ChatGPT. Methods We first constructed a suite of data sets containing medical texts written by human experts and generated by ChatGPT. We analyzed the linguistic features of these 2 types of content and uncovered differences in vocabulary, parts-of-speech, dependency, sentiment, perplexity, and other aspects. Finally, we designed and implemented machine learning methods to detect medical text generated by ChatGPT. The data and code used in this paper are published on GitHub. Results Medical texts written by humans were more concrete, more diverse, and typically contained more useful information, while medical texts generated by ChatGPT paid more attention to fluency and logic and usually expressed general terminologies rather than effective information specific to the context of the problem. A bidirectional encoder representations from transformers–based model effectively detected medical texts generated by ChatGPT, and the F1 score exceeded 95%. Conclusions Although text generated by ChatGPT is grammatically perfect and human-like, the linguistic characteristics of generated medical texts were different from those written by human experts. 
Medical text generated by ChatGPT could be effectively detected by the proposed machine learning algorithms. This study provides a pathway toward trustworthy and accountable use of large language models in medicine.
APA, Harvard, Vancouver, ISO, and other styles
13

Hammoda, Basel. "ChatGPT for Founding Teams: An Entrepreneurial Pedagogical Innovation." International Journal of Technology in Education 7, no. 1 (February 4, 2024): 154–73. http://dx.doi.org/10.46328/ijte.530.

Full text
Abstract:
ChatGPT is taking the world and the education sector by storm. Many educators are still hesitant to integrate it within their curricula, owing to the limited practical and theoretical guidance on its applications, despite early conceptual studies advocating its potential benefits. This pedagogical innovation applied an effectual logic to implement ChatGPT for a founding-team activity within an entrepreneurship course. Composing a founding team is a daunting task in venture creation, with long-lasting consequences, and no ideal approach for doing it has yet been proposed in the literature or observed in practice. In this pedagogical innovation, three student teams with varying business ideas prompted ChatGPT using different keywords and levels of detail to get recommendations on essential team members, their roles, and equity split. Each team presented its findings, and then the classroom engaged in a collective discussion. The students were surveyed afterwards to assess the reception and effectiveness of the intervention. Their feedback showed overwhelming favor toward ChatGPT as a convenient and resourceful learning tool. The study establishes the potential value of ChatGPT as a heutagogical tool that supports student-centric entrepreneurial learning across educational institutions and an entrepreneurship ecosystem that extends to the venture creation process.
APA, Harvard, Vancouver, ISO, and other styles
14

McKee, Forrest, and David Noever. "Chatbots in a Botnet World." International Journal on Cybernetics & Informatics 12, no. 2 (March 11, 2023): 77–95. http://dx.doi.org/10.5121/ijci.2023.120207.

Full text
Abstract:
Question-and-answer formats provide a novel experimental platform for investigating cybersecurity questions. Unlike previous chatbots, the latest ChatGPT model from OpenAI supports an advanced understanding of complex coding questions. The research demonstrates thirteen coding tasks that generally qualify as stages in the MITRE ATT&CK framework, ranging from credential access to defense evasion. With varying success, the experimental prompts generate examples of keyloggers, logic bombs, obfuscated worms, and payment-fulfilled ransomware. The empirical results illustrate cases that support the broad gain of functionality, including self-replication and self-modification, evasion, and strategic understanding of complex cybersecurity goals. One surprising feature of ChatGPT as a language-only model centers on its ability to spawn coding approaches that yield images that obfuscate or embed executable programming steps or links.
APA, Harvard, Vancouver, ISO, and other styles
15

Kim, Hwa-seon. "Ask ChatGPT: Kim Dae-sik and ChatGPT, translated by Choo Seo-yeon and others, “The Future of Humanity Ask ChatGPT” (East Asia, 2023)." Korean Association for Literacy 14, no. 3 (June 30, 2023): 233–50. http://dx.doi.org/10.37736/kjlr.2023.06.14.3.06.233.

Full text
Abstract:
This review examines "The Future of Humanity Ask ChatGPT," in which ChatGPT, a state-of-the-art natural language processing (NLP) model developed by OpenAI, and Professor Kim Dae-sik, a brain scientist, exchanged questions and answers on topics such as human relations, love and happiness, risks facing mankind, God's existence, and death. First, we asked ChatGPT, Google Bard, and Microsoft's search engine "Newbing" to write a book review of "The Future of Mankind Ask ChatGPT," and then analyzed the contents and composition of each answer. The book review written by ChatGPT was presented in the form "Overall Introduction - Book Composition Method and Theme - Limitations and Significance of ChatGPT - General Review"; the flow of the article was generally smooth, but it was difficult to find a fresh perspective from the writer. Google's Bard article confirmed that the text-generation method of artificial intelligence chatbots focuses on universal summaries and simple, clear explanations rather than individuality and originality. Microsoft's search engine "Newbing," based on the GPT-4 model, specifically cited the contents of the book to increase reliability and revealed its sources in consideration of the issue of intellectual property infringement. Looking at these conversation scenes between humans and machines, I considered the significance of the conversation. For "ask-answer" to be a process of generating meaning, curiosity about not only oneself but also others and the world must lie behind it. In addition, to continue the conversation, a literacy ability to discover meaning is required, and literacy presupposes the ability to read between the lines and grasp contextual knowledge. This ultimately requires the imagination to fill gaps and an overall perspective that grasps the relationship between part and whole, and part and part.
The process of finding meaning is the ability to think about what lies beyond and outside and to grasp the context - to understand much from hearing one thing. The sensibility that knows how to capture nuances belongs to the realm of humanism, which is essential for humans to sense and judge an object or phenomenon. Therefore, Professor Kim Dae-sik's conclusion in "Epilogue II" of "The Future of Humanity Ask ChatGPT" has great implications. The eye that distinguishes the voices of individuals, which carry nuances subtly different from those that "stochastic parrots" imitate plausibly, is based on humanities knowledge that cannot easily be supplanted in the name of efficiency, probability, and practicality. To exist as thinking human beings, we realize again that we must ask 'why' rather than 'what' and rely on the logic of coincidence and the specific context of life, not on the axis of stochastic thinking. And at its core there is a humanity that repeatedly asks and answers with a long-term perspective.
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Fangjun, David C. Hogg, and Anthony G. Cohn. "Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18500–18507. http://dx.doi.org/10.1609/aaai.v38i17.29811.

Full text
Abstract:
Artificial intelligence (AI) has made remarkable progress across various domains, with large language models like ChatGPT gaining substantial attention for their human-like text-generation capabilities. Despite these achievements, improving spatial reasoning remains a significant challenge for these models. Benchmarks like StepGame evaluate AI spatial reasoning, where ChatGPT has shown unsatisfactory performance. However, the presence of template errors in the benchmark has an impact on the evaluation results. Thus there is potential for ChatGPT to perform better if these template errors are addressed, leading to more accurate assessments of its spatial reasoning capabilities. In this study, we refine the StepGame benchmark, providing a more accurate dataset for model evaluation. We analyze GPT’s spatial reasoning performance on the rectified benchmark, identifying proficiency in mapping natural language text to spatial relations but limitations in multi-hop reasoning. We provide a flawless solution to the benchmark by combining template-to-relation mapping with logic-based reasoning. This combination demonstrates proficiency in performing qualitative reasoning on StepGame without encountering any errors. We then address the limitations of GPT models in spatial reasoning. To improve spatial reasoning, we deploy Chain-of-Thought and Tree-of-thoughts prompting strategies, offering insights into GPT’s cognitive process. Our investigation not only sheds light on model deficiencies but also proposes enhancements, contributing to the advancement of AI with more robust spatial reasoning capabilities.
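As a rough illustration of the "template-to-relation mapping with logic-based reasoning" idea the abstract describes (a toy sketch under our own assumptions, not the authors' code): each spatial phrase can be mapped to a coordinate offset, and a multi-hop chain is then resolved by composing the offsets.

```python
# Toy mapping from spatial-relation phrases to (dx, dy) grid offsets.
# Phrase names and offsets are illustrative, not taken from StepGame itself.
RELATION_OFFSETS = {
    "left of": (-1, 0),
    "right of": (1, 0),
    "above": (0, 1),
    "below": (0, -1),
}

def compose(relations):
    """Resolve a multi-hop chain (A rel1 B, B rel2 C, ...) by summing offsets.

    Returns the net (dx, dy) position of the first entity relative to the last.
    """
    dx = sum(RELATION_OFFSETS[r][0] for r in relations)
    dy = sum(RELATION_OFFSETS[r][1] for r in relations)
    return (dx, dy)
```

For example, "A is left of B, B is above C" composes to (-1, 1), i.e., A is up and to the left of C; the multi-hop step is exactly where the abstract reports GPT models struggle.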
17

Di Bello, Fabio. "Unraveling the Enigma: how can ChatGPT perform so well with language understanding, reasoning, and knowledge processing without having real knowledge or logic?" AboutOpen 10 (June 20, 2023): 88–96. http://dx.doi.org/10.33393/ao.2023.2618.

Abstract:
Artificial Intelligence (AI) has made significant progress in various domains, but the quest for machines to truly understand natural language has been challenging. Traditional mainstream approaches to AI, while valuable, often struggled to achieve human-level language comprehension. However, the emergence of neural networks and the subsequent adoption of the downstream approach have revolutionized the field, as demonstrated by the powerful and successful language model, ChatGPT. The deep learning algorithms utilized in large language models (LLMs) differ significantly from those employed in traditional neural networks. This article endeavors to provide a valuable and insightful exploration of the functionality and performance of generative AI. It aims to accomplish this by offering a comprehensive, yet simplified, analysis of the underlying mathematical models used by systems such as ChatGPT. The primary objective is to explore the diverse performance capabilities of these systems across some important domains such as clinical practice. The article also sheds light on the existing gaps and limitations that impact the quality and reliability of generated answers. Furthermore, it delves into potential strategies aimed at improving the reliability and cognitive aspects of generative AI systems.
18

Rajabu, Neema. "Interactive Health Information Chatbot for Non-Communicable Diseases in Swahili Language." Journal of Applied Science, Information and Computing 1, no. 2 (December 31, 2020): 32–38. http://dx.doi.org/10.59568/jasic-2020-1-2-05.

Abstract:
Current health awareness campaign strategies, efforts, and methods on NCDs have not produced improved outcomes in reducing the burden, spread, and deaths linked to NCDs in Tanzania. To support, complement, and improve health literacy on NCDs and promote good healthcare, we designed and developed an interactive health Chatbot ‘afyaBot’ in the Swahili language that can respond to users' requests concerning NCD symptoms, prevention, management, and cure. The Chatbot was designed using Botsociety, a special tool for designing Chatbot prototypes; the BotMan framework for coding the Chatbot logic; and the Google Dialogflow platform, which offers high-level Natural Language Processing capabilities. The Chatbot was integrated into the Facebook Messenger platform, which offers free public API access that eliminates the cost of the internet for consumers. The Chatbot was tested for accuracy, usability, user experience, responsiveness, reliability, maintainability, and portability. The results of the implementation were satisfactory and provide insights useful to stakeholders in the health sector. The interactive Chatbot was designed to provide real-time information on NCDs, create awareness, and educate users on preventive, control, and treatment measures for NCDs. It will likewise assist healthcare providers in collecting accurate, timely health data for monitoring, planning, and research purposes.
19

Chinenye, Duru Juliet, Austine Ekekwe Duroha, and Nkwocha Mcdonald. "DEVELOPMENT OF THE NATURAL LANGUAGE PROCESSING-BASED CHATBOT FOR SHOPPRITE SHOPPING MALL." International Journal of Engineering Applied Sciences and Technology 7, no. 6 (October 1, 2022): 372–81. http://dx.doi.org/10.33564/ijeast.2022.v07i06.044.

Abstract:
Software-as-a-service (SaaS) solutions are frequently used to create chatbots, giving users the option to interact with them via desktop computers, mobile phones, and tablets. To increase customer accessibility, a chatbot is being developed for customers. The user can choose their own convenient language because the device offers text or audio support. This project's goal was to develop a chatbot for the Shopprite Shopping Mall. The chatbot's goal is to converse with the client in a clever, accurate, and timely manner using natural language processing. Customers can use this feature to communicate with the bot and ask questions about specific things they want to buy and the price before visiting the mall. Customers can access the chatbot from portable mobile devices or laptops at any time, making it possible to offer a round-the-clock online service. The discomfort customers feel when they visit the Shopprite shopping mall to look for things only to discover that they are either unavailable or out of stock will be lessened by the results of this study. The following methodologies were used to carry out the work: React.js to create the chatbot's front-end and admin login page; Spacy and React.ai to train the chatbot's NLP section and E-commerce datasets for the chatbot; and MySQL to manage and create the data structure that will house the e-commerce datasets. It is recommended that new capabilities be added to the chatbot, such as the delivery of purchased things to a customer's home, more training phrases to give the chatbot a better social outlook, automated item addition to the chatbot database, and even adding a barcode reader option. Testing the chatbot using a bigger dataset would also be helpful.
20

Wang, Youwei. "Discover two Neural Machine Translation model variables' effects on Chatbot's performance." Highlights in Science, Engineering and Technology 41 (March 30, 2023): 17–22. http://dx.doi.org/10.54097/hset.v41i.6737.

Abstract:
Chatbots are sophisticated conversational computer systems that emulate human conversation to offer automatic online advice and help. As the advantages of chatbots grew, a variety of sectors began to use them extensively to give customers virtual support. Chatbots take advantage of methods and algorithms from two areas of Artificial Intelligence: Machine Learning and Natural Language Processing. There are still several obstacles and restrictions to their use. In order to discover the effects of two NMT model variables on chatbot performance, this paper conducts several experiments on a deep neural network chatbot model. Two straightforward and useful kinds of attentional mechanisms are used in this chatbot model: a local technique that only considers a small subset of source words at a time, as opposed to a global approach that always pays attention to all source words. This paper conducts experiments to examine how different model variables affect chatbot performance, using a question template with eight general questions. The experimental results show that increasing the number of iterations and enlarging the dataset can improve the vocabulary and logic of the chatbot's dialog and achieve better performance.
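The global versus local attention distinction mentioned above can be sketched in a few lines. This is a simplification: Luong-style local attention additionally predicts the window center and applies a Gaussian weighting, both of which are omitted here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def global_attention(scores):
    """Global attention: distribute weight over every source position."""
    return softmax(scores)

def local_attention(scores, center, window):
    """Local attention: distribute weight only over a window around a
    chosen source position; positions outside the window get zero."""
    lo = max(0, center - window)
    hi = min(len(scores), center + window + 1)
    inner = softmax(scores[lo:hi])
    return [0.0] * lo + inner + [0.0] * (len(scores) - hi)
```

Both variants return a probability distribution over source positions; the local variant simply forces most of that distribution to zero, which is what makes it cheaper on long inputs.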
21

Mohanty, Rajinikanth. "Medication Recommendation System Using AI." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 704–8. http://dx.doi.org/10.22214/ijraset.2024.61642.

Abstract:
In an era marked by the convergence of personal and technical facets of life, the need for a versatile and intelligent companion becomes increasingly apparent. This project introduces the development of a novel Chatbot, aptly named the "Personal and Technical Companion," designed to seamlessly integrate into users' lives by offering personalized support and technical assistance. The Chatbot employs state-of-the-art Natural Language Processing (NLP) techniques to engage users in natural language conversations, providing a unique blend of personal and technical guidance. The system architecture encompasses a user-friendly interface, a robust NLP engine for intent recognition, a dynamic backend server handling logic and personalization, and integration with external services to enrich the user experience. Simultaneously, it serves as a technical companion by offering code assistance, troubleshooting guidance, and recommending relevant learning resources. Implementation leverages cutting-edge technologies, including Python for backend logic, Flask for the server, TensorFlow for NLP processing, and HTML/CSS for the user interface.
22

Silva, Geovana Ramos Sousa, Genaína Nunes Rodrigues, and Edna Dias Canedo. "A Modeling Strategy for the Verification of Context-Oriented Chatbot Conversational Flows via Model Checking." JUCS - Journal of Universal Computer Science 29, no. 7 (July 28, 2023): 805–35. http://dx.doi.org/10.3897/jucs.91311.

Abstract:
Verification of chatbot conversational flows is paramount to capturing and understanding chatbot behavior and predicting problems that would cause the entire flow to be restructured from scratch. The literature on chatbot testing is scarce, and the few works that approach this subject do not focus on verifying the communication sequences in tandem with the functional requirements of the conversational flow itself. However, covering all possible conversational flows of context-oriented chatbots through testing is not feasible in practice given the many ramifications that should be covered by test cases. Alternatively, model checking provides a model-based verification in a mathematically precise and unambiguous manner. Moreover, it can anticipate design flaws early in the software design phase that could lead to incompleteness, ambiguities, and inconsistencies. We postulate that finding design flaws in chatbot conversational flows via model checking early in the design phase may overcome quite a few verification gaps that are not feasible via current testing techniques for context-oriented chatbot conversational flows. Therefore, in this work, we propose a modeling strategy to design and verify chatbot conversational flows via the Uppaal model checking tool. Our strategy is materialized in the form of templates and a mapping of chatbot elements into Uppaal elements. To evaluate this strategy, we invited a few chatbot developers with different levels of expertise. The feedback from the participants revealed that the strategy is a great ally in the phases of conversational prototyping and design, as well as helping to refine requirements and revealing branching logic that can be reused in the implementation phase.
23

Omoregbe, Nicholas A. I., Israel O. Ndaman, Sanjay Misra, Olusola O. Abayomi-Alli, and Robertas Damaševičius. "Text Messaging-Based Medical Diagnosis Using Natural Language Processing and Fuzzy Logic." Journal of Healthcare Engineering 2020 (September 29, 2020): 1–14. http://dx.doi.org/10.1155/2020/8839524.

Abstract:
The use of natural language processing (NLP) methods and their application to developing conversational systems for health diagnosis increases patients’ access to medical knowledge. In this study, a chatbot service was developed for the Covenant University Doctor (CUDoctor) telehealth system based on fuzzy logic rules and fuzzy inference. The service focuses on assessing the symptoms of tropical diseases in Nigeria. Telegram Bot Application Programming Interface (API) was used to create the interconnection between the chatbot and the system, while Twilio API was used for interconnectivity between the system and a short messaging service (SMS) subscriber. The service uses the knowledge base consisting of known facts on diseases and symptoms acquired from medical ontologies. A fuzzy support vector machine (SVM) is used to effectively predict the disease based on the symptoms inputted. The inputs of the users are recognized by NLP and are forwarded to the CUDoctor for decision support. Finally, a notification message displaying the end of the diagnosis process is sent to the user. The result is a medical diagnosis system which provides a personalized diagnosis utilizing self-input from users to effectively diagnose diseases. The usability of the developed system was evaluated using the system usability scale (SUS), yielding a mean SUS score of 80.4, which indicates the overall positive evaluation.
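The SUS result reported above follows the standard scoring rule: for ten Likert items (1–5), each odd item contributes its score minus 1, each even item contributes 5 minus its score, and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale score from ten Likert responses (1-5).
    Odd-numbered items are scored (item - 1), even-numbered items
    (5 - item); the total is scaled by 2.5 to the range 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A mean score of 80.4, as reported for CUDoctor, sits well above the commonly cited SUS benchmark of 68 for average usability.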
24

Alam, Cecep Nurul, and Imam Firdaus. "Implementation of Finite State Automata on e-Knows Telegram Chatbot." CoreID Journal 1, no. 1 (March 31, 2023): 33–41. http://dx.doi.org/10.60005/coreid.v1i1.3.

Abstract:
The State Islamic University of Sunan Gunung Djati Bandung has an online learning system called e-Knows. So far, when users encounter a problem, they must contact the admin manually. The problems are diverse, and several issues can have a personal impact. Automata language theory provides the basic logic for mapping the e-Knows Telegram chatbot system. The mapping is done by dividing each part of the system using finite state automata to facilitate its completion.
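Mapping a chatbot with finite state automata, as described above, amounts to a transition table over (state, input) pairs plus a set of accepting states. The states and inputs below are illustrative, not the actual e-Knows design.

```python
# Hypothetical DFA for a dialog flow: states and input symbols are
# illustrative only.
TRANSITIONS = {
    ("start", "greet"): "menu",
    ("menu", "ask_course"): "course_info",
    ("menu", "ask_account"): "account_help",
    ("course_info", "done"): "end",
    ("account_help", "done"): "end",
}
ACCEPTING = {"end"}

def run(inputs, state="start"):
    """Run the DFA over a sequence of input symbols; reject on any
    undefined transition."""
    for symbol in inputs:
        state = TRANSITIONS.get((state, symbol))
        if state is None:
            return False
    return state in ACCEPTING

print(run(["greet", "ask_course", "done"]))  # True
```

Expressing the flow this way makes every reachable dialog path enumerable, which is what makes the completion of each sub-system easy to verify.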
25

Perski, Olga, David Crane, Emma Beard, and Jamie Brown. "Does the addition of a supportive chatbot promote user engagement with a smoking cessation app? An experimental study." DIGITAL HEALTH 5 (January 2019): 205520761988067. http://dx.doi.org/10.1177/2055207619880676.

Abstract:
Objective The objective of this study was to assess whether a version of the Smoke Free app with a supportive chatbot powered by artificial intelligence (versus a version without the chatbot) led to increased engagement and short-term quit success. Methods Daily or non-daily smokers aged ≥18 years who purchased the ‘pro’ version of the app and set a quit date were randomly assigned (unequal allocation) to receive the app with or without the chatbot. The outcomes were engagement (i.e. total number of logins over the study period) and self-reported abstinence at a one-month follow-up. Unadjusted and adjusted negative binomial and logistic regression models were fitted to estimate incidence rate ratios (IRRs) and odds ratios (ORs) for the associations of interest. Results A total of 57,214 smokers were included (intervention: 9.3% (5,339); control: 90.7% (51,875)). The app with the chatbot compared with the standard version led to a 101% increase in engagement (IRRadj = 2.01, 95% confidence interval (CI) = 1.92–2.11, p < .001). The one-month follow-up rate was 10.6% (intervention: 19.9% (1,061/5,339); control: 9.7% (5,050/51,875)). Smokers allocated to the intervention had greater odds of quit success (missing equals smoking: 844/5,339 vs. 3,704/51,875, ORadj = 2.38, 95% CI = 2.19–2.58, p < .001; follow-up only: 844/1,061 vs. 3,704/5,050, ORadj = 1.36, 95% CI = 1.16–1.61, p < .001). Conclusion The addition of a supportive chatbot to a popular smoking cessation app more than doubled user engagement. In view of very low follow-up rates, there is low quality evidence that the addition also increased self-reported smoking cessation.
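The unadjusted odds ratio can be recomputed from the missing-equals-smoking counts reported above; note that the 2.38 in the abstract is the adjusted estimate, so the raw figure differs slightly.

```python
def odds_ratio(a, n1, b, n2):
    """Unadjusted odds ratio for a/n1 events versus b/n2 events."""
    return (a / (n1 - a)) / (b / (n2 - b))

# Quit success, missing equals smoking: 844/5,339 vs 3,704/51,875
or_ = odds_ratio(844, 5339, 3704, 51875)
print(round(or_, 2))  # 2.44 (the paper's adjusted OR is 2.38)
```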
26

Wiangsamut, Samruan, Phatthanaphong Chomphuwiset, and Suchart Khummanee. "Chatting with Plants (Orchids) in Automated Smart Farming using IoT, Fuzzy Logic and Chatbot." Advances in Science, Technology and Engineering Systems Journal 4, no. 5 (2019): 163–73. http://dx.doi.org/10.25046/aj040522.

27

NURHASANAH, Youllia Indrawaty, Mira Musrini BARMAWI, and Ramsza PRAKARSA. "Implementation of Mean of Maximum on Cigarette Smoke Control in a Room." Electrotehnica, Electronica, Automatica 70, no. 3 (September 15, 2022): 59–68. http://dx.doi.org/10.46904/eea.22.70.3.1108006.

Abstract:
Home automation is an IoT technology that enables communication between devices. This technology requires a method for the decision-making process, and the fuzzy logic method is a solution to this problem. Fuzzy logic is a method that replicates and applies human knowledge to control a system. A fuzzy logic algorithm has three main processes: fuzzification, inference, and defuzzification. The defuzzification process requires a method to obtain crisp output values; one of the many methods is Mean of Maximum (MoM). MoM is an algorithm that calculates the average of the fuzzy conclusions or outputs that have the highest degree. In this research, we apply the MoM method in the defuzzification process to build a cigarette-smoke control system in a room using an exhaust fan. A Wemos D1 mini is used as the controller and processor in the system, and a Telegram chatbot serves as the communication medium between users and the system. There are 27 rules in this system with 100% functionality. This research topic is very useful for keeping the air in the room clean and fresh and avoiding the dangers of cigarette smoke that can damage health.
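The Mean of Maximum step described above averages the output values whose membership degree is maximal. A minimal sketch, with illustrative fan-speed values rather than the paper's actual rule outputs:

```python
def mean_of_maximum(xs, memberships):
    """MoM defuzzification: average the candidate output values whose
    aggregated membership degree equals the maximum degree."""
    peak = max(memberships)
    maximal = [x for x, m in zip(xs, memberships) if m == peak]
    return sum(maximal) / len(maximal)

xs = [0, 25, 50, 75, 100]        # candidate fan speeds (%), illustrative
mu = [0.1, 0.4, 0.9, 0.9, 0.2]   # aggregated output memberships
print(mean_of_maximum(xs, mu))   # 62.5 (mean of 50 and 75)
```

Unlike centroid defuzzification, MoM ignores all but the most strongly supported outputs, which makes it cheap to compute on a microcontroller.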
28

Aris, Aris Afriyanto, and Yohanes Eka Wibawa. "Development of Chatbot Services for Ordering Media Support Using Fuzzy Logic Algorithm: Case Study of PT. Pemuda Cari Cuan (Mangkokku)." JEECS (Journal of Electrical Engineering and Computer Sciences) 9, no. 1 (June 30, 2024): 9–18. http://dx.doi.org/10.54732/jeecs.v9i1.2.

Abstract:
The importance of customer service in the modern business world cannot be ignored. Providing customers with good service can increase their satisfaction and strengthen the relationship between the company and its customers. In the digital era, customer service increasingly shifts to online platforms, such as social media and instant messaging applications. However, providing efficient and responsive services on these platforms can take time and effort. At Mangkokku Restaurant, using social media platforms for promotions, discounts, and other information has become part of the business strategy. However, many customers struggle to understand the context of the information in Mangkokku's social media posts. In addition, customers at dine-in outlets often repeat questions about promotions and discounts to the staff at the outlet, especially when the outlet is busy and the staff struggle to provide excellent and fast service. Therefore, research on chatbot services is considered a solution to overcome limitations in communication between staff and customers. With technological advances, chatbots are expected to provide positive benefits. This research uses a fuzzy logic algorithm to help the chatbot find the response expected by customers. Apart from that, interviews with the marketing division, customer service, and outlet staff were conducted to collect data on questions and information that needed to be conveyed to customers. In this way, customers at Mangkokku Restaurant are expected to be able to quickly and efficiently ask about menus, promos, discounts, or other information via this chatbot service.
29

Bockhorst, Joseph, Devin Conathan, and Glenn M. Fung. "Probabilistic-Logic Bots for Efficient Evaluation of Business Rules Using Conversational Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9422–27. http://dx.doi.org/10.1609/aaai.v33i01.33019422.

Abstract:
We present an approach for designing conversational interfaces (chatbots) that users interact with to determine whether or not a business rule applies in a context possessing uncertainty (from the point of view of the chatbot) as to the value of input facts. Our approach relies on Bayesian network models that bring together a business rule’s logical, deterministic aspects with its probabilistic components in a common framework. Our probabilistic-logic bots (PL-bots) evaluate business rules by iteratively prompting users to provide the values of unknown facts. The order in which facts are solicited is dynamic, depends on known facts, and is chosen using mutual information as a heuristic so as to minimize the number of interactions with the user. We have created a web-based content creation and editing tool that quickly enables subject matter experts to create and validate PL-bots with minimal training and without requiring a deep understanding of logic or probability. To date, domain experts at a well-known insurance company have successfully created and deployed over 80 PL-bots to help insurance agents determine customer eligibility for policy discounts and endorsements.
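The mutual-information heuristic mentioned above scores each unknown fact by how much knowing its value would tell the bot about the rule outcome. A minimal sketch over a discrete joint distribution; the distributions themselves are illustrative, not taken from the paper:

```python
import math

def mutual_information(joint):
    """I(F;R) in bits from a joint distribution {(f, r): p} over a
    fact value f and a rule outcome r."""
    pf, pr = {}, {}
    for (f, r), p in joint.items():
        pf[f] = pf.get(f, 0.0) + p
        pr[r] = pr.get(r, 0.0) + p
    mi = 0.0
    for (f, r), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pf[f] * pr[r]))
    return mi

# A fact that fully determines the outcome carries 1 bit ...
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))
# ... while an independent fact carries 0 bits.
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))
```

Asking for the highest-scoring fact first is a greedy way to minimize the expected number of questions before the rule's outcome is determined.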
30

Shimpi, Vaishnavi. "Build a Job Application with Django." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 2099–102. http://dx.doi.org/10.22214/ijraset.2023.56967.

Abstract:
A job portal website is an essential tool for job seekers and employers alike. In today's digital age, having an online presence is more important than ever. Developing a job portal website using Python-Django can provide a robust platform to connect job seekers with potential employers. Python-Django is a powerful and efficient web development framework that can be used to create a feature-rich job portal website. This website can provide job seekers with a user-friendly interface to search, apply for, and manage job applications. Employers can benefit from the website's advanced features, such as filtering, sorting, and applicant tracking. Among the features: user details are protected using an encryption technique; a Random Forest Regressor algorithm helps to maximize the placement probability; and automated message services and a chatbot are implemented for users. Users have to register themselves, and after login, jobs are displayed to them on the basis of their search keywords.
31

Le, Nhat, A. B. Siddique, Fuad Jamour, Samet Oymak, and Vagelis Hristidis. "Generating Predictable and Adaptive Dialog Policies in Single- and Multi-domain Goal-oriented Dialog Systems." International Journal of Semantic Computing 15, no. 04 (December 2021): 419–39. http://dx.doi.org/10.1142/s1793351x21400109.

Abstract:
Most existing commercial goal-oriented chatbots are diagram-based; i.e. they follow a rigid dialog flow to fill the slot values needed to achieve a user’s goal. Diagram-based chatbots are predictable, thus their adoption in commercial settings; however, their lack of flexibility may cause many users to leave the conversation before achieving their goal. On the other hand, state-of-the-art research chatbots use Reinforcement Learning (RL) to generate flexible dialog policies. However, such chatbots can be unpredictable, may violate the intended business constraints, and require large training datasets to produce a mature policy. We propose a framework that achieves a middle ground between the diagram-based and RL-based chatbots: we constrain the space of possible chatbot responses using a novel structure, the chatbot dependency graph, and use RL to dynamically select the best valid responses. Dependency graphs are directed graphs that conveniently express a chatbot’s logic by defining the dependencies among slots: all valid dialog flows are encapsulated in one dependency graph. Our experiments in both single-domain and multi-domain settings show that our framework quickly adapts to user characteristics and achieves up to 23.77% improved success rate compared to a state-of-the-art RL model.
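The dependency-graph idea can be sketched directly: a slot is a valid next prompt only when every slot it depends on has been filled, and the RL policy then chooses among the valid options. The slot names below are illustrative, not from the paper.

```python
# Hypothetical slot dependency graph for a booking chatbot: each slot
# lists the slots that must be filled before it may be requested.
DEPENDS_ON = {
    "destination": [],
    "date": ["destination"],
    "seat_class": ["destination"],
    "payment": ["date", "seat_class"],
}

def valid_next_slots(filled):
    """Slots that are not yet filled and whose dependencies are all met."""
    return sorted(
        slot for slot, deps in DEPENDS_ON.items()
        if slot not in filled and all(d in filled for d in deps)
    )

print(valid_next_slots({"destination"}))  # ['date', 'seat_class']
```

Because every dialog flow the policy can produce respects the graph, the chatbot stays predictable even though the ordering among valid slots is learned.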
32

Schuszter, Ioan Cristian, and Marius Cioca. "Increasing the Reliability of Software Systems Using a Large-Language-Model-Based Solution for Onboarding." Inventions 9, no. 4 (July 15, 2024): 79. http://dx.doi.org/10.3390/inventions9040079.

Abstract:
Software systems are often maintained by a group of experienced software developers in order to make faults that may bring the system down less likely. Large turnover in organizations such as CERN makes it important to find ways of rapidly onboarding newcomers onto a technical project. This paper focuses on optimizing the way people get up to speed on the business logic and technologies used in the project by using a knowledge-imbued large language model that is enhanced with domain-specific knowledge from the group or team's internal documentation. The novelty of this approach is the gathering of all of these different open-source methods for developing a chatbot and their use in an industrial use case.
33

Petrunin, Yu Yu. "“Pioneers” of Managerial Thought: Woodrow Wilson Case." Public Administration. E-journal (Russia), no. 104, 2024 (June 30, 2024): 27–36. http://dx.doi.org/10.55959/msu2070-1381-104-2024-27-36.

Abstract:
The article focuses on the diversity of the struggle for priority in the history of managerial thought. The subject of the study is the role of the American political scientist and politician Woodrow Wilson in the development of the study of public administration. The article reveals the contradiction between historical facts and prevailing ideas about the priority of W. Wilson in the development of the public administration theory. This dichotomy is proven in particular through the use of generative artificial intelligence and large language models (ChatGPT). The article analyzes the struggle for W. Wilson’s priority in the creation of the public administration science and compares it with the outwardly similar story of the discovery of genetics by Gregor Mendel. The article shows that the external similarity of the “deferred priority” in the creation of two sciences — genetics and public administration — differs significantly in the internal logic and consequences of the “rediscovery” of conceptual innovations. We can say that the “deferred priority” of the American scientist turns into a false priority, albeit one fixed by public opinion as real. A conclusion is drawn about the relevance of W. Wilson’s case for solving modern problems of economic cybernetics. The past forms the present and the future; by distorting the past, we cannot understand the present or predict and manage a better future.
34

Akbar, Alvijar, Martin Clinton, and Ilham Firman Ashari. "Analysis and Implementation Monitoring Flood System Based on IoT Using Sugeno Fuzzy Logic." Komputika : Jurnal Sistem Komputer 12, no. 1 (May 3, 2023): 25–34. http://dx.doi.org/10.34010/komputika.v12i1.7089.

Abstract:
Flood disasters can have detrimental impacts such as damage to infrastructure, material losses, and loss of life. One of the efforts that can be made for early detection of flood disasters is to use a flood prediction system that can monitor water levels and flow rates and predict water rises in real time. Information is sent to every citizen using a Telegram chatbot. This system is built using several sensors and is integrated with Telegram. The sensors used are an ultrasonic sensor and a water flow sensor. The ultrasonic sensor reads the water level in the range of 0-50 cm, and the water flow sensor calculates the flow of water entering the test container in the interval of 0-10 liters/minute. Data is sent to Telegram in real time using the Firebase database through a NodeMCU ESP8266 and its WiFi module. The water level and water discharge readings are processed using Sugeno fuzzy logic. The results obtained in this study indicate that the ultrasonic sensor has an average reading error of 2.43%, i.e. an accuracy of 97.58%. The water flow sensor shows an average error of 0.206 liters/minute, i.e. an accuracy of 87.06%.
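Zero-order Sugeno inference, as used above, produces a crisp output as the firing-strength-weighted average of the rule outputs. The membership functions, thresholds, and rule outputs below are illustrative, not the paper's actual rule base.

```python
def tri(a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Illustrative rules over water level (cm):
#   IF level is low  THEN alert = 0
#   IF level is high THEN alert = 100
low = tri(-1, 0, 25)
high = tri(0, 25, 60)

def sugeno_alert(level_cm):
    """Weighted average of rule outputs by firing strength."""
    w_low, w_high = low(level_cm), high(level_cm)
    return (w_low * 0 + w_high * 100) / (w_low + w_high)

print(sugeno_alert(12.5))  # 50.0
```

With singleton rule outputs, the defuzzification step is just this weighted average, which is why Sugeno systems suit resource-constrained boards like the NodeMCU.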
35

Li, Yan, Kit-Ching Lee, Daniel Bressington, Qiuyan Liao, Mengting He, Ka-Kit Law, Angela Y. M. Leung, Alex Molassiotis, and Mengqi Li. "A Theory and Evidence-Based Artificial Intelligence-Driven Motivational Digital Assistant to Decrease Vaccine Hesitancy: Intervention Development and Validation." Vaccines 12, no. 7 (June 25, 2024): 708. http://dx.doi.org/10.3390/vaccines12070708.

Abstract:
Vaccine hesitancy is one of the top ten threats to global health. Artificial intelligence-driven chatbots and motivational interviewing skills show promise in addressing vaccine hesitancy. This study aimed to develop and validate an artificial intelligence-driven motivational digital assistant in decreasing COVID-19 vaccine hesitancy among Hong Kong adults. The intervention development and validation were guided by the Medical Research Council’s framework with four major steps: logic model development based on theory and qualitative interviews (n = 15), digital assistant development, expert evaluation (n = 5), and a pilot test (n = 12). The Vaccine Hesitancy Matrix model and qualitative findings guided the development of the intervention logic model and content with five web-based modules. An artificial intelligence-driven chatbot tailored to each module was embedded in the website to motivate vaccination intention using motivational interviewing skills. The content validity index from expert evaluation was 0.85. The pilot test showed significant improvements in vaccine-related health literacy (p = 0.021) and vaccine confidence (p = 0.027). This digital assistant is effective in improving COVID-19 vaccine literacy and confidence through valid educational content and motivational conversations. The intervention is ready for testing in a randomized controlled trial and has high potential to be a useful toolkit for addressing ambivalence and facilitating informed decision making regarding vaccination.
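The content validity index reported above (0.85) follows the standard computation: an item-level CVI is the proportion of experts rating the item relevant (3 or 4 on a 4-point scale), and the scale-level CVI is the mean of the item-level values. The expert ratings below are illustrative, not the study's data.

```python
def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating an item 3 or 4 on a
    4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi(items):
    """S-CVI/Ave: mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in items) / len(items)

# Five experts, four illustrative items
items = [[4, 4, 3, 4, 2], [3, 4, 4, 4, 4], [4, 3, 2, 4, 2], [4, 4, 4, 3, 3]]
print(round(scale_cvi(items), 2))  # 0.85
```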
36

Gatti, Andrea, and Viviana Mascardi. "VEsNA, a Framework for Virtual Environments via Natural Language Agents and Its Application to Factory Automation." Robotics 12, no. 2 (March 21, 2023): 46. http://dx.doi.org/10.3390/robotics12020046.

Abstract:
Automating a factory where robots are involved is neither trivial nor cheap. Engineering the factory automation process in such a way that return of interest is maximized and risk for workers and equipment is minimized is hence, of paramount importance. Simulation can be a game changer in this scenario but requires advanced programming skills that domain experts and industrial designers might not have. In this paper, we present the preliminary design and implementation of a general-purpose framework for creating and exploiting Virtual Environments via Natural language Agents (VEsNA). VEsNA takes advantage of agent-based technologies and natural language processing to enhance the design of virtual environments. The natural language input provided to VEsNA is understood by a chatbot and passed to an intelligent cognitive agent that implements the logic behind displacing objects in the virtual environment. In the complete VEsNA vision, for which this paper provides the building blocks, the intelligent agent will be able to reason on this displacement and on its compliance with legal and normative constraints. It will also be able to implement what-if analysis and case-based reasoning. Objects populating the virtual environment will include active objects and will populate a dynamic simulation whose outcomes will be interpreted by the cognitive agent; further autonomous agents, representing workers in the factory, will be added to make the virtual environment even more realistic; explanations and suggestions will be passed back to the user by the chatbot.
37

Ali, Syed Mustafa, Stephanie Dick, Sarah Dillon, Matthew L. Jones, Jonnie Penn, and Richard Staley. "Histories of artificial intelligence: a genealogy of power." BJHS Themes 8 (2023): 1–18. http://dx.doi.org/10.1017/bjt.2023.15.

Abstract:
Like the polar bear beleaguered by global warming, artificial intelligence (AI) serves as the charismatic megafauna of an entangled set of local and global histories of science, technology and economics. This Themes issue develops a new perspective on AI that moves beyond conventional origin myths – AI was invented at Dartmouth in the summer of 1956, or by Alan Turing in 1950 – and reframes contemporary critique by establishing plural genealogies that situate AI within deeper histories and broader geographies. ChatGPT and art produced by AI are described as generative but are better understood as forms of pastiche based upon the use of existing infrastructures, often in ways that reflect stereotypes. The power of these tools is predicated on the fact that the Internet was first imagined and framed as a ‘commons’ when actually it has created a stockpile for centralized control over (or the extraction and exploitation of) recursive, iterative and creative work. As with most computer technologies, the ‘freedom’ and ‘flexibility’ that these tools promise also depends on a loss of agency, control and freedom for many, in this case the artists, writers and researchers who have made their work accessible in this way. Thus, rather than fixate on the latest promissory technology or focus on a relatively small set of elite academic pursuits born out of a marriage between logic, statistics and modern digital computing, we explore AI as a diffuse set of technologies and systems of epistemic and political power that participate in broader historical trajectories than are traditionally offered, expanding the scope of what ‘history of AI’ is a history of.
38

Kryazhych, Olha, Oleksandr Vasenko, Liudmyla Isak, Ihor Havrylov, and Yevhen Gren. "Construction of a model for matching user's linguistic structures to a chat-bot language model." Information technology. Industry control systems 3, no. 2 (129) (June 28, 2024): 34–41. http://dx.doi.org/10.15587/1729-4061.2024.304048.

Abstract:
The research object of this work is the linguistic structures formed by users when constructing requests to a chatbot with generative artificial intelligence. The study addressed the task of improving the communication mediation algorithms of chatbots through models that compare users' linguistic structures. Sometimes the user, intentionally or due to lack of information, forms an inaccurate request. Formally, this is described by the logical operations "And" and "And or Not". As a result of the research, a model was built that compares the linguistic structures at the input with the information model of the response at the output. The model is based on an approach with recursive creation of the answer, which makes it possible to determine the basic characteristics of the object of the request and form an answer on this basis. Using this approach improved the accuracy of the chatbot's responses. It also made it possible to account for the linguistic structure of the user through its formalization. The use of the algebra of logic made it possible to find typical errors of users during dialogs with generative artificial intelligence. A feature of the reported advancement is that the comparison of models of linguistic structures of query formation is carried out through a recurrent algorithm. As a result, it becomes possible to compare queries in such a way as to reduce the absolute error of the primary data by 0.02 % and simplify the process of mathematical calculations. At the same time, the received information becomes more accurate: the number of references increases from 2 to 6 sources. The proposal could be used in practice to improve natural language recognition technologies in chatbots with generative artificial intelligence and, on this basis, to devise various applications and services for training and practical activities.
39

Echa Oktamiani Maulana. "Deteksi Hunian Di Tempat Parkir (Occupancy Detection In Parking Lot)." Journal Islamic Global Network for Information Technology and Entrepreneurship 2, no. 2 (April 3, 2024): 45–60. http://dx.doi.org/10.59841/ignite.v2i2.1050.

Abstract:
MSIB (Certified Independent Study and Internship) is one of the activity programs at Merdeka Campus which aims to help students improve their skills and develop themselves. MSIB appointed Orbit Future Academy as one of the partners in the Independent Study program. Founded in 2016 with the aim of improving the quality of life through innovation, education and skills training, the academy follows its mission: "We curate and localize international programs and courses for upskilling, re-skilling youth, and the workforce towards jobs of the future". Partners provide opportunities for students to take Artificial Intelligence programs and study online. Learning consists of eight courses: Python Programming, AI Technology Logic and Concepts, AI Project Cycle, AI Research Methods, ChatGPT, Professional and Company Ethics, Financial Literacy, and a closing Final Project. The final project carried out is the Occupancy Detection in Parking Lot project. This project falls within the Computer Vision domain and uses the YOLO model for object detection and pixel segmentation. The project begins with selecting a dataset using Roboflow, which then goes through data pre-processing for cloning, annotation and augmentation. The model is then trained using machine learning and deep learning algorithms to understand patterns and characteristics related to parking spaces. Once trained, the AI model is validated using test data to ensure that it truly recognizes the presence of vehicles. Next, the application interface is designed using wireframes to create an informative user interface. The system then enters the deployment stage so that it can be accessed widely and easily via the web. Finally, field testing evaluates the performance of the application that has been designed.
41

Chauhan, Ruby. "SHIKSHAKENDRA – The Epitome of Educational Elegance." International Journal of Scientific Research in Engineering and Management 08, no. 04 (April 26, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31683.

Abstract:
In the world of computers, artificial intelligence (AI) is a revolutionary field of computer science that aims to create intelligent systems that can mimic cognitive functions similar to those of humans and make predictions, recommendations, and decisions that affect real or virtual environments. AI is intended to facilitate problem-solving. Machine learning, neural networks, and computer vision are just a few of the many methods and tools included in AI. With the use of these technologies, machines are able to process enormous volumes of data and identify patterns, often rather accurately. The many subfields of AI research focus on particular objectives and the application of specific techniques, such as reasoning, knowledge representation, planning, learning, and perception. AI researchers have adopted and combined a variety of approaches to solve problems, such as formal logic, artificial neural networks, search and mathematical optimization, and approaches based on statistics, probability, and economics, while also incorporating philosophy, neuroscience, and many other disciplines. Subproblems have been identified within the general problem of creating or imitating intelligence. Key Words: Learning, Chatbot, Physically disabled students, Problem-solving, Planning, Subscription, Courses, Downloads.
42

Small, William R., Batia Wiesenfeld, Beatrix Brandfield-Harvey, Zoe Jonassen, Soumik Mandal, Elizabeth R. Stevens, Vincent J. Major, et al. "Large Language Model–Based Responses to Patients’ In-Basket Messages." JAMA Network Open 7, no. 7 (July 16, 2024): e2422399. http://dx.doi.org/10.1001/jamanetworkopen.2024.22399.

Abstract:
Importance: Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful. Objectives: To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy. Design, Setting, and Participants: This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI. Exposures: Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response. Main Outcomes and Measures: PCPs rated responses' information content quality (eg, relevance) and communication quality (eg, verbosity) using Likert scales, and indicated whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy. Results: A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01, U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47], P = .49, t = −0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), but the difference was not statistically significant (P = .07), and more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%). Conclusions: In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a significant concern for patients with low health or English literacy.
43

Dhamange, Apurva Pradip. "TRIUMPH OF SECURITY: FORTIFYING DIGITAL FORTRESSES WITH TRIPLE GUARD AUTHENTICATION." Gurukul International Multidisciplinary Research Journal, June 1, 2024, 27–36. http://dx.doi.org/10.69758/wnpq6244.

Abstract:
Network security is crucial in today's interconnected world, and traditional text-based passwords are insufficient. This paper introduces a Three-Level Security system that enhances protection with a multi-tiered approach: Level 1, a login-based password; Level 2, graphical authentication; and Level 3, an image-based password. The system ensures authorized access, defines user permissions, and encrypts data. It includes monitoring, rate limiting, regular updates, and penetration testing, using Python-based tools for vulnerability scanning. Triple Guard encrypts data both at rest and in transit, ensuring confidentiality and integrity. It adapts to emerging threats and vulnerabilities, integrating Python-based security testing tools for vulnerability scanning and code analysis, and provides developers and administrators with a robust and extensible password security solution. Index Terms: Login Password Authentication, Graphical Authentication, Image Authentication, Python, ChatGPT.
44

Jain, Neil, Caleb Gottlich, John Fisher, Dominic Campano, and Travis Winston. "Assessing ChatGPT’s orthopedic in-service training exam performance and applicability in the field." Journal of Orthopaedic Surgery and Research 19, no. 1 (January 3, 2024). http://dx.doi.org/10.1186/s13018-023-04467-0.

Abstract:
Background: ChatGPT has gained widespread attention for its ability to understand and provide human-like responses to inputs. However, few works have focused on its use in orthopedics. This study assessed ChatGPT's performance on the Orthopedic In-Service Training Exam (OITE) and evaluated its decision-making process to determine whether adoption as a resource in the field is practical. Methods: ChatGPT's performance on three OITE exams was evaluated by inputting multiple choice questions. Questions were classified by their orthopedic subject area. Yearly OITE technical reports were used to gauge scores against resident physicians. ChatGPT's rationales were compared with testmaker explanations using six different groups denoting answer accuracy and logic consistency. Variables were analyzed using contingency table construction and Chi-squared analyses. Results: Of 635 questions, 360 were usable as inputs (56.7%). ChatGPT-3.5 scored 55.8%, 47.7%, and 54% for the years 2020, 2021, and 2022, respectively. Of 190 correct outputs, 179 provided a consistent logic (94.2%). Of 170 incorrect outputs, 133 provided an inconsistent logic (78.2%). Significant associations were found between test topic and correct answer (p = 0.011), and between type of logic used and tested topic (p < 0.001). Basic Science and Sports had adjusted residuals greater than 1.96, as did the pairings Basic Science and correct, no logic; Basic Science and incorrect, inconsistent logic; Sports and correct, no logic; and Sports and incorrect, inconsistent logic. Conclusions: Based on annual OITE technical reports for resident physicians, ChatGPT-3.5 performed around the PGY-1 level. When answering correctly, it displayed congruent reasoning with testmakers. When answering incorrectly, it exhibited some understanding of the correct answer. It outperformed in Basic Science and Sports, likely due to its ability to output rote facts. These findings suggest that it lacks the fundamental capabilities to be a comprehensive tool in orthopedic surgery in its current form. Level of Evidence: II.
45

Chen, Tse Chiang, Mitchell W. Couldwell, Jorie Singer, Alyssa Singer, Laila Koduri, Emily Kaminski, Khoa Nguyen, Evan Multala, Aaron S. Dumont, and Arthur Wang. "Assessing the clinical reasoning of ChatGPT for mechanical thrombectomy in patients with stroke." Journal of NeuroInterventional Surgery, January 6, 2024, jnis-2023-021163. http://dx.doi.org/10.1136/jnis-2023-021163.

Abstract:
Background: Artificial intelligence (AI) has become a promising tool in medicine. ChatGPT, a large language model AI chatbot, shows promise in supporting clinical practice. We assess the potential of ChatGPT as a clinical reasoning tool for mechanical thrombectomy in patients with stroke. Methods: An internal validation of the abilities of ChatGPT was first performed using artificially created patient scenarios before assessment of real patient scenarios from the medical center's stroke database. All patients with large vessel occlusions who underwent mechanical thrombectomy at Tulane Medical Center between January 1, 2022 and December 31, 2022 were included in the study. The performance of ChatGPT in evaluating which patients should undergo mechanical thrombectomy was compared with the decisions made by board-certified stroke neurologists and neurointerventionalists. The interpretation skills, clinical reasoning, and accuracy of ChatGPT were analyzed. Results: 102 patients with large vessel occlusions underwent mechanical thrombectomy. ChatGPT agreed with the physician's decision whether or not to pursue thrombectomy in 54.3% of the cases. ChatGPT made mistakes in 8.8% of the cases, consisting of mathematics, logic, and misinterpretation errors. In the internal validation phase, ChatGPT was able to provide nuanced clinical reasoning and perform multi-step thinking, although with an increased rate of making mistakes. Conclusion: ChatGPT shows promise in clinical reasoning, including the ability to factor in a patient's underlying comorbidities when considering mechanical thrombectomy. However, ChatGPT is prone to errors as well and should not be relied on as a sole decision-making tool in its present form, but it has the potential to assist clinicians with a more efficient workflow.
46

Chaves Fernandes, Alexandre, Maria Eduarda Varela Cavalcanti Souto, Thais Barros Felippe Jabour, Kleber G. Luz, and Eveline Pipolo Milan. "102. Assessing ChatGPT Performance in the Brazilian Infectious Disease Specialist Certification Examination." Open Forum Infectious Diseases 10, Supplement_2 (November 27, 2023). http://dx.doi.org/10.1093/ofid/ofad500.018.

Abstract:
Background: Advances in artificial intelligence have the potential to impact medical fields, including the use of natural language processing-based models such as ChatGPT. The ability of ChatGPT to provide insightful responses across diverse fields of expertise could assist in medical decision-making and knowledge management processes. ChatGPT has already demonstrated high accuracy in medical examinations such as the USMLE. To explore the potential of this tool in various contexts, our study aimed to evaluate the accuracy of ChatGPT on the 2022 Brazilian Infectious Disease Specialist Certification Examination. Methods: We conducted a test to evaluate the performance of GPT-3.5 and GPT-4 on the 2022 Brazilian Infectious Disease Specialist Certification Exam. A theoretical exam, consisting of 80 multiple-choice questions with five alternatives, was used to test performance. The GPT was given a command containing the question statement and alternatives, and a brief comment on the logic behind the answer was requested. Descriptive statistics were used to analyze the absolute performance of correct answers in the ChatGPT-3.5 and ChatGPT-4 models. In addition, the degree of correlation between answers and performance throughout the test was estimated using Spearman's coefficient and a logistic regression curve, respectively. Results: Of the 80 questions in the exam, four were excluded because they were invalidated in the final answer key. ChatGPT-3.5 had an accuracy of 53.95% (41/76), whereas ChatGPT-4 had an accuracy of 73.68% (56/76). Spearman's correlation coefficient between the two models was 0.585. There was a slight trend towards improvement in ChatGPT-4 performance throughout the test, as observed in the logistic regression curve. [Figure: Comparison of accuracy between ChatGPT-3.5 and ChatGPT-4; the performance of ChatGPT-4 was superior.] [Figure: Distribution of correct and incorrect responses by ChatGPT-4; the logistic regression curve shows a slight upward trend, indicating a slight improvement in performance as the questions were answered.] Conclusion: ChatGPT-4 achieved performance above the 60% minimum threshold required for the certification exam. This indicates that it is a promising technology in various fields, including infectious diseases. However, its potential applications and associated ethical dilemmas must be thoroughly assessed. This advancement also highlights the need for medical education to concentrate on developing competence, skills, and critical thinking rather than relying solely on memorization. Disclosures: All authors: no reported disclosures.
47

Liu, Siru, Aileen P. Wright, Barron L. Patterson, Jonathan P. Wanderer, Robert W. Turer, Scott D. Nelson, Allison B. McCoy, Dean F. Sittig, and Adam Wright. "Using AI-generated suggestions from ChatGPT to optimize clinical decision support." Journal of the American Medical Informatics Association, April 22, 2023. http://dx.doi.org/10.1093/jamia/ocad072.

Abstract:
Objective: To determine if ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions. Methods: We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and to rate the suggestions for their usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. Results: Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy. Conclusion: AI-generated suggestions could be an important complementary part of optimizing CDS alerts, can identify potential improvements to alert logic and support their implementation, and may even be able to assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.
48

Li, Jiakun, Hui Zong, Erman Wu, Rongrong Wu, Zhufeng Peng, Jing Zhao, Lu Yang, Hong Xie, and Bairong Shen. "Exploring the potential of artificial intelligence to enhance the writing of English academic papers by non-native English-speaking medical students - the educational application of ChatGPT." BMC Medical Education 24, no. 1 (July 9, 2024). http://dx.doi.org/10.1186/s12909-024-05738-y.

Abstract:
Background: Academic paper writing holds significant importance in the education of medical students and poses a clear challenge for those whose first language is not English. This study aims to investigate the effectiveness of employing large language models, particularly ChatGPT, in improving the English academic writing skills of these students. Methods: A cohort of 25 third-year medical students from China was recruited. The study consisted of two stages. First, the students were asked to write a mini paper. Second, the students were asked to revise the mini paper using ChatGPT within two weeks. The evaluation of the mini papers focused on three key dimensions: structure, logic, and language. The evaluation method incorporated both manual scoring and AI scoring utilizing the ChatGPT-3.5 and ChatGPT-4 models. Additionally, we employed a questionnaire to gather feedback on students' experience in using ChatGPT. Results: After implementing ChatGPT for writing assistance, there was a notable increase in manual scoring by 4.23 points. Similarly, AI scoring based on the ChatGPT-3.5 model showed an increase of 4.82 points, while the ChatGPT-4 model showed an increase of 3.84 points. These results highlight the potential of large language models in supporting academic writing. Statistical analysis revealed no significant difference between manual scoring and ChatGPT-4 scoring, indicating the potential of ChatGPT-4 to assist teachers in the grading process. Feedback from the questionnaire indicated a generally positive response from students, with 92% acknowledging an improvement in the quality of their writing, 84% noting advancements in their language skills, and 76% recognizing the contribution of ChatGPT in supporting academic research. Conclusion: The study highlighted the efficacy of large language models like ChatGPT in augmenting the English academic writing proficiency of non-native speakers in medical education. Furthermore, it illustrated the potential of these models to contribute to the educational evaluation process, particularly in environments where English is not the primary language.
49

Herzog, Isabel, Dhruv Mendiratta, Ashok Para, Ari Berg, Neil Kaushal, and Michael Vives. "Assessing the potential role of ChatGPT in spine surgery research." Journal of Experimental Orthopaedics 11, no. 3 (June 13, 2024). http://dx.doi.org/10.1002/jeo2.12057.

Abstract:
Purpose: Since its release in November 2022, Chat Generative Pre-Trained Transformer 3.5 (ChatGPT), a complex machine learning model, has garnered more than 100 million users worldwide. The aim of this study is to determine how well ChatGPT can generate novel systematic review ideas on topics within spine surgery. Methods: ChatGPT was instructed to give ten novel systematic review ideas for five popular topics in the spine surgery literature: microdiscectomy, laminectomy, spinal fusion, kyphoplasty and disc replacement. A comprehensive literature search was conducted in PubMed, CINAHL, EMBASE and Cochrane. The number of nonsystematic review articles and the number of systematic review papers that had been published on each ChatGPT-generated idea were recorded. Results: Overall, ChatGPT had a 68% accuracy rate in creating novel systematic review ideas. More specifically, the accuracy rates were 80%, 80%, 40%, 70% and 70% for microdiscectomy, laminectomy, spinal fusion, kyphoplasty and disc replacement, respectively. However, there was a 32% rate of ChatGPT generating ideas for which there were 0 nonsystematic review articles published. There was a 71.4%, 50%, 22.2%, 50%, 62.5% and 51.2% success rate of generating novel systematic review ideas, for which there were also nonsystematic reviews published, for microdiscectomy, laminectomy, spinal fusion, kyphoplasty, disc replacement and overall, respectively. Conclusions: ChatGPT generated novel systematic review ideas at an overall rate of 68%. ChatGPT can help identify knowledge gaps in spine research that warrant further investigation, when used under the supervision of an experienced spine specialist. This technology can be erroneous and lacks intrinsic logic, so it should never be used in isolation. Level of Evidence: Not applicable.
50

Gubelmann, Reto, Ioannis Katis, Christina Niklaus, and Siegfried Handschuh. "Capturing the Varieties of Natural Language Inference: A Systematic Survey of Existing Datasets and Two Novel Benchmarks." Journal of Logic, Language and Information, November 20, 2023. http://dx.doi.org/10.1007/s10849-023-09410-4.

Abstract:
Transformer-based Pre-Trained Language Models currently dominate the field of Natural Language Inference (NLI). We first survey existing NLI datasets, and we systematize them according to the different kinds of logical inferences that are being distinguished. This shows two gaps in the current dataset landscape, which we propose to address with one dataset that has been developed in argumentative writing research as well as a new one building on syllogistic logic. Throughout, we also explore the promises of ChatGPT. Our results show that our new datasets do pose a challenge to existing methods and models, including ChatGPT, and that tackling this challenge via fine-tuning yields only partly satisfactory results.