
Journal articles on the topic "Prompt engineering"



See the top-50 journal articles for research on the topic "Prompt engineering".




1

Leung, Chi Hong. "Promoting optimal learning with ChatGPT: A comprehensive exploration of prompt engineering in education". Asian Journal of Contemporary Education 8, no. 2 (6 June 2024): 104–14. http://dx.doi.org/10.55493/5052.v8i2.5101.

Full text
Abstract:
The purpose of this paper is to study the topic of prompt engineering, which serves as a valuable tool for teachers in creating optimal prompts that effectively enhance students' learning experiences with ChatGPT. This paper explores a variety of strategies related to prompt engineering. These strategies include assigning specific roles to ChatGPT, clearly defining objectives, applying constraints, utilizing structural prompt formats, refining answers through dialogues, and integrating practice exercises. Moreover, this paper specifically delves into relevant approaches to prompt engineering in the field of education, such as close question prompts, open question prompts, role-playing prompts, and Socratic prompts. It also presents the outcomes derived from a comprehensive survey conducted to assess teachers' attitudes towards the implementation of prompt engineering with ChatGPT. The collected data indicates that prompt engineering significantly contributes to the enhancement of the learning experience. This is achieved by tailoring prompts to suit individual needs, fostering greater engagement, promoting critical thinking skills, and facilitating collaborative and interactive learning environments. The findings of this study hold significant practical implications for educators. By effectively implementing prompt engineering strategies, teachers can fully harness the potential of ChatGPT to enhance students' learning experiences. By customizing prompts to individual students, educators can foster engagement, stimulate reasoning, and facilitate collaboration among students.
2

Velásquez-Henao, Juan David, Carlos Jaime Franco-Cardona and Lorena Cadavid-Higuita. "Prompt Engineering: a methodology for optimizing interactions with AI-Language Models in the field of engineering". DYNA 90, no. 230 (3 November 2023): 9–17. http://dx.doi.org/10.15446/dyna.v90n230.111700.

Full text
Abstract:
ChatGPT is a versatile conversational Artificial Intelligence model that responds to user input prompts, with applications in academia and various sectors. However, crafting effective prompts can be challenging, leading to potentially inaccurate or contextually inappropriate responses, emphasizing the importance of prompt engineering in achieving accurate outcomes across different domains. This study aims to address this void by introducing a methodology for optimizing interactions with Artificial Intelligence language models, like ChatGPT, through prompts in the field of engineering. The approach is called GPEI and relies on the latest advancements in this area; and consists of four steps: define the objective, design the prompt, evaluate the response, and iterate. Our proposal involves two key aspects: data inclusion in prompt design for engineering applications and the integration of Explainable Artificial Intelligence principles to assess responses, enhancing transparency. It combines insights from various methodologies to address issues like hallucinations, emphasizing iterative prompt refinement techniques like posing opposing questions and using specific patterns for improvement. This methodology could improve prompt precision and utility in engineering.
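To make the four GPEI steps concrete (define the objective, design the prompt, evaluate the response, iterate), here is a minimal sketch of such a loop. The call_llm() helper and the keyword-based evaluation are assumptions for illustration, not the paper's actual procedure.

# Sketch of an iterative prompt-refinement loop in the spirit of GPEI.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real chat-completion API call")

def evaluate(response: str, required_terms: list[str]) -> float:
    # Toy evaluation: fraction of required terms mentioned in the response.
    hits = sum(term.lower() in response.lower() for term in required_terms)
    return hits / len(required_terms)

def gpei_loop(objective: str, required_terms: list[str], max_iters: int = 4) -> str:
    # Step 1: define the objective; Step 2: design an initial prompt.
    prompt = f"Objective: {objective}\nAnswer step by step and state your assumptions."
    best_response = ""
    for _ in range(max_iters):
        response = call_llm(prompt)               # query the black-box model
        score = evaluate(response, required_terms)  # Step 3: evaluate the response
        if score >= 0.9:
            return response
        best_response = response
        # Step 4: iterate, refining the prompt with explicit feedback about what is missing.
        missing = [t for t in required_terms if t.lower() not in response.lower()]
        prompt += f"\nThe previous answer omitted: {', '.join(missing)}. Revise it."
    return best_response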
3

Heston, Thomas F., and Charya Khun. "Prompt Engineering in Medical Education". International Medical Education 2, no. 3 (31 August 2023): 198–205. http://dx.doi.org/10.3390/ime2030019.

Full text
Abstract:
Artificial intelligence-powered generative language models (GLMs), such as ChatGPT, Perplexity AI, and Google Bard, have the potential to provide personalized learning, unlimited practice opportunities, and interactive engagement 24/7, with immediate feedback. However, to fully utilize GLMs, properly formulated instructions are essential. Prompt engineering is a systematic approach to effectively communicating with GLMs to achieve the desired results. Well-crafted prompts yield good responses from the GLM, while poorly constructed prompts will lead to unsatisfactory responses. Besides the challenges of prompt engineering, significant concerns are associated with using GLMs in medical education, including ensuring accuracy, mitigating bias, maintaining privacy, and avoiding excessive reliance on technology. Future directions involve developing more sophisticated prompt engineering techniques, integrating GLMs with other technologies, creating personalized learning pathways, and researching the effectiveness of GLMs in medical education.
4

IŞIN, Zişan Cihangir, Hilal FIDAN, Beyşan Tarık IŞIN, Erşan IŞIN and Tamer IŞIN. "Is Prompt Engineering a Profession?" International Journal of Artificial Intelligence & Applications 15, no. 3 (29 May 2024): 29–39. http://dx.doi.org/10.5121/ijaia.2024.15303.

Full text
Abstract:
Prompt Engineering, the systematic design and construction of prompts for human-AI interaction, raises questions regarding its professional status. This paper examines Prompt Engineering and evaluates whether it qualifies as a distinct profession. Through an analysis of its defining characteristics, including specialized skills, ethical considerations, and societal impact, this study explores the parallels between Prompt Engineering and established professions. Drawing on examples from various fields, it argues for the recognition of Prompt Engineering as a legitimate profession. By addressing the complexities of human-AI interaction and the evolving demands of technology, this research contributes to the ongoing discourse on the professionalization of emerging disciplines
5

Rathod, Jay Dinesh. "Systematic Study of Prompt Engineering". International Journal for Research in Applied Science and Engineering Technology 12, no. 6 (30 June 2024): 597–613. http://dx.doi.org/10.22214/ijraset.2024.63182.

Full text
Abstract:
Nowadays, generative artificial intelligence is the buzz in the field of technology and science: it is the application of artificial intelligence to generate different types of content with the help of its models, easing human life to an extent. Prompt engineering is the art of crafting instructions to guide large language models (LLMs), and it has emerged as a critical technique in natural language processing (NLP). This systematic study delves into the intricacies of prompt engineering, exploring its techniques, evaluation methods, and applications. The study categorizes prompt engineering techniques into instruction-based, information-based, reformulation, and metaphorical prompts. It emphasizes the importance of evaluating prompt effectiveness using metrics like accuracy, fluency, and relevance. Additionally, the study investigates factors influencing prompt effectiveness, including prompt length, complexity, specificity, phrasing, vocabulary choice, framing, and context. The study highlights the impact of prompt engineering in enhancing LLM performance for NLP tasks like machine translation, question answering, summarization, and text generation. It underscores the role of prompt engineering in developing domain-specific LLM applications, enabling knowledge extraction, creative content generation, and addressing domain-specific challenges. The study concludes by addressing ethical considerations in prompt engineering, emphasizing the need to mitigate bias and discrimination while ensuring transparency.
6

Sivarajkumar, Sonish, Mark Kelley, Alyssa Samolyk-Mazzanti, Shyam Visweswaran and Yanshan Wang. "An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study". JMIR Medical Informatics 12 (8 April 2024): e55318. http://dx.doi.org/10.2196/55318.

Full text
Abstract:
Background: Large language models (LLMs) have shown remarkable capabilities in natural language processing (NLP), especially in domains where labeled data are scarce or expensive, such as the clinical domain. However, to unlock the clinical knowledge hidden in these LLMs, we need to design effective prompts that can guide them to perform specific clinical NLP tasks without any task-specific training data. This is known as in-context learning, which is an art and science that requires understanding the strengths and weaknesses of different LLMs and prompt engineering approaches. Objective: The objective of this study is to assess the effectiveness of various prompt engineering techniques, including 2 newly introduced types—heuristic and ensemble prompts, for zero-shot and few-shot clinical information extraction using pretrained language models. Methods: This comprehensive experimental study evaluated different prompt types (simple prefix, simple cloze, chain of thought, anticipatory, heuristic, and ensemble) across 5 clinical NLP tasks: clinical sense disambiguation, biomedical evidence extraction, coreference resolution, medication status extraction, and medication attribute extraction. The performance of these prompts was assessed using 3 state-of-the-art language models: GPT-3.5 (OpenAI), Gemini (Google), and LLaMA-2 (Meta). The study contrasted zero-shot with few-shot prompting and explored the effectiveness of ensemble approaches. Results: The study revealed that task-specific prompt tailoring is vital for the high performance of LLMs for zero-shot clinical NLP. In clinical sense disambiguation, GPT-3.5 achieved an accuracy of 0.96 with heuristic prompts and 0.94 in biomedical evidence extraction. Heuristic prompts, alongside chain of thought prompts, were highly effective across tasks. Few-shot prompting improved performance in complex scenarios, and ensemble approaches capitalized on multiple prompt strengths. GPT-3.5 consistently outperformed Gemini and LLaMA-2 across tasks and prompt types. Conclusions: This study provides a rigorous evaluation of prompt engineering methodologies and introduces innovative techniques for clinical information extraction, demonstrating the potential of in-context learning in the clinical domain. These findings offer clear guidelines for future prompt-based clinical NLP research, facilitating engagement by non-NLP experts in clinical NLP advancements. To the best of our knowledge, this is one of the first works on the empirical evaluation of different prompt engineering approaches for clinical NLP in this era of generative artificial intelligence, and we hope that it will inspire and inform future research in this area.
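To illustrate what the compared prompt families can look like, the sketch below builds prefix, cloze, and chain-of-thought templates for one of the listed tasks (medication status extraction). The wording is invented for illustration and does not reproduce the prompts used in the study.

# Illustrative prompt templates (prefix, cloze, chain of thought); wording is hypothetical.
NOTE = "Patient was advised to discontinue metformin due to GI upset."

prefix_prompt = (
    "Extract the medication and its status (active, discontinued, or held) "
    f"from the clinical note below.\nNote: {NOTE}\nAnswer:"
)

cloze_prompt = (
    f"Note: {NOTE}\n"
    "In this note, the medication metformin has the status of ____."
)

chain_of_thought_prompt = (
    f"Note: {NOTE}\n"
    "Let's think step by step: first identify each medication mentioned, "
    "then find the verb describing what was done with it, and finally map "
    "that verb to one of: active, discontinued, held."
)

for name, p in [("prefix", prefix_prompt), ("cloze", cloze_prompt),
                ("chain-of-thought", chain_of_thought_prompt)]:
    print(f"--- {name} ---\n{p}\n")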
7

Shepherd, Jennifer, and Donald Geisheimer. "FAQs: AI and prompt engineering". American Nurse Journal 19, no. 6 (8 June 2024): 14–19. http://dx.doi.org/10.51256/anj062414.

Full text
8

Haverkamp, Andrea. "Engineering in Crisis – Critical Reflection Writing Prompt". International Journal of Engineering, Social Justice, and Peace 8, no. 2 (18 October 2021): 117. http://dx.doi.org/10.24908/ijesjp.v8i2.15135.

Full text
Abstract:
Writing Prompt sent to the International Engineering, Social Justice, and Peace community and other engineering education sub-communities (primarily in North America): Our objective is to capture your thoughts, experiences, and responses to intersecting crises of COVID-19, white supremacy, anti-blackness, police violence, late capitalism, technologies and engineerings, power formations, state violence, academia, and engineering education over the past year. We wish to break the mould and create a space for the entire engineering community - students, educators, and professionals to share varied perspectives. Being oral history, this project is free from the usual academic barriers or gatekeeping. No citations are needed if you do not wish to include them. While we aim to keep editorial interference at a minimum, we do not intend to include entries that (in our aesthetic and axiological judgement) can cause significant structural, cultural, or emotional harm to marginalised communities. We recognise that such filtering is hard to fully specify. The "objectives" statement above could be a guide for providing you a sense for what we are looking for. Entries should align with IJESJP's focus on engendering dialog on engineering practices that enhance gender, racial, class, and cultural equity and are democratic, non-oppressive, and non-violent. We acknowledge that even this filter limits the expression of particular forms of knowing and being. Our commitments are available here: http://esjp.org/about-esjp/our-commitments We are inspired by the way stories are told and archived through oral history, and feel the need to capture these stories before they become lost in the flux of our ongoing crises. Such history can be a story, anger and frustrations through rant, back of the envelope ideas and theories, poems, prose, fiction, critiques. This history is anything and everything you wish to document in time. Instructions: Please provide the following information by August 15th, 2021: your entry; a title (optional); a file upload (optional); the name, gender pronouns, and affiliations of the authors; and whether you want your submission to be anonymous.
9

Schmidt, Douglas C., Jesse Spencer-Smith, Quchen Fu and Jules White. "Towards a Catalog of Prompt Patterns to Enhance the Discipline of Prompt Engineering". ACM SIGAda Ada Letters 43, no. 2 (6 June 2024): 43–51. http://dx.doi.org/10.1145/3672359.3672364.

Full text
Abstract:
The rapid advent of Large Language Models (LLMs), such as ChatGPT and Claude, is revolutionizing various fields, from education and healthcare to the engineering of reliable software systems. These LLMs operate through "prompts," which are natural language inputs that users employ to query and leverage the models' capabilities. Given the novelty of LLMs, the understanding of how to effectively use prompts remains largely anecdotal, based on isolated use cases. This fragmented approach limits the reliability and utility of LLMs, especially when they are applied in mission-critical software environments. To harness the full potential of LLMs in such crucial contexts, therefore, we need a systematic, disciplined approach to "prompt engineering" that guides interactions with and evaluations of these LLMs.
10

Kochanek, Mateusz, Igor Cichecki, Oliwier Kaszyca, Dominika Szydło, Michał Madej, Dawid Jędrzejewski, Przemysław Kazienko and Jan Kocoń. "Improving Training Dataset Balance with ChatGPT Prompt Engineering". Electronics 13, no. 12 (8 June 2024): 2255. http://dx.doi.org/10.3390/electronics13122255.

Full text
Abstract:
The rapid evolution of large language models, in particular OpenAI’s GPT-3.5-turbo and GPT-4, indicates a growing interest in advanced computational methodologies. This paper proposes a novel approach to synthetic data generation and knowledge distillation through prompt engineering. The potential of large language models (LLMs) is used to address the problem of unbalanced training datasets for other machine learning models. This is not only a common issue but also a crucial determinant of the final model quality and performance. Three prompting strategies have been considered: basic, composite, and similarity prompts. Although the initial results do not match the performance of comprehensive datasets, the similarity prompts method exhibits considerable promise, thus outperforming other methods. The investigation of our rebalancing methods opens pathways for future research on leveraging continuously developed LLMs for the enhanced generation of high-quality synthetic data. This could have an impact on many large-scale engineering applications.
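A rough sketch of the similarity-prompt idea described above, i.e. asking an LLM to generate new minority-class examples that resemble existing ones. The prompt wording and the call_llm() helper are assumptions, not the paper's exact procedure.

import random

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a chat-completion API call")

def generate_minority_samples(minority_examples: list[str], label: str,
                              n_new: int, k: int = 3) -> list[str]:
    """Ask the model for new samples similar to k randomly drawn real ones."""
    synthetic = []
    for _ in range(n_new):
        seeds = random.sample(minority_examples, k=min(k, len(minority_examples)))
        prompt = (
            f"Here are {len(seeds)} examples of texts labeled '{label}':\n"
            + "\n".join(f"- {s}" for s in seeds)
            + "\nWrite one new, different text that would also be labeled "
            f"'{label}'. Return only the text."
        )
        synthetic.append(call_llm(prompt))
    return synthetic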
11

Cotroneo, Peter, and James Hutson. "Generative AI tools in art education: Exploring prompt engineering and iterative processes for enhanced creativity". Metaverse 4, no. 1 (5 June 2023): 14. http://dx.doi.org/10.54517/m.v4i1.2164.

Full text
Abstract:
The rapid development and adoption of generative artificial intelligence (AI) tools in the art and design education landscape have introduced both opportunities and challenges. This timely study addresses the need to effectively integrate these tools into the classroom while considering ethical implications and the importance of prompt engineering. By examining the iterative process of refining original ideas through multiple iterations, verbal expansion, and the use of OpenAI’s DALL-E2 for generating diverse visual outcomes, researchers gain insights into the potential benefits and pitfalls of these tools in an educational context. Students in the digital art case study were taught prompt engineering techniques and were tasked with crafting multiple prompts, focusing on refining their ideas over time. Participants demonstrated an increased understanding of the potential and limitations of generative AI tools and how to manipulate subject matter for more effective results. The iterative process encouraged students to explore and experiment with their creative ideas, leading to a deeper understanding of the possibilities offered by AI tools. Despite acknowledging the ethical concerns regarding copyright and the potential replacement of artists, students appreciated the value of generative AI tools for enhancing their sketchbooks and ideation process. Through prompt engineering and iterative processes, students developed a more detail-oriented approach to their work. The challenge of using AI-generated images as final products was conceptually intriguing, requiring further investigation and consideration of the prompts. This study highlights the potential benefits and challenges of integrating generative AI tools into art and design classrooms, emphasizing the importance of prompt engineering, iterative processes, and ethical considerations as these technologies continue to evolve.
12

Bae, Jaehyeon, Seoryeong Kwon and Seunghwan Myeong. "Enhancing Software Code Vulnerability Detection Using GPT-4o and Claude-3.5 Sonnet: A Study on Prompt Engineering Techniques". Electronics 13, no. 13 (6 July 2024): 2657. http://dx.doi.org/10.3390/electronics13132657.

Full text
Abstract:
This study investigates the efficacy of advanced large language models, specifically GPT-4o, Claude-3.5 Sonnet, and GPT-3.5 Turbo, in detecting software vulnerabilities. Our experiment utilized vulnerable and secure code samples from the NIST Software Assurance Reference Dataset (SARD), focusing on C++, Java, and Python. We employed three distinct prompting techniques as follows: Concise, Tip Setting, and Step-by-Step. The results demonstrate that GPT-4o and Claude-3.5 Sonnet significantly outperform GPT-3.5 Turbo in vulnerability detection. GPT-4o showed the highest improvement with the Step-by-Step prompt, achieving an F1 score of 0.9072. Claude-3.5 Sonnet exhibited consistent high performance across all prompt types, with its Step-by-Step prompt yielding the best overall results (F1 score: 0.8933, AUC: 0.74). In contrast, GPT-3.5 Turbo showed minimal performance changes across prompts, with the Tip Setting prompt performing best (AUC: 0.65, F1 score: 0.6772), yet significantly lower than the other models. Our findings highlight the potential of advanced models in enhancing software security and underscore the importance of prompt engineering in optimizing their performance.
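To make the three prompting styles concrete, here is a hedged sketch of how such prompt variants and the reported F1 evaluation could be wired together. The prompt wording is illustrative only, and is_vulnerable() and call_llm() are hypothetical helpers.

from sklearn.metrics import f1_score

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a GPT-4o / Claude API call")

def build_prompt(code: str, style: str) -> str:
    # Illustrative wording only, not the exact prompts from the study.
    if style == "concise":
        return f"Is the following code vulnerable? Answer yes or no.\n{code}"
    if style == "tip":
        return ("Tip: pay attention to unsanitized inputs, buffer sizes, and unsafe API calls.\n"
                f"Is the following code vulnerable? Answer yes or no.\n{code}")
    if style == "step_by_step":
        return ("Analyze the code step by step: list the data flows, check each for missing "
                "validation, then answer yes or no on the last line: is it vulnerable?\n"
                f"{code}")
    raise ValueError(style)

def is_vulnerable(answer: str) -> int:
    # Crude parse: look for "yes" on the model's final line.
    return int("yes" in answer.lower().splitlines()[-1])

def evaluate(samples: list[tuple[str, int]], style: str) -> float:
    y_true = [label for _, label in samples]
    y_pred = [is_vulnerable(call_llm(build_prompt(code, style))) for code, _ in samples]
    return f1_score(y_true, y_pred)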
13

Bevara, Ravi Varma Kumar, Nishith Reddy Mannuru, Sai Pranathi Karedla and Ting Xiao. "Scaling Implicit Bias Analysis across Transformer-Based Language Models through Embedding Association Test and Prompt Engineering". Applied Sciences 14, no. 8 (20 April 2024): 3483. http://dx.doi.org/10.3390/app14083483.

Full text
Abstract:
In the evolving field of machine learning, deploying fair and transparent models remains a formidable challenge. This study builds on earlier research, demonstrating that neural architectures exhibit inherent biases by analyzing a broad spectrum of transformer-based language models from base to x-large configurations. This article investigates movie reviews for genre-based bias, which leverages the Word Embedding Association Test (WEAT), revealing that scaling models up tends to mitigate bias, with larger models showing up to a 29% reduction in prejudice. Alternatively, this study also underscores the effectiveness of prompt-based learning, a facet of prompt engineering, as a practical approach to bias mitigation, as this technique reduces genre bias in reviews by more than 37% on average. This suggests that the refinement of development practices should include the strategic use of prompts in shaping model outputs, highlighting the crucial role of ethical AI integration to weave fairness seamlessly into the core functionality of transformer models. Despite the basic nature of the prompts employed in this research, this highlights the possibility of embracing structured prompt engineering to create AI systems that are ethical, equitable, and more responsible for their actions.
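For readers unfamiliar with WEAT, the test statistic is a simple cosine-similarity comparison between two target sets and two attribute sets. The sketch below computes the standard WEAT effect size from word vectors; it is a generic implementation, not the authors' code.

import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w: np.ndarray, A: list[np.ndarray], B: list[np.ndarray]) -> float:
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X: list[np.ndarray], Y: list[np.ndarray],
                     A: list[np.ndarray], B: list[np.ndarray]) -> float:
    # d = (mean_x s(x,A,B) - mean_y s(y,A,B)) / std over all w in X ∪ Y of s(w,A,B)
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)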
14

Singh, Atulesh Pratap, Dr Ajay Singh and Dr Pushpneel Verma. "Prompt Engineering in AI driven Indian Healthcare". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (8 July 2024): 1–9. http://dx.doi.org/10.55041/ijsrem36348.

Full text
Abstract:
The phenomenal growth of artificial intelligence (AI) and machine learning (ML) in recent years has revolutionized various sectors, including healthcare. In particular, the large language models (LLMs) developed recently by various companies such as Google, Microsoft, Nvidia, and OpenAI have demonstrated remarkable capabilities in understanding and generating contextualized text, making them valuable tools that can be effectively utilized by the healthcare industry and service providers. Prompt engineering plays a crucial role in leveraging the full potential of such LLMs: it is the technique used to design and refine inputs to improve the performance of AI models and maximize the efficacy of AI applications. India has a diverse population with varying healthcare needs and resource constraints, especially in tier 3 cities, villages, and remote areas. This paper highlights the pivotal role that prompt engineering can play in supplementing solutions and initiatives to address these healthcare needs. It explores applications in diagnostics, electronic health records (EHRs), and virtual health assistants, and the overall impact on patient care and administrative efficiency. Through case studies and theoretical analysis, this paper aims to demonstrate how prompt engineering can enhance diagnostic accuracy, streamline administrative tasks, and improve patient care. Keywords: AI, healthcare delivery, prompt engineering, LLM, EHRs, virtual health assistants, patient care.
15

Patel, Apurva, Maria-Vittoria Elena and Joshua Summers. "A Systematic Approach to Evaluating Design Prompts in Supporting Experimental Design Research". Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2755–64. http://dx.doi.org/10.1017/dsi.2019.282.

Full text
Abstract:
Experiments that study engineering behavior in design often rely on participants responding to a given design prompt or a problem statement. Moreover, researchers often find themselves testing multiple variables with a relatively small participant pool. In such situations multiple design prompts may be used to boost replication by giving each participant an equivalent problem with a different experimental condition. This paper presents a systematic approach to compare given design prompts using a two-step process that allows an initial comparison of the prompts and a post-experiment verification of the similarity of the given prompts. Comparison metrics are provided which can be used to evaluate a level of similarity of existing prompts as well as develop similar problems. These metrics include complexity (size, coupling, and solvability), familiarity, and prompt structure. Statistical methods are discussed for post-experiment verification. Guidelines are provided for a post-experiment survey which may be used for an additional perspective of prompt similarity. The proposed approach is demonstrated using an experiment where two design prompts were used for within-subject replication.
16

Какун, Артем, and Сергій Титенко. "GENERATIVE AI AND PROMPT ENGINEERING IN EDUCATION". Modern engineering and innovative technologies, no. 29-01 (30 October 2023): 117–21. http://dx.doi.org/10.30890/2567-5273.2023-29-01-052.

Full text
Abstract:
The development of generative AIs and the variability of their use are still at the level of research and active development simultaneously. However, it has already become clear that the emergence of generative AI significantly impacts many industries, in
17

LI, Huanzhen. "Improve Code Summarization via Prompt-Tuning CodeT5". Wuhan University Journal of Natural Sciences 28, no. 6 (December 2023): 474–82. http://dx.doi.org/10.1051/wujns/2023286474.

Full text
Abstract:
Code comments are crucial in software engineering, aiding in program maintenance and code reuse. The process of generating clear and descriptive code comments, outlining code functionality, is called code summarization. Existing code summarization methods are typically trained using transformer-based models. However, these trained models often possess limited parameters and lack specific training tasks, hindering their ability to capture code semantics effectively. This paper uses a high-capacity pre-trained model, CodeT5, for code summarization. CodeT5 is designed with an encoder-decoder architecture that excels in code summarization tasks. Furthermore, we adopt a novel paradigm, "pre-train, prompt, predict", to unlock the knowledge embedded within CodeT5. We devise a prompt template to convert input code into code prompts and fine-tune CodeT5 with these prompts—a process we term prompt tuning. Our effectiveness experiments demonstrate that prompt tuning CodeT5 with only 40% of the dataset can achieve comparable performance to fine-tuning CodeT5 with 100% of the dataset. This means our approach is applicable in few-shot learning scenarios. Additionally, our prompt learning method is not sensitive to the size of the tuning dataset. Our practicality experiments show that the performance of prompt-tuned CodeT5 far surpasses that of transformer-based models trained on code-comment datasets collected from Stack Overflow.
18

Klinkhammer, Dennis. "Misuse of large language models: Exploiting weaknesses for target-specific outputs". TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 33, no. 2 (28 June 2024): 29–34. http://dx.doi.org/10.14512/tatup.33.2.29.

Full text
Abstract:
Prompt engineering in large language models (LLMs) in combination with external context can be misused for jailbreaks in order to generate malicious outputs. In the process, jailbreak prompts are apparently amplified in such a way that LLMs can generate malicious outputs on a large scale despite their initial training. As social bots, these can contribute to the dissemination of misinformation, hate speech, and discriminatory content. Using GPT4-x-Vicuna-13b-4bit from NousResearch, we demonstrate in this article the effectiveness of jailbreak prompts and external contexts via Jupyter Notebook based on the Python programming language. In addition, we highlight the methodological foundations of prompt engineering and its potential to create malicious content in order to sensitize researchers, practitioners, and policymakers to the importance of responsible development and deployment of LLMs.
19

Hei, Nailei, Qianyu Guo, Zihao Wang, Yan Wang, Haofen Wang and Wenqiang Zhang. "A User-Friendly Framework for Generating Model-Preferred Prompts in Text-to-Image Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (24 March 2024): 2139–47. http://dx.doi.org/10.1609/aaai.v38i3.27986.

Full text
Abstract:
Well-designed prompts have demonstrated the potential to guide text-to-image models in generating amazing images. Although existing prompt engineering methods can provide high-level guidance, it is challenging for novice users to achieve the desired results by manually entering prompts due to a discrepancy between novice-user-input prompts and the model-preferred prompts. To bridge the distribution gap between user input behavior and model training datasets, we first construct a novel Coarse-Fine Granularity Prompts dataset (CFP) and propose a novel User-Friendly Fine-Grained Text Generation framework (UF-FGTG) for automated prompt optimization. For CFP, we construct a novel dataset for text-to-image tasks that combines coarse and fine-grained prompts to facilitate the development of automated prompt generation methods. For UF-FGTG, we propose a novel framework that automatically translates user-input prompts into model-preferred prompts. Specifically, we propose a prompt refiner that continually rewrites prompts to empower users to select results that align with their unique needs. Meanwhile, we integrate image-related loss functions from the text-to-image model into the training process of text generation to generate model-preferred prompts. Additionally, we propose an adaptive feature extraction module to ensure diversity in the generated results. Experiments demonstrate that our approach is capable of generating more visually appealing and diverse images than previous state-of-the-art methods, achieving an average improvement of 5% across six quality and aesthetic metrics. Data and code are available at https://github.com/Naylenv/UF-FGTG.
20

Ahmed, Awais, Xiaoyang Zeng, Rui Xi, Mengshu Hou and Syed Attique Shah. "MED-Prompt: A novel prompt engineering framework for medicine prediction on free-text clinical notes". Journal of King Saud University - Computer and Information Sciences 36, no. 2 (February 2024): 101933. http://dx.doi.org/10.1016/j.jksuci.2024.101933.

Full text
21

Bistroń, Marta. "ZWIĘKSZANIE POTENCJAŁU SZTUCZNEJ INTELIGENCJI GENERATYWNEJ DZIĘKI PROMPT ENGINEERING". PRZEGLĄD TELEKOMUNIKACYJNY - WIADOMOŚCI TELEKOMUNIKACYJNE 1, no. 4 (21 August 2023): 343–46. http://dx.doi.org/10.15199/59.2023.4.77.

Full text
22

Huang, Shize, Qianhui Fan, Zhaoxin Zhang, Xiaowen Liu, Guanqun Song and Jinzhe Qin. "Segment Shards: Cross-Prompt Adversarial Attacks against the Segment Anything Model". Applied Sciences 14, no. 8 (15 April 2024): 3312. http://dx.doi.org/10.3390/app14083312.

Full text
Abstract:
Foundation models play an increasingly pivotal role in the field of deep neural networks. Given that deep neural networks are widely used in real-world systems and are generally susceptible to adversarial attacks, securing foundation models becomes a key research issue. However, research on adversarial attacks against the Segment Anything Model (SAM), a visual foundation model, is still in its infancy. In this paper, we propose the prompt batch attack (PBA), which can effectively attack SAM, making it unable to capture valid objects or even generate fake shards. Extensive experiments were conducted to compare the adversarial attack performance among optimizing without prompts, optimizing all prompts, and optimizing batches of prompts as in PBA. Numerical results on multiple datasets show that the cross-prompt attack success rate (ASR∗) of the PBA method is 17.83% higher on average, and the attack success rate (ASR) is 20.84% higher. It is proven that PBA possesses the best attack capability as well as the highest cross-prompt transferability. Additionally, we introduce a metric to evaluate the cross-prompt transferability of adversarial attacks, effectively fostering research on cross-prompt attacks. Our work unveils the pivotal role of the batched prompts technique in cross-prompt adversarial attacks, marking an early and intriguing exploration into this area against SAM.
23

Kim, Kang-Min, Mingyu Lee, Hyun-Sik Won, Min-Ji Kim, Yeachan Kim and SangKeun Lee. "Multi-Stage Prompt Tuning for Political Perspective Detection in Low-Resource Settings". Applied Sciences 13, no. 10 (19 May 2023): 6252. http://dx.doi.org/10.3390/app13106252.

Full text
Abstract:
Political perspective detection in news media—identifying political bias in news articles—is an essential but challenging low-resource task. Prompt-based learning (i.e., discrete prompting and prompt tuning) achieves promising results in low-resource scenarios by adapting a pre-trained model to handle new tasks. However, these approaches suffer performance degradation when the target task involves a textual domain (e.g., a political domain) different from the pre-training task (e.g., masked language modeling on a general corpus). In this paper, we develop a novel multi-stage prompt tuning framework for political perspective detection. Our method involves two sequential stages: a domain- and task-specific prompt tuning stage. In the first stage, we tune the domain-specific prompts based on a masked political phrase prediction (MP3) task to adjust the language model to the political domain. In the second task-specific prompt tuning stage, we only tune task-specific prompts with a frozen language model and domain-specific prompts for downstream tasks. The experimental results demonstrate that our method significantly outperforms fine-tuning (i.e., model tuning) methods and state-of-the-art prompt tuning methods on the SemEval-2019 Task 4: Hyperpartisan News Detection and AllSides datasets.
24

Wada, Akihiko, Toshiaki Akashi, George Shih, Akifumi Hagiwara, Mitsuo Nishizawa, Yayoi Hayakawa, Junko Kikuta et al. "Optimizing GPT-4 Turbo Diagnostic Accuracy in Neuroradiology through Prompt Engineering and Confidence Thresholds". Diagnostics 14, no. 14 (17 July 2024): 1541. http://dx.doi.org/10.3390/diagnostics14141541.

Full text
Abstract:
Background and Objectives: Integrating large language models (LLMs) such as GPT-4 Turbo into diagnostic imaging faces a significant challenge, with current misdiagnosis rates ranging from 30–50%. This study evaluates how prompt engineering and confidence thresholds can improve diagnostic accuracy in neuroradiology. Methods: We analyze 751 neuroradiology cases from the American Journal of Neuroradiology using GPT-4 Turbo with customized prompts to improve diagnostic precision. Results: Initially, GPT-4 Turbo achieved a baseline diagnostic accuracy of 55.1%. By reformatting responses to list five diagnostic candidates and applying a 90% confidence threshold, the highest precision of the diagnosis increased to 72.9%, with the candidate list providing the correct diagnosis at 85.9%, reducing the misdiagnosis rate to 14.1%. However, this threshold reduced the number of cases that responded. Conclusions: Strategic prompt engineering and high confidence thresholds significantly reduce misdiagnoses and improve the precision of the LLM diagnostic in neuroradiology. More research is needed to optimize these approaches for broader clinical implementation, balancing accuracy and utility.
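A small sketch of the reformatting-plus-threshold idea in the abstract: request a ranked list of five diagnostic candidates with self-reported confidence, and only accept the top candidate when its confidence clears 90%. The JSON response format, the prompt wording, and call_llm() are assumptions for illustration, not the study's materials.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a GPT-4 Turbo API call")

PROMPT_TEMPLATE = (
    "You are a neuroradiologist. Based on the case below, list your five most "
    "likely diagnoses as a JSON array of objects with fields \"diagnosis\" and "
    "\"confidence\" (0-100), ordered from most to least likely.\n"
    "Case: {case}"
)

def diagnose(case_text: str, threshold: float = 90.0):
    raw = call_llm(PROMPT_TEMPLATE.format(case=case_text))
    candidates = json.loads(raw)          # assumes the model returns valid JSON
    top = candidates[0]
    if top["confidence"] >= threshold:
        return top["diagnosis"], candidates   # confident single answer plus the full list
    return None, candidates                   # abstain, but keep the candidate list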
25

Shi, Cong, Rui Zhai, Yalin Song, Junyang Yu, Han Li, Yingqi Wang and Longge Wang. "Few-shot Sentiment Analysis Based on Adaptive Prompt Learning and Contrastive Learning". Information Technology and Control 52, no. 4 (22 December 2023): 1058–72. http://dx.doi.org/10.5755/j01.itc.52.4.34021.

Full text
Abstract:
Traditional deep learning-based strategies for sentiment analysis rely heavily on large-scale labeled datasets for model training, but these methods become less effective when dealing with small-scale datasets. Fine-tuning large pre-trained models on small datasets is currently the most commonly adopted approach to tackle this issue. Recently, prompt-based learning has gained significant attention as a promising research area. Although prompt-based learning has the potential to address data scarcity problems by utilizing prompts to reformulate downstream tasks, the current prompt-based methods for few-shot sentiment analysis are still considered inefficient. To tackle this challenge, an adaptive prompt-based learning method is proposed, which includes two aspects. Firstly, an adaptive prompting construction strategy is proposed, which can capture the semantic information of texts by utilizing a dot-product attention structure, improving the quality of the prompt templates. Secondly, contrastive learning is applied to the implicit word vectors obtained twice during the training stage to alleviate over-fitting in few-shot learning processes. This improves the model’s generalization ability by achieving data enhancement while keeping the semantic information of input sentences unchanged. Experimental results on the ERPSTMT datasets of FewCLUE demonstrate that the proposed method has a great ability to construct suitable adaptive prompts and outperforms the state-of-the-art baselines.
26

Zhang, Zhipeng, Shengquan Liu and Jianming Cheng. "Exploring Prompts in Few-Shot Cross-Linguistic Topic Classification Scenarios". Applied Sciences 13, no. 17 (2 September 2023): 9944. http://dx.doi.org/10.3390/app13179944.

Full text
Abstract:
In recent years, large-scale pretrained language models have become widely used in natural language processing tasks. On this basis, prompt learning has achieved excellent performance in specific few-shot classification scenarios. The core idea of prompt learning is to convert a downstream task into a masked language modelling task. However, different prompt templates can greatly affect the results, and finding an appropriate template is difficult and time-consuming. To this end, this study proposes a novel hybrid prompt approach, which combines discrete prompts and continuous prompts, to motivate the model to learn more semantic knowledge from a small number of training samples. By comparing the performance difference between discrete prompts and continuous prompts, we find that hybrid prompts achieve the best results, reaching a 73.82% F1 value in the test set. In addition, we analyze the effect of different virtual token lengths in continuous prompts and hybrid prompts in a few-shot cross-language topic classification scenario. The results demonstrate that there is a threshold for the length of virtual tokens, and too many virtual tokens decrease the performance of the model. It is better not to exceed the average length of the training set corpus. Finally, this paper designs a method based on vector similarity to explore the real meanings represented by virtual tokens. The experimental results show that the prompt automatically learnt from the virtual token has a certain correlation with the input text.
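The vector-similarity probe of virtual tokens mentioned at the end of the abstract can be pictured as a nearest-neighbour lookup in the model's word-embedding matrix. The sketch below is a generic implementation (not the authors' code); the Hugging Face accessor names mentioned in the trailing comment are assumptions about a typical setup.

import torch

def nearest_tokens(virtual_token: torch.Tensor, embedding_matrix: torch.Tensor,
                   id_to_token, k: int = 5):
    """Return the k vocabulary tokens whose embeddings are most similar
    (by cosine) to a learned virtual-token vector."""
    sims = torch.nn.functional.cosine_similarity(
        virtual_token.unsqueeze(0), embedding_matrix, dim=-1)
    top = torch.topk(sims, k)
    return [(id_to_token(int(i)), float(s)) for s, i in zip(top.values, top.indices)]

# Usage sketch (names are assumptions): with a Hugging Face model,
# embedding_matrix = model.get_input_embeddings().weight.detach()
# and id_to_token = tokenizer.convert_ids_to_tokens.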
27

Totuk, Onat Halis. "Enhancing the Outcome of Mechanical Engineering Design Course with the SCAMPER Methodology Framework". Journal of Higher Education and Science 14, no. 1 (30 April 2024): 76–83. http://dx.doi.org/10.5961/higheredusci.1355827.

Full text
Abstract:
Methodology frameworks in engineering education are structured approaches or models used to design, implement, and evaluate educational practices in engineering disciplines. These frameworks provide a systematic way to organize and optimize the teaching and learning process, ensuring that engineering students receive a comprehensive and effective education. SCAMPER is a creative thinking technique used to generate new ideas and approaches for problem-solving or project development. Each letter in SCAMPER stands for a different prompt to enhance the targeted process: substitute, combine, adapt, modify, put to another use, eliminate, and reverse. Evolving SCAMPER as a methodology framework for Mechanical Engineering education involves adapting its principles and prompts to cater specifically to the needs and challenges of engineering students. This study presents the use of the SCAMPER methodology as an educational framework for mechanical engineering and its application to the Mechanical Engineering Design course given to senior-year students, with proposed examples based on experiences collected from previous classes taught.
28

Sinha, Rishi. "Statistical Analysis of Bias in ChatGPT Using Prompt Engineering". International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (30 June 2023): 1483–89. http://dx.doi.org/10.22214/ijraset.2023.53885.

Full text
Abstract:
ChatGPT is a leading Large Language Model trained on an extensive and diverse assortment of text data. However, the utilization of potentially biased training data from the internet corpora could lead to fundamental bias introduced in the model, which will subsequently reflect on its generated output. This paper quantifies bias present in GPT-3.0 model responses on various controversial topics using carefully engineered prompts. We measured raw bias in each generated response by leveraging the Bipartisan Press API. Using statistical methods such as the T-test and ANOVA on raw bias measurements, we tested our hypothesis. Our results demonstrate that there is statistically significant left leaning bias present in 9 out of the 11 controversial topics we tested. Further, ANOVA analysis shows that the bias present varies based on topics. We posit that our findings could be instrumental in guiding future efforts to mitigate training bias and address the larger alignment problem present in generative AI.
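The statistical machinery the abstract describes is standard. Below is a minimal SciPy sketch of a one-sample t-test per topic (is the mean bias different from zero?) and a one-way ANOVA across topics; the topic names and scores are made-up placeholder data, not results from the paper.

from scipy import stats

# Hypothetical raw bias scores per topic (negative = left-leaning),
# e.g. as returned by an external bias-scoring API for generated responses.
bias_scores = {
    "immigration": [-12.1, -8.4, -15.0, -9.7, -11.3],
    "gun control": [-20.5, -17.2, -22.8, -19.1, -18.6],
    "taxation":    [-2.3, 1.4, -0.8, -3.1, 0.6],
}

for topic, scores in bias_scores.items():
    t, p = stats.ttest_1samp(scores, popmean=0.0)
    print(f"{topic}: t={t:.2f}, p={p:.4f}")   # small p suggests a non-zero mean bias

# One-way ANOVA: does the mean bias differ across topics?
f, p = stats.f_oneway(*bias_scores.values())
print(f"ANOVA: F={f:.2f}, p={p:.4f}")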
29

Sim, Woochang, Hyebin Jin, Sejin Kim and Sundong Kim. "The Possibility of Prompt Engineering for ARC Problem Solving". KIISE Transactions on Computing Practices 30, no. 2 (29 February 2024): 63–69. http://dx.doi.org/10.5626/ktcp.2024.30.2.063.

Full text
30

Wang, Ning, Jiahao Xie, Jihao Wu, Mingbo Jia and Linlin Li. "Controllable Image Captioning via Prompting". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (26 June 2023): 2617–25. http://dx.doi.org/10.1609/aaai.v37i2.25360.

Full text
Abstract:
Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model is qualified to perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding the prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding the heuristic prompt engineering and meanwhile exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks including COCO Karpathy split and TextCaps using a unified model.
31

Bozkurt, Aras. "Tell Me Your Prompts and I Will Make Them True: The Alchemy of Prompt Engineering and Generative AI". Open Praxis 16, no. 2 (2024): 111–18. http://dx.doi.org/10.55982/openpraxis.16.2.661.

Full text
32

Zhao, Liangbing, Zicheng Zhang, Xuecheng Nie, Luoqi Liu and Si Liu. "Cross-Attention and Seamless Replacement of Latent Prompts for High-Definition Image-Driven Video Editing". Electronics 13, no. 1 (19 December 2023): 7. http://dx.doi.org/10.3390/electronics13010007.

Full text
Abstract:
Recently, text-driven video editing has received increasing attention due to the surprising success of the text-to-image model in improving video quality. However, video editing based on the text prompt is facing huge challenges in achieving precise and controllable editing. Herein, we propose Latent prompt Image-driven Video Editing (LIVE) with a precise and controllable video editing function. The important innovation of LIVE is to utilize the latent codes from reference images as latent prompts to rapidly enrich visual details. The novel latent prompt mechanism endows two powerful capabilities for LIVE: one is a comprehensively interactive ability between video frame and latent prompt in the spatial and temporal dimensions, achieved by revisiting and enhancing cross-attention, and the other is the efficient expression ability of training continuous input videos and images within the diffusion space by fine-tuning various components such as latent prompts, textual embeddings, and LDM parameters. Therefore, LIVE can efficiently generate various edited videos with visual consistency by seamlessly replacing the objects in each frame with user-specified targets. The high-definition experimental results from real-world videos not only confirmed the effectiveness of LIVE but also demonstrated important potential application prospects of LIVE in image-driven video editing.
33

Chen, Jimmy S., and David B. Granet. "Prompt Engineering: Helping ChatGPT Respond Better to Patients and Parents". Journal of Pediatric Ophthalmology & Strabismus 61, no. 2 (March 2024): 148–49. http://dx.doi.org/10.3928/01913913-20240124-02.

Full text
34

Acharyya, Siddhartha, Soumyadeep Mukherjee, Mukherjee, Srinjoy Saha and Debrupa Pal. "Revolutionizing Natural Language Understanding with Prompt Engineering: A Comprehensive Study". International Research Journal of Innovations in Engineering and Technology 07, no. 10 (2023): 692–95. http://dx.doi.org/10.47001/irjiet/2023.710091.

Full text
35

Sabbatella, Antonio, Andrea Ponti, Ilaria Giordani, Antonio Candelieri and Francesco Archetti. "Prompt Optimization in Large Language Models". Mathematics 12, no. 6 (21 March 2024): 929. http://dx.doi.org/10.3390/math12060929.

Full text
Abstract:
Prompt optimization is a crucial task for improving the performance of large language models for downstream tasks. In this paper, a prompt is a sequence of n-grams selected from a vocabulary. Consequently, the aim is to select the optimal prompt concerning a certain performance metric. Prompt optimization can be considered as a combinatorial optimization problem, with the number of possible prompts (i.e., the combinatorial search space) given by the size of the vocabulary (i.e., all the possible n-grams) raised to the power of the length of the prompt. Exhaustive search is impractical; thus, an efficient search strategy is needed. We propose a Bayesian Optimization method performed over a continuous relaxation of the combinatorial search space. Bayesian Optimization is the dominant approach in black-box optimization for its sample efficiency, along with its modular structure and versatility. We use BoTorch, a library for Bayesian Optimization research built on top of PyTorch. Specifically, we focus on Hard Prompt Tuning, which directly searches for an optimal prompt to be added to the text input without requiring access to the Large Language Model, using it as a black-box (such as for GPT-4 which is available as a Model as a Service). Albeit preliminary and based on “vanilla” Bayesian Optimization algorithms, our experiments with RoBERTa as a large language model, on six benchmark datasets, show good performances when compared against other state-of-the-art black-box prompt optimization methods and enable an analysis of the trade-off between the size of the search space, accuracy, and wall-clock time.
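A heavily simplified sketch of the general pattern the abstract describes: Bayesian optimization with BoTorch over a continuous relaxation of a discrete prompt space, treating the LLM as a black box. This is not the authors' implementation; the toy vocabulary, the rounding-based decode(), and the score_prompt() placeholder are all assumptions.

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

VOCAB = ["summarize", "briefly", "answer", "explain", "list", "carefully"]  # toy n-gram vocabulary
PROMPT_LEN = 3

def decode(x: torch.Tensor) -> str:
    # Continuous relaxation: each coordinate in [0, 1] is rounded down to a vocabulary index.
    idx = (x * len(VOCAB)).long().clamp(max=len(VOCAB) - 1)
    return " ".join(VOCAB[i] for i in idx)

def score_prompt(prompt: str) -> float:
    raise NotImplementedError("evaluate the downstream metric with the black-box LLM")

bounds = torch.stack([torch.zeros(PROMPT_LEN), torch.ones(PROMPT_LEN)]).double()
train_X = torch.rand(5, PROMPT_LEN, dtype=torch.double)
train_Y = torch.tensor([[score_prompt(decode(x))] for x in train_X], dtype=torch.double)

for _ in range(20):  # Bayesian optimization loop
    gp = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acq = ExpectedImprovement(gp, best_f=train_Y.max())
    cand, _ = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
    y = torch.tensor([[score_prompt(decode(cand.squeeze(0)))]], dtype=torch.double)
    train_X = torch.cat([train_X, cand])
    train_Y = torch.cat([train_Y, y])

best_prompt = decode(train_X[train_Y.argmax()])  # best discrete prompt found so far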
36

Wang, Lei, Wenshuai Bi, Suling Zhao, Yinyao Ma, Longting Lv, Chenwei Meng, Jingru Fu and Hanlin Lv. "Investigating the Impact of Prompt Engineering on the Performance of Large Language Models for Standardizing Obstetric Diagnosis Text: Comparative Study". JMIR Formative Research 8 (8 February 2024): e53216. http://dx.doi.org/10.2196/53216.

Full text
Abstract:
Background: The accumulation of vast electronic medical records (EMRs) through medical informatization creates significant research value, particularly in obstetrics. Diagnostic standardization across different health care institutions and regions is vital for medical data analysis. Large language models (LLMs) have been extensively used for various medical tasks. Prompt engineering is key to use LLMs effectively. Objective: This study aims to evaluate and compare the performance of LLMs with various prompt engineering techniques on the task of standardizing obstetric diagnostic terminology using real-world obstetric data. Methods: The paper describes a 4-step approach used for mapping diagnoses in electronic medical records to the International Classification of Diseases, 10th revision, observation domain. First, similarity measures were used for mapping the diagnoses. Second, candidate mapping terms were collected based on similarity scores above a threshold, to be used as the training data set. For generating optimal mapping terms, we used two LLMs (ChatGLM2 and Qwen-14B-Chat [QWEN]) for zero-shot learning in step 3. Finally, a performance comparison was conducted by using 3 pretrained bidirectional encoder representations from transformers (BERTs), including BERT, whole word masking BERT, and momentum contrastive learning with BERT (MC-BERT), for unsupervised optimal mapping term generation in the fourth step. Results: LLMs and BERT demonstrated comparable performance at their respective optimal levels. LLMs showed clear advantages in terms of performance and efficiency in unsupervised settings. Interestingly, the performance of the LLMs varied significantly across different prompt engineering setups. For instance, when applying the self-consistency approach in QWEN, the F1-score improved by 5%, with precision increasing by 7.9%, outperforming the zero-shot method. Likewise, ChatGLM2 delivered similar rates of accurately generated responses. During the analysis, the BERT series served as a comparative model with comparable results. Among the 3 models, MC-BERT demonstrated the highest level of performance. However, the differences among the versions of BERT in this study were relatively insignificant. Conclusions: After applying LLMs to standardize diagnoses and designing 4 different prompts, we compared the results to those generated by the BERT model. Our findings indicate that QWEN prompts largely outperformed the other prompts, with precision comparable to that of the BERT model. These results demonstrate the potential of unsupervised approaches in improving the efficiency of aligning diagnostic terms in daily research and uncovering hidden information values in patient data.
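The self-consistency setup that gave QWEN its reported gain can be pictured as sampling the same mapping prompt several times and majority-voting the answers. A generic sketch follows; the prompt wording and call_llm() are assumptions, not the study's materials.

from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("replace with a ChatGLM2 / Qwen chat API call")

def standardize_diagnosis(raw_diagnosis: str, candidate_terms: list[str],
                          n_samples: int = 5) -> str:
    prompt = (
        "Map the obstetric diagnosis below to exactly one of the candidate "
        "ICD-10 terms. Return only the chosen term.\n"
        f"Diagnosis: {raw_diagnosis}\nCandidates: {', '.join(candidate_terms)}"
    )
    # Self-consistency: sample several answers and keep the most frequent one.
    answers = [call_llm(prompt).strip() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]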
37

Shehri, Firdous Ahmed, Raj Maham, Alia Malik and Osman Bin Saif. "Effects of ChatGPT on Students Academic Performance: Mediating Role of Prompt Engineering". Asian Bulletin of Big Data Management 3, no. 2 (31 December 2023): 137–47. http://dx.doi.org/10.62019/abbdm.v3i2.58.

Full text
Abstract:
This paper focuses on the use of advanced machine learning models, such as ChatGPT, in education. ChatGPT, a language learning model by OpenAI, has shown potential for reshaping education through enhanced teaching methods, increased student engagement, and personalised learning experiences. Despite these promising features, the potential drawbacks of the technology remain unclear. This paper seeks to examine the impacts of ChatGPT on students' academic performance, with a special emphasis on the role of 'prompt engineering' in this context. This research employs a quantitative method to examine the impact of ChatGPT on student academic performance in various universities across Pakistan. An online survey was distributed, garnering responses from 37 students at several institutions. The questions focused on demographic details and the impact of ChatGPT on academic performance parameters such as learning, quality of work, and creativity, with the role of prompt engineering as a mediating factor. Data was analysed using SPSS, with Cronbach's alpha used to ensure reliability and internal consistency of the responses, which showed a high degree of correlation among responses. The results indicated a positive relationship between the use of ChatGPT and academic performance (Naveed et al., 2023). However, while most students were aware of ChatGPT's capabilities, the majority did not use it for examination preparation. The study also found that prompt engineering played a mediating role between ChatGPT usage and student academic performance, highlighting the importance of effective prompt design in optimising the benefits of AI in educational settings.
38

Černý, Jan. "Implications of Large Language Models for OSINT: Assessing the Impact on Information Acquisition and Analyst Expertise in Prompt Engineering". European Conference on Cyber Warfare and Security 23, no. 1 (21 June 2024): 116–24. http://dx.doi.org/10.34190/eccws.23.1.2261.

Full text
Abstract:
This paper explores the potential use of large language models (LLMs) in Open Source Intelligence (OSINT), with a focus on integrating information acquisition and the increasing importance of prompt engineering for analysts. The research includes a comprehensive literature review, which highlights the widespread use of AI in OSINT and the related challenges, such as data validity and ethical concerns. The study emphasizes the significance of prompt engineering as a crucial skill that demands a profound comprehension of LLMs to generate validated intelligence. A model of the OSINT lifecycle that incorporates LLMs is proposed. The paper further discusses updated training in critical thinking, search techniques, and prompt engineering for intelligence professionals. The findings indicate a noteworthy shift in OSINT procedures, highlighting the importance of continuous research and education to fully utilize AI in intelligence gathering.
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Liu, Mingxing, Junfeng Wang, Tao Lin, Quan Ma, Zhiyang Fang e Yanqun Wu. "An Empirical Study of the Code Generation of Safety-Critical Software Using LLMs". Applied Sciences 14, n. 3 (26 gennaio 2024): 1046. http://dx.doi.org/10.3390/app14031046.

Testo completo
Abstract (sommario):
In the digital era of increasing software complexity, improving the development efficiency of safety-critical software is a challenging task faced by academia and industry in domains such as nuclear energy, aviation, the automotive industry, and rail transportation. Recently, there has been great interest in using pre-trained large language models (LLMs) such as ChatGPT and GPT-4 to generate code, and professionals in the safety-critical software field are intrigued by their code generation capabilities. However, there is currently a lack of systematic case studies in this area. Aiming at the need for automated code generation in safety-critical domains such as nuclear energy and the automotive industry, this paper conducts a case study on generating safety-critical software code using GPT-4 as the tool, employing practical engineering cases from the industrial domain. We explore different approaches, including code generation based on overall requirements, specific requirements, and augmented prompts. We propose a novel prompt engineering method called Prompt-FDC that integrates basic functional requirements, domain feature generalization, and domain constraints. This method improves code completeness from 30% of the required functions to 100%, increases the code comment rate to 26.3%, and yields better results in terms of code compliance, readability, and maintainability. The code generation approach based on LLMs also introduces a new software development process and V-model lifecycle for safety-critical software. Through systematic case studies, we demonstrate that, with appropriate prompt methods, LLMs can auto-generate safety-critical software code that meets practical engineering application requirements. It is foreseeable that LLMs can be applied to various engineering domains to improve software safety and development efficiency.
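To make the idea concrete, the following is a hedged sketch of what an augmented prompt combining the three ingredients named in the abstract (functional requirements, domain feature generalization, domain constraints) might look like; the section headings and wording are assumptions for illustration, not the authors' Prompt-FDC template.

```python
# Hedged illustration of an augmented prompt in the spirit of Prompt-FDC; the field
# names and instructions are assumptions, not the template used in the paper.
def build_prompt_fdc(functional_requirements: str,
                     domain_features: str,
                     domain_constraints: str) -> str:
    """Assemble one augmented prompt from the three ingredients the method combines."""
    return "\n\n".join([
        "Generate safety-critical software code for the following specification.",
        f"Functional requirements:\n{functional_requirements}",
        f"Domain feature generalization (behaviour shared across similar components):\n{domain_features}",
        f"Domain constraints (standards, value ranges, and error-handling rules that must hold):\n{domain_constraints}",
        "Return complete, compilable code with comments explaining how each constraint is satisfied.",
    ])
```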
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Bosman, Lisa, Nathalie Duval-Couetil, Brooke Mayer e Patrick McNamara. "Using Online Discussions to Develop the Entrepreneurial Mindset in Environmental Engineering Undergraduates: A Case Study". International Journal of Engineering Pedagogy (iJEP) 9, n. 3 (11 giugno 2019): 4. http://dx.doi.org/10.3991/ijep.v9i3.9491.

Testo completo
Abstract (sommario):
Entrepreneurship is an important aspect of the U.S. and global economy. As such, developing an entrepreneurial mindset is crucial for both engineering students and practicing engineers. The purpose of this paper is to investigate the role of online discussions, as a pedagogical approach, in the development of the entrepreneurial mindset, and to explore a variety of approaches to assess student learning outcomes. Online discussion prompts were created for environmental engineering courses using the Kern Engineering Entrepreneurial Network (KEEN) framework. The framework proposes that an entrepreneurial mindset can be fostered in students by promoting curiosity, encouraging connections, and creating value. This paper describes the methodology and rationale that served as the foundation for this exploratory study. Examples are provided for online discussion prompts developed and administered in two different environmental engineering undergraduate courses: Introduction to Environmental Engineering (a three-credit, undergraduate, online course offered during two different summer sessions) and Seminar in Environmental Engineering (a one-credit, undergraduate, face-to-face course offered during one semester). Quantitative and qualitative methods were used to analyze and assess potential impacts of online discussion prompt use. The findings provide lessons learned for applying the KEEN framework in an engineering classroom via online discussions.
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Saito, Hiroko. "Teachers' Practices and Students' Preferences for Feedback on Second Language Writing: A Case Study of Adult ESL Learners". TESL Canada Journal 11, n. 2 (26 giugno 1994): 46. http://dx.doi.org/10.18806/tesl.v11i2.633.

Testo completo
Abstract (sommario):
The first part of this study investigated the fit between teachers' practices and students' preferences for feedback, and the students' strategies for handling feedback on their written work. The second part focused on students' perception of "thinking prompts" for their writing, an innovative approach used in their ESL writing classes, following Bereiter and Scardamalia's (1987) idea of "procedural facilitation". Thirty-nine students in ESL intensive courses and an ESL Engineering writing class were asked to fill out a questionnaire concerning feedback and thinking prompts. In addition, three classes were observed to see how each teacher used feedback and thinking prompts in class and in responding to students' writing. The results show that students preferred teacher feedback (teacher correction, teacher correction with comments, error identification, commentary, teacher-student conferencing) to non-teacher feedback (peer correction and self-correction), even though the three teachers used non-teacher feedback frequently in their classes. Students' strategies for handling feedback varied depending on the type of feedback each teacher gave on the student's paper. Among the thinking prompts, students found the rule prompt most useful and the L1/L2 comparison prompt least useful. The results suggest that the extent to which thinking prompts are integrated into the class, and how well students conceptualize them, is reflected in students' attitudes toward thinking prompts.
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Brahmavar, Shreyas Bhat, Ashwin Srinivasan, Tirtharaj Dash, Sowmya Ramaswamy Krishnan, Lovekesh Vig, Arijit Roy e Raviprasad Aduri. "Generating Novel Leads for Drug Discovery Using LLMs with Logical Feedback". Proceedings of the AAAI Conference on Artificial Intelligence 38, n. 1 (24 marzo 2024): 21–29. http://dx.doi.org/10.1609/aaai.v38i1.27751.

Testo completo
Abstract (sommario):
Large Language Models (LLMs) can be used as repositories of biological and chemical information to generate pharmacological lead compounds. However, focusing LLMs on specific drug targets typically requires experimentation with progressively more refined prompts. Results thus become dependent not just on what is known about the target, but also on what is known about prompt engineering. In this paper, we separate the prompt into domain-constraints that can be written in a standard logical form and a simple text-based query. We investigate whether LLMs can be guided, not by refining prompts manually, but by refining the logical component automatically, keeping the query unchanged. We describe an iterative procedure LMLF ("Language Model with Logical Feedback") in which the constraints are progressively refined using a logical notion of generalisation. On any iteration, newly generated instances are verified against the constraint, providing "logical feedback" for the next iteration's refinement of the constraints. We evaluate LMLF using two well-known targets (inhibition of the Janus Kinase 2; and Dopamine Receptor D2); and two different LLMs (GPT-3 and PaLM). We show that LMLF, starting with the same logical constraints and query text, can be used to guide both LLMs to generate potential leads. We find: (a) Binding affinities of LMLF-generated molecules are skewed towards higher binding affinities than those from existing baselines; (b) LMLF results in generating molecules that are skewed towards higher binding affinities than without logical feedback; (c) Assessment by a computational chemist suggests that LMLF generated compounds may be novel inhibitors. These findings suggest that LLMs with logical feedback may provide a mechanism for generating new leads without requiring the domain-specialist to acquire sophisticated skills in prompt engineering.
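Read as an algorithm, the LMLF loop described above can be sketched roughly as follows; `llm_generate`, `verify`, and `refine_constraints` are hypothetical placeholders for the paper's generation, constraint-checking, and logical-generalisation steps.

```python
# Schematic, hedged sketch of an LMLF-style loop: keep the query text fixed, verify
# generated candidates against the logical constraints, and refine the constraints
# from that feedback. All three callables are hypothetical placeholders.
def lmlf(query_text, constraints, llm_generate, verify, refine_constraints, n_iter=5):
    accepted = []
    for _ in range(n_iter):
        prompt = f"{constraints}\n\n{query_text}"
        candidates = llm_generate(prompt)                        # e.g. candidate molecules
        feedback = [(c, verify(c, constraints)) for c in candidates]
        accepted.extend(c for c, ok in feedback if ok)           # keep constraint-satisfying leads
        constraints = refine_constraints(constraints, feedback)  # logical generalisation step
    return accepted
```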
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Bai, Shuanghao, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang e Badong Chen. "Prompt-Based Distribution Alignment for Unsupervised Domain Adaptation". Proceedings of the AAAI Conference on Artificial Intelligence 38, n. 2 (24 marzo 2024): 729–37. http://dx.doi.org/10.1609/aaai.v38i2.27830.

Testo completo
Abstract (sommario):
Recently, despite the unprecedented success of large pre-trained visual-language models (VLMs) on a wide range of downstream tasks, the real-world unsupervised domain adaptation (UDA) problem is still not well explored. Therefore, in this paper, we first experimentally demonstrate that the unsupervised-trained VLMs can significantly reduce the distribution discrepancy between source and target domains, thereby improving the performance of UDA. However, a major challenge for directly deploying such models on downstream UDA tasks is prompt engineering, which requires aligning the domain knowledge of source and target domains, since the performance of UDA is severely influenced by a good domain-invariant representation. We further propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning. Specifically, PDA employs a two-branch prompt-tuning paradigm, namely base branch and alignment branch. The base branch focuses on integrating class-related representation into prompts, ensuring discrimination among different classes. To further minimize domain discrepancy, for the alignment branch, we construct feature banks for both the source and target domains and propose image-guided feature tuning (IFT) to make the input attend to feature banks, which effectively integrates self-enhanced and cross-domain features into the model. In this way, these two branches can be mutually promoted to enhance the adaptation of VLMs for UDA. We conduct extensive experiments on three benchmarks to demonstrate that our proposed PDA achieves state-of-the-art performance. The code is available at https://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.
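A very rough sketch, under stated assumptions, of the two-branch idea this abstract describes: learnable prompt context vectors (base branch) plus an alignment term computed against source and target feature banks (alignment branch). Shapes, initialisation, and the loss form are illustrative guesses, not the paper's implementation of PDA or its image-guided feature tuning.

```python
# Hedged, schematic sketch only: learnable prompt vectors plus a feature-bank
# alignment loss. Not the authors' code; dimensions and weighting are assumptions.
import torch
import torch.nn.functional as F


class PromptLearner(torch.nn.Module):
    def __init__(self, n_ctx: int, dim: int, n_classes: int):
        super().__init__()
        # shared learnable context vectors (base branch), initialised near zero
        self.ctx = torch.nn.Parameter(0.02 * torch.randn(n_ctx, dim))
        # one learnable class token per class
        self.cls = torch.nn.Parameter(0.02 * torch.randn(n_classes, dim))

    def forward(self) -> torch.Tensor:
        # one prompt per class: [context tokens ; class token] -> (n_classes, n_ctx + 1, dim)
        ctx = self.ctx.unsqueeze(0).expand(self.cls.size(0), -1, -1)
        return torch.cat([ctx, self.cls.unsqueeze(1)], dim=1)


def alignment_loss(image_feat: torch.Tensor,
                   source_bank: torch.Tensor,
                   target_bank: torch.Tensor) -> torch.Tensor:
    """Pull image features toward the mean of each domain's feature bank (cosine distance)."""
    to_src = 1 - F.cosine_similarity(image_feat, source_bank.mean(0, keepdim=True)).mean()
    to_tgt = 1 - F.cosine_similarity(image_feat, target_bank.mean(0, keepdim=True)).mean()
    return to_src + to_tgt
```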
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Hileman, Bette. "Fertilizer Concerns Prompt New Standards". Chemical & Engineering News 76, n. 19 (11 maggio 1998): 24–25. http://dx.doi.org/10.1021/cen-v076n019.p024.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Seo, Jaehyung, Hyeonseok Moon, Chanhee Lee, Sugyeong Eo, Chanjun Park, Jihoon Kim, Changwoo Chun e Heuiseok Lim. "Plain Template Insertion: Korean-Prompt-Based Engineering for Few-Shot Learners". IEEE Access 10 (2022): 107587–97. http://dx.doi.org/10.1109/access.2022.3213027.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Zhang, Kunpeng, Feng Zhou, Lan Wu, Na Xie e Zhengbing He. "Semantic understanding and prompt engineering for large-scale traffic data imputation". Information Fusion 102 (febbraio 2024): 102038. http://dx.doi.org/10.1016/j.inffus.2023.102038.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Short, Cole E., e Jeremy C. Short. "The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation". Journal of Business Venturing Insights 19 (giugno 2023): e00388. http://dx.doi.org/10.1016/j.jbvi.2023.e00388.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Acar, Oguz A. "Beyond Prompt Engineering: Skills Marketers Need to Deploy Generative AI Successfully". NIM Marketing Intelligence Review 16, n. 1 (25 aprile 2024): 18–23. http://dx.doi.org/10.2478/nimmir-2024-0003.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Yamaguchi, Naoki, Tomoya Shiba, Kosei Isomoto e Hakaru Tamukoh. "A Rapidly Adjustable Object Recognition System through Language Based Prompt Engineering". Proceedings of International Conference on Artificial Life and Robotics 29 (22 febbraio 2024): 425–29. http://dx.doi.org/10.5954/icarob.2024.os15-3.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Baik, Jisoo, Namo Bang, Heuiyeen Yeen, MinJu Kim e Myoung-wan Koo. "Prompt Engineering for Cross-Model Symbolic Knowledge Distillation between T5-GPT". KIISE Transactions on Computing Practices 30, n. 3 (31 marzo 2024): 137–42. http://dx.doi.org/10.5626/ktcp.2024.30.3.137.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri