Top Research Papers on Prompt Engineering
Discover the leading research papers on Prompt Engineering, a dynamic field at the intersection of AI and natural language processing. Uncover innovative approaches, methodologies, and insights that are driving advancements in AI technologies. Perfect for researchers, developers, and enthusiasts eager to expand their knowledge and stay ahead in the AI landscape.
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
758 Citations · 2023 · Jules White, Quchen Fu, Sam Hays + 6 more
arXiv (Cornell University)
Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM. This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method...
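The catalog's framing of prompts as a form of programming can be made concrete with a small sketch. This is not code from the paper; the pattern text and helper names (`PERSONA_PATTERN`, `render_prompt`) are illustrative assumptions in the spirit of its persona pattern, showing how a pattern becomes a reusable template with fillable slots.

```python
# Hedged sketch: encoding a prompt "pattern" as a reusable template.
# The pattern wording and names below are illustrative, not from the paper.
PERSONA_PATTERN = (
    "From now on, act as {persona}. "
    "Provide outputs that {persona} would produce. "
    "Task: {task}"
)

def render_prompt(pattern: str, **fields: str) -> str:
    """Fill a pattern's slots to produce a concrete prompt string."""
    return pattern.format(**fields)

prompt = render_prompt(
    PERSONA_PATTERN,
    persona="a security reviewer",
    task="audit this function for input-validation bugs.",
)
print(prompt)
```

Because the pattern is plain data, the same template can be reused across tasks by swapping only the slot values, which is the knowledge-transfer point the catalog emphasizes.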
Prompt Engineering in Medical Education
172 Citations · 2023 · Thomas F Heston, Charya Khun
International Medical Education
Artificial intelligence-powered generative language models (GLMs), such as ChatGPT, Perplexity AI, and Google Bard, have the potential to provide personalized learning, unlimited practice opportunities, and interactive engagement 24/7, with immediate feedback. However, to fully utilize GLMs, properly formulated instructions are essential. Prompt engineering is a systematic approach to effectively communicating with GLMs to achieve the desired results. Well-crafted prompts yield good responses from the GLM, while poorly constructed prompts will lead to unsatisfactory responses. Besides the chal...
Prompt Engineering for Healthcare: Methodologies and Applications
115 Citations · 2023 · Jiaqi Wang, Enze Shi, Sigang Yu + 21 more
arXiv (Cornell University)
This survey reviews the development of prompt engineering and emphasizes its significant contributions to healthcare natural language processing applications such as question-answering systems, text summarization, and machine translation.
Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study
100 Citations · 2023 · Yi Liu, Gelei Deng, Zhengzi Xu + 7 more
arXiv (Cornell University)
The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention.
Review of large vision models and visual prompt engineering
165 Citations · 2023 · Jiaqi Wang, Zhengliang Liu, Lin Zhao + 18 more
Meta-Radiology
Visual prompt engineering is a fundamental methodology in the field of visual and image artificial general intelligence. As the development of large vision models progresses, the importance of prompt engineering becomes increasingly evident. Designing suitable prompts for specific visual tasks has emerged as a meaningful research direction. This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, exploring the latest advancements in visual prompt engineering. We present influential large models in the visual domain ...
Unleashing the potential of prompt engineering for large language models
127 Citations · 2023 · B.-C. Chen, Zhaofeng Zhang, Nicolas Langrené + 1 more
arXiv (Cornell University)
This comprehensive review delves into the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs). The development of Artificial Intelligence (AI), from its inception in the 1950s to the emergence of advanced neural networks and deep learning architectures, has made a breakthrough in LLMs, with models such as GPT-4o and Claude-3, and in Vision-Language Models (VLMs), with models such as CLIP and ALIGN. Prompt engineering is the process of structuring inputs, which has emerged as a crucial technique to maximize the utility and accuracy of these models. ...
Large Language Models Are Human-Level Prompt Engineers
297 Citations · 2022 · Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han + 4 more
arXiv (Cornell University)
It is shown that APE-engineered prompts can be applied to steer models toward truthfulness and/or informativeness, as well as to improve few-shot learning performance by simply prepending them to standard in-context learning prompts.
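The "simply prepending" step described here can be sketched in a few lines. The helper name `build_prompt` and the example data are illustrative assumptions; the instruction string is the zero-shot chain-of-thought prompt the APE authors report discovering.

```python
# Hedged sketch: prepend an APE-discovered instruction to a standard
# in-context (few-shot) prompt. Helper name and examples are illustrative.
def build_prompt(instruction: str, examples: list, query: str) -> str:
    """Join an instruction, few-shot demonstrations, and a query."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\nInput: {query}\nOutput:"

ape_instruction = ("Let's work this out in a step by step way "
                   "to be sure we have the right answer.")
prompt = build_prompt(ape_instruction,
                      examples=[("2 + 2", "4"), ("3 + 5", "8")],
                      query="7 + 6")
print(prompt)
```

The resulting string would then be sent to the LLM as-is; the engineered instruction changes nothing else about the in-context learning setup.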
Unleashing the potential of prompt engineering for large language models
140 Citations · 2025 · B.-C. Chen, Zhaofeng Zhang, Nicolas Langrené + 1 more
Patterns
Critical to this discussion is the role of prompt engineering in artificial intelligence (AI) security, particularly in terms of defending against adversarial attacks that exploit vulnerabilities in LLMs.
AI literacy and its implications for prompt engineering strategies
211 Citations · 2024 · Nils Knoth, Antonia Tolzin, Andreas Janson + 1 more
Computers and Education Artificial Intelligence
It is argued for the integration of AI educational content into current curricula to enable a hybrid intelligent society in which students can effectively use generative AI tools such as ChatGPT.
Prompting Change: Exploring Prompt Engineering in Large Language Model AI and Its Potential to Transform Education
132 Citations · 2023 · William Cain
TechTrends
The paper underscores how the natural language capabilities of LLM AI tools can help students and educators transition from passive recipients to active co-creators of their learning experiences, and charts the evolving trajectory of LLM AI as a tool poised to reshape educational practices and assumptions.
Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs
262 Citations · 2024 · Wang Li, Xi Chen, Xiangwen Deng + 5 more
npj Digital Medicine
It is revealed that different prompts had variable effects across various models, and the gpt-4-Web with ROT prompt was the most consistent.
Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial
583 Citations · 2023 · Bertalan Meskó
Journal of Medical Internet Research
This paper summarizes the current state of research about prompt engineering and aims at providing practical recommendations for the wide range of health care professionals to improve their interactions with LLMs.
Design Guidelines for Prompt Engineering Text-to-Image Generative Models
500 Citations · 2022 · Vivian Liu, Lydia B. Chilton
CHI Conference on Human Factors in Computing Systems
A study exploring which prompt keywords and model hyperparameters help produce coherent outputs from text-to-image generative models; prompts are structured to include subject and style keywords, and the study investigates their success and failure modes.
The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation
193 Citations · 2023 · Cole Evan Short, Jeremy C. Short
Journal of Business Venturing Insights
To better understand the role of artificial intelligence in the development of entrepreneurial rhetoric, we examine how generative language models such as ChatGPT serve as viable tools for content creation. Using an established framework for examining CEO celebrity (Creator, Transformer, Rebel, and Savior), we illustrate how such models can effectively produce and refine elevator pitches, social media pitches, and crowdfunding pitches commonly used in the study of entrepreneurial rhetoric. We demonstrate ChatGPT's ability to mimic each celebrity CEO archetype by prompting language in the style...
A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
164 Citations · 2024 · Pranab Sahoo, Ayush Singh, Sriparna Saha + 3 more
arXiv (Cornell University)
A systematic analysis of recent advancements in prompt engineering enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
The CLEAR path: A framework for enhancing information literacy through prompt engineering
246 Citations · 2023 · Leo S. Lo
The Journal of Academic Librarianship
This article introduces the CLEAR Framework for Prompt Engineering, designed to optimize interactions with AI language models like ChatGPT. The framework encompasses five core principles—Concise, Logical, Explicit, Adaptive, and Reflective—that facilitate more effective AI-generated content evaluation and creation. Additionally, the article discusses technical aspects of prompts, such as tokens, temperature, and top-p settings. By integrating the CLEAR Framework into information literacy instruction, academic librarians can empower students with critical thinking skills for the ChatGPT era and...
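The article's aside about temperature and top-p settings can be made concrete with a minimal sketch, assuming a toy four-token vocabulary; the function names are illustrative, not an API from the article. Temperature rescales logits before the softmax, and nucleus (top-p) sampling keeps only the smallest set of highest-probability tokens whose mass reaches p.

```python
import math

# Hedged sketch of sampling settings: temperature scaling and top-p filtering.
# Function names and the toy logits are illustrative.
def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities; low T sharpens, high T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Return indices of the nucleus: top tokens covering probability mass p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break
    return kept

logits = [2.0, 1.0, 0.5, -1.0]
sharp = softmax_with_temperature(logits, temperature=0.2)  # low T: peakier
flat = softmax_with_temperature(logits, temperature=2.0)   # high T: flatter
```

In practice these appear as the `temperature` and `top_p` parameters of LLM sampling APIs: lowering temperature or p makes output more deterministic, which matters when crafting prompts for evaluation.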
Prompt-aligned Gradient for Prompt Tuning
199 Citations · 2023 · Beier Zhu, Yulei Niu, Yucheng Han + 2 more
journal unavailable
This paper presents Prompt-aligned Gradient, dubbed ProGrad, which prevents prompt tuning from forgetting the general knowledge learned from VLMs, and demonstrates the stronger few-shot generalization ability of ProGrad over state-of-the-art prompt tuning methods.
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation With Large Language Models
171 Citations · 2022 · Hendrik Strobelt, Albert Webson, Victor Sanh + 4 more
IEEE Transactions on Visualization and Computer Graphics
A workflow that lets users first focus on model feedback using small data before moving to a large-data regime that empirically grounds promising prompts with quantitative task measures, and then supports easy deployment of the newly created ad-hoc models.
Improving large language models for clinical named entity recognition via prompt engineering
245 Citations · 2024 · Yan Hu, Qingyu Chen, Jingcheng Du + 9 more
Journal of the American Medical Informatics Association
A clinical task-specific prompt framework, incorporating medical knowledge and training samples, significantly enhances GPT models' feasibility for potential clinical applications and suggests a promising direction in leveraging LLMs for clinical NER tasks.
Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language
238 Citations · 2023 · Paul Denny, Viraj Kumar, Nasser Giacaman
journal unavailable
Evaluating the performance of Copilot on a publicly available dataset of 166 programming problems finds that it successfully solves around half of these problems on its very first attempt, and that it solves 60% of the remaining problems using only natural language changes to the problem description.
Extracting accurate materials data from research papers with conversational language models and prompt engineering
249 Citations · 2024 · Maciej P. Polak, Dane Morgan
Nature Communications
This work proposes ChatExtract, a method that can fully automate very accurate data extraction with minimal initial effort and background using an advanced conversational LLM, and shows that approaches similar to ChatExtract are likely to become powerful tools for data extraction in the near future.
Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies such as ChatGPT
171 Citations · 2023 · Paweł Korzyński, Grzegorz Mazurek, Pamela Krzypkowska + 1 more
Entrepreneurial Business and Economics Review
This study aimed to provide a deeper understanding of the intricacies involved in AI prompt engineering and its role as a digital competence, and contributed the AI PROMPT framework to the field, providing clear and actionable guidelines for text‐to‐text prompt engineering.
Prompt-to-Prompt Image Editing with Cross Attention Control
360 Citations · 2022 · Amir Hertz, Ron Mokady, Jay M. Tenenbaum + 3 more
arXiv (Cornell University)
This paper analyzes a text-conditioned model in depth and observes that the cross-attention layers are the key to controlling the relation between the spatial layout of the image to each word in the prompt, and presents several applications which monitor the image synthesis by editing the textual prompt only.
Unleashing ChatGPT's Power: A Case Study on Optimizing Information Retrieval in Flipped Classrooms via Prompt Engineering
116 Citations · 2023 · Mo Wang, Minjuan Wang, Xin Xu + 3 more
IEEE Transactions on Learning Technologies
Examining the information quality obtained from ChatGPT in a flipped classroom by evaluating its effectiveness in task completion among 26 novice undergraduates from the same major and cohort provides evidence that proficient mastery of prompt engineering improves the quality of information obtained by students using ChatGPT.
Do Prompt-Based Models Really Understand the Meaning of Their Prompts?
171 Citations · 2022 · Albert Webson, Ellie Pavlick
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
It is found that models can learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts, and instruction-tuned models often produce good predictions with irrelevant and misleading prompts even at zero shots.
Few-shot is enough: exploring ChatGPT prompt engineering method for automatic question generation in english education
169 Citations · 2023 · Unggi Lee, H. C. Jung, Younghoon Jeon + 4 more
Education and Information Technologies
The study findings indicate that the combined use of LLMs and prompt engineering in AQG produces questions with statistically significant validity, and ChatGPT sheds light on the potential for collaborative AI-teacher efforts in question generation, especially within English education.
Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction
117 Citations · 2022 · Yubo Ma, Zehao Wang, Yixin Cao + 4 more
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
PAIE, an effective yet efficient model for both sentence-level and document-level Event Argument Extraction (EAE), is proposed; it also generalizes well when training data is scarce.
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
243 Citations · 2022 · Shengding Hu, Ning Ding, Huadong Wang + 5 more
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This work focuses on incorporating external knowledge into the verbalizer, forming knowledgeable prompt-tuning (KPT), to improve and stabilize prompt tuning.
Engineering Robust Ag‐Decorated Polydopamine Nano‐Photothermal Platforms to Combat Bacterial Infection and Prompt Wound Healing
479 Citations · 2022 · Xiaoliang Qi, Yijing Huang, Shengye You + 8 more
Advanced Science
A facile approach to boost the PCE of PDA for photothermal antibacterial therapy is developed, providing a significant step forward in advancing the application of PDA nano-photothermal agents.
Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education
613 Citations · 2024 · Yoshija Walter
International Journal of Educational Technology in Higher Education
This discussion examines the transformative impact of Artificial Intelligence in educational settings, focusing on the necessity for AI literacy, prompt engineering proficiency, and enhanced critical thinking skills, with a detailed analysis of strategies for embedding these skills within educational curricula and pedagogical practices.
Prompt Distribution Learning
204 Citations · 2022 · Yuning Lu, Jianzhuang Liu, Yonggang Zhang + 2 more
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This work presents prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks and employs a Gaussian distribution to model them effectively and derive a surrogate loss for efficient training.
CODA-Prompt: COntinual Decomposed Attention-Based Prompting for Rehearsal-Free Continual Learning
206 Citations · 2023 · James Smith, Leonid Karlinsky, Vyshnavi Gutta + 6 more
journal unavailable
This work proposes to learn a set of prompt components which are assembled with input-conditioned weights to produce input-conditioned prompts, resulting in a novel attention-based end-to-end key-query scheme.
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
3361 Citations · 2022 · Pengfei Liu, Weizhe Yuan, Jinlan Fu + 3 more
ACM Computing Surveys
The basics of this promising paradigm in natural language processing are introduced, a unified set of mathematical notations that can cover a wide variety of existing work are described, and existing work is organized along several dimensions.
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
510 Citations · 2021 · Pengfei Liu, Weizhe Yuan, Jinlan Fu + 3 more
arXiv (Cornell University)
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to ...
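The template-and-slot mechanism the survey describes can be sketched for a toy sentiment task. The scoring function below is a stand-in for a real masked language model, and all names (`TEMPLATE`, `VERBALIZER`, `toy_fill_probability`) are illustrative assumptions, not from the survey.

```python
# Hedged sketch of prompt-based learning: map input x into a template with an
# unfilled slot, let an LM fill it, and map the filled word to a label via a
# verbalizer. The probability function is a toy stand-in for a real LM.
TEMPLATE = "{text} Overall, it was a [Z] movie."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def toy_fill_probability(prompt: str, candidate: str) -> float:
    """Stand-in for P(candidate fills [Z] | prompt); illustrative only."""
    return 0.9 if ("loved" in prompt and candidate == "great") else 0.1

def classify(text: str) -> str:
    """Pick the verbalizer word the (toy) LM prefers, return its label."""
    prompt = TEMPLATE.format(text=text)
    best = max(VERBALIZER, key=lambda w: toy_fill_probability(prompt, w))
    return VERBALIZER[best]

print(classify("I loved every minute."))
```

Replacing `toy_fill_probability` with scores from an actual masked LM would recover the cloze-style prompting the survey formalizes.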
Polycrisis: Prompts for an emerging worldview
148 Citations · 2023 · David Henig, Daniel M. Knight
Anthropology Today
Taking the realms of business, finance and economic history by storm, polycrisis captures the complexity of an increasingly uncertain world in a state of flux and transition. Proponents of the polycrisis model, such as prominent economic historian and Financial Times contributing editor Adam Tooze, propose polycrisis as a marker of our age, capturing overlapping and interconnected crises beyond cause and effect. In this article, the authors offer some prompts for considering the usefulness and limitations of polycrisis for the anthropological toolkit. The authors cautiously welcome the polycris...
Prompting for Multi-Modal Tracking
131 Citations · 2022 · Jinyu Yang, Zhe Li, Feng Zheng + 2 more
Proceedings of the 30th ACM International Conference on Multimedia
A novel multi-modal prompt tracker (ProTrack), which can transfer the multi-modal inputs to a single modality by the prompt paradigm, and can achieve high-performance multi-modal tracking by only altering the inputs, even without any extra training on multi-modal data.
Learning to Prompt for Continual Learning
588 Citations · 2022 · Zifeng Wang, Zizhao Zhang, Chen-Yu Lee + 7 more
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time, and achieves competitive results against rehearsal-based methods even without a rehearsal buffer.
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
119 Citations · 2021 · Yao Lu, Max Bartolo, A. S. Moore + 2 more
arXiv (Cornell University)
When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are "fantastic" and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of sa...
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
412 Citations · 2022 · Yao Lu, Max Bartolo, Alastair Moore + 2 more
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This work uses the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, it identifies performant prompts and yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.
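The entropy-based selection described here can be sketched under toy assumptions: a stand-in predictor replaces the real LM, and a small probe set plays the role of the artificial development set. All names and the position-bias behaviour below are illustrative, not the paper's implementation.

```python
import itertools
import math

# Hedged sketch: score each ordering of few-shot examples by the entropy of
# the label distribution predicted on a probe set, preferring orderings that
# are not degenerate (e.g., predicting one label for everything).
def entropy(probs):
    """Shannon entropy of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def label_distribution(ordering, probe_inputs, predict):
    """Fraction of probe inputs assigned to each predicted label."""
    counts = {}
    for x in probe_inputs:
        y = predict(ordering, x)
        counts[y] = counts.get(y, 0) + 1
    n = len(probe_inputs)
    return [c / n for c in counts.values()]

def best_ordering(examples, probe_inputs, predict):
    """Pick the permutation whose predictions are least degenerate."""
    return max(itertools.permutations(examples),
               key=lambda perm: entropy(
                   label_distribution(perm, probe_inputs, predict)))

examples = [("good", "pos"), ("bad", "neg")]
probe = ["fine", "awful", "great", "poor"]

def toy_predict(ordering, x):
    # Toy position bias: orderings starting with "pos" make the model
    # echo that label for everything (a degenerate, low-entropy ordering).
    if ordering[0][1] == "pos":
        return "pos"
    return "pos" if x in ("fine", "great") else "neg"

chosen = best_ordering(examples, probe, toy_predict)
```

Under these toy assumptions, the degenerate ordering yields entropy 0 while the balanced one yields ln 2, so the selector avoids the permutation that collapses to a single label.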
Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts
655 Citations · 2023 · J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann + 1 more
journal unavailable
This work explores whether non-AI-experts can successfully engage in “end-user prompt engineering” using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies.