prompt engineering
What is prompt engineering?
Prompt engineering is an artificial intelligence engineering technique that serves several purposes. It encompasses the process of refining large language models, or LLMs, with specific prompts and recommended outputs, as well as the process of refining input to various generative AI services to generate text or images. As generative AI tools improve, prompt engineering will also be important in generating other kinds of content, including robotic process automation bots, 3D assets, scripts, robot instructions and other types of content and digital artifacts.
This AI engineering technique helps tune LLMs for specific use cases and uses zero-shot learning examples, combined with a particular data set, to measure and improve LLM performance. However, prompt engineering for various generative AI tools tends to be a more widespread use case, simply because there are far more users of existing tools than developers working on new ones.
Prompt engineering combines elements of logic, coding, art and -- in some cases -- special modifiers. The prompt can include natural language text, images or other types of input data. Although the most common generative AI tools can process natural language queries, the same prompt will likely generate different results across AI services and tools. It is also important to note that each tool has its own special modifiers to make it easier to describe the weight of words, styles, perspectives, layout or other properties of the desired response.
Why is prompt engineering important to AI?
Prompt engineering is essential for creating better AI-powered services and getting better results from existing generative AI tools.
In terms of creating better AI, prompt engineering can help teams tune LLMs and troubleshoot workflows for specific results. For example, enterprise developers might experiment with this aspect of prompt engineering when tuning an LLM like GPT-3 to power a customer-facing chatbot or to handle enterprise tasks such as creating industry-specific contracts.
This article is part of
What is Gen AI? Generative AI explained
In an enterprise use case, a law firm might want to use a generative model to help lawyers automatically generate contracts in response to a specific prompt. They might have specific requirements that all new clauses in the new contracts reflect existing clauses found across the firm's existing library of contract documentation, rather than including new summaries that could introduce legal issues. In this case, prompt engineering would help fine-tune the AI systems for the highest level of accuracy.
On the other hand, an AI model being trained for customer service might use prompt engineering to help consumers find solutions to problems from across an extensive knowledge base more efficiently. In this case, it might be desirable to use natural language processing (NLP) to generate summaries in order to help people with different skill levels analyze the problem and solve it on their own. For example, a skilled technician might only need a simple summary of key steps, while a novice would need a longer step-by-step guide elaborating on the problem and solution using more basic terms.
Prompt engineering can also play a role in identifying and mitigating various types of prompt injection attacks. These kinds of attacks are a modern variant of SQL injection attacks, in which malicious actors or curious experimenters try to break the logic of generative AI services, such as ChatGPT, Microsoft Bing Chat or Google Bard. Experimenters have found that the models can exhibit erratic behavior if asked to ignore previous commands, enter a special mode or make sense of contrary information. In these cases, enterprise developers can recreate the problem by exploring the prompts in question and then fine-tune the deep learning models to mitigate the problem.
In other cases, researchers have found ways to craft particular prompts for the purpose of interpreting sensitive information from the underlying generative AI engine. For example, experimenters have found that the secret name of Microsoft Bing's chatbot is Sydney and that ChatGPT has a special DAN -- aka "Do Anything Now" -- mode that can break normal rules. Prompt engineering could help craft better protections against unintended results in these cases.
This is not necessarily a trivial process. Microsoft's Tay chatbot started spewing out inflammatory content in 2016, shortly after being connected to Twitter, now known as the X platform. More recently, Microsoft simply reduced the number of interactions with Bing Chat within a single session after other problems started emerging. However, since longer-running interactions can lead to better results, improved prompt engineering will be required to strike the right balance between better results and safety.
In terms of improved results for existing generative AI tools, prompt engineering can help users identify ways to reframe their query to home in on the desired results. A writer, for instance, could experiment with different ways of framing the same question to tease out how to format text in a particular style and within various constraints. For example, in tools such as OpenAI's ChatGPT, variations in word order and the number of times a single modifier is used (e.g., very vs. very, very, very) can significantly affect the final text.
Developers can also use prompt engineering to combine examples of existing code and descriptions of problems they are trying to solve for code completion. Similarly, the right prompt can help them interpret the purpose and function of existing code to understand how it works and how it could be improved or extended.
In the case of text-to-image synthesis, prompt engineering can help fine-tune various characteristics of generated imagery. Users can request that the AI model create images in a particular style, perspective, aspect ratio, point of view or image resolution. The first prompt is usually just the starting point, as subsequent requests enable users to downplay certain elements, enhance others and add or remove objects in an image.
Examples of prompt engineering
There are vast differences in the types of prompts one might use for generating text, code or images. Here are some examples for different types of content:
Text: ChatGPT, GPT
- What's the difference between generative AI and traditional AI?
- What are 10 compelling variations for the headline, "Top generative AI use cases for the enterprise"?
- Write an outline for an article about the benefits of generative AI for marketing.
- Now write 300 words for each section.
- Create an engaging headline for each section.
- Write a 100-word product description for ProductXYZ in five different styles.
- Define prompt engineering in iambic pentameter in the style of Shakespeare.
Code: ChatGPT, Codex
- Act as an American Standard Code for Information Interchange (ASCII) artist that translates object names into ASCII code.
- Find mistakes in the following code snippet.
- Write a function that multiplies two numbers and returns the result.
- Create a basic REST API in Python.
- What function is the following code doing?
- Simplify the following code.
- Continue writing the following code.
Images: Stable Diffusion, Midjourney, Dall-E 2
- A dog in a car wearing sunglasses and a hat in the style of Salvador Dali.
- A lizard on the beach in the style of claymation art.
- A man using a phone on the subway, 4K, bokeh (a higher, 4K resolution image with bokeh blurring).
- A sticker illustration of a woman drinking coffee at a table with a checkered tablecloth.
- A jungle forest with cinematic lighting and nature photography.
- A first-person image looking out at orange clouds during a sunrise.
Tips and best practices for writing prompts
The No. 1 tip is to experiment first by phrasing a similar concept in diverse ways to see how they work. Explore different ways of requesting variations based on elements such as modifiers, styles, perspectives, authors or artists and formatting. This will enable you to tease apart the nuances that will produce the more interesting result for a particular type of query.
Next, find best practices for a specific workflow. For example, if you write marketing copy for product descriptions, explore different ways of asking for different variations, styles and levels of detail. On the other hand, if you are trying to understand a difficult concept, it might be helpful to ask how it compares and contrasts with a related concept as a way to help understand the differences.
It's also helpful to play with the different types of input you can include in a prompt. A prompt could consist of examples, input data, instructions or questions. You might want to explore different ways of combining these. Even though most tools limit the amount of input, it's possible to provide instructions in one round that apply to subsequent prompts.
Once you have some basic familiarity with a tool, then it's worth exploring some of its special modifiers. Many generative AI apps have short keywords for describing properties such as style, level of abstraction, resolution and aspect ratio, as well as methods for weighing the importance of words in the prompt. These can make it easier to describe specific variations more precisely and reduce time spent writing prompts.
It can also be worth exploring prompt engineering integrated development environments (IDEs). These tools help organize prompts and results for engineers to fine-tune generative AI models and for users looking to find ways to achieve a particular type of result. Engineering-oriented IDEs include tools such as Snorkel, PromptSource and PromptChainer. More user-focused prompt engineering IDEs include GPT-3 Playground, DreamStudio and Patience.