What is Prompt Engineering?
Today we'll explore prompt engineering
and understand why it's such an
essential skill for AI engineers.
Traditionally, a user provides a query
and this query is called a prompt. It is
passed to the LLM which processes it and
generates an output. Prompt engineering
improves this process by crafting clear, effective instructions and adding them
to the user's query before sending it to
the LLM. The instruction defines rules
to follow, the communication style, the
purpose, and the overall goal. This
greatly improves the quality of the
output produced by the LLM. So, we can
think of prompt engineering as an
intermediate step before sending the
user's query to the LLM. In this step,
we apply various techniques to enhance
the query with clear and effective
instructions.
So, what are the main techniques used in
prompt engineering? Let's go over the
most important ones. One, few-shot prompting. In few-shot prompting, we include a few input-output examples in the prompt to guide the LLM's behavior and help it understand the desired pattern. For example, if we want to use an LLM as an English-to-French translator, instead of just sending the query "Where is the train station?", we can first provide a few English-to-French translation examples and include them in the prompt as part of the instruction. This way, the LLM can follow the format, tone, and style of the provided examples.
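A few-shot translation prompt like the one described can be assembled with a small helper; the example pairs and the `build_few_shot_prompt` function below are illustrative, not any particular library's API:

```python
# Example English-to-French pairs that show the model the desired pattern.
EXAMPLES = [
    ("Good morning", "Bonjour"),
    ("Thank you very much", "Merci beaucoup"),
    ("How much does this cost?", "Combien ca coute ?"),
]

def build_few_shot_prompt(query: str) -> str:
    """Prepend input/output examples to the user's query."""
    lines = ["Translate English to French."]
    for english, french in EXAMPLES:
        lines.append(f"English: {english}\nFrench: {french}")
    # The real query goes last, in the same format as the examples.
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

print(build_few_shot_prompt("Where is the train station?"))
```

The trailing "French:" cue nudges the model to complete the pattern with just the translation.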
Two, zero-shot prompting. In zero-shot prompting, we only provide an instruction for the task without including any specific examples. For example, we can give an instruction like "Translate the following sentence into French." This makes the task clear and helps the language model understand exactly what to do. Three, chain-of-thought prompting. In this technique, we ask the model to reason step by step before giving the final answer. In the few-shot setup, we encourage step-by-step reasoning by including examples that demonstrate the reasoning process. In the zero-shot setup, we simply ask the model to reason step by step before answering. This often improves the quality of the LLM's output.
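Both chain-of-thought variants can be sketched as plain prompt builders; the exact wording and the worked example below are illustrative:

```python
# Zero-shot chain-of-thought: append a reasoning cue to the question.
def zero_shot_cot(question: str) -> str:
    return f"{question}\nLet's think step by step."

# Few-shot chain-of-thought: include a worked example that
# demonstrates the reasoning process before the real question.
WORKED_EXAMPLE = (
    "Q: A pen costs 2 dollars and a notebook costs 3 dollars. "
    "What do 2 pens and 1 notebook cost?\n"
    "A: 2 pens cost 2 * 2 = 4 dollars. Adding the notebook gives "
    "4 + 3 = 7 dollars. The answer is 7."
)

def few_shot_cot(question: str) -> str:
    return f"{WORKED_EXAMPLE}\n\nQ: {question}\nA:"

print(zero_shot_cot("If a train leaves at 9:00 and the trip takes 2.5 hours, when does it arrive?"))
print(few_shot_cot("A bus ticket costs 3 dollars. What do 5 tickets cost?"))
```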
Four, role-specific prompting. In role-specific prompting, we instruct the model to take on a specific role or persona. For example, we can instruct the model by saying, "You are a financial advisor." This helps the LLM generate more accurate and context-appropriate responses.
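A persona can be prepended to the query as a plain instruction; the `with_role` helper below is an illustrative sketch, not a specific library's API:

```python
# Role-specific prompting: prepend a persona instruction so the model
# answers from a consistent point of view.
def with_role(role: str, query: str) -> str:
    return f"You are {role}. Answer from that perspective.\n\n{query}"

prompt = with_role("a financial advisor", "Should I pay off debt or invest first?")
print(prompt)
```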
Five, prompt hierarchy. In prompt hierarchy, we establish different levels of authority within the instructions. A typical setup includes a system message, where hidden instructions define the model's behavior, guardrails, and high-level goals; this instruction is known as the system prompt. A developer prompt contains instructions from the application developer that define formatting rules and customize the LLM's behavior. And finally, a user prompt, which is just the user's direct input or question. Together, these three levels form the complete prompt that is sent to the LLM. While these are the most common prompt engineering techniques, there are many others as well, such as negative prompting, where we include "do not" instructions that specify what the model should avoid doing; iterative prompting, which relies on trial and error to refine and discover the most effective instruction;
and prompt chaining, where we break a complex task into smaller, more manageable steps and guide the model through them in sequence. For example, instead of directly asking an LLM to determine if a product is healthy from an image of its ingredients, we can first ask the LLM to extract the text from the image and then decide whether the product is healthy.
What are the principles and best practices?
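Before getting to those, the ingredient-label example above can be sketched as a two-step chain; `call_llm` is a hypothetical stand-in for any model call, stubbed with canned responses here so the flow runs on its own:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; returns canned text."""
    if "Extract the text" in prompt:
        return "sugar, palm oil, cocoa, emulsifier"
    return "Not healthy: sugar and palm oil are the main ingredients."

def is_product_healthy(image_description: str) -> str:
    # Step 1: extract the ingredient list from the image.
    ingredients = call_llm(f"Extract the text from this image: {image_description}")
    # Step 2: judge healthiness from the extracted text.
    return call_llm(f"Given these ingredients: {ingredients}. Is the product healthy?")

print(is_product_healthy("photo of a chocolate-spread label"))
```

Each step gets a simpler, more focused prompt than the original end-to-end question.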
Here are some key principles and best
practices to keep in mind when designing
your prompts.
One, start simple. Begin with a straightforward prompt, then gradually move to more complex ones. For example,
start with a simple request like
summarize this article. Once that works
well, expand it to something more
detailed, such as summarize this article
in two sentences for a startup founder.
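This progression can be sketched as a single prompt builder that starts minimal and layers on detail; the helper below is illustrative:

```python
# Start simple: get a basic prompt working, then gradually add detail.
def summarize_prompt(article: str, audience: str = "", sentences: int = 0) -> str:
    """Build a summarization prompt; audience and length are optional refinements."""
    parts = ["Summarize this article"]
    if sentences:
        parts.append(f"in {sentences} sentences")
    if audience:
        parts.append(f"for {audience}")
    return " ".join(parts) + f".\n\n{article}"

article = "..."  # placeholder for the real article text
print(summarize_prompt(article))                          # simple first pass
print(summarize_prompt(article, "a startup founder", 2))  # refined version
```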
Two, break down tasks. Divide complex tasks into smaller, manageable steps. For example, instead of asking "Write a full research proposal on education," start by prompting step by step: first, list three main challenges of the education system. Then suggest possible research directions for each challenge. And finally, combine these ideas into a short proposal. Three, be specific.
Clearly state what you expect in terms
of format, style, and desired outcomes.
For example, say, "Write a two-paragraph summary in a formal tone. Use bullet points for key facts, and conclude with one recommendation."
Four, include necessary information.
Adjust the prompt length to strike a
balance. Be concise but detailed enough
for the model to understand the task
clearly. For example, instead of saying "write a report," provide essential context, like "write a one-page report summarizing the key findings from the attached experiment results, focusing on model accuracy and failure cases." That
wraps up a quick overview of prompt
engineering, what it is, how it works,
and how to use it effectively. To learn
more about prompt engineering, check out
the links in the description.