
What is Prompt Engineering?


Transcript


0:00
Today we'll explore prompt engineering and understand why it's such an essential skill for AI engineers. Traditionally, a user provides a query, and this query is called a prompt. It is passed to the LLM, which processes it and generates an output. Prompt engineering improves this process by crafting clear, effective instructions and adding them to the user's query before sending it to the LLM. The instruction defines rules to follow, the communication style, the purpose, and the overall goal. This greatly improves the quality of the output produced by the LLM.

0:38
So we can think of prompt engineering as an intermediate step before sending the user's query to the LLM. In this step, we apply various techniques to enhance the query with clear and effective instructions.
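This intermediate step can be sketched as a small function that enriches the raw query with instructions before anything reaches the model. A minimal illustration only: the instruction wording and the function names are assumptions, not something specified in the video.

```python
def build_prompt(user_query: str) -> str:
    """Enrich the raw user query with clear instructions
    before it is sent to the LLM (the prompt-engineering step)."""
    instruction = (
        "You are a helpful assistant. "        # purpose / overall goal
        "Answer concisely in plain English. "  # communication style
        "If you are unsure, say so."           # rules to follow
    )
    return f"{instruction}\n\nUser query: {user_query}"

# The enriched prompt, not the bare query, is what gets sent to the LLM.
prompt = build_prompt("What is prompt engineering?")
```

In a real application the returned string would then be passed to whatever LLM client the application uses.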

0:51

0:51
So what are the main techniques used in prompt engineering? Let's go over the most important ones.

0:58
One, few-shot prompting. In few-shot prompting, we include a few input-output examples in the prompt to guide the LLM's behavior and help it understand the desired pattern. For example, if we want to use an LLM as an English-to-French translator, instead of just sending the query "Where is the train station?", we can first provide a few English-to-French translation examples and include them in the prompt as part of the instruction. This way, the LLM can produce the format, tone, and style we desire by following the pattern of the provided examples.
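The translator example above can be sketched as follows. The specific example pairs are my own; only the overall technique (prepending worked examples so the model imitates their format) comes from the video.

```python
# Few-shot prompting: prepend input -> output examples so the model
# imitates their format, tone, and style.
FEW_SHOT_EXAMPLES = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
    ("See you tomorrow.", "À demain."),
]

def few_shot_prompt(query: str) -> str:
    lines = ["Translate English to French."]
    for english, french in FEW_SHOT_EXAMPLES:
        lines.append(f"English: {english}\nFrench: {french}")
    # The real query goes last, in the same format as the examples,
    # so the model's natural continuation is the French translation.
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

prompt = few_shot_prompt("Where is the train station?")
```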

1:34

1:34
Two, zero-shot prompting. In zero-shot prompting, we only provide an instruction for the translation task without including any specific examples. For example, we can give an instruction like "Translate the following sentence into French." This makes the task clear and helps the language model understand exactly what to do.

1:56
Three, chain-of-thought prompting. In this technique, we ask the model to reason step by step before giving the final answer. In its few-shot setup, we encourage step-by-step reasoning by including examples that demonstrate the reasoning process. In a zero-shot setup, we simply ask the model to reason step by step before answering. This often improves the quality of the LLM's output.
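Both chain-of-thought setups can be sketched as prompt builders. The worked arithmetic example and the exact trigger phrase are illustrative choices of mine, not prescribed by the video.

```python
def chain_of_thought_prompt(question: str, few_shot: bool = False) -> str:
    """Build a chain-of-thought prompt.

    Zero-shot variant: just append a step-by-step instruction.
    Few-shot variant: also include a worked example that
    demonstrates the reasoning process.
    """
    parts = []
    if few_shot:
        parts.append(
            "Q: A pen costs 2 euros and a notebook costs 3 euros. "
            "What do 2 pens and 1 notebook cost?\n"
            "A: 2 pens cost 2 * 2 = 4 euros. Adding the notebook, "
            "4 + 3 = 7 euros. The answer is 7 euros."
        )
    parts.append(f"Q: {question}")
    parts.append("A: Let's think step by step.")  # triggers the reasoning
    return "\n\n".join(parts)
```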

2:21

2:21
Four, role-specific prompting. In role-specific prompting, we instruct the model to take on a specific role or persona. For example, we can instruct the model by saying, "You are a financial advisor." This helps the LLM generate more accurate and context-appropriate responses.
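When the model is called through a chat-style API, the role instruction is commonly placed in a system message. A minimal sketch under that assumption; the exact message format varies by provider.

```python
# Role-specific prompting: set the persona once (here via a system
# message), then every user question is answered in that role.
def role_prompt(role, question):
    return [
        {"role": "system", "content": f"You are a {role}."},
        {"role": "user", "content": question},
    ]

messages = role_prompt(
    "financial advisor",
    "How should I start saving for retirement?",
)
```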

2:40

2:40
Five, prompt hierarchy. In prompt hierarchy, we establish different levels of authority within the instructions. A typical setup includes a system message, where hidden instructions define the model's behavior, guardrails, and high-level goals; this instruction is known as the system prompt. A developer prompt contains instructions from the application developer that define formatting rules and customize the LLM's behavior. And finally, a user prompt, which is just the user's direct input or question. Together, these three levels form the complete prompt that is sent to the LLM.

3:21
While these are the most common prompt engineering techniques, there are many others as well, such as negative prompting, where we include "do not" instructions that specify what the model should avoid doing; iterative prompting, which relies on trial and error to refine and discover the most effective instruction; and prompt chaining, where we break a complex task into smaller, more manageable steps and guide the model through them in sequence. For example, instead of directly asking an LLM to determine whether a product is healthy from an image of its ingredients, we can first ask the LLM to extract the text from the image and then decide if the product is healthy.

4:06
What are the principles and best practices? Here are some key principles and best practices to keep in mind when designing your prompts.

4:14

4:14
One, start simple. Begin with straightforward prompts, then gradually move to more complex ones. For example, start with a simple request like "Summarize this article." Once that works well, expand it to something more detailed, such as "Summarize this article in two sentences for a startup founder."
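The progression above can be captured as layering constraints onto a base prompt only once the simple version works. The `refine` helper is a hypothetical convenience of mine, not something from the video.

```python
# Start simple...
prompt_v1 = "Summarize this article."

# ...then layer constraints onto the working base prompt.
def refine(base, *constraints):
    """Append extra constraints to a base prompt, one at a time."""
    return " ".join((base.rstrip("."),) + constraints).rstrip(".") + "."

prompt_v2 = refine(prompt_v1, "in two sentences", "for a startup founder")
```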

4:35

4:35
Two, break down tasks. Divide complex tasks into smaller, manageable steps. For example, instead of asking "Write a full research proposal on education," prompt step by step: list three main challenges of the education system, then suggest possible research directions for each challenge, and finally combine these ideas into a short proposal.

5:00
Three, be specific. Clearly state what you expect in terms of format, style, and desired outcomes. For example, say "Write a two-paragraph summary in a formal tone. Use bullet points for key facts and conclude with one recommendation."
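Breaking a task down in practice means sending one prompt per sub-task and feeding each answer into the next prompt, as in the research-proposal example above. A sketch with a stubbed `call_llm` in place of a real model call, so the chaining logic itself is runnable:

```python
def call_llm(prompt):
    """Stand-in for a real model call; it just echoes the prompt
    so the chaining logic can run without an API."""
    return f"<answer to: {prompt}>"

def research_proposal_chain(topic):
    # Step 1: surface the main challenges.
    challenges = call_llm(f"List three main challenges of the {topic} system.")
    # Step 2: use step 1's output as context for the next prompt.
    directions = call_llm(
        f"Given these challenges:\n{challenges}\n"
        "Suggest possible research directions for each challenge."
    )
    # Step 3: combine everything into the final artifact.
    return call_llm(
        f"Combine these research directions:\n{directions}\n"
        "into a short research proposal."
    )

proposal = research_proposal_chain("education")
```

With a real client, each `call_llm` would be an actual model request, and errors in early steps can be caught before the final step runs.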

5:20

5:20
Four, include necessary information. Adjust the prompt length to strike a balance: be concise, but detailed enough for the model to understand the task clearly. For example, instead of saying "Write a report," provide essential context, like "Write a one-page report summarizing the key findings from the attached experiment results, focusing on model accuracy and failure cases."

5:48
That wraps up a quick overview of prompt engineering: what it is, how it works, and how to use it effectively. To learn more about prompt engineering, check out the links in the description.

Summary

The video provides an overview of prompt engineering, an essential skill for AI engineers. It explains how prompt engineering improves LLM output by crafting clear, effective instructions added to user queries. The discussion covers various techniques, including few-shot, zero-shot, chain-of-thought, and role-specific prompting and prompt hierarchy, as well as negative prompting, iterative prompting, and prompt chaining. Additionally, it highlights best practices for designing prompts, such as starting simple, breaking down complex tasks, being specific, and including the necessary information for the LLM to understand the task clearly.
