How AWS S3 is built

Transcript

0:00

AWS S3 is the world's largest

0:02

cloud storage service, but just how big

0:04

is it and how is it engineered

0:05

to be as reliable as it is at such a

0:08

massive scale? Milan is the VP of data

0:10

and analytics at AWS and has been

0:12

running S3 for 13 years. Today we

0:14

discuss the sheer scale of S3 in

0:17

the data stored and the number of

0:18

servers it runs on. How seemingly

0:20

overnight AWS went from an eventually

0:22

consistent data store to a

0:24

strongly consistent one and the massive

0:26

engineering and complexity behind this move.

0:28

What is correlated failure, crash

0:30

consistency, and failure allowances, and

0:32

why engineers on S3 live and

0:34

breathe these concepts, the importance

0:36

of formal methods to ensure correctness

0:37

at S3 scale, and many more. A lot of

0:40

these topics are ones that AWS engineering

0:42

rarely talks about in public. I hope you

0:44

enjoy these rare details shared.

0:46

If you're interested in how one of the

0:47

largest systems in the world is built

0:49

and keeps evolving, this episode is for

0:51

you. This episode is presented by

0:52

Statsig, the unified platform for flags,

0:54

analytics, experiments, and

0:56

more. Check out the show notes to learn

0:57

more about them and our other season

0:59

sponsors. So, Milan, welcome to the

1:01

podcast.

1:03

>> Thanks for having me.

1:04

>> To kick things off, can you tell me the

1:07

scale of S3 today?

1:09

>> Well, if you want to take a step back

1:11

and just think about S3, it is a place

1:14

where you put an incredible amount of

1:18

data. And so, right now, S3 holds over

1:20

500 trillion objects. We have hundreds

1:24

of exabytes of data. And we serve

1:27

hundreds of millions of transactions per

1:30

second worldwide. And if you want

1:32

another fun stat, we process over a

1:35

quadrillion requests every single year.

1:41

And what's under the hood of all that is

1:44

also pretty amazing scale. If you think

1:47

about, you know, what's underneath the

1:48

hood of S3, fundamentally there

1:51

are disks and servers which sit in

1:52

racks and those sit in buildings. And if

1:55

you try to think about all of the scale

1:57

of what is under the hood, we manage

2:00

tens of millions of hard drives across

2:02

millions of servers. And that is in 120

2:05

availability zones across 38

2:08

regions, which is pretty amazing if you

2:12

think about it.

2:12

>> So deep down it all starts with hard

2:14

drives sitting inside servers, sitting

2:16

inside racks, and then you have a bunch

2:18

of these racks and then rows of them,

2:20

buildings of them, right? And that's

2:21

what you said. So there's tens of

2:22

millions of hard drives deep down at

2:25

the bottom of this.

2:27

>> That's right. In fact, if you think

2:29

about the scale of this, if you imagine

2:32

stacking all of our drives one on top of

2:34

another, it would go all the way to the

2:37

International Space Station and just

2:39

about back. And so like that, I mean,

2:42

it's kind of a fun visual to have for us

2:44

who work on the service, but you know,

2:46

kind of fundamentally, it's it's really

2:48

hard to get your brain around the scale

2:49

of S3. And so a lot of our customers

2:52

they assume the

2:54

scale is there. They assume that you

2:56

know all of the drives are always there

2:58

and they just focus on what S3 is to

3:00

them which is it just works. It just

3:02

works for any type of data and all of

3:04

your data.

3:05

>> Yeah. Even I mean even for me for the

3:07

scale when you talk about exabytes I

3:09

actually had to look up exabytes because

3:11

I know of petabytes which is already

3:13

massive. If if a company has like one or

3:15

two or three petabytes of data it's

3:17

tons. And an exabyte is, yes,

3:20

a thousand petabytes is an exabyte

3:22

and you told me that you're

3:24

thinking at that level. It's just

3:25

hard to fathom.

3:27

>> Yeah, we I mean we have individual

3:29

customers that have exabytes of data.

3:32

Individual customers who have exabytes

3:34

of data and what they call a data lake.

3:36

Although last week I heard a great term.

3:38

We had the um Sony group CEO talk about

3:42

what Sony is doing with data and they

3:44

refer to it as a data ocean and not a

3:46

data lake but a data ocean and so like

3:49

if you have exabytes of data in your

3:52

data lake it is in fact a data ocean and

3:54

that ocean is is kind of fundamentally

3:57

S3.

3:57

>> Can you tell me how S3 started? I I did

4:00

some research and there was a story

4:02

about a distinguished engineer sitting

4:04

in a pub in Seattle. Who knows if it was

4:06

true or not but I read that this this

4:08

was a story that he was a bit frustrated

4:11

with engineers at Amazon building a lot

4:13

of infrastructure again and again.

4:15

>> Yeah. If you think back into you know S3

4:18

development really started in 2005 and

4:20

we launched as the first AWS service in

4:22

2006 and if you think about the

4:25

technical problems of 2006 you know a

4:28

lot of customers were building things

4:30

like like e-commerce websites right like

4:32

Amazon.com

4:34

and so the engineers at Amazon knew that

4:36

they had a lot of data that at the time

4:40

was very unstructured data it was PDFs

4:42

it was images it was backups and they

4:45

wanted a place where they could

4:46

store that at an economic price point

4:49

that let them not think about the growth

4:51

of storage. And so they built S3 and

4:53

they really built it for a certain type

4:54

of storage. And so the original design

4:56

of S3 in 2006 was really anchored around

5:00

eventual consistency. And the idea of

5:02

eventual consistency is that when you

5:04

put data in storage for S3, you know,

5:07

we're not going to give you an ack back

5:09

on your put unless we actually have your

5:11

data. So, we have your data, but the

5:13

eventual consistency part is that if you

5:15

were to list your data, it might not

5:17

show up because it's being eventually

5:19

consistent. It's there, but it might not

5:21

show up on a list. And so, we did that

5:24

at the time that consistency model at

5:26

the time, uh, we built that because, you

5:28

know, we were really optimizing for

5:30

things like durability and availability.
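[Editor's note: the eventual-consistency behavior described above can be modeled in a few lines. This is a toy Python sketch of the semantics, not S3's actual implementation: the put is acknowledged once the data is stored, but the listing index is updated asynchronously, so a list issued right after a put may not show the new key yet.]

```python
# Toy model of eventual consistency: PUT is acked once the data is
# stored, but the LIST index is updated asynchronously and may lag.
class EventuallyConsistentStore:
    def __init__(self):
        self.objects = {}        # durable object storage
        self.list_index = set()  # index used by LIST, updated lazily
        self.pending = []        # index updates not yet applied

    def put(self, key, value):
        self.objects[key] = value   # data is durable before we ack
        self.pending.append(key)    # index update happens later
        return "200 OK"             # ack: your data is stored

    def get(self, key):
        return self.objects.get(key)

    def list(self):
        return sorted(self.list_index)  # may miss recent puts

    def converge(self):
        # background process eventually applies the index updates
        while self.pending:
            self.list_index.add(self.pending.pop())

store = EventuallyConsistentStore()
store.put("photos/cat.jpg", b"...")
print(store.get("photos/cat.jpg") is not None)  # True: the data is there
print(store.list())                             # []: not in LIST yet
store.converge()
print(store.list())                             # ['photos/cat.jpg']
```

A human refreshing an e-commerce page is effectively retrying until `converge()` has run, which is why this model was acceptable for those workloads.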

5:32

And it worked like a champ for, you

5:34

know, e-commerce sites and things like

5:36

that because, you know, when a human was

5:38

interacting with an e-commerce site and

5:39

an image happened to not show up exactly

5:42

at the moment where you put the data

5:44

into storage, it was okay because a

5:46

human would just refresh. And so when we

5:48

launched in 2006, here's a a fun fact

5:51

for you. 2006 is actually when Apache

5:54

Hadoop first began as a community as

5:56

well. And so we had a set of what I

5:59

think of as frontier data customers like

6:01

Netflix and uh Pinterest who took a look

6:04

at things like Hadoop and they put it

6:07

together with the economics and the

6:09

attributes of S3 which is you know

6:12

unlimited storage with pretty good

6:15

performance at a great price point. And

6:18

they um they decided to build their you

6:21

know what we first began to call data

6:23

lakes at the time. they decided to build

6:26

to extend the idea of unstructured

6:28

storage and include things like tabular

6:31

data. And so the first wave of frontier

6:34

data customers were adopting quote

6:36

unquote data lakes in about 2013 to

6:41

2015. Those were the frontier data

6:44

customers born in the cloud. And around

6:46

2015 to I would say 2020, we started to

6:50

see all the enterprises take that same

6:52

data pattern of how can I use S3 the

6:56

home of all the unstructured data you

6:58

know on the planet and extend it to

7:00

tabular data and that's when about five

7:03

years ago 2020 I started to see a ton of

7:07

exabytes of you know basically parquet

7:10

files and you know I I have worked on S3

7:12

for a minute I started working on S3 in

7:15

20 I guess it was 2013.

7:18

I'd been at AWS since 2010, so kind of a

7:22

while. And the rise of parquet was

7:24

really interesting because what people

7:26

did is they said, "Oh, okay. I like the

7:29

traits and the attributes of S3 and I

7:32

want to apply it to a table." And so I

7:34

am going to run my own parquet data in

7:37

S3. And then you know around 2019 or

7:40

2020 we started to see basically

7:43

the rise of iceberg and iceberg at the

7:46

time you know is incredibly popular and

7:50

it gives the table attributes to the

7:53

underlying parquet data and customers

7:55

started to do it in you know many of my

7:58

largest data lakes across different

8:00

industries and different customers and

8:02

so one of the things that we did in 2024

8:05

is we introduced S3 tables

8:07

>> just for those who don't know what

8:08

iceberg is. So, it's it's an open-

8:10

source data format for like massive

8:11

analytic workflows. Right.

8:13

>> That's right. If I ask our customers of

8:16

these data oceans why they care so much

8:18

about iceberg, it's because they want to

8:21

be able to have what a lot of customers

8:23

are calling this decentralized analytics

8:26

architecture where, you know, they can

8:28

have lines of businesses or different

8:30

teams within their company that pick

8:32

what type of analytics to use as long as

8:35

it's Iceberg compliant. And so if Iceberg

8:38

is the common metaphor for data for

8:40

tabular data then you have choice you

8:43

have flexibility and choice for what

8:45

type of analytics engines you use in a

8:47

decentralized analytics architecture and

8:50

so I think that's one of the reasons why

8:52

iceberg has just taken off is that it

8:54

makes it easy to use data at scale but

8:56

it also gives a business owner this you

8:58

know the chief data officers or the CTOs

9:00

of the world it gives them future

9:03

proofing for analytics they can replace

9:04

their analytics they can change it out.

9:06

They can adopt new types of analytics

9:08

and AI because you have this iceberg at

9:12

the bottom turtle of S3. We launched S3

9:15

tables in December 2024. This

9:19

year we've had over 15 new features

9:22

that we've added to S3 tables. And

9:26

and then this year of course we launched

9:29

the preview of S3 vectors in July and

9:32

then last week we were generally

9:33

available and so you know the story of

9:36

S3 it's like a story that our customers

9:38

have written for data but it's been

9:41

super fun to work on all these different

9:43

evolving attributes

9:45

>> as an engineer. What is the kind of

9:47

basic architecture and the basic

9:48

terminology I should know about when I'm

9:50

starting to work with S3? When we first

9:52

launched in 2006, the whole goal for S3

9:55

is to provide a very simple developer

9:59

experience and we've really tried to

10:02

stick with that. In fact, when the

10:04

engineers and you know when we're

10:05

sitting around and we're talking about

10:06

what do we build next, we always go back

10:08

to that idea of how do you make things

10:11

really simple to use S3. And so

10:14

fundamentally S3 we have a lot of

10:16

different capabilities now, but it's

10:18

really about the put and the get. The

10:21

put of the storage in and the get of the

10:24

storage out and where we can do that

10:26

really well at scale, that is kind

10:28

of the heart of S3. Now we have a ton of

10:31

extra capabilities that we've launched

10:33

over time but you know fundamentally

10:35

when customers think about using S3 they

10:38

think about the put and the get.
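[Editor's note: that core model can be sketched as follows. This is illustrative only, not S3's implementation; real S3 is accessed over HTTP, typically through an SDK. It shows a flat namespace mapping a bucket and key to an object, with put and get as the two fundamental operations; the ETag here mirrors the MD5 checksum S3 returns for single-part uploads.]

```python
import hashlib

# Minimal sketch of S3's core data model: a flat namespace mapping
# (bucket, key) -> object bytes, with PUT and GET as the fundamental
# operations. Each object carries an ETag derived from its content.
class ObjectStore:
    def __init__(self):
        self.data = {}

    def put(self, bucket, key, body: bytes) -> str:
        etag = hashlib.md5(body).hexdigest()  # content checksum
        self.data[(bucket, key)] = (body, etag)
        return etag

    def get(self, bucket, key):
        body, etag = self.data[(bucket, key)]
        return body, etag

s3 = ObjectStore()
etag = s3.put("my-bucket", "logs/2024/01.txt", b"hello")
body, _ = s3.get("my-bucket", "logs/2024/01.txt")
print(body)  # b'hello'
```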

10:40

>> Yeah. So like put data get data and I

10:42

guess some of the other like operations

10:45

it's a bit like HTTP right? There's also

10:47

delete, list, copy, a few kind of other

10:50

like I guess primitives

10:51

>> there is and you know if I think about

10:54

where we have gone over time we've added

10:57

capabilities on top of that just based

11:00

on what developers are trying to do.

11:01

Okay, let's just take put. We

11:04

recently added a set of conditionals to

11:07

the put capability and like last year we

11:10

did put if absent or put if match.

11:12

this year we did a copy if absent or a

11:15

put if match and we did delete if match.

11:17

And the the core thing about for for us

11:20

with conditionals is that we can give

11:22

developers the capabilities of doing

11:25

things like the put but to do it based

11:28

on the behaviors of their application.
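[Editor's note: the behavior of those conditionals can be sketched like this. It is a simulation of the semantics, not AWS code; in the real API the conditions are expressed with HTTP headers such as If-None-Match and If-Match on the request, and a failed condition returns a 412 Precondition Failed.]

```python
import hashlib

class PreconditionFailed(Exception):
    """Models S3's 412 Precondition Failed response."""

class ConditionalStore:
    def __init__(self):
        self.objects = {}  # key -> (body, etag)

    def _etag(self, body: bytes) -> str:
        return hashlib.md5(body).hexdigest()

    def put_if_absent(self, key, body):
        # Real API: PUT with 'If-None-Match: *' -> fails if key exists
        if key in self.objects:
            raise PreconditionFailed(key)
        etag = self._etag(body)
        self.objects[key] = (body, etag)
        return etag

    def put_if_match(self, key, body, expected_etag):
        # Real API: PUT with 'If-Match: <etag>' -> succeeds only if the
        # object hasn't been overwritten since we last read it
        if key not in self.objects or self.objects[key][1] != expected_etag:
            raise PreconditionFailed(key)
        etag = self._etag(body)
        self.objects[key] = (body, etag)
        return etag

store = ConditionalStore()
tag = store.put_if_absent("config.json", b"v1")
store.put_if_match("config.json", b"v2", tag)  # ok: etag still matches
try:
    store.put_if_absent("config.json", b"v3")  # key exists -> 412
except PreconditionFailed:
    print("precondition failed, as expected")
```

This is the building block for things like leader election or optimistic locking directly on top of object storage.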

11:31

>> Outside of the get and put, the basic

11:33

operations I guess the base terminology

11:34

that you should just know about is the

11:36

buckets, objects and keys, right? That's

11:38

how we think about our data.

11:40

>> Yeah. And now it's not just objects. If

11:43

you think about um the two latest um

11:46

primitives or building blocks we've

11:47

introduced as as native to S3, one of

11:51

them is the iceberg table with our S3

11:54

tables and the other one is is vectors.

11:56

And you know under the hood of an S3

11:58

table is a set of parquet files that

12:01

we're managing on your behalf. But

12:02

that's not the case for vectors. A

12:04

vector is just basically a long string

12:06

of numbers. And that is a new data

12:09

structure for us and it's sitting um in

12:12

S3 just like your objects.

12:14

>> Milan was talking about the building

12:15

blocks of S3 like the put, get, tables

12:18

and vectors. Speaking of primitives for

12:20

building applications leads nicely to

12:21

our season sponsor, WorkOS. WorkOS is

12:24

a set of primitives to make your

12:25

application enterprise ready. Primitives

12:27

like single sign on authentication,

12:29

directory sync, MCP authentication and

12:32

many others. One feature does not make

12:34

an app enterprise ready. Rather, it's

12:36

the combination of primitives altogether

12:38

that solves enterprise needs. When your

12:41

product grows in scale, you can always

12:42

reach for new building blocks for

12:44

infrastructure from places like AWS or

12:46

similar. Similarly, when you need to go

12:48

up market and sell to larger

12:50

enterprises, WorkOS provides the

12:52

application level building blocks that

12:54

you need for this. WorkOS has seen the

12:56

edge cases, the enterprise complexity

12:58

and solves this for you so you can focus

13:00

on your core product. One example of

13:02

such a building block is adding

13:03

authentication to your MCP server. This

13:05

is a typical screen when you're about to

13:07

authenticate with an MCP server. If you

13:09

would have to build it from scratch, it

13:11

gets pretty complex to set up the OAuth

13:13

flows behind the scenes. But with WorkOS,

13:15

it's a few simple steps. Add the AuthKit

13:18

component to your project, configure it

13:19

via the UI, then you just direct clients

13:22

of your MCP server to authorize via

13:24

AuthKit, verify the response you get via

13:26

some code, and that's pretty much it.

13:28

This is the power of well-built

13:29

primitives. To learn more, head to

13:31

workos.com. And with this, let's get back

13:34

to S3 and how it all started. So I'd

13:37

like to still go back to the beginning

13:39

of of S3. When it was launched, it was

13:42

pretty shocking for the broader

13:43

community because S3 launched with a

13:46

pricing of 15 cents per gigabyte per

13:48

month, which was about a third to a

13:51

fifth of the price of anything else. The going

13:53

rate at the time was something like 50

13:55

cents or 75 cents. And on the first day,

13:58

I read that like 12,000 developers

14:00

signed up immediately. A lot of

14:02

companies immediately or very quickly

14:03

moved over and then the surprising thing

14:06

was that S3 kept cutting prices. It was

14:09

unheard of before. You were there in the

14:11

2010s when some large price cuts happened.

14:14

Can you tell me what was the thinking

14:15

inside the S3 team on this unusual

14:18

pricing it seemed customers would have

14:19

been willing to pay more and also the

14:21

cutting of prices continuously even

14:23

today? I think today it's something like

14:25

2 cents or 2.3 cents, something like

14:27

that for the same storage as it

14:30

was 15 cents on launch.

14:32

>> Yeah. You know, I think part of this

14:34

goes back to what the goal is for S3.

14:38

Okay. And so the mission of S3 is to

14:40

provide the best storage service on the

14:42

planet. Okay. And our goal too is that

14:46

if you think about the growth of data,

14:47

IDC says that data is growing at a rate

14:49

of 27% year-over-year. But I have to

14:52

tell you, we have so many customers that

14:54

are growing so much faster than that.

14:56

>> Yeah, I was about to say it sounds

14:57

pretty low.

14:58

>> I know. But that's an average

15:00

across everything. We have a lot of

15:02

customers that grow twice or three times

15:04

that that rate. But if you think about

15:06

that, okay, you think about all the data

15:08

that's being generated from sensors,

15:10

from applications, from, you know, AI,

15:13

from all these different

15:14

>> from just taking photos. I mean, every

15:16

day, right?

15:17

>> Photos. That's right. like you know and

15:19

you know if you think about your phone

15:20

too think about the resolution and how

15:23

the resolution of the cameras on their

15:25

phone have grown you just have this like

15:27

kind of what Sony talked about with the

15:29

data ocean. Okay. And in order to have

15:33

all that data and to grow it you have to

15:36

be able to grow it economically. You

15:38

have to be able to grow it at a price

15:39

point where you don't really think okay

15:41

what data am I going to delete now

15:44

because I'm running out of space. You

15:46

don't have that conversation with S3

15:48

customers because of of two things. One

15:51

is, you know, we do lower the price of

15:53

either storage or the capabilities of

15:55

what we're doing. Like for example, we

15:57

lowered the cost of compaction for S3

15:59

tables pretty dramatically within a year

16:01

after launching S3 tables. It's not just

16:04

that it's like the overall total cost of

16:06

ownership of your storage. We give you

16:08

the ability to tier and to archive,

16:10

right? Storage. We give you the ability

16:13

to do something called intelligent

16:14

tiering, which is if you don't touch your

16:17

data for a month, we'll give you an

16:18

automatic discount on that data because

16:21

we're watching your storage and you

16:23

don't touch it much, we'll give you

16:24

up to 40% discount on that storage. And

16:26

it's like dynamic discounting so you

16:28

don't even have to think about it. And

16:30

so our whole goal is that you can grow

16:33

the data that you need to grow because

16:35

we know that's being used to pre-train

16:38

models. We know it's being used to

16:40

fine-tune and do any type of

16:41

post-training of AI. We know you're using

16:43

it for analytics. We know you're using

16:45

it for all these different things either

16:46

now and in the future. And so our goal

16:49

is so that you can keep your data and

16:51

you can use it in a way that advances

16:55

whatever the thing is that you're doing,

16:57

whether it's life sciences or you're an

16:59

enterprise, you know, in in

17:01

manufacturing, right? whatever you need

17:03

the data should be there and you should

17:05

be able to grow it and keep it and use

17:07

it any way you want.

17:09

>> Yeah, I I did want to ask you about this

17:11

part. So there's intelligent tiering

17:13

which was launched in 2018. So like 12

17:15

years after S3 was launched. One thing

17:17

that really got my attention Amazon

17:18

Glacier, which was launched in 2012.

17:20

So a long time ago and it's you can

17:23

store data that you don't need immediate

17:25

access to. You're okay waiting for some

17:27

time to uh to get access to it. I think

17:28

maybe even hours. When it launched, it

17:30

was only one cent per gigabyte per month

17:32

which was again this was something back

17:34

then the going rate for storage was like

17:36

15 cents so almost like almost 10 times

17:38

cheaper. How do you do that? Like what

17:40

what is the architecture and thinking

17:42

behind how you're able to have this

17:45

trade-off of like look if you don't need

17:47

your data quickly we can do it a lot

17:49

cheaper. How could I imagine the

17:51

kind of trade-offs that that you and the

17:53

engineering team were were were thinking

17:55

of making? >> Well, as

17:58

you know, you're an engineer

18:00

yourself, and as you know, a

18:02

lot of engineering is about constraints,

18:04

right? And that is the fun part about

18:07

working on on S3 is that when you think

18:10

about constraints, you think about

18:12

constraints that we have for

18:13

availability, you think

18:15

about constraints that we have around,

18:17

you know, the cost of storage, we start

18:19

to get really really creative. Okay? And

18:23

in S3, because you know we build all the

18:26

way down to the metal, you know, of

18:30

the drives and the capabilities that we

18:32

have in our hardware, we're able to

18:34

drive, you know, efficiencies at every

18:38

single part of our stack. Okay? And so

18:41

our engineers when they get together and

18:42

they and they talk about the

18:44

constraints, they talk about the design

18:45

goals, we'll do something like we'll set

18:48

a target for, you know, the cost of a

18:53

byte and we'll drive for that and we'll

18:56

drive for it at every single part of the

18:58

process. And the part of the process

19:00

that we are also including is is you

19:03

know it includes a data center. How do

19:05

our data center technicians

19:08

operate the service of S3

19:11

from a hardware and a data center

19:13

perspective like the physical buildings

19:15

just like we do the same thing for the

19:18

software and the layers of S3 itself and

19:22

when you have that

19:24

ability to run across the whole stack

19:26

all the way down to the physical

19:27

buildings and we're thinking so

19:30

deeply about the cost and the lifetime

19:34

of every byte, you're able to do

19:35

things like Glacier. You mentioned

19:37

something really interesting that when

19:39

S3 started it was eventually consistent

19:42

which means that you know data

19:43

eventually arrives, it might not be

19:45

there and you might be behind and

19:47

there's a lot of things that you can do

19:49

with this and and it gives you some

19:50

constraints but you mentioned that the

19:52

reason that the team launched this was

19:54

because durability and availability were

19:57

more important and I assume of course

20:00

cost as well but during those initial

20:02

phases while S3 was eventually

20:04

consistent what what kind of benefits

20:06

does it give to have eventual

20:08

consistency? Is it a cost constraint?

20:10

Is it just easier to do high

20:12

availability systems from from an

20:13

engineering perspective?

20:14

>> Well, I mean from an engineering

20:16

perspective, the main optimization

20:18

was availability. It was not

20:21

necessarily durability, but it was

20:22

availability. Okay. So, if you take a

20:25

step back and and um look at the

20:28

original design of S3, we were really

20:30

focused very hard on availability. So,

20:33

so let's take a step back. Okay. So when

20:35

you talk about consistency, it's the

20:37

property where the object retrieval, the

20:40

object get reflects the most recent put

20:43

to that same object. Okay? And so if you

20:47

think about, you know, what parts of the

20:50

system of S3 that really hits, a lot of

20:52

it just kind of starts with our indexing

20:54

subsystem. So if you think about the

20:56

indexing subsystem in S3, that holds all

20:58

of your object metadata. And so that's

21:00

like its name, its tags, its creation

21:03

time, and the index, our index is

21:06

accessed on every single get or put or

21:10

list or head or delete, any API call

21:12

like that. And so um every single data

21:16

plane request where you go back into our

21:18

storage system to go get an object goes

21:20

through our index. And if you think

21:23

about it, more requests go through our

21:25

index in our storage system because for

21:27

example, it's serving things like head

21:29

requests and list requests that don't

21:30

actually end up going back into our

21:32

storage system at all. Those are,

21:34

you know, metadata or index requests.

21:37

So, you know, if you think about our

21:39

indexing system, we have a storage

21:42

system in there. Okay? And that is a

21:44

really central concept, a storage system

21:46

in the middle of our indexing system. So

21:49

you need a storage system for

21:50

your index, in your indexing system, right?

21:52

>> That's right. And so um we have to

21:56

configure and size the system to deliver

22:00

on our you know our design promise for

22:02

our for both availability and and

22:04

durability. Okay. And so the data

22:08

basically in our index

22:10

system is stored across a set of

22:12

replicas and it uses something called

22:14

you know, basically a quorum

22:15

based algorithm. Okay. And a quorum

22:18

based algorithm tends to be very

22:20

forgiving to failures. And so if you

22:22

think about how we implemented quorum in

22:24

our index system, we start first from

22:27

servers that are running in these

22:28

separate availability zones. And the

22:30

reason we do that is that it lets

22:33

us avoid correlation on a single fault

22:37

domain. Okay. And since the failure of

22:40

like a single disk, a server, a rack, a

22:42

zone, it only affects a subset of data,

22:45

it never affects all of the data for a

22:48

single object or even a majority of the

22:50

data for a single object which we have

22:53

sharded across, you know, a wide spread

22:57

of servers. So like this core

23:00

of availability for us is this idea that

23:02

we spread everything. And so when a read

23:05

comes in, it's coming into the S3 front

23:08

end and we just heavily cache objects

23:10

across our systems. When a read comes

23:12

in,

23:13

>> it could route at random and you could

23:16

create a situation where you're creating

23:18

an inconsistent read.
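[Editor's note: a standard quorum scheme illustrates the overlap idea; this is a generic sketch, since S3's index internals aren't public. A write goes to N replicas and waits for W acknowledgements; a read consults R replicas and takes the highest version. Choosing W + R > N guarantees every read quorum overlaps the latest write quorum, so even a randomly routed read cannot return a stale value, and individual replica failures are tolerated.]

```python
import random

# Illustrative quorum replication: N replicas, write quorum W, read
# quorum R. With W + R > N, any read quorum overlaps any write quorum,
# so a read always sees at least one replica with the latest version.
N, W, R = 5, 3, 3  # example sizing; W + R = 6 > 5

class Replica:
    def __init__(self):
        self.version = 0
        self.value = None
        self.alive = True

replicas = [Replica() for _ in range(N)]

def write(value, version):
    acks = 0
    for rep in replicas:
        if rep.alive:
            rep.version, rep.value = version, value
            acks += 1
        if acks == W:          # stop once the write quorum has acked
            return True
    return False               # too many failures: write rejected

def read():
    sampled = random.sample(replicas, R)
    # take the highest-versioned value among the live sampled replicas
    best = max((r for r in sampled if r.alive), key=lambda r: r.version)
    return best.value

write("v1", 1)
replicas[0].alive = False      # a single replica failure is tolerated
write("v2", 2)
print(read())                  # 'v2': read quorum overlaps the write quorum
```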

23:20

>> And so when we have quorum at the index

23:23

storage layer, we can see reads and

23:25

writes overlap, but in in the cache,

23:27

they don't because we're optimizing for

23:29

availability. >> So, just so I

23:31

understand the first part, the

23:33

eventual consistency, correct me if I'm

23:35

wrong that you can just you know write

23:38

to all these distributed nodes and you

23:39

ask one of them and if it doesn't have

23:41

it no problem because it will be

23:42

eventually consistent you now have high

23:44

availability because you don't need to

23:46

worry about all of them being in the

23:47

same state, correct? And that's

23:49

>> phase one of AWS and it gives

23:52

you availability and now you're now

23:54

explaining how you're able to behind the

23:58

scenes turn this into a strongly consistent

24:01

system. Strong consistency means that it's

24:02

guaranteed to have the whole system's

24:06

state, which is hard to do because you

24:07

could have distributed failures, etc.

24:10

>> and this replicated journal you know it

24:12

took us a while to build I won't lie we

24:14

don't talk about this stuff very

24:16

much okay because this is kind of the

24:18

the secret sauce of S3 um but you know

24:23

again like our engineers who are in the

24:25

room they were thinking about how do you

24:27

deliver on both the strong consistency

24:30

without compromising availability. So I

24:32

go back to constraints. Okay. So in in

24:35

that case we were not trading off the

24:39

consistency and availability anymore.

24:40

And so the engineers had to come up with

24:43

a new data structure. Basically we do

24:46

this in S3. Vectors basically is a

24:49

new data structure that we came up with

24:50

as well. But you know if you think about

24:52

what we had to invent for strong

24:55

consistency at S3 scale without relaxing

24:58

the constraint of availability is we had

25:01

to build this replicated journal. Okay.

25:03

And the replicated journal is basically

25:05

a distributed data structure where we're

25:07

chaining nodes together so that when

25:10

this write is coming into the system

25:12

it's flowing through the nodes

25:13

sequentially. Okay. And so a read or

25:16

write in a strongly consistent system

25:19

for S3, it flows through these storage

25:21

nodes in the journal sequentially. And

25:23

so every node is forwarding to the next

25:25

node. And when the storage nodes get

25:27

written to, they learn the sequence

25:29

number of the value along with the value

25:31

itself. And therefore on a subsequent

25:34

read, like through our cache, the

25:38

sequence number can be retrieved and

25:40

stored. And so now you have this

25:43

strongly consistent and

25:45

highly available capability in S3. And

25:48

the heart of that is actually this

25:49

replicated journal.
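[Editor's note: the chained-nodes idea can be sketched with generic chain replication; this is an illustration of the technique, not S3's internals. A write enters the head of the chain, each node applies it and forwards it to the next, and every node stores the value together with its sequence number, so a reader that knows the latest sequence number can detect a node or cache that is behind.]

```python
# Generic chain-replication sketch: a write flows through the nodes
# sequentially, and each node stores the value together with its
# sequence number, so readers can detect and reject stale answers.
class JournalNode:
    def __init__(self, next_node=None):
        self.store = {}        # key -> (seq, value)
        self.next = next_node

    def write(self, key, seq, value):
        self.store[key] = (seq, value)  # learn value + sequence number
        if self.next:                   # forward down the chain
            self.next.write(key, seq, value)

    def read(self, key, min_seq):
        seq, value = self.store.get(key, (-1, None))
        if seq < min_seq:
            raise RuntimeError("stale read: node is behind")
        return value

# Build a three-node chain: head -> middle -> tail
tail = JournalNode()
middle = JournalNode(tail)
head = JournalNode(middle)

head.write("k", seq=1, value="v1")
head.write("k", seq=2, value="v2")

# A client that saw sequence number 2 refuses anything older, so it
# gets a consistent answer from any node that has caught up.
print(tail.read("k", min_seq=2))  # 'v2'
```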

25:51

>> Okay. But what's the catch on

25:52

one end? Because there's always

25:55

something with trade-offs. You always

25:57

have something. So on one end you

25:58

obviously have more complicated business

25:59

logic. And then I guess the second

26:02

obvious question is what about failures?

26:03

Because in the case of eventual

26:04

consistency, you don't worry too much

26:06

about one failure. Clearly in this case

26:09

uh what if a node in the sequence fails

26:12

either the first time or

26:14

later, or how does the system monitor

26:17

this and recover? Because, I guess,

26:19

that's going to be the tricky part right

26:20

>> there's another piece to this puzzle

26:22

that we implemented which is um you know

26:25

it's basically a cache coherency

26:28

protocol and the idea is that this is

26:32

where we built what we think of as a

26:34

failure allowance where in this mode,

26:37

we needed to retain the property that

26:40

like multiple servers can receive

26:42

requests and some are allowed to fail.

26:44

And so it's kind of this combination of

26:47

this replicated journal as a as a new

26:50

data structure plus we implemented this

26:52

new cache coherency protocol that gave

26:55

us a failure allowance and those two

26:57

things working in concert gave us this

27:00

uh strong consistency. I will say too

27:03

this does come at some um actual cost. I

27:06

was about to say, like, nothing is

27:08

free in engineering, right?

27:10

>> There's hardware cost in this because

27:12

you can imagine we've

27:14

done some more engineering behind the

27:16

scenes, but I I remember um sitting in

27:19

the room with our engineers on S3 and we

27:22

did a debate on this. We we debated it.

27:24

We said, you know, there's costs there's

27:27

like actual costs to the underlying

27:29

hardware for this and do we pass it

27:30

along to customers or not? And we made

27:32

that explicit decision not to. We said

27:35

>> really?

27:36

>> Yeah. We said that when we launch this,

27:39

we should launch strong consistency. We

27:42

should make it free of charge to

27:43

customers and it should just work for

27:46

any request that comes into S3. We

27:48

shouldn't sort of say it's only

27:50

available on this bucket type or what

27:52

have you. This should be true for every

27:54

request made to S3. And part of that

27:58

mindset for S3 is like how can we

28:02

provide these types of capabilities and

28:05

how can we make it something that

28:07

becomes a building block like part of

28:09

the building block of S3 and you

28:12

shouldn't have to think about the cost

28:13

of it. This was the very surprising

28:16

thing of this launch by the way that

28:17

suddenly AWS said like okay everything

28:19

is strongly consistent, it does not cost you

28:22

more, latency-wise your latencies

28:25

shouldn't have changed significantly. I

28:27

mean I'm sure when you roll out

28:30

initially you do your measurements etc

28:31

but but that was the promise and that

28:33

was why I I couldn't really believe it

28:35

when I I I reread history because it

28:39

typically doesn't happen typically

28:40

strong consistency does add latency or

28:42

it increases cost if it doesn't have

28:44

latency. There's always these

28:45

trade-offs. And I mean, sounds like you

28:47

either swallowed the cost or or cost

28:49

caught up, but it's it's very unusual.

28:51

So,

28:51

>> if I think about that, one of the things

28:53

that was also very important for us, and

28:56

we haven't really talked about this as

28:57

much, but it's it's we think about it a

28:59

lot on the S3 team is correctness. Okay?

29:03

So, it's one thing to say that you're

29:04

strongly consistent on every request.

29:07

It's another thing to know it. And so

29:10

when we built this strong consistency,

29:13

you know, I I I talked about our new

29:14

caching protocol. I talked about this

29:16

replicated journal as a new data

29:18

structure. You know, that took a little

29:19

bit of time to to do and to get right.

29:22

But at S3 scale, we could not say that

29:26

we were strongly consistent unless we

29:29

actually knew we were strongly

29:30

consistent. Okay. And so what does that

29:33

mean? How do you do that at S3 scale

29:35

when everybody is using it for every

29:37

last workload? In fact, one of the

29:39

reasons why people use it is because our

29:41

scale is such that we're decorrelating

29:44

workloads and you can run absolutely

29:46

anything on S3. But how do you know?

29:49

Milon just talked about how strong

29:50

consistency made it so much easier to

29:52

trust S3. Trust is something that is

29:54

just as important when writing code,

29:56

especially when with AI we write more

29:58

code than before. And this is a good

30:00

time to talk about our season sponsor

30:02

Sonar. What is the impact that AI is

30:04

having on developers? Let's look at some

30:06

data. A new report from Sonar, the state

30:08

of developer survey report, found that

30:10

82% of developers believe they can code

30:13

faster with AI. But here's what's

30:15

interesting. In this same survey, 96% of

30:18

developers said they do not highly trust

30:20

the accuracy of AI code. This checks out

30:22

for me as well. While I write code

30:24

faster with AI agents, I don't exactly

30:26

trust the code it produces. This really

30:28

becomes a problem at the code review

30:30

stage where all this AI generated code

30:32

must be rigorously verified for

30:34

security, reliability, and

30:35

maintainability. SonarQube is precisely

30:38

built to solve this code verification

30:39

issue. Sonar has been a leader in the

30:42

automated code analysis business for

30:43

over 17 years, analyzing 750 billion

30:47

lines of code daily. That's over 8

30:49

million lines of code per second. I

30:51

actually first came across Sonar 13

30:53

years ago in 2013 when I was working at

30:55

Microsoft and a bunch of teams already

30:57

use SonarQube to improve the quality of

30:59

their code. I've been a fan since. Sonar

31:01

provides an essential and independent

31:03

verification layer. It's automated

31:05

guardrail that analyzes all code whether

31:07

it's developer or AI generated, ensuring

31:09

it meets your quality and security

31:11

standards before it ever reaches

31:13

production. To get started for free,

31:14

head to sonarsource.com/pragmatic.

31:17

And with this, let's get back to the

31:19

importance of strong consistency at AWS.

31:22

>> How do you know that you're strongly

31:23

consistent? And that is why we used

31:25

automated reasoning.

31:26

>> What is automated reasoning for for

31:28

those of us who are not as familiar with

31:30

this, which will be most people outside

31:31

of very few domains like S3.

31:34

>> Yeah, it's I mean S3 uses automated

31:36

reasoning all over the place. Okay. And

31:38

automated reasoning is a specialized

31:40

form of computer science. Okay. And

31:42

Gergely, if you kind of think about

31:44

if computer science and math got married

31:46

and had kids, right, it would be

31:49

automated reasoning. It's

31:50

>> is it formal methods or based on formal

31:52

methods?

31:53

>> That's exactly.

31:53

>> Oh, yeah. I mean, I I studied computer

31:56

science. So, yeah, that that's fun.

31:58

>> So, it's actually proper formal methods

31:59

that you're using.

32:00

>> That is right. And we use formal methods

32:02

in many different places in S3. But one

32:04

of the first places that we adopted was

32:07

for us to feel good that we actually had

32:10

delivered strong consistency across

32:13

every request. So what we did is we

32:15

proofed it, right? We basically built a

32:17

proof for it and then we incorporated

32:19

our proof on check-ins into this index

32:22

area that I talked about, right? Where

32:24

you have your caching and then you have

32:25

your storage sub layers of the index

32:28

capabilities. And so when somebody

32:30

anybody is working on our index

32:32

subsystem now and they're checking in

32:34

code into the code paths that that are

32:37

being used um for uh consistency we are

32:41

proofing through formal methods that we

32:43

haven't regressed our consistency model

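To give a flavor of what "proofing on every check-in" means: formal methods tools like TLA+ exhaustively explore the orderings a system can reach and check that a property holds in every one. The toy below (my illustration, not S3's actual proof) enumerates every arrival order of two sequenced writes and asserts the last-writer-wins property always holds:

```python
from itertools import permutations

def check_no_stale_reads():
    """Exhaustively check a tiny consistency property over all orderings."""
    writes = [("k", 1, "a"), ("k", 2, "b")]      # (key, sequence_number, value)
    for order in permutations(writes):
        store = {}
        for key, seq, value in order:
            # Last-writer-wins by sequence number, regardless of arrival order.
            if key not in store or store[key][0] < seq:
                store[key] = (seq, value)
        # Property: after all writes settle, a read sees the highest-seq value.
        assert store["k"] == (2, "b"), order
    return True

assert check_no_stale_reads()
```

A real model checker does this over astronomically larger state spaces, and wiring it into CI is what lets a team say a check-in "hasn't regressed the consistency model" rather than merely hoping so.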
32:46

>> and can you just give us a rough idea

32:48

because the formal methods that I have

32:51

have studied they were pretty abstract

32:52

the things like designing languages how

32:55

to have like the different operators and

32:58

of course there are maths involved as

32:59

well. But what are are they like

33:02

primitives like servers, network

33:05

etc and models being built, data flows

33:08

like how can I imagine a simple

33:12

proof of of something

33:14

>> inside S3 roughly at a really high

33:16

level.

33:16

>> Yeah. I mean if you if you go back to

33:18

the fundamental notion of a proof, you

33:21

are proving something to be correct.

33:23

Okay. And so the places that we use

33:25

these proofs, we use them in consistency

33:28

where we built a proof across all the

33:30

different combinatorics to make sure

33:33

that the consistency model is correct.

33:35

We use it in cross region replication to

33:38

prove that a replication of data from

33:41

one region to another arrived and we use

33:44

it in different places within S3 to

33:46

prove the correctness of APIs. In all of

33:48

these cases, you know, we talk about

33:51

durability, we talk about availability,

33:54

we talk about cost, but just as strong

33:56

of a a principle, a design principle for

33:59

us across S3 is correctness. It's a

34:02

correctness of uh, you know, a thing, an

34:05

API request, you know, an operation as

34:08

it were. And the key thing for us too is

34:10

that you don't you don't want to just

34:12

proof it once. You want to proof it on

34:15

every single check-in and you want to

34:17

proof it on every single request so you

34:20

can verify you can validate and verify

34:23

that um you are doing in fact what you

34:26

say you do and I think for us you know

34:28

at a certain scale

34:30

math has to save you right because at a

34:33

certain scale you can't do all the

34:36

combinatorics of every single edge case

34:38

but math math can save you and help you

34:41

on this uh at S3 scale and so we use we

34:44

use formal methods in many different

34:47

places of S3. We have some research

34:49

papers too. I can send you some links to

34:52

some research papers where we talk

34:53

about

34:53

>> Yeah, please please do and we will and

34:55

we will put it in the the show notes

34:57

below so anyone can check it out because

34:59

I think it's it's really interesting. I

35:00

I feel formal methods are not really a

35:03

thing in a lot of startups and even

35:05

infrastructure startups yet but it it

35:07

sounds very reassuring to me to actually

35:09

have an ongoing proof of that. And

35:12

speaking of which, I I I want to ask

35:14

about one thing that is related to this

35:16

durability. Uh Amazon S3 has very very

35:19

like high durability promises. I think

35:22

it's 11 nines, which I had to do a

35:24

double check on, because in backend

35:28

systems whenever you say three nines

35:29

it's like, huh; when you say four nines of

35:31

availability (we're not talking

35:33

durability), four nines is already hard to

35:35

achieve and beyond that it just gets

35:37

very expensive and I have never heard of

35:40

11 nines of durability. Now, this is

35:42

durability and not availability. One

35:44

question that I I got when I when I I

35:46

shared this stat uh publicly what people

35:48

one thing people were asking and I was

35:50

also thinking How can you prove that not

35:53

just in a formal way but you're now

35:55

storing as you said 500 trillion uh

35:58

objects which is now large enough that

36:01

just by this durability promise you

36:02

should be you might be losing some of

36:05

them do you actually like validate it on

36:08

the actual data as well on outside of

36:10

the proof because I assume in the proof

36:12

you will have assumptions on hardware

36:13

failure rate which might or might not be

36:15

true. So my question is that

36:17

at Amazon S3 level when when you you are

36:19

able to look at the are we living up to

36:22

for example our durability promise how

36:23

do you go about that and and what are

36:25

your findings?

36:26

>> Yeah. So we just spent a lot of time

36:28

talking about our index subsystem

36:31

uh because that is the subsystem that is

36:33

related to consistency but when you

36:36

think about durability I mean you think

36:38

about it all you know at different

36:39

levels of um the S3 stack but we really

36:42

think about it in the storage layer. And

36:44

so if you think about it in the storage

36:46

layer, you have this design, this

36:48

promise of you know the design here and

36:51

underneath that is a combination of

36:53

things. It's software but it's also the

36:55

physical layout of where our data is

36:58

across everything that we have in S3.

37:00

And you know one of the things that I

37:02

talked about is that we have you know

37:05

disks and servers which sit in racks

37:08

which sit in buildings and we have tens

37:10

of millions of these hard drives. We

37:12

have millions of servers and we have 120

37:14

availability zones across 38 regions.

37:17

>> Yeah. And two availability zones are

37:19

two physically separate locations, just

37:20

to be physically separate, and sometimes

37:23

they're a ways away from each other, and

37:24

in some of our regions we have more than

37:26

three availability zones. Each

37:28

availability zone

37:30

gives us a different domain, a

37:32

fault domain. If I were to think about

37:34

durability I think the most important

37:36

thing for us is our auditors.

37:39

So if you think about a distributed

37:40

system, we talked about the put and the

37:42

get. We have many many many

37:46

microservices

37:47

that are all doing one or two things

37:49

very well in the background. Okay? And

37:51

so we have many different varieties of

37:54

health checks, but we also have um

37:56

repair systems and we have auditor

37:59

systems. And our auditor systems go and

38:01

they inspect every single byte across

38:03

our whole fleet. And if there are signs

38:06

that there is repair needed, you know,

38:08

another repair system will come in

38:10

place. And these are all, you know, in

38:12

the in the world of distributed systems,

38:14

these are all microservices working

38:16

together, loosely correlated, but

38:18

communicating through well-known

38:20

interfaces. And so that, you know,

38:22

collection of systems, which are over

38:24

200 microservices now, that all sit

38:27

behind one S3 regional endpoint. And a

38:30

fair number of those subsystems, those

38:32

microservices are all dedicated to the

38:35

notion of durability.

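The auditor/repair split described above can be sketched as two tiny services: one walks the fleet recomputing checksums against what was stored, and another repairs whatever the auditor flags. This is a hypothetical illustration; the names, CRC-32 choice, and data layout are mine, not S3's.

```python
import zlib

def audit(fleet):
    """Yield (disk, key) for every shard whose bytes no longer match their checksum."""
    for disk, shards in fleet.items():
        for key, (data, stored_crc) in shards.items():
            if zlib.crc32(data) != stored_crc:
                yield disk, key

def repair(fleet, disk, key, healthy_copy):
    """Overwrite a damaged shard from a healthy replica and refresh its checksum."""
    fleet[disk][key] = (healthy_copy, zlib.crc32(healthy_copy))

good = b"object bytes"
fleet = {
    "disk-a": {"obj1": (good, zlib.crc32(good))},
    "disk-b": {"obj1": (b"object byt\x00s", zlib.crc32(good))},  # simulated bit rot
}

damaged = list(audit(fleet))          # the auditor finds the corrupted shard
for disk, key in damaged:
    repair(fleet, disk, key, good)    # the repair service restores it

assert damaged == [("disk-b", "obj1")]
assert list(audit(fleet)) == []       # fleet is clean after repair
```

The key design point is that auditing and repairing are separate, continuously running background services communicating through well-known interfaces, entirely invisible to the GET/PUT path.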
38:36

>> So, so they will go and check and log

38:39

and report back. So, do I understand

38:41

correctly that in any given time frame

38:44

at S3 someone or some people or or some

38:47

systems can actually answer the question

38:48

of what is our durability the past week,

38:52

month,

38:53

>> year and so on.

38:55

>> Yes.

38:55

>> Okay. Great. So, so you can actually

38:57

verify your your durability promise that

38:59

check if the math is mathing.

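On the "math is mathing" point, a back-of-envelope calculation shows why this verification matters. Assuming 11 nines denotes annual per-object durability (my reading; AWS's exact definition may differ), 500 trillion objects implies a nonzero expected loss that auditors must account for:

```python
objects = 500e12                        # 500 trillion objects
annual_durability = 0.99999999999      # eleven nines, assumed per object per year
expected_loss = objects * (1 - annual_durability)
assert round(expected_loss) == 5000    # ~5000 objects/year if nothing intervenes
```

Which is exactly why the design leans on continuous auditing and repair rather than the raw promise alone: the background systems have to detect and heal failures faster than that arithmetic can catch up.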
39:01

>> Yes. And you know, part of our design is

39:04

that at any given moment in this

39:05

conversation that you and I have had

39:07

just just today, we're we're having

39:09

servers fail because servers fail. And

39:12

so what we are building and what we've

39:14

built in S3 is an assumption that

39:16

servers fail. And so a lot of our

39:19

systems are always, you know, they're

39:21

first of all,

39:22

checking to see, you know, where any

39:25

failure might hit an individual node.

39:27

How does it affect a certain byte? What

39:29

repair needs to automatically kick in

39:31

place? And so this system is constantly

39:34

moving behind the scenes, if you will,

39:36

while and that is a completely separate

39:39

thing from the get and the put. The get

39:40

and the put is what the customer sees.

39:42

There's this whole universe under the

39:44

hood of how do we manage the business of

39:47

bytes at scale.

39:49

>> I'm just thinking because for a lot of

39:51

us engineers who are building like

39:53

moderately sized systems I'll say

39:56

compared compared to S3 they can already

39:58

be big but a failure is is a big deal

40:00

like you know like a a machine going

40:02

down again. I have a small side project

40:04

and my storage filled up and I started

40:08

to give errors and this is a big deal

40:09

because it rarely happens to me. This is

40:11

the first time it happened in 3 years.

40:13

>> Yeah.

40:13

>> But I understand in your business or or

40:15

when you work at S3 scale, this is just

40:17

every day. And and the question is not

40:19

when, it's just how often, how do you

40:21

deal with it? I guess it's a different

40:23

world.

40:24

>> It is a different world. And the the

40:26

trick is to really think about

40:27

correlated failure. Okay. So, if you're

40:30

thinking about availability at any

40:32

scale, it's the correlated failure

40:35

that'll get you. And

40:36

>> and what is a correlated failure?

40:38

>> Okay. So that's super interesting. So if

40:41

you think about what I talked about

40:42

with, you know, eventual consistency, we

40:45

talked about quorum. Okay? And quorum is

40:47

okay for one node to fail, but if all of

40:49

the nodes go south, for example, and

40:51

they're in the same availability zone or

40:53

on the same rack, then you're really

40:55

going to be messing with your

40:56

availability of the underlying storage,

40:58

okay? You've just lost your failure

41:01

allowance that I talked about with the

41:02

cache because they all fail together.

41:04

And so like a correlated failure is an

41:07

incredibly important thing to think

41:08

about when you're thinking about

41:10

availability. And so when we're

41:13

designing around correlated failures,

41:16

the thing is that we have to think about

41:18

is like do we expose or how are those

41:21

workloads exposed to different

41:23

levels of failure. So when you upload an

41:25

object to S3 with a put, we replicate

41:28

that object. Okay? We don't just store

41:30

one copy of it. We store it many

41:32

[clears throat] times. And that

41:34

replication is important. It's important

41:36

for durability. But what's interesting

41:37

about it, it's also important for

41:39

availability because if any of those

41:42

correlated failure domains fail, like if

41:45

a whole AZ fails, there's still a copy

41:47

somewhere else and the data is still

41:49

available somewhere even though an

41:52

availability zone has failed or a rack

41:54

has failed or a server has failed or so

41:56

forth. Okay. And so that idea of how do

42:00

you manage and design around correlated

42:03

failures with our physical

42:07

infrastructure is super important for S3

42:11

for both availability and durability. Uh

42:14

we also do things like we think about

42:16

something called crash consistency. I

42:18

mean, Gergely, you can tell I can go on

42:20

and on about this so you just have to

42:22

stop me.

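The replica-spreading idea above, that no single correlated failure domain should hold every copy, can be sketched as a placement check. This is an illustrative toy, not S3's placement algorithm; the server records and domain names are invented:

```python
import itertools

def place_replicas(servers, n_replicas=3):
    """Pick servers for replicas such that no two share an availability zone."""
    for combo in itertools.combinations(servers, n_replicas):
        if len({s["az"] for s in combo}) == n_replicas:
            return list(combo)
    raise RuntimeError("not enough independent fault domains")

servers = [
    {"name": "s1", "az": "az-1", "rack": "r1"},
    {"name": "s2", "az": "az-1", "rack": "r2"},   # same AZ as s1: correlated
    {"name": "s3", "az": "az-2", "rack": "r7"},
    {"name": "s4", "az": "az-3", "rack": "r9"},
]

placement = place_replicas(servers)
# Losing any single AZ (or rack, or server) still leaves two live copies.
assert len({s["az"] for s in placement}) == 3
```

The same constraint can be layered: distinct AZs first, then distinct racks within an AZ, so that each successively smaller failure domain is also decorrelated.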
42:22

>> No, but this is the

42:24

interesting stuff.

42:25

>> All right. So the whole idea of crash

42:26

consistency is that a system any system

42:29

that you build it should always return

42:31

to a consistent state after a fail-stop

42:34

failure. And if you can do

42:37

things like reason about the set of

42:39

states that a system can reach in the

42:42

presence of failure and you just always

42:44

assume the presence of failure. Then you

42:48

also assume the presence of consistency

42:51

and availability. then you just design

42:54

all of these different microservices to

42:56

all work together in an underlying um uh

42:59

capability like S3. But that's what our

43:01

engineers do. They think about like

43:03

crash consistency. They think about

43:05

correlated failures, you know, they

43:08

think about failure allowances and

43:10

caches, right? And it's it's all that

43:13

deep distributed system work that um

43:16

that our engineers come in every day to

43:18

work on. Can we talk about how you

43:20

think about failure allowances because

43:22

again there there is a concept of error

43:25

budgets outside in other companies as

43:28

well. I feel it's a bit like loosely

43:29

handled whereas I feel this is kind of

43:31

your bread and butter. So what is a

43:32

failure allowance and how do you measure

43:34

it and what do you do if you if you

43:36

overstep it or overspend it.

43:39

>> Yeah I mean I think that the idea of a

43:41

failure allowance is, you want to have it,

43:43

like you have to have it. If you assume,

43:44

you know, that you'll never have

43:46

a failure, you'll actually have

43:48

a very bad day for your customer. And so

43:51

we account for failure allowances. And

43:53

but the the most important thing is

43:55

let's just talk about the failure

43:57

allowance in our cache. So how do we

43:59

manage that? Well, we manage it in such

44:01

a way that you'll never experience it

44:03

because we size it, right? And if you're

44:06

sizing the cache and you're making sure

44:08

that the underlying capabilities and the

44:10

hardware are always there and we have

44:12

like I talked about those distributed

44:14

sub subsystems those microservices that

44:16

are all interoperating under the hood.

44:19

We have a ton of them that do nothing

44:21

but just track metrics right and like

44:24

you know that the sizing of our cache is

44:27

all related to the metrics and the um

44:30

>> the size of our underlying system.

44:32

>> All the metrics. Yeah.

44:33

>> Yeah. That's right. And so one of the

44:35

really big benefits of running on S3 is

44:37

because our system is so huge, you have

44:40

these massive, you know, uh layers,

44:43

right? And the massive layers are all

44:46

managing things like correlated failures

44:48

and and um and failure allowances. And

44:50

because they are so huge at the scale of

44:53

S3, any application that's sitting on

44:55

top of S3 gets the benefit of it.

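One standard way to reason about a failure allowance (my framing; the transcript doesn't give S3's exact sizing scheme) is quorum arithmetic: with n replicas, a write quorum w and a read quorum r are guaranteed to overlap whenever r + w > n, so up to n - w servers may be down during a write and up to n - r during a read without losing consistency or availability:

```python
def failure_allowance(n, r, w):
    """How many server failures each path tolerates under quorum reads/writes."""
    assert r + w > n, "quorums must overlap, or reads can miss committed writes"
    return {"write_failures_tolerated": n - w,
            "read_failures_tolerated": n - r}

# Example: 5 replicas with majority quorums on both paths.
allowance = failure_allowance(n=5, r=3, w=3)
assert allowance == {"write_failures_tolerated": 2,
                     "read_failures_tolerated": 2}
```

Sizing then becomes a metrics problem, as described above: pick n, r, and w so that the allowance comfortably exceeds the failure rates the fleet actually observes.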
44:58

>> Let's take a break a minute from S3 to

45:00

talk about a one-of-a-kind event I'm

45:01

organizing for the first time. The

45:03

Pragmatic Summit in partnership with

45:05

Statsig. Have you ever wanted to meet

45:07

standout guests from the Pragmatic

45:08

Engineer podcast, plus folks from cutting-edge

45:11

tech companies and learn about what

45:12

works and what doesn't in building

45:14

software in this new age of AI? Come

45:16

join me 11 February in San Francisco for

45:18

a very special one-day event. The

45:20

Pragmatic Summit features industry

45:22

legends and past podcast guests like

45:24

Laura Tacho, Kent Beck, Simon Willison,

45:26

Chip Huyen, Martin Fowler, and many

45:29

others. We'll also have insider stories

45:31

on how engineering teams like Cursor,

45:33

Linear, OpenAI, Ramp, and others built

45:35

cutting edge products. We'll also have

45:37

roundtables and a carefully curated audience

45:40

where everyone is someone you'd be

45:41

interested to meet and chat with.

45:42

Something I'm hoping will make this

45:44

event extra special. Seats are limited

45:46

and you can apply to attend at

45:48

pragmaticsummit.com.

45:49

Talks will be recorded and shared and

45:51

paid subscribers will get early access

45:53

afterwards as well as a thank you for

45:55

your additional support. I hope to meet

45:57

many of you there and I am so excited

45:59

about this event. And now let's jump

46:00

back to S3 and the massive scale of the

46:02

service. To get a sense of what what

46:05

what the reality is like working as an

46:07

engineer, an engineering leader inside

46:09

an organization like this. I read a

46:12

quote from a distinguished engineer Andy

46:14

Warfield who who said I'm I'm just

46:16

quoting what what what he said. Early in

46:18

my career, I had this sort of naive view

46:20

that what it meant to build large scale

46:22

commercial software that it was

46:23

basically just code. The thing I

46:24

realized very quickly working on S3 was

46:27

that the code was inseparable from the

46:29

organizational memory and the

46:30

operational practices and you know the

46:32

scale and and the scale of the system

46:35

since you you've now been more than a

46:37

decade in S3. How do you think of this

46:39

this beast this this really complex

46:42

system hundreds of microservices data

46:45

that is hard to fathom you know unless

46:47

you think of the hard drive stacking all

46:49

the way to the space station and how do

46:51

you engineers kind of wrangle this

46:53

because it does feel a bit intimidating

46:55

I'm not going to lie

46:56

>> well I think so much of this just comes

46:58

back to the culture and the commitment

47:00

on the team and you know I've worked on

47:02

S3 for a very long time now and I have

47:06

such deep respect for the

47:09

engineering community on S3. And you

47:13

know, honestly, I mean, this is true for

47:16

all of the services in our data and

47:17

analytics stack, but we have engineers

47:20

in S3 and they come in every single day

47:23

with this deep commitment to the

47:26

durability and availability and the

47:27

consistency of your byte. And so the type

47:29

of conversations that we have are so

47:33

interesting because we have people and

47:35

really you know these are people who are

47:37

early out of school there are people

47:38

who've been working on S3 we have

47:40

engineers who've been working on S3 for

47:42

15 years and everything in between the

47:45

creativity

47:47

and the invention of S3 like you have

47:50

this tension which is like on one side

47:52

you're like you have to be very

47:53

conservative with S3 right and on the

47:56

other hand like I mean we have this

47:58

principal engineering tenet called

48:00

respect what came before and that's an

48:02

Amazon engineering tenet, which is if it

48:04

has worked for many many years you have

48:06

to respect that but then there's this

48:08

also this tenet; these two tenets are a

48:11

little bit in tension with each other

48:12

which is kind of what makes it so fun

48:14

Amazon engineering tenet is called

48:17

be technically fearless

48:19

and I believe that the S3 engineers are

48:22

just amazing at this at respecting what

48:24

came before because if we build new

48:26

capabilities in S3 We have to maintain

48:29

the properties the traits of S3 which is

48:32

it just works and you get that

48:34

durability availability etc. But at the

48:36

same time we have to be technically

48:38

fearless because our ability to go into

48:41

the world of conditionals our ability to

48:44

go into the world of you know native

48:45

support for iceberg or for vectors means

48:48

that we are extending this this

48:50

foundation of storage in a way that

48:53

helps customers build whatever

48:55

application they need now and in the

48:56

future. And so that combination of the

48:59

two things that is sort of when I think

49:01

about our S3 engineering team I think

49:04

they come in every day and they embody

49:06

that.

49:07

>> Now going back to the evolution of of S3

49:10

from unstructured to structured data.

49:12

You were mentioning how Hadoop uh the

49:15

data warehouse, was a big use

49:17

case where customers started to use it

49:19

on top of S3 and then at at S3 you

49:22

noticed your like what a lot of

49:24

customers or some of your biggest

49:25

customers doing and you kind of built it

49:27

uh yourself with with more structured

49:29

data and then S3 tables came along and

49:32

then vectors would you mind sharing a

49:34

little bit more on on how you evolve S3

49:36

because this was another question that

49:37

when when I asked people about what

49:39

they'd like to know about S3 one of the

49:40

question was, like, is it done? Is

49:42

it finished, or is it still evolving?

49:44

Because there is this notion that S3 can

49:46

store anything already, right? Like any

49:47

any object, any blob? What what new

49:50

thing is there? And yet we have a lot of

49:52

new things.

49:54

>> Yeah. And if you kind of go back in time

49:56

a little bit and you think about, you

49:58

know, the rise of Parquet. Okay. So the

50:01

rise of Parquet data in S3 started about

50:03

2020 and um we started to see more and

50:07

more people store their tabular data in

50:10

S3. And if you think about what iceberg

50:14

provided, it provided a replacement for

50:16

Hive. Okay, so if you think about Hive

50:19

and Hadoop, Hive was basically giving

50:21

your file system access into S3

50:24

unstructured storage. Iceberg is giving

50:27

you that tabular access

50:29

including the you know the compaction

50:32

and all the table maintenance that goes

50:33

along with it into your parquet data.

50:36

And I actually think that the world's

50:38

data for tabular data is going to live

50:41

in the future in S3. And if you just

50:43

think about the launch that for example

50:45

Supabase did last week. Supabase

50:48

announced that their Postgres database

50:51

is just going to do

50:53

secondary writes directly into an S3

50:55

table, just like their Postgres

50:57

extension for vectors is going to

51:00

integrate directly with S3 vectors. And

51:02

so if the world of database, if the

51:05

world is data as a source, if you will,

51:07

goes directly into an S3 table, what

51:11

does that mean for the world's data?

51:13

Okay, so SQL as we know is a lingua

51:16

franca of data and the world's LLMs have

51:21

all been trained on decades of SQL and

51:24

therefore

51:25

>> and Python, SQL and Python,

51:26

>> Python and the stuff that's already out

51:28

there. And so if you think about this,

51:32

you know, we have many, many AWS

51:34

customers who know the S3 API pretty

51:36

darn well by this point. It's pretty

51:38

simple API, but now you have the ability

51:40

to interact with data in S3 through SQL.

51:45

And what that means is that you don't

51:46

have to be, you know, somebody who's

51:49

building cloud applications or know S3.

51:51

You just need to know SQL.

51:53

>> And this is with S3 tables, right?

51:54

>> With S3 tables. And so you can just

51:57

write SQL into an S3 table and whether

52:00

you're an AI agent or a human, right?

52:04

You're introducing the lingua franca of

52:06

data as a native property of S3 with S3

52:10

tables and I think you're just going to

52:12

see that take off in the upcoming years.

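To get a feel for "you just need SQL," here is a stand-in using Python's stdlib sqlite3. To be clear, S3 Tables itself is Iceberg-on-S3 queried through engines such as Athena, Spark, or DuckDB; this sketch only illustrates the interaction model (plain SQL, no storage-API knowledge), and the table and column names are invented:

```python
import sqlite3

# An in-memory stand-in for a table of access events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, bytes INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("alice", "put", 1024),
    ("alice", "get", 1024),
    ("bob",   "put", 2048),
])

# Whether an AI agent or a human issues this, it's plain SQL; nothing here
# requires knowing a storage API.
rows = conn.execute(
    "SELECT user, SUM(bytes) FROM events "
    "WHERE action = 'put' GROUP BY user ORDER BY user"
).fetchall()
assert rows == [("alice", 1024), ("bob", 2048)]
```

The substitution Milan describes is that the table behind this query lives as Parquet files plus Iceberg metadata in S3 rather than inside a database server.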
52:15

>> And your latest launch is S3 vectors. C

52:18

can you share a little bit what it takes

52:19

to build a new data primitive like

52:23

vectors just just behind the scenes how

52:25

long it takes, how the teams come

52:27

together, and maybe what are some

52:28

engineering challenges of launching some

52:30

something like this and again we're

52:32

talking about vectors right so like you

52:34

can you use embeddings whenever you have

52:35

LMS you create an embedding it's a

52:37

vector you want to store that somewhere

52:39

you will need to do search on it there's

52:40

specialized vector databases there's

52:43

specialized vector additions etc so I'm

52:45

assuming this is the functionality

52:47

that S3 Vectors supports very nicely.

52:50

>> Yeah. And today a lot of customers use vector databases, just like back in the day a lot of people put their tabular data in databases. They just used the structure of the database in order to take advantage of being able to query their data. But they didn't really need to use a database.

They just put it in a database. And then S3 came along and we introduced this way, with the help of open formats like Apache Parquet, of being able to store that structured data in S3. That's kind of what we're doing with vectors right now. And if you think about vectors, vectors are basically a bespoke data type. A vector at the end of the day is a very, very long list of numbers.

And vectors have been around for a long time, and they've been in vector databases for a while, but they really took off in people's data worlds in the last couple of years with the rise of, as you said, the embedding models. And so if you take a step back and you think about one of the great ironies of data, it is that you have to know your data to know your data, right? You have to know what your schema is. You have to know what the data types are. You have to know where it is. And as these data lakes become data oceans, you have this situation where it gets harder and harder to know what's in your data. And the beautiful thing about embeddings is that embedding models will understand your data so that you don't have to understand your data. And the format in which these embedding models put this semantic understanding of your data is, in fact, a vector. And so when we talk to customers, they're so excited about how these embedding models are getting better and better. They want to apply more and more semantic understanding to their underlying data, whether it's unstructured or structured, that they have in storage, and so they want to store billions of vectors.

>> Just to say, when you say they want to understand, correct me if I'm wrong, but hypothetically you have a bunch of text data, or maybe some image data, and you're saying that a lot of people, customers, teams, would like to write queries saying: hey, can you find an image that looks like a puppy, or can you find an article that contains this or that? And embeddings, as we know, are great for that, but then you need to actually create the embeddings, build the system, and so on. Right?
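The flow being described here, embed each item with a model, embed the query the same way, then rank by similarity, can be sketched in a few lines. This is a toy illustration: the hand-written three-dimensional "embeddings" below stand in for the output of a real embedding model, which would produce hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based similarity between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice these come from an embedding model.
corpus = {
    "golden retriever puppy": [0.9, 0.8, 0.1],
    "quarterly sales report": [0.1, 0.2, 0.9],
    "kitten playing":         [0.8, 0.7, 0.2],
}

def search(query_vec: list[float], k: int = 2) -> list[str]:
    """Brute-force semantic search: compare the query against every stored vector."""
    ranked = sorted(corpus,
                    key=lambda doc: cosine_similarity(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

query = [0.85, 0.75, 0.15]  # pretend embedding of "find me a puppy photo"
print(search(query))  # the two animal items rank above the sales report
```

The catch, as the conversation goes on to explain, is that this brute-force comparison against every vector gets expensive at scale.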

>> Yeah, exactly what you're saying. If you think about what vectors can do, if you think about all the data that a given company has: your knowledge across your business, or your knowledge across your life, isn't organized into rows and columns like a database. It's in PDFs. It's in your phone, right? It's in audio customer-care recordings, which capture the sentiment of how a customer actually feels about their interaction with you. It's on whiteboards. By the end of this day, this whiteboard is totally filled up with ideas. And it's in documents across dozens of systems. So it's not that you don't have data. You have tons of data. But understanding what data you have across all of those different formats is a real problem. And it's one that AI models can help you with. The capabilities of those AI models have gotten so much better in the last 18 to 24 months. But we needed a place to put billions of vectors, billions of these semantic understandings of relationships, and that's what we built S3 Vectors for. The state-of-the-art embedding models, combined with the ability to have vectors across S3, is a really important part, and it's not a database. It's the cost structure and scale of just S3, but for vector storage.

>> And then, do I understand correctly: did you need to build new primitives to store this, going down to the metal and figuring out exactly where to put it, or did you build it on top of your existing primitives, like blob storage and so on?

>> It's actually a new primitive. And so, we had talked about S3 Tables. S3 Tables builds on objects, because those individual Parquet files at the end of the day are objects. Vectors are totally different. So with vectors, we built a new data structure, a new data type. And it turns out that when you're working with vectors, searching for the closest vector in a very high-dimensional space, which is basically vector space...

>> Yes.

>> ...it's often really hard to find the nearest neighbor. In a database you have to essentially compare against every vector in the database, and that's often super expensive. And so in S3, because we aren't storing all of our vectors in memory, we're storing them on our very large S3 fleet, we still need to provide super low latency. In our launch last week, we were getting about 100 milliseconds or less for a warm query to our vector space, which is actually pretty fast. It's not database fast, but it's pretty fast. And the way that we do that is we precompute a bunch of, think of them as vector neighborhoods. A neighborhood is basically a cluster of vectors that are similar to each other, like a type of dog, as an example. These vector neighborhoods, if you will, are computed ahead of time, offline. They're computed ahead of time, asynchronously, so that when you're doing your query, it's not going to impact your query performance. And then every time a new vector is inserted into S3, the vector gets added to one or more of these vector neighborhoods based on where it's located. So when you are executing a query on S3 Vectors, there's a much smaller search that's done to find the nearest neighborhoods, and it's just the vectors in those neighborhoods that are loaded from S3 into fast memory. That's where we apply the nearest-neighbor algorithm, and it can result in really good sub-100-millisecond query times. And so if you think about the scale: S3 will give you up to two billion vectors per index. Think about the scale of an S3 vector bucket, which is up to 20 trillion vectors. And you think about that, combined with 100 milliseconds or less for warm query performance. That just opens up what you can do with creating a semantic understanding of your data and how you can query it.
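The precomputed-neighborhood scheme described here is similar in spirit to inverted-file (IVF) indexing: cluster vectors offline around centroids, assign each new vector to its nearest cluster on insert, and at query time scan only the closest cluster(s) rather than the whole dataset. A minimal sketch, with made-up 2-D centroids and vectors (the real system operates on much higher-dimensional data and persists neighborhoods in S3 rather than a Python dict):

```python
import math

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "Vector neighborhoods": clusters keyed by a centroid, precomputed offline
# and asynchronously so queries aren't impacted.
neighborhoods = {
    (0.0, 0.0): [],
    (10.0, 10.0): [],
}

def insert(vec):
    """On insert, assign the vector to its nearest neighborhood."""
    nearest = min(neighborhoods, key=lambda c: dist(c, vec))
    neighborhoods[nearest].append(vec)

def query(vec, k=1, n_probe=1):
    """Search only the n_probe nearest neighborhoods, not the whole dataset."""
    probed = sorted(neighborhoods, key=lambda c: dist(c, vec))[:n_probe]
    # Only these candidates get loaded into fast memory for exact comparison.
    candidates = [v for c in probed for v in neighborhoods[c]]
    return sorted(candidates, key=lambda v: dist(v, vec))[:k]

for v in [(0.5, 0.5), (1.0, 0.2), (9.0, 9.5), (10.5, 9.8)]:
    insert(v)

print(query((9.2, 9.2)))  # only the (10, 10) neighborhood is scanned
```

The trade-off is the usual one for approximate nearest-neighbor search: scanning fewer neighborhoods is faster but can miss vectors that fall just outside the probed clusters, which is why `n_probe` is tunable.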

>> It sounds very interesting, and also challenging, because you have to build this for scale from day one. I guess that's one of the benefits and curses of working at S3: everything that you launch, you need to prepare for what would be extreme data volume elsewhere, but here it's just Monday.

>> We have S3 service tenets as well. And one of the tenets, and one phrase that I use all the time, and our engineers do too, is: scale is to your advantage. So if you are an engineer, and one of your tenets for anything you build is that scale must be to your advantage, it just changes how you design. It means that you can't actually build something where the bigger you get, the worse your performance gets, or the worse some attribute gets. It has to be constructed so that the bigger you get, the better your performance gets. The bigger S3 gets, the more decorrelated the workloads are that run in S3. That is a great example of scale is to your advantage. And so when we built vectors, just like we built everything in S3, we asked ourselves: how can we build this such that scale is to our advantage? How can we build this such that 100 milliseconds or less is just the start of the performance that we're going after? And how can we make sure that the more vectors we have in storage, the better the traits of S3 Vectors get?

>> I have a different question, about the limitations of S3. I read that the largest object you can store in S3 is 50 terabytes. Why is there a limit on the largest object? I think we can imagine it will be spread across multiple hard drives and so on, but why did you decide to have a limit? I'm just interested in the thought process of how the team comes up with: okay, this will be the limit, and this is why.

>> First of all, that limit of 50 terabytes is 10 times greater than what we launched with. We launched with five terabytes, and now we're at 50 terabytes. And sometimes we sit and tell customers that, and they go: what am I going to store that's going to be 50 terabytes? And we're like: high-resolution video, right?

>> A known customer.

>> Right. And so if you think about size limits generally speaking, we do try to optimize for certain patterns. And when you raise the size of an object by 10 times, like we did, we're just optimizing for the performance and scale of the underlying systems. It's like how we increased the scale of our batch operations by 10 times last week, too. The idea behind that is that in the underlying systems, we're just optimizing for the distributions of work that are the new norm for how people are doing things. And we'll just keep on changing. We don't have too many limits, to be honest, but we'll keep on looking at what customers are doing across a distribution of workloads and seeing if there's something that needs to be changed. The big thing for us, again: we did have a lot of conversations with customers, and they're like, "Really? I don't have that many individual objects that are that big." But with the increase of cameras and phones and things like that, we are seeing more and larger objects, and we just wanted them to be able to grow unfettered in S3.
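One practical lens on where object-size limits come from: very large objects reach S3 via multipart upload, which has long-documented limits of at most 10,000 parts and at most 5 GiB per part (I'm assuming here that those multipart limits still apply after the 50 TB launch). A quick back-of-the-envelope check shows a 50 TB object just fits:

```python
MAX_PARTS = 10_000            # documented S3 multipart upload part-count limit
MAX_PART_SIZE = 5 * 1024**3   # documented 5 GiB maximum part size

def min_part_size(object_size: int) -> int:
    """Smallest part size (bytes) that fits object_size into MAX_PARTS parts."""
    return -(-object_size // MAX_PARTS)  # ceiling division

fifty_tb = 50 * 10**12        # 50 TB, decimal
part = min_part_size(fifty_tb)
print(part / 1024**3)         # ~4.66 GiB per part: just under the 5 GiB ceiling
assert part <= MAX_PART_SIZE
```

In other words, a uniform 50 TB upload needs roughly 5 GB parts across all 10,000 allowed parts, so the new object-size limit sits right at what the existing multipart envelope can carry.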

>> And so, how does S3 evolve, and how has the roadmap changed? Because so far, everything that you've told me is: well, our customers were doing this or that. And obviously here you live and breathe data, so you see the patterns, you see the stats, you see the objects, and you also talk with customers. Is it only you talking with customers, seeing what's happening, what they're struggling with, what they're using more of, and then deciding to improve that, whether that's the limits, or figuring out you need a new data type because they're now building their own data types on top of it? Or is there also some kind of: all right, here's a vision, here's a roadmap of what we'll do?

>> It's a great question. And in fact, one of the things that we talk about all the time is the coherency of S3. There are certain things that people always expect from S3: it's the traits of S3, it's the durability and availability attributes that we talked about. And a fair amount of engineering goes on under the hood for that. And it's a set of capabilities that we may or may not have talked about today. In fact, thinking back to 2020, I believe we've launched over a thousand new capabilities in S3 since then. Some of them are what we think of as the 90% of the roadmap, which is what people ask for explicitly. So, for example, some of our media customers want the bigger object size, and so we delivered that. We have other customers that do a lot with batch operations. But then we have some things that we invent, because we look at what customers are doing with the data and we ask ourselves how we can build that. Vectors kind of fall into that category. For vectors, when we looked at S3 and how S3 is evolving, we told ourselves: look, we can continue to make S3 the best repository for data on the planet. And we will. We have engineers that come in every day working to make that so. But there's this other element of: how do you make sure that the data you have is in fact usable? And how do you make sure that it's usable in a way that's industry standard, like that Iceberg layer on top of our tabular data? But it's also usable because AI models have now gotten so good at embeddings that you can have AI give you a semantic understanding of your data, if only you had the cost point of putting billions of vectors into storage, so you could actually understand and use your data in a different way. And so for us, a lot of it is taking a step back and looking not just at what customers ask us for. We want to remove the constraint of the cost of data, which is what we do in S3. And we want to remove the constraint of working with your data, which is what we do in S3 too. And when we can do both of those things, if we can make it possible that your data grows as your business needs it, and you can tap into all the capabilities that you're getting with AI and how the world is changing for data, then we have a shape. We call it a product shape. Then we have a product shape.

>> Product shape.

>> What's a product shape? It's sort of an emerging thing. When I think about S3, I think of it as almost this living, breathing organism, where the shape of the product is evolving, but it's evolving with coherency around what you expect for the traits of S3. And it's evolving in a way that lets you steer into how you want to use data, not just now but in the future. And we will continue to evolve the product shape of S3 based on what you want to do with data. So in a lot of ways, we're sort of transcending the boundaries of what object storage was, or what a database traditionally was, because now we have tabular formats, we have conditionals, and we're evolving into this new shape. And it is ultimately uniquely S3.

>> It kind of sounds like you have all these microservices, and it's evolving almost like a plant or a living organism, no?

>> Yes. I am, in fact, a former Peace Corps volunteer in forestry, and so a lot of times I will go back to the natural world for my metaphors. And yeah, S3 is this living, breathing repository of data that lets people do things with data that they never thought possible.

>> It's just interesting, because I think as engineers we don't often think to relate the systems that we build to a living organism, when in fact, obviously there's code, but as you said, there are people, there are servers, there are failures that now happen at a cadence. You can probably predict, at your scale, how many hard drives are failing today. Do you think it's because of the scale, that when things become large enough they start to have these characteristics? Because what I find fascinating, talking to you, is that the way engineering works inside of S3 feels very different to how it works inside a smaller organization, your kind of startup, which does terabytes of data, or maybe even a few petabytes, but that's kind of it. And you've seen some of these organizations. What changes at this large scale? What do you think makes the world that you and the teams work in feel so different?

>> It does. But in order for us to sustain the traits of S3 and to evolve it over time, we have to constantly go back to simplification. We have a very complex system with all of our different microservices, but I keep coming back to this: those microservices have to do one or two things really well, and we have to stay true to that. Otherwise, the complexification of a distributed system becomes unmaintainable over time. And for S3, there's this concept of: okay, there's a simple in S3. And the simple in S3 is a couple of things. One, it's the simplicity of the user model, where not only do you have a simple API, but now you have the simplicity of using SQL with S3, or the simplicity of being able to leverage these AI embedding models, which make semantic understanding of your data so much easier than having to annotate a whole metadata layer. So that concept of simplicity is in the user model of S3. But under the hood, if you sit in on any of our engineering meetings, you will hear our engineers talk about how we make sure we implement this capability with the greatest simplicity that we possibly can.

>> Speaking of which, what type of engineers do you typically hire to work at S3? In terms of what kind of traits, and potentially past experience, do you look for?

>> Well, we hire all kinds of engineers. We have a lot of engineers on S3 who are early career. They're straight out of school, out of undergrad or graduate school. And like I said, we have a ton of engineers who have been on S3 for a long time, and everything in between. I think there's a really strong element of ownership in our teams that work on data. People feel this personal sense of commitment. I feel it. I feel it every day I come in: a personal sense of commitment to your byte, to the preservation of your byte, to the usefulness of your byte, to the ability for you to think about what your application does next, and not about the types of storage that you need or how you grow it. And that deep sense of ownership and that deep sense of commitment is a very, very common thread across our data teams, because we know that at the end of the day, every modern business is a data business. Everything that people are trying to do, with traditional systems, AI, whatever, is based on your data shaping the core of your application experience. And so that data is our responsibility, and we feel it very deeply.

>> And what would your advice be to, let's say, a mid-career software engineer, someone who has a few years of experience working at different places, who after listening to this gets really enthusiastic and decides: one day I'd love to work on a deep, strong infrastructure team like S3. For these more experienced folks, what are the experiences and activities that you might look for, that might help you consider them?

>> There's a strong value in relentless curiosity. I talked a little bit about coloring within the lines, and how when you work on S3, or a large-scale distributed system which continues to reinvent what storage means, you're not really coloring within the lines. You're taking a step back and saying: I will draw what the lines are today, and I know that I might have to rub those out and draw new lines in the future, wherever things go. And I have three kids, two in university and one in grad school, and one thing that I think is really important is to always take a step back and look at the latest research. Some of the papers that I'll share with you are around how we either took formal methods and brought them into storage systems, or thought about failure in a different way. That relentless curiosity, and that creativity with engineering, I don't think you can go wrong with that. I think the next generation of software, whether it's built in S3 or elsewhere, is all driven by the creativity of the engineering mind, and it is in all of us. We just have to unlock it and unleash it, and we will build amazing things like S3.

>> And I also love that with S3, not only has S3 created something that did not exist, and I think was just unimaginable because it didn't exist, but now I'm hearing about startups that are building on top of S3. I think Turbopuffer is a good example. They're building innovation because now they have a base layer, and I feel there are different levels of innovation. You decide where you want to innovate: at the very lowest level, one level higher, and so on. And you just use the right primitives, right? In your case, that's doing hardware and storage better than anyone. In the other layers, it will be using the right primitives better than anyone.

>> Yeah, it's very exciting for us to see so many different types of infrastructure built on S3 now.

>> And as closing: what is a book or a paper that you would recommend reading, that you enjoyed, and why?

>> I read a lot of different papers. I am fascinated by how quickly the evolution of embedding models is coming along now, and in particular, a field of science that I'm quite interested in is multimodal embedding models. Because, as you know, the world that we experience is multimodal, and therefore the understanding that we have of data should be multimodal as well. There's this whole field of science emerging quite rapidly around multimodal embedding models, and that is something I encourage people working in the field of data to look at, because I think that is the next generation of data. If you think about the next world of data lakes, I think it's actually going to be about metadata. It's going to be about the semantic understanding of our data. And understanding how that is created through vectors, how it's being searched, and how it's done across multiple modalities is, I think, an important area of both research and advancement. So that's what I would encourage people to look at in the world of data. I think vectors are going to be quite big, particularly at the price point that we've introduced for S3 vector storage. And I'm excited about it. I think we're just getting started with data and an understanding of our data, and I can't wait to see what comes next.

>> Amazing. And do you have any book recommendations?

>> I will give you a book recommendation, just in case your listeners are interested. It won't be in the field of computer science. It will be about the evolution of the ecology around us, and supporting the bees, the native bees and insects around us. So, a tiny bit farther afield, but I'll give you a book recommendation, and if your listeners are interested, they can take a look at how to support the bees of the planet.

>> Well, Mai-Lan, thank you very much. This was fascinating, and very interesting to get a peek into this massive world of scale of data, of respecting the byte, treating it well, and making sure that it's durable.

>> It was great talking to you, and thank you, both to yourself, I know you're a fan of S3, and to all of your listeners who use S3. We quite literally wouldn't be able to do what we do without the feedback and the encouragement from everybody who uses S3 today. So thank you for that.

>> Just wow. I always suspected there's a lot of complexity behind a system like S3, but I just did not realize the scale of it. Whenever I worked on systems with even hundreds of virtual machines, the failure of one machine was a rare event, and not something that we really counted on. During my conversation with Mai-Lan, she casually mentioned that several machines had failed during our conversation, which is something that the S3 team knows about, prepares for, and treats as an everyday event. I personally really liked how AWS has two conflicting tenets heavily used on the S3 team: respect what came before, and be technically fearless. For such a massive system, it would be easy to say: let's move conservatively, because of how many companies depend on us. But if they did so, S3 would fall behind. Finally, I'm still in awe that AWS put strong consistency in place, rolled it out to all customers, and did not increase pricing or latency, at S3 scale. This is an absolutely next-level engineering achievement. In fact, it was probably one of the lesser-known engineering feats of the decade. I hope you found the episode as fascinating as I did. If you'd like to learn more about Amazon and AWS, check out the exclusive deep dive I did with AWS's incident management team on how they handle outages, linked in the show notes below. In The Pragmatic Engineer, I also did other deep dives about Amazon and AWS. They are also linked in the show notes. If you enjoy this podcast, please do subscribe on your favorite podcast platform and on YouTube. A special thank you if you also leave a rating on the show.

Interactive Summary

This video features Mai-Lan Tomsen Bukovec, the VP of data and analytics at AWS, discussing the immense scale and engineering behind Amazon S3. The conversation covers the service's evolution from its 2005 beginnings to storing over 500 trillion objects today. Key topics include the technical transition from eventual to strong consistency without cost increases, the use of formal methods to ensure 11 nines of durability, and the introduction of new primitives like S3 Tables and S3 Vectors for AI and analytics workloads.
