
I replaced my entire stack with Postgres...


Transcript


0:00

I just replaced my entire tech stack with Postgres. Modern software engineering has basically become a subscription-management simulator. We've been gaslit by cloud vendors into believing that to build even a basic application, we need to stitch together a fragile distributed web of highly specialized microservices. You wire up a Redis instance for caching, a Kafka cluster for background jobs, Elasticsearch just to power a simple search bar, and a dedicated vector database for that one AI feature you tacked on. By the time you finally deploy your app to your highly demanding user base of yourself and your mom, you're paying a dozen different Y Combinator-backed SaaS startups just to keep the lights on. It is an over-engineered, wildly overpriced trap. But what if I told you that you could take almost all of those shiny cloud dependencies, toss them directly into the incinerator, and replace them with a single piece of boring 30-year-old open-source software? The dirty little secret the tech industry doesn't want you to know is that one battle-tested tool can cannibalize your entire architecture. Today we're stripping your stack down to one unstoppable source of truth: PostgreSQL. Here's how you use Postgres to replace literally everything.

1:02

Before we start violently dismantling your current architecture, let's look at the weapon we're using. At its core, PostgreSQL is an open-source object-relational database system that has been in active development for over three decades. Out of the box, it gives you rock-solid ACID compliance, meaning that when your cheap cloud server inevitably crashes, your user data isn't corrupted. But the real reason it can cannibalize your entire stack is its extensibility. It doesn't just store standard rows and columns; it supports advanced custom data types, multi-dimensional arrays, geometric shapes, and key-value stores. This architectural flexibility has led to a massive ecosystem of third-party extensions. It's basically the Skyrim of databases: a rock-solid foundation that you can aggressively mod until it does exactly what you want. Here is how you use it to replace everything.

1:43

One of the great debates among web developers is SQL versus NoSQL, and the core selling point of NoSQL is handling unstructured data. You no longer need a separate database like MongoDB just to do this. Postgres offers deeply integrated native support for JSON through its JSONB data type, which fundamentally changes how data is processed. The B stands for binary: unlike standard text storage that must be parsed every time a query is run, JSONB converts your JSON payload into a decomposed binary format at the moment of insertion. The true magic unlocks when you apply a GIN, or generalized inverted index, to this column. An inverted index works exactly like the index at the back of a textbook: instead of scanning every database row looking for a specific key, the index maps the keys directly to the row IDs where they exist. This allows you to query deeply nested JSON properties instantly and join those documents with traditional relational tables in a single ACID-compliant transaction. You get the exact schema flexibility of NoSQL without sacrificing data integrity.
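As a minimal sketch (the table and columns here are invented for illustration), the whole JSONB-plus-GIN pattern is a few lines of SQL:

```sql
-- Store flexible event payloads next to ordinary relational columns.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    user_id bigint NOT NULL,
    payload jsonb  NOT NULL
);

-- A GIN index so containment queries don't scan the whole table.
CREATE INDEX events_payload_gin ON events USING gin (payload);

-- Find events whose payload contains {"type": "signup"},
-- joined against a relational users table in one query.
SELECT u.email, e.payload->>'source' AS source
FROM events e
JOIN users u ON u.id = e.user_id
WHERE e.payload @> '{"type": "signup"}';
```

The `@>` containment operator is exactly the kind of predicate the GIN index accelerates.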

2:40

Provisioning RabbitMQ or Redis purely for reliable task distribution introduces massive architectural overhead, but building a queue in a standard SQL database usually leads to deadlocks. Postgres solved this elegantly with its native concurrency control, specifically the FOR UPDATE SKIP LOCKED clause. When building a background worker system, the traditional problem is that two workers might try to grab the same pending job row at the exact same time: one locks it and the other gets stuck waiting. Adding SKIP LOCKED changes the physics of the query. It instructs the database engine: grab the first available row and lock it so no one else can touch it, but if you hit a row that is already locked by another worker, don't wait; just skip it and grab the next one. This turns a standard relational table into a highly concurrent, wait-free message queue capable of processing thousands of jobs per second.
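A sketch of the claim query each worker would run (the jobs schema is hypothetical):

```sql
-- A minimal job table.
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    status  text NOT NULL DEFAULT 'pending',
    payload jsonb
);

-- Each worker claims exactly one job atomically.
-- Rows locked by other workers are skipped, not waited on.
UPDATE jobs
SET status = 'running'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;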

3:27

While specialized tools like Elasticsearch are mandatory for globally distributed log analysis, using them just to power a search bar in your app is massive overkill. Postgres is fully equipped to power advanced full-text search directly, stripping language down to its mechanical roots using tsvector and tsquery. When you insert text into a tsvector column, Postgres parses it, removes useless stop words, and applies linguistic stemming, so a word like "running" simply becomes "run". Furthermore, you can apply the pg_trgm (trigram) extension for fuzzy matching: the ability to find accurate results even when a user makes a typo. It does this using trigrams, which simply break words down into three-letter chunks. If a user misspells PostgreSQL as "Postgress" with two S's, the database doesn't look for an exact match; it finds the overlapping three-letter patterns and returns the correct result anyway, giving you a highly performant, typo-tolerant search engine without syncing data to a secondary cluster.
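A sketch of both techniques together, against a hypothetical articles table (the generated tsvector column requires Postgres 12 or later):

```sql
CREATE TABLE articles (
    id    bigserial PRIMARY KEY,
    title text,
    body  text,
    -- Generated column keeps the search vector in sync automatically.
    search tsvector GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED
);

CREATE INDEX articles_search_gin ON articles USING gin (search);

-- Stemmed full-text match: "running" in the text matches the query "run".
SELECT id, title
FROM articles
WHERE search @@ to_tsquery('english', 'run');

-- Typo-tolerant fuzzy matching on titles via trigrams.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

SELECT id, title
FROM articles
WHERE title % 'Postgress'  -- pg_trgm similarity operator
ORDER BY similarity(title, 'Postgress') DESC;
```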

4:20

If you're building an AI app, you might consider paying for a vector database like Pinecone. But keeping vector data separate from your relational data creates an architectural nightmare known as the hybrid search problem: if you need to find documents semantically similar to a user prompt, but only if they were authored by a specific user last week, querying two different databases and cross-referencing the results over a network is incredibly slow. You can handle this entirely within Postgres using the pgvector extension. It allows you to store high-dimensional arrays right next to your core application data, and it supports HNSW (hierarchical navigable small world) indexes. HNSW is a graph-based algorithm for approximate nearest neighbor search that organizes vectors into a multi-layered structure acting as a high-dimensional skip list. It allows for fast, scalable vector searches by starting at a top layer with few long-range connections and moving to lower, denser layers to refine the search. This minimizes the number of distance calculations needed, allowing the database to rapidly navigate through neighborhoods of similar data points and find approximate nearest neighbors in milliseconds. Ultimately, you can execute this complex vector math natively while simultaneously applying strict relational filters.
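A sketch of that hybrid query, assuming pgvector 0.5+ (for HNSW) and an invented documents schema; `:query_embedding` stands in for a parameter your application would supply:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id         bigserial PRIMARY KEY,
    author_id  bigint NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    body       text,
    embedding  vector(1536)  -- dimension depends on your embedding model
);

-- HNSW index using cosine distance.
CREATE INDEX documents_embedding_hnsw
    ON documents USING hnsw (embedding vector_cosine_ops);

-- Hybrid search: semantic similarity plus strict relational filters,
-- all in one query on one database.
SELECT id, body
FROM documents
WHERE author_id = 42
  AND created_at > now() - interval '7 days'
ORDER BY embedding <=> :query_embedding
LIMIT 5;
```

The `<=>` operator is pgvector's cosine-distance operator, matching the `vector_cosine_ops` operator class on the index.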

5:28

We've been talking a lot about how powerful Postgres is, but let's be honest: provisioning, scaling, and managing testing environments for it can still be a massive headache, and that's exactly where today's sponsor, Neon, comes in. Neon is a fully managed serverless Postgres platform built specifically for the cloud. They've fundamentally re-engineered Postgres by separating compute from storage, which unlocks features you just can't get with a traditional setup. My absolute favorite part about Neon is database branching. Just like you branch your code in Git, Neon lets you instantly branch your Postgres database. Want to test a risky schema migration or a complex query? Just click a button, spin up a copy of your database in seconds with all of its data, and run your tests. If you mess up, your prod database remains completely untouched. It completely changes how you handle dev and staging environments. Plus, because Neon is true serverless, it automatically scales compute based on your application's workload and scales down to zero when it's not in use. You don't have to overprovision servers, meaning you only pay for exactly what you use. Whether you're building a weekend side project or a high-traffic application, Neon makes Postgres feel modern, fast, and frictionless. Click the link in the description to sign up and deploy your first serverless Postgres database on Neon for free in just seconds. A huge thanks to Neon for sponsoring this video. Now, back to even more Postgres.

6:41

If you're building applications that rely heavily on maps or routing, Postgres isn't just an alternative; it is the undisputed industry gold standard. The PostGIS extension transforms Postgres into a spatial powerhouse using the GiST (generalized search tree) index. If you ask the database to find all coffee shops within a complex geographic polygon, doing raw mathematical distance calculations on every coordinate would crash the server. Instead, a generalized search tree draws simple overlapping bounding boxes around your geographic shapes. The database first checks these simple boxes, instantly discarding millions of data points that aren't even close, and only performs the heavy, precise geometric math on the handful of points that remain. This routinely outperforms standalone GIS systems.
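A sketch of the pattern with an invented coffee_shops table; `ST_DWithin` on the geography type does the bounding-box prefilter via the GiST index before any exact math:

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE coffee_shops (
    id   bigserial PRIMARY KEY,
    name text,
    geom geometry(Point, 4326)
);

-- GiST index: cheap bounding-box checks first,
-- exact geometry calculations only on the survivors.
CREATE INDEX coffee_shops_geom_gist ON coffee_shops USING gist (geom);

-- All shops within roughly 1 km of a point in San Francisco
-- (casting to geography makes the distance unit meters).
SELECT name
FROM coffee_shops
WHERE ST_DWithin(
    geom::geography,
    ST_SetSRID(ST_MakePoint(-122.4194, 37.7749), 4326)::geography,
    1000
);
```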

7:24

On the other hand, when handling massive volumes of telemetry or event logs, developers reach for time-series databases. Postgres handles this natively through declarative partitioning and the highly underutilized BRIN (block range index). Instead of storing billions of logs in one massive table, partitioning transparently splits your data into physical daily or monthly chunks. As long as your logs are inserted sequentially, the BRIN index is a superpower: instead of indexing every single row like a massive, bloated B-tree, it only stores the minimum and maximum timestamps for physical blocks of data on the disk. When you query for a specific time range, Postgres reads the BRIN index, instantly skips millions of physical disk pages that don't contain your target timestamps, and scans only the tiny fraction that do.
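Both pieces fit in a few statements; the logs schema below is illustrative (declarative partitioning requires Postgres 10+, and indexes created on the parent cascade to partitions from Postgres 11):

```sql
-- Parent table partitioned by time; partitions are the physical chunks.
CREATE TABLE logs (
    ts      timestamptz NOT NULL,
    level   text,
    message text
) PARTITION BY RANGE (ts);

CREATE TABLE logs_2025_01 PARTITION OF logs
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

-- BRIN stores only min/max ts per block range: far smaller than a B-tree.
CREATE INDEX logs_ts_brin ON logs USING brin (ts);

-- Range queries skip every block whose min/max can't possibly match.
SELECT count(*)
FROM logs
WHERE ts BETWEEN '2025-01-03' AND '2025-01-04';
```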

8:08

How about complex dashboards? The knee-jerk reaction is to pipe data into expensive data warehouses like Snowflake. You can bypass this by leveraging Postgres materialized views. A standard view runs its underlying query from scratch every time a user hits the dashboard, crashing your database under load. A materialized view runs the heavy aggregation just once and physically saves that calculated result to disk. To prevent stale data, Postgres provides the REFRESH MATERIALIZED VIEW CONCURRENTLY command: provided your view has a unique index, it calculates the fresh analytics entirely in the background, compares the differences, and seamlessly hot-swaps the updated rows into place without ever locking out your end users.
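A sketch against a hypothetical users table:

```sql
-- Heavy aggregation computed once and stored on disk.
CREATE MATERIALIZED VIEW daily_signups AS
SELECT date_trunc('day', created_at) AS day, count(*) AS signups
FROM users
GROUP BY 1;

-- A unique index is required for a CONCURRENTLY refresh.
CREATE UNIQUE INDEX daily_signups_day ON daily_signups (day);

-- Rebuild in the background and swap in rows without blocking readers.
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_signups;
```

You would typically run the refresh on a schedule (cron, pg_cron, or your job queue) rather than per request.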

8:48

For years, we've blindly accepted that you need to write and maintain thousands of lines of boilerplate Node.js or Python code just to shuttle JSON between your database and your front end. You can incinerate this entire middleware layer using tools like PostgREST or the pg_graphql extension. Instead of manually writing a new controller and endpoint every time you add a database table, these tools analyze your schema and automatically generate a fully documented, highly performant REST or GraphQL API on the fly. And before you panic about security, Postgres handles that natively, too. By leveraging row-level security, you can write strict policies directly in the database that guarantee a user can only ever read or write their own specific rows based on their authentication token. Your database securely becomes your entire backend, eliminating the need for a fleet of API servers.
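A minimal row-level security sketch, reusing the hypothetical documents table; the `app.user_id` session setting is an illustrative convention (PostgREST, for example, exposes decoded JWT claims through similar `current_setting` calls):

```sql
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Each request first sets the authenticated user id, e.g. from a JWT:
--   SET LOCAL app.user_id = '42';

-- Users can only ever see or modify their own rows.
CREATE POLICY documents_owner ON documents
    USING (author_id = current_setting('app.user_id')::bigint)
    WITH CHECK (author_id = current_setting('app.user_id')::bigint);
```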

9:34

While the "just use Postgres" philosophy is incredibly powerful, you shouldn't entirely abandon your critical thinking. It isn't a silver bullet. Postgres scales vertically with exceptional grace, but horizontally sharding a monolithic database to handle extreme scale introduces immense complexity. If your application actually needs to ingest millions of telemetry events per second, or requires sub-millisecond in-memory caching for millions of concurrent WebSockets, you absolutely must adopt specialized distributed tools. However, until you cross that threshold of massive enterprise scale, leaning on the core battle-tested mechanics of Postgres to run your entire stack is arguably the smartest and most cost-effective engineering decision you can make.

10:14

Seriously, to really level up as a software engineer, you have to build hard things. That's why I highly recommend CodeCrafters. Instead of building basic apps, they guide you through building real developer tooling from scratch. You'll write your own working versions of Redis, Git, Kafka, Docker, and even modern AI tools like Claude Code. It completely changes how you understand software. Check the description for a link that automatically applies a 40% discount to your account. Also in the description is a link to my free newsletter, where I share exclusive deep dives on system design and real-world backend development: the stuff you won't find in basic coding tutorials.

Summary

The video argues that modern web architecture is often unnecessarily over-engineered, relying on a complex web of specialized cloud services. It promotes simplifying the tech stack by leveraging PostgreSQL's extensive built-in capabilities, such as JSONB support for NoSQL-style data, advanced indexing for queues and full-text search, vector capabilities for AI, and spatial indexing, effectively replacing many separate specialized tools with one robust, battle-tested database.
