
How I Scaled My NextJS + Supabase App To Handle 10,000 Users


Transcript


0:00

In the past 30 days, my app went from

0:02

having zero users to over 10,000 users.

0:04

Literally over 100xing the number of

0:06

users on our app in 30 days. And we also

0:08

went from $0 of revenue to $4,000 of

0:10

revenue in a month. And while this is

0:12

great, one big problem came up, and it's

0:14

the fact that my app was not originally

0:15

built to scale and handle that many

0:17

users actively signing up and

0:19

testing out the app. And in this video,

0:21

I'm going to be going over the main

0:22

performance optimizations that I did to

0:24

make my app run faster and scale from

0:26

zero to 10,000 users. And then later on

0:28

in the video, I'm also going to go over

0:30

some of the things I thought about doing

0:31

but haven't quite taken the leap to do

0:33

yet. And before I go into talking about

0:34

the performance optimizations that I

0:36

actually did, I know that some haters

0:37

are going to be like, "Bro, this is

0:39

fake. Show us the proof. Show us the

0:40

sauce, bro." So, I'm going to show you

0:41

the proof of what exactly I built and

0:43

show you the real numbers that went into

0:44

it and how much I scaled. And then

0:46

afterwards, we're going to go into

0:46

talking about how exactly I improved the

0:48

performance of the app. The app that I'm

0:50

building is called Yorby. And it's a

0:51

social media marketing platform that has

0:53

two main goals right now. and that is to

0:54

help you find and get inspiration to

0:56

make viral marketing content to market

0:58

your business faster. And then we also

1:00

help you create viral marketing content

1:02

to market your business faster. So,

1:03

we're trying to make marketing content

1:04

info and creation a lot easier. And the

1:06

main premise of how it all works is that

1:08

number one, we have this viral content

1:10

database, which is a whole database of

1:12

different types of viral formats that

1:13

other businesses have used, strictly

1:14

what other businesses have used to

1:16

market their business on social media

1:17

and what's worked. And then from there,

1:19

you can actually remix these videos

1:20

within our content studio. For example,

1:22

let's say we see this video and when you

1:24

click on the content studio button,

1:25

you're going to then be opened up into

1:27

our content studio right here. And from

1:29

here, you can remix this content to fit

1:31

whatever niche that your business is in.

1:33

So, obviously, this original video looks

1:34

like some type of health and wellness

1:36

brand. But let's say I want to remake

1:38

this same exact video for a dating

1:40

app, like a fictional dating app right

1:41

here. Then you can just send this chat

1:43

message and then we use our AI that will

1:45

then transform this entire script to fit

1:47

your niche, your brand, whatever you're

1:49

trying to market while still maintaining

1:50

that same original viral format and

1:53

viral soul that that video had. So

1:54

that's the main premise of our app. And

1:56

in terms of the growth right here, you

1:58

can see that we went from let's see

1:59

January 1st, what zero, literally

2:02

nothing on January 1st and then we've

2:03

steadily grown and right now we have

2:05

over 10,000 signed up users. And then in

2:07

terms of revenue, if you look at the

2:08

gross volume, you can see that in the

2:10

past four weeks, in the past month, we

2:12

have made over $4,000 of revenue.

2:14

And literally before

2:16

that, there was literally nothing going

2:18

on. If you go over here, I'll do the

2:20

whole uh let's see, all time. And you

2:22

can see right here, basically nothing

2:24

happened. And then in December, things

2:26

start to pick up. We started getting our

2:27

first couple of users, and things just

2:29

started going crazy afterwards. So

2:31

there's the proof. I'm not joking. This

2:33

is really real. I'm not making stuff up

2:34

for the internet. So now let's get into

2:36

the actual performance optimizations

2:37

that I did. All right, so right now we

2:39

are in the Supabase dashboard for my

2:40

project Yorby. The first big

2:42

performance optimization that I did was

2:44

simply upgrading the Supabase instance.

2:45

This is more of like a oh my god, like I

2:47

know that there's other optimizations

2:48

that I can do, but honestly the

2:50

performance was so bad that I had to do

2:52

something immediately. So what we ended

2:53

up doing was I upgraded to the XL

2:56

instance. Originally, I believe I was on

2:58

the micro instance, just like straight

2:59

up the starter package for 20 bucks a

3:01

month. And clearly it was not meant to

3:03

scale to handle like 10,000 users. So as

3:06

a first line of defense, the first thing

3:07

that we did was increase the compute

3:09

instance for my Postgres database

3:11

within Supabase. And the way that I did

3:12

that, I kept upgrading, upgrading,

3:14

upgrading. Kept running into a couple

3:15

limits and performance issues here and

3:17

there. Then I ended up just going for

3:18

the XL package and that has helped

3:20

out a lot. Granted, it is freaking

3:22

expensive. This is like 200 bucks a

3:24

month. Champagne problems, you know, I

3:26

know it's like lobster's too rich. I

3:27

know it's like, oh my lobster's too

3:29

buttery. My steak is too rich. you know,

3:31

I have enough users that I was actually

3:32

forced to just pay for a bigger compute

3:33

instance, but it still hurt. 200 bucks a

3:35

month. It's crazy. Oh, yeah. And the way

3:36

that I knew that I had to increase the

3:38

size of my compute instance was I went

3:39

over here to the observability tab in

3:41

Supabase, went over to my database, and I

3:44

was looking at the CPU usage and for a

3:46

while when I was on the smaller

3:47

instances, the CPU usage was

3:49

consistently going to 100%, and like

3:51

a lot of these other metrics were just

3:53

showing that it was just getting

3:54

incredibly overworked and the database

3:56

was just being overworked and overloaded

3:58

with too much usage. So I knew that okay

3:59

I know that this is a first line of

4:00

defense. I know that I can do other

4:02

things to improve the performance but

4:03

this is something I know that for sure I

4:05

probably will have to do no matter what.

4:06

First thing: just upgraded my Supabase

4:08

instance into a larger instance.

4:10

Expensive, but it works. But that didn't fix

4:11

the core issue of the problem which was

4:13

just a really inefficient, janky,

4:15

hackily written app. So then that's when I

4:17

started to look into like what queries

4:19

can I start optimizing. Luckily,

4:22

Supabase also has this query

4:24

performance tab. So, as you can see

4:25

here, you go into the observability, you

4:26

go to query performance, and from here

4:28

you can see what your biggest and

4:30

slowest queries are. So, you can

4:32

see here I have these queries within my

4:35

teams table. Very, very expensive.

4:37

We're seeing like a mean time of 8

4:39

seconds per query. Oh my god, 2 seconds

4:41

per query. That's really, really bad.

4:42

So, from here, what I actually ended

4:44

up doing was within Google Chrome

4:47

specifically, let me switch over to

4:48

there. Yes, I do use Safari as my main

4:50

personal browser. And then I only use

4:51

Google Chrome for like work browsing

4:53

when I need to. Like I know that I'm not

4:55

an expert in terms of query

4:57

optimizations or even reading through

4:58

the Supabase dashboard. So what I ended

5:00

up doing was I started to use AI. I did

5:02

this both with Claude as well as the

5:04

built-in Gemini within Google

5:06

Chrome. So Google Chrome has support for

5:07

these AI extensions that lets AI

5:09

actually navigate throughout the website

5:11

for you and click around and just do

5:13

tasks on your behalf. I ended up telling

5:15

both Claude as well as Gemini the problem

5:17

that I'm facing with my Supabase

5:18

instance, how slow it was, the

5:20

performance issues I was running into. I

5:21

would tell it to go navigate throughout

5:23

the entire Supabase website and see

5:25

what performance optimizations that I

5:26

can do. And then from there, this

5:28

computer use tool that Claude and Gemini

5:30

have within Chrome. They are able to

5:32

screenshot the web page and see where to

5:34

click on and where to navigate to and

5:35

then read whatever is on that page. So

5:37

when I did this, each task took like 20

5:40

to 30 minutes. It was a really long

5:41

running task, not fast at all. But what

5:43

they ended up doing was just going

5:44

throughout my entire Supabase

5:46

dashboard, finding some problematic

5:47

queries and other optimizations that I

5:49

can do. And one thing they did recommend

5:50

was increasing my compute size. And it

5:52

was actually because of AI that they

5:54

took me to this observability tab that I

5:55

actually didn't know existed within

5:57

Supabase where I could start diagnosing

5:58

some of the problematic queries that I

6:00

had. Then I also told them to tell me

6:02

what all the problematic queries were

6:03

and how I can improve them. So this is a

6:05

really great use case of like

6:06

knowing that I'm not an expert but I

6:08

know AI can probably become a better

6:10

expert than I am and delegating this

6:12

whole research process to AI to use the

6:15

browser, browse for me and find all the

6:17

things within my Supabase instance that

6:18

I can improve. So then from there I was

6:20

able to get a list of problematic

6:21

queries just from Supabase and AI to

6:24

like find all of those for me. But then

6:26

what I also did was actually use my app

6:28

myself like browse around like I would

6:30

load the viral content database. I

6:31

realized this was like really slow to

6:33

load at times. I would go into this your

6:35

library tab, you know, like within here,

6:36

you can spy on different accounts and

6:38

get alerted whenever they make like, you

6:40

know, high performing content. So, you

6:41

get alerted like one of your competitors

6:42

makes good content. We have a personal

6:44

library tab that lets you upload

6:46

individual pieces of content yourself

6:48

that you can like remix if it doesn't

6:49

exist in our viral content database,

6:51

liked posts and collections. And I

6:53

realized during these tab navigations,

6:55

it was really, really slow. So

6:56

essentially what I would do, I would

6:58

navigate throughout my app, see where

7:00

some of the biggest latency was,

7:01

and I would write it down into a scratch

7:03

pad. Then from there, I had a general

7:05

premise from both the AI researching my

7:07

Supabase dashboard, as well as my

7:08

personal usage of Yorby to see where the

7:11

biggest latency existed. And then from

7:13

there, I would go into my code editor.

7:15

In this case, I've been testing out Zed,

7:17

you know, might do a little review on

7:18

this editor in a bit. So from

7:20

here, I would boot up Claude Code.

7:22

Then as an MCP I added the Supabase MCP

7:26

for my project. The same project that AI

7:28

earlier was navigating and exploring,

7:30

right? And then from there, within Claude

7:31

Code, I would tag all of the really

7:34

problematic pages like this viral

7:36

content database page was really slow. I

7:38

would tag all of the pages one by one

7:40

and then tell Claude Code to do an

7:42

investigation on every single page that

7:44

was really slow. See what queries that

7:45

they were doing, what tables they were

7:47

hitting, what keys that they were

7:48

researching and fetching and filtering

7:49

on. And then I would tell them to use

7:51

the Supabase MCP to analyze those exact

7:54

tables and see what the current database

7:56

table schema is. What indexes are

7:58

available on these tables? And then

7:59

based off of the queries that are

8:01

actively being done on those components

8:03

in my code, what additional database

8:05

optimizations can I add? Particularly

8:06

what indexes can I add to my tables

8:09

within Superbase to improve the

8:10

performance. And that was a huge unlock.
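To make that concrete: the fixes this process produces are ordinary Postgres DDL. As a hedged sketch (the table and column names below are hypothetical, not taken from the actual Yorby schema), an index matching a hot query's filter and sort columns looks like this:

```sql
-- Hypothetical example: speed up "fetch a team's content, newest first".
-- A composite index matching the WHERE + ORDER BY lets Postgres avoid a full scan.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_content_team_created
  ON content (team_id, created_at DESC);

-- Hypothetical example: fast slug -> team lookups for the team-scoped routes.
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS idx_teams_slug
  ON teams (slug);
```

`CONCURRENTLY` avoids locking the table against writes while the index builds, which matters on a live database (note it can't run inside a transaction, so migration tooling may need it split out).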

8:13

It basically was able to pinpoint the

8:14

exact indices that I did not create on

8:17

my first go making my application. And

8:19

that was a huge unlock on my behalf

8:21

because you know when I was originally

8:22

creating the first version of this app,

8:24

I wasn't thinking about scale. I wasn't

8:25

thinking about any of that. I was

8:27

thinking about just getting the app out

8:28

as fast as possible to get signal from

8:30

users. Do users want to buy this? Do

8:32

people actually want this? And as a

8:33

result, I forgot to add a ton of

8:35

different indexes into my

8:37

table to lead to faster database

8:39

performance and faster queries. But then

8:41

by being able to use Claude Code to look

8:43

at the exact tables, the exact fields

8:45

that I was querying against and then using

8:46

Claude Code to use the Supabase MCP to

8:48

look at the current structure of my

8:50

tables, it was able to write a ton of

8:52

additional indexes that I was missing

8:53

that I forgot to add in the first

8:55

implementation. And that was a huge

8:57

performance improvement because everyone

8:58

knows with any type of SQL tables and

9:00

SQL databases, one of the fastest ways

9:02

to improve database performance is by

9:04

adding good indexes and good table

9:06

structures. That was a huge unlock. Once

9:08

again, leveraging AI vastly improved the

9:10

performance of my application. Now,

9:12

after adding all of the missing indexes

9:14

to my table, the next biggest, arguably

9:16

the biggest optimization that I did was

9:17

just better caching throughout my

9:19

application. And a big component of this

9:21

was offloading really heavy read

9:23

operations off of Postgres onto Redis

9:26

instead. Now, personally for me, I use

9:28

Upstash. It is a serverless Redis

9:30

platform. I've used them for a lot of

9:32

other parts of my app. If you look at

9:33

other videos on my channel, you'll see

9:34

that I talk a lot about how I use

9:36

Upstash workflows for huge portions

9:38

of my app. It basically lets you do

9:40

longer, more complex operations that

9:42

would normally time out on serverless

9:44

functions, but you can do like 20 to 30

9:46

minute operations on serverless

9:47

functions. Really, really great product.

9:48

I love using Upstash for workflows.

9:50

Then I also know that they originally

9:52

started off being a serverless Redis

9:53

provider. And then if you go over to my

9:55

Upstash provider, you'll see that I have

9:58

like tons of reads, 91,000 reads versus

10:01

4,000 writes. And that is just a

10:03

testament to better caching. And the big

10:05

optimization that I did here is when you

10:06

go into my application, you can see

10:08

everything is scoped to a specific team.

10:10

In this case, the Yorby team, right? And if

10:12

you look at the URL, you can see that

10:14

this is the team slug. And every single

10:16

thing that comes after this is that

10:18

specific product, right? So this is the

10:20

content studio page, but then I go to

10:22

the viral content database page that is

10:24

also scoped to that particular team of

10:26

Yorby. So every single one of these

10:28

pages basically the main bulk part of

10:30

the app is all scoped to this team's

10:33

path. And as you saw earlier you can see

10:35

that the teams Supabase

10:37

Postgres query was really slow. You can

10:39

consistently see that of all the really

10:41

problematic queries that took a really

10:42

long time they were all reading from the

10:44

teams table. And the reason for that is

10:46

if you go into my code within the

10:49

teams like page, the main entry point

10:51

throughout my app, I littered my app

10:53

like crazy with making sure that the

10:54

user is in that correct team and only if

10:57

they are in that designated team, they

10:58

can perform a certain database

11:00

operation. Obviously, this is really

11:01

important for security purposes because

11:03

you don't want somebody being able to

11:05

make a request on a team that they are

11:06

not a part of. So, we couldn't rip this

11:08

logic out of my app. The

11:10

workaround that I did was instead of

11:11

reading from Postgres every single time

11:13

for checking like what is a team slug

11:16

given an ID or what is a team ID given a

11:18

slug and who are the members of our

11:20

particular team instead of reading from

11:21

Postgres every single time, I use Redis

11:24

instead. I cache all the team info into

11:26

Redis because I know that for most

11:28

users the team instance does not change

11:30

much. It's very stale, very read-heavy,

11:32

and very write-light. Very few updates

11:35

are ever being made. At least right now

11:36

not a lot of teams or team members are

11:38

being added. But even later on if a user

11:39

does delete a team member, add a team

11:41

member. It's not that much writing being

11:43

done and a lot more reading being done.

11:44

So we didn't have to waste Postgres

11:46

resources to perform this read operation

11:48

and instead I cache it all within

11:50

Reddit. And not only that, I also cache

11:52

it using React's cache function. I

11:54

believe this is a new thing that they

11:56

added to React recently in their really

11:57

big like server component push. And this

11:59

cache function kind of does exactly what

12:01

you think it does. It caches the results

12:02

of whatever database operation that

12:04

you're doing. And especially since when

12:06

users are using our app, they are pretty

12:07

much always within this scope of their

12:11

team. Now what we're doing is every

12:13

single time a user just logs in and uses

12:15

the app to a specific team. We fetch all

12:17

the information about that team whether

12:18

sometimes if it's a cache miss it'll read

12:20

from Postgres. If it's a cache hit,

12:22

it'll read just directly from Redis and

12:24

we fetch that data. We cache it, and

12:26

now whenever users go on any other

12:28

page on our app, it is so much faster

12:30

because we cache that team information

12:32

not only within Redis but also within

12:34

the React cache as well. And that team

12:37

information like you saw earlier in my

12:38

Supabase query performance is one of

12:40

the most problematic like slow

12:42

performing queries and slow performing

12:44

tables that I have in my app. That was a

12:46

really really big unlock for this much

12:48

faster, just quicker, more fluid

12:50

performance, better caching all around,

12:52

and especially when you're using a

12:53

framework like Next.js, I think

12:55

caching is incredibly important and

12:57

honestly I know that I am not an expert

12:58

on how exactly the Next.js cache works.

13:01

I think it can get a little esoteric

13:02

sometimes and that's something I want to

13:04

read up a little bit more on just to

13:06

have a better performing Next.js app.

13:08

But these are some other workarounds

13:09

that I did to improve the caching in my

13:10

app which led to significantly better

13:13

performance within the app. And I think

13:14

this is just one of the testaments to

13:15

when you're building out your app you

13:17

need to use the right tool and the right

13:18

database that is best for you. And in

13:20

this case, I knew that the team's table

13:22

doesn't change much and I know it's a

13:23

really read-heavy and very

13:25

write-light. And in that case, a key-value

13:27

store like Redis is a really great

13:29

solution for that type of operation. So

13:30

those are the main performance

13:32

optimizations that I did add into my

13:34

app. But here are some things that I

13:36

considered doing but just didn't quite

13:37

pull the trigger on yet. And

13:39

throughout the process of my app

13:40

literally 100xing overnight and just

13:42

getting way more users, we have seen a

13:44

lot of our systems kind of be a little

13:45

bit more broken. For example, in this

13:47

viral content database, this content

13:49

looks good so far. But every now

13:51

and then, whenever we try to scrape a

13:53

certain piece of viral content, the

13:54

scraping fails and the user is not able

13:56

to actually view the video that is

13:58

presented in that piece of content. And

14:00

then, because I'm a solo developer,

14:01

literally building out the entire thing

14:02

by myself, I don't have time to manually

14:04

go in and like find the individual piece

14:06

of content and fix it myself and

14:08

re-trigger a rescraping job. So,

14:10

instead, I actually created an

14:11

automation to handle this directly

14:13

within Warp, which is the sponsor of

14:14

today's video. Warp recently just

14:16

launched their brand new cloud agent

14:18

feature which lets you run background

14:19

agents for you on a schedule or based

14:21

off of any triggers. Similarly, in the

14:23

process of our app 100xing and getting

14:25

over 10,000 users in the past month, we

14:27

have found a lot of users coming from

14:29

different countries and we want to make

14:30

sure that the website is localized into

14:31

their native language and we added

14:33

localization support. And I personally

14:35

only edit the English strings because

14:37

that's my native language. But to make

14:38

sure that we update in a lot of other

14:40

locales and other languages, we have a

14:42

daily job called update localizations

14:44

that looks at all of our PRs and updates

14:46

any string changes that we made into all

14:48

the various other languages that we are

14:49

supporting within our app. Like right

14:51

now, English, Korean, and German. Warp

14:53

has been my go-to terminal since back in

14:55

2021, and now they've evolved to

14:56

becoming a full-blown like agentic

14:58

coding suite of tools. The terminal has

15:00

always been my home as a developer, and

15:02

then it transitioned into a place where

15:04

I can actually get huge amounts of

15:05

coding done with agentic coding. And now

15:07

Warp has expanded beyond that with

15:09

autonomous agents scheduled running in

15:11

the background for you to do all sorts

15:13

of different tasks for you. And you can

15:14

run it in the background based off of a

15:15

schedule like I showed you or even a

15:17

trigger based off of linear, Slack, web

15:19

hook, you name it. For example, if you

15:21

look at their native integrations,

15:22

there's Slack, Linear, GitHub actions.

15:24

So you can kick off any type of workflow

15:26

from within any of your favorite

15:28

workplace tools like the ones listed

15:29

here. Thanks again to Warp for

15:31

sponsoring today's video and you can

15:32

check out warp.dev to learn more. So one

15:34

thing that I consider doing is using a

15:36

read-only replica within Postgres. So

15:39

Postgres, and by extension Supabase, has

15:41

this option as an add-on where you can

15:43

have a read-only instance of your

15:45

Postgres database. No write operations, and

15:47

the whole benefit of this is that it

15:48

offloads writing to be all done on your

15:51

main instance, and this read-only

15:53

replica, because users are strictly only

15:55

using it for reading, has less usage

15:56

because there's less writing being done.
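A hedged sketch of what that read/write split would look like in application code, with stub clients standing in for real connections (the video never actually shipped this):

```typescript
// Sketch of read/write splitting with a read replica. Two stub "clients"
// stand in for connections to the primary and the read-only replica.
type Row = Record<string, unknown>;

function makeClient(name: string) {
  return {
    name,
    queries: [] as string[],
    async query(sql: string): Promise<Row[]> {
      this.queries.push(sql); // record where each statement was routed
      return [];
    },
  };
}

const primary = makeClient("primary"); // handles all writes
const replica = makeClient("replica"); // serves read-only traffic

// Route by statement type: SELECTs go to the replica; everything else
// (INSERT/UPDATE/DELETE/DDL) must go to the primary.
async function run(sql: string): Promise<Row[]> {
  const isRead = /^\s*select\b/i.test(sql);
  return (isRead ? replica : primary).query(sql);
}
```

One caveat with this pattern: replication lag means a read issued right after a write may not see it yet, so read-your-own-writes paths typically pin to the primary.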

15:58

So there's just more compute available

16:00

to have the read operations being done

16:02

on this Postgres instance. I was

16:03

considering this, but honestly the

16:05

biggest turnoff for me was the fact that

16:06

getting a readonly replica within

16:08

Supabase is double the cost of what

16:10

your original database is. So since I'm

16:12

already on a $200 a month plan on the

16:15

base instance, I would have to pay another

16:16

$200 a month. So $400 a month in total

16:19

just to have a read-only replica of my

16:21

database. I'm not opposed to doing it if

16:22

it was really a big performance boost.

16:24

But honestly migrating a lot of read

16:27

operations onto better caching with

16:28

Redis improved the

16:30

performance so significantly that right

16:32

now I'm not even tempted to have a

16:33

read-only replica. But maybe in the

16:35

future if we have a lot more write

16:37

operations being done if it becomes

16:39

super, super write-heavy, probably better

16:41

to have a separate database

16:43

instance only to handle the write

16:44

operations and then have this read-only

16:46

replica. And the last diabolical

16:48

performance optimization I thought about

16:50

was migrating my app off of Next.js to

16:52

TanStack. Such a dumb thing to think to

16:54

myself because like obviously I spend

16:56

way too much time on dev social media

16:58

probably just like you do and everyone's

17:00

talking about how TanStack Start is so

17:01

much better, and there's a lot of Next.js

17:03

hate out there which honestly I get

17:04

Next.js does annoy me, and if I were to

17:06

rewrite an app from scratch, I probably

17:08

would use TanStack Start. But then

17:10

because of that obviously when my app

17:11

was becoming really slow the first thing

17:12

I thought was, it's because of Next.js, it's

17:14

not because of my tables it's because of

17:15

Next.js, I got to rewrite everything to

17:16

TanStack, that's going to make my app so

17:18

much faster when really the root cause

17:20

was my Postgres tables, and that is

17:21

where a lot of the slowdown came from.

17:23

Now, that's not to say a TanStack

17:24

rewrite is off the table. It's just not

17:26

immediately on the table. Obviously, I

17:28

have a lot of opinions about the whole

17:29

Next.js versus TanStack thing, but my

17:31

general, like, really high-level overview

17:33

thoughts on this is that Next.js is a

17:35

framework that's really great if you have a

17:37

lot of static pages. I think it's great

17:38

for static pages, but if you have a

17:40

really highly dynamic, interactive app or a

17:43

page, I think you're better off just

17:44

using pure React TanStack with Vite.

17:47

You know, it's just so much faster than

17:48

Turbopack and all of that. That's just my

17:50

take, but I'm not at that point right

17:52

now where I can just rewrite my entire

17:54

app off of Next.js into TanStack quite

17:56

yet. Not off the table. If anyone's

17:57

interested in actually doing

17:59

that migration for me, let me know. I

18:01

could be interested. Reach out. Leave a

18:02

comment and let me know and I might

18:03

reach out to you. I got to get off

18:05

social media. I got to get off coding

18:06

and developer social media. It's so

18:07

toxic. Anyways, that is all I have for

18:09

today's video. That is all of the

18:11

optimizations that I did to my app,

18:12

Yorby, after it 100xed in size in 30

18:15

days. It's been a crazy past 30 days.

18:18

Super fun, don't get me wrong, but

18:19

really crazy. And if you like this type

18:20

of content where I talk about my

18:22

learnings and what I'm doing, actually

18:23

building my startup, Yorby, make sure to

18:25

follow this channel. Make sure to

18:27

subscribe to the channel so you can get

18:28

an alert of everything that I'm building

18:29

and all the updates, the highs and the

18:31

lows, and everything that comes with

18:32

building an app. That's all I got for

18:34

today. Thanks for watching and I'll see

18:35

you in the next one. Peace.

Interactive Summary

The creator's app, Yorby, a social media marketing platform, experienced explosive growth, going from zero to over 10,000 users and $0 to $4,000 in revenue within 30 days. This rapid scaling exposed severe performance issues, as the app was not initially built for such high usage. To address this, the creator implemented several key optimizations: upgrading the Supabase compute instance, leveraging AI tools like Claude and Gemini to identify and optimize slow database queries and add missing indexes, and significantly improving caching by offloading heavy read operations to Redis and using React's cache function for team-scoped data. The creator also considered, but ultimately decided against, a read-only PostgreSQL replica due to cost, and a full rewrite from Next.js to TanStack, realizing the root cause of the performance bottlenecks was primarily database-related.
