How I Scaled My NextJS + Supabase App To Handle 10,000 Users
In the past 30 days, my app went from
having zero users to over 10,000 users.
Literally over 100xing the number of
users on our app in 30 days. And we also
went from $0 of revenue to $4,000 of
revenue in a month. And while this is
great, one big problem came up, and it's
the fact that my app was not originally
built to scale and handle that many
users actively signing up and
testing out the app. And in this video,
I'm going to be going over the main
performance optimizations that I did to
make my app run faster and scale from
zero to 10,000 users. And then later on
in the video, I'm also going to go over
some of the things I thought about doing
but haven't quite taken the leap to do
yet. And before I go into talking about
the performance optimizations that I
actually did, I know that some haters
are going to be like, "Bro, this is
fake. Show us the proof. Show us the
sauce, bro." So, I'm going to show you
the proof of what exactly I built and
show you the real numbers that went into
it and how much I scaled. And then
afterwards, we're going to go into
talking about how exactly I improved the
performance of the app. The app that I'm
building is called Yorby. And it's a
social media marketing platform that has
two main goals right now. The first is to help you find inspiration from viral marketing content, and the second is to help you create viral marketing content to market your business faster. So, we're trying to make marketing content discovery and creation a lot easier. And the
main premise of how it all works is that
number one, we have this viral content
database, which is a whole database of
different types of viral formats that other businesses have actually used to market themselves on social media, and what's worked. And then from there,
you can actually remix these videos
within our content studio. For example,
let's say we see this video and when you
click on the content studio button, you're then taken into our content studio right here. And from
here, you can remix this content to fit
whatever niche that your business is in.
So, obviously, this original video looks
like some type of health and wellness
brand. But let's say I want to remake
this same exact video for a dating app, like a fictional dating app right here. Then you can just send this chat
message and then we use our AI that will
then transform this entire script to fit
your niche, your brand, whatever you're
trying to market while still maintaining
that same original viral format and
viral soul that that video had. So
that's the main premise of our app. And
in terms of the growth right here, you
can see that we went from, let's see, January 1st, literally nothing on January 1st, and then we've
steadily grown and right now we have
over 10,000 signed up users. And then in
terms of revenue, if you look at the
gross volume, you can see that in the
past four weeks, in the past month, we
have made over $4,000 of revenue in the
past month. And before that, there was literally nothing going on. If you go over here, I'll do the whole, uh, let's see, all time. And you
can see right here, basically nothing
happened. And then in December, things
start to pick up. We started getting our
first couple of users, and things just
started going crazy afterwards. So
there's the proof. I'm not joking. This
is really real. I'm not making stuff up
for the internet. So now let's get into
the actual performance optimizations
that I did. All right, so right now we
are in the Supabase dashboard for my project, Yorby. The first big performance optimization that I did was simply upgrading the Supabase instance.
This is more of like an "oh my god" fix. Like, I
know that there's other optimizations
that I can do, but honestly the
performance was so bad that I had to do
something immediately. So what I ended up doing was upgrading to the XL
instance. Originally, I believe I was on
the micro instance, just like straight
up the starter package for 20 bucks a
month. And clearly it was not meant to
scale to handle like 10,000 users. So as
a first line of defense, the first thing
that we did was increase the compute
instance for my Postgres database within Supabase. And the way that I did
that, I kept upgrading, upgrading,
upgrading. Kept running into a couple
limits and performance issues here and
there. Then I ended up just going for
the XL package, and that has helped
out a lot. Granted, it is freaking
expensive. This is like 200 bucks a
month. Champagne problems, you know. I know, it's like, "my lobster's too buttery, my steak is too rich." You know,
I have enough users that I was actually
forced to just pay for a bigger compute
instance, but still hurt. 200 bucks a
month. It's crazy. Oh, yeah. And the way
that I knew that I had to increase the
size of my compute instance was I went
over here to the observability tab in Supabase, went over to my database, and I
was looking at the CPU usage and for a
while when I was on the smaller
instances, the CPU usage was consistently pegged at 100%, and a lot of these other metrics were showing that the database was just being overworked and overloaded with too much usage. So I knew that, okay,
I know that this is a first line of
defense. I know that I can do other
things to improve the performance but
this is something I know that for sure I
probably will have to do no matter what.
First thing: just upgraded my Supabase instance into a larger instance. Expensive, but it works. But that didn't fix the core issue, which was just a really inefficient, janky, hackily written app. So then that's when I
started to look into like what queries
can I start optimizing. Luckily, Supabase also has this query performance tab. So, as you can see
here, you go into the observability, you
go to query performance, and from here
you can see what your slowest queries are. So, you can
see here, I have these queries on my teams table. Very, very expensive.
We're seeing like a mean time of 8
seconds per query. Oh my god, 2 seconds
per query. That's really, really bad.
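For context, Supabase's query performance tab is built on top of Postgres's pg_stat_statements extension, which is enabled by default on Supabase, so you can run roughly the same lookup yourself in the SQL editor. A sketch:

```sql
-- Top 10 queries by average execution time
-- (roughly what the query performance tab surfaces)
select query, calls, mean_exec_time, total_exec_time
from pg_stat_statements
order by mean_exec_time desc
limit 10;
```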
So, from here, what I actually ended up doing was within Google Chrome
specifically, let me switch over to
there. Yes, I do use Safari as my main
personal browser. And then I only use
Google Chrome for like work browsing
when I need to. Like I know that I'm not
an expert in terms of query
optimizations or even reading through the Supabase dashboard. So what I ended
up doing was I started to use AI. I did
this both with Claude as well as the built-in Gemini within Google
Chrome. So Google Chrome has support for
these AI extensions that lets AI
actually navigate throughout the website
for you and click around and just do
tasks on your behalf. I ended up telling both Claude as well as Gemini the problem that I was facing with my Supabase instance, how slow it was, and the performance issues I was running into. I would tell it to go navigate throughout the entire Supabase dashboard and see what performance optimizations I could do. And then from there, with this computer use tool that Claude and Gemini have within Chrome, they are able to screenshot the web page, see where to click and where to navigate to, and then read whatever is on that page. So
when I did this, each task took like 20
to 30 minutes. It was a really long
running task, not fast at all. But what
they ended up doing was just going
throughout my entire Superbase
dashboard, finding some problematic
queries and other optimizations that I
can do. And one thing they did recommend
was increasing my compute size. And it
was actually because of AI that they
took me to this observability tab, which I actually didn't know existed within Supabase, where I could start diagnosing some of the problematic queries that I
had. Then I also told them to tell me
what all the problematic queries were
and how I can improve them. So this is a really great use case of knowing that I'm not an expert, but knowing that AI can probably become a better expert than I am, and delegating this whole research process to AI: use the browser, browse for me, and find all the things within my Supabase instance that I can improve. So then from there, I was able to get a list of problematic queries just from Supabase and AI finding all of those for me. But then
what I also did was actually use my app myself and browse around. Like, I would load the viral content database. I
realized this was like really slow to
load at times. I would go into the library tab, you know. Within here, you can spy on different accounts and get alerted whenever they make high-performing content, so you get alerted when one of your competitors makes good content. We have a personal library tab that lets you upload individual pieces of content yourself that you can remix if it doesn't exist in our viral content database, liking posts, and collections. And I realized, during these tab navigations,
it was really, really slow. So
essentially what I would do, I would
navigate throughout my app, see where
some of the biggest latency there was,
and I would write it down into a scratch
pad. Then from there, I had a general picture from both the AI researching my Supabase dashboard, as well as my
personal usage of Yorby to see where the
biggest latency existed. And then from
there, I would go into my code editor.
In this case, I've been testing out Zed; you know, I might do a little review on this editor in a bit. So from here, I would boot up Claude Code. Then, as an MCP, I added the Supabase MCP
for my project. The same project that AI
earlier was navigating and exploring,
right? And then from there within cloud
code I would tag all of the really
problematic pages like this viral
content database page was really slow. I
would tag all of the pages one by one
and then tell Claude Code to do an investigation on every single page that was really slow: see what queries they were doing, what tables they were hitting, what keys they were fetching and filtering on. And then I would tell it to use the Supabase MCP, analyze those exact tables, and see what the current database table schema is. What indexes are available on these tables? And then, based off of the queries that are actively being done on those components in my code, what additional database optimizations can I add? Particularly, what indexes can I add to my tables within Supabase to improve the performance? And that was a huge unlock.
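For illustration, the migrations that come out of a process like this look something like the following. The exact column and table names here are hypothetical, inferred from the team-scoped queries described in this video:

```sql
-- Index the slug column that team-scoped pages filter on
create index if not exists idx_teams_slug on teams (slug);

-- Composite index for membership checks (hypothetical team_members table,
-- looked up by team_id + user_id)
create index if not exists idx_team_members_team_user
  on team_members (team_id, user_id);

-- Confirm the planner actually uses the new index
explain analyze select * from teams where slug = 'yorby';
```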
It basically was able to pinpoint the
exact indices that I did not create on
my first go at making my application. And that was huge on my behalf, because, you know, when I was originally creating the first version of this app, I wasn't thinking about scale. I wasn't thinking about any of that. I was
thinking about just getting the app out
as fast as possible to get signal from
users. Do users want to buy this? Do
people actually want this? And as a
result, I forgot to add a ton of indexes to my tables that would lead to faster database performance and faster queries. But then, by being able to use Claude Code to look at the exact tables and the exact fields that I was querying, and then using Claude Code to use the Supabase MCP to
look at the current structure of my
tables, it was able to write a ton of
additional indexes that I was missing
that I forgot to add in the first
implementation. And that was a huge
performance improvement because everyone
knows with any type of SQL tables and
SQL databases, one of the fastest ways
to improve database performance is by
adding good indexes and good table
structures. That was a huge unlock. Once
again, leveraging AI vastly improved the
performance of my application. Now,
after adding all of the missing indexes
to my table, the next biggest, arguably
the biggest optimization that I did was
just better caching throughout my
application. And a big component of this
was offloading really heavy read
operations off of Postgres onto Redis instead. Now, personally, I use Upstash. It is a serverless Redis
platform. I've used them for a lot of
other parts of my app. If you look at
other videos on my channel, you'll see
that I talk a lot about how I use
Upstash Workflows for huge portions
of my app. It basically lets you do
longer, more complex operations that
would normally time out on serverless
functions, but you can do like 20- to 30-
minute operations on serverless
functions. Really, really great product.
I love using Upstash Workflows. Then, I also know that they originally started off as a serverless Redis provider. And if you go over to my Upstash dashboard, you'll see that I have
like tons of reads, 91,000 reads versus
4,000 writes. And that is just a
testament to better caching. And the big
optimization that I did here is when you
go into my application, you can see
everything is scoped to a specific team.
In this case, the Yorby team, right? And if
you look at the URL, you can see that
this is the team slug. And every single
thing that comes after this is that
specific product, right? So this is the
content studio page, but then I go to
the viral content database page that is
also scoped to that particular team of
Yorby. So every single one of these
pages basically the main bulk part of
the app is all scoped to this team's
path. And as you saw earlier, the teams Postgres query within Supabase was really slow. You can consistently see that all of the really problematic queries that took a really long time were reading from the teams table. And the reason for that is, if you go into my code within the
teams page, the main entry point throughout my app, I littered my app like crazy with checks making sure that the user is in the correct team, and only if
they are in that designated team, they
can perform a certain database
operation. Obviously, this is really
important for security purposes because
you don't want somebody being able to
make a request on a team that they are
not a part of. So, we couldn't rip this
logic out of my app. So instead, the workaround that I did was: instead of reading from Postgres every single time to check what a team's slug is given an ID, or what a team's ID is given a slug, and who the members of a particular team are, I use Redis instead. I cache all the team info into Redis, because I know that for most users the team instance does not change much. It's very stale, very read-heavy, and very write-light. Very few updates
are ever being made. At least right now
not a lot of teams or team members are
being added. But even later on, if a user does delete a team member or add a team member, it's not that much writing being done, and a lot more reading being done.
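A minimal sketch of this cache-aside pattern, with an in-memory Map standing in for Redis (the real app would use an Upstash Redis client with a TTL; the team shape here is hypothetical):

```typescript
// Cache-aside lookup for team info: read-heavy, write-light data.
type Team = { id: string; slug: string; memberIds: string[] };
type CacheEntry = { value: Team; expiresAt: number };

const teamCache = new Map<string, CacheEntry>();
const TTL_MS = 60_000; // team info rarely changes, so a longish TTL is safe

let dbReads = 0; // counts simulated Postgres reads so cache hits are visible

// Stand-in for the slow Postgres query against the teams table.
async function fetchTeamFromDb(slug: string): Promise<Team> {
  dbReads++;
  return { id: `team-${slug}`, slug, memberIds: ["user-1"] };
}

async function getTeam(slug: string): Promise<Team> {
  const hit = teamCache.get(slug);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // cache hit: Postgres is never touched
  }
  const team = await fetchTeamFromDb(slug); // cache miss: read once, then cache
  teamCache.set(slug, { value: team, expiresAt: Date.now() + TTL_MS });
  return team;
}

// On the rare writes (adding or removing a team member), evict the stale entry.
function invalidateTeam(slug: string): void {
  teamCache.delete(slug);
}
```

With Redis, `teamCache.get`/`teamCache.set` become `redis.get(key)` and `redis.set(key, value, { ex: ttlSeconds })`, but the miss-then-fill shape is the same.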
So we don't have to waste Postgres resources to perform this read operation; instead, I cache it all within Redis. And not only that, I also cache
it using React's cache function. I
believe this is a new thing that they
added to React recently in their really
big like server component push. And this
cache function kind of does exactly what
you think it does. It caches the results
of whatever database operation that
you're doing. And especially since when
users are using our app, they are pretty
much always within this scope of their
team. Now, what we're doing is: every single time a user logs in and uses the app within a specific team, we fetch all the information about that team. If it's a cache miss, it'll read from Postgres; if it's a cache hit, it'll read directly from Redis. We fetch that data, we cache it, and now, whenever users go on any other page in our app, it is so much faster, because we cache that team information not only within Redis but also within the React cache as well. And that team
information, like you saw earlier in my Supabase query performance tab, is one of the most problematic, slowest-performing tables that I have in my app. So that was a really, really big unlock for much faster, quicker, more fluid performance: better caching all around. And especially when you're using a framework like Next.js, I think caching is incredibly important. And honestly, I know that I am not an expert on how exactly the Next.js cache works.
I think it can get a little esoteric
sometimes and that's something I want to
read up a little bit more on, just to have a better-performing Next.js app.
But these are some other workarounds
that I did to improve the caching in my
app which led to significantly better
performance within the app. And I think
this is just one of the testaments to
when you're building out your app you
need to use the right tool and the right
database that is best for you. And in
this case, I knew that the teams table doesn't change much, and I know it's really read-heavy and very write-light. And in that case, a key-value store like Redis is a really great solution for that type of workload. So
those are the main performance
optimizations that I did add into my
app. But here are some things that I
considered doing but just haven't quite pulled the trigger on yet. And
throughout the process of my app
literally 100xing overnight and just
getting way more users, we have seen a
lot of our systems kind of be a little
bit more broken. For example, in this
viral content database, this content
looks good so far, but every now
and then, whenever we try to scrape a
certain piece of viral content, the
scraping fails and the user is not able
to actually view the video that is
presented in that piece of content. And
then, because I'm a solo developer,
literally building out the entire thing
by myself, I don't have time to manually
go in and like find the individual piece
of content and fix it myself and
re-trigger a rescraping job. So,
instead, I actually created an
automation to handle this directly
within Warp, which is the sponsor of
today's video. Warp recently just
launched their brand new cloud agent
feature which lets you run background
agents for you on a schedule or based
off of any triggers. Similarly, in the
process of our app 100xing and getting
over 10,000 users in the past month, we
have found a lot of users coming from
different countries and we want to make
sure that the website is localized into
their native language and we added
localization support. And I personally
only edit the English strings because
that's my native language. But to make
sure that we update a lot of other locales and languages, we have a
daily job called update localizations
that looks at all of our PRs and updates
any string changes that we made into all
the various other languages that we are
supporting within our app. Like right
now, English, Korean, and German. Warp
has been my go-to terminal since back in
2021, and now they've evolved into a full-blown agentic coding suite of tools. The terminal has
always been my home as a developer, and
then it transitioned into a place where
I can actually get huge amounts of
coding done with agentic coding. And now
Warp has expanded beyond that with autonomous agents running in the background to do all sorts of different tasks for you. And you can run them based off of a schedule, like I showed you, or even a trigger based off of Linear, Slack, a webhook, you name it. For example, if you
look at their native integrations,
there's Slack, Linear, GitHub actions.
So you can kick off any type of workflow
from within any of your favorite
workplace tools like the ones listed
here. Thanks again to Warp for
sponsoring today's video and you can
check out warp.dev to learn more. So one
thing that I consider doing is using a
read-only replica within Postgres. Supabase has this option as an add-on where you can have a read-only instance of your Postgres database with no write operations. The whole benefit of this is that it offloads all the writing to be done on your main instance, and the read-only replica, because users are strictly only using it for reading, has less load because there's less writing being done.
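A hypothetical sketch of what read/write splitting looks like: SELECTs get routed to the read-only replica, everything else to the primary. The client shape here is a stub; in practice these would be two Postgres connection pools, with the replica pool pointed at the read replica's connection string.

```typescript
// A query runner is anything that takes SQL and returns a result.
type QueryRunner = (sql: string) => string;

interface DbClients {
  primary: QueryRunner; // all writes must go here
  replica: QueryRunner; // read-only traffic offloaded here
}

// Very rough read detection: statements starting with SELECT are reads.
function runQuery(clients: DbClients, sql: string): string {
  const isRead = /^\s*select\b/i.test(sql);
  return isRead ? clients.replica(sql) : clients.primary(sql);
}

// Stub clients that just label which instance handled the query.
const clients: DbClients = {
  primary: (sql) => `primary: ${sql}`,
  replica: (sql) => `replica: ${sql}`,
};
```

One caveat with this approach: replication lag means a read issued right after a write may return stale data, so read-your-own-writes paths should still hit the primary.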
So there's just more compute available
to have the read operations done on this Postgres instance. I was
considering this, but honestly the
biggest turnoff for me was the fact that
getting a read-only replica within Supabase doubles the cost of your original database. So since I'm already on a $200-a-month plan for the base instance, I would have to pay another $200 a month, so $400 a month in total, just to have a read-only replica of my
database. I'm not opposed to doing it if
it was really a big performance boost.
But honestly, migrating a lot of read operations onto better caching with Redis improved the performance so significantly that right now I'm not even tempted to add a read-only replica. But maybe in the future, if we have a lot more write operations being done, if it becomes super write-heavy, it's probably better to have a separate database instance only to handle the write operations and then have this read-only replica. And the last diabolical
performance optimization I thought about
was migrating my app off of Next.js to TanStack. Such a dumb thing to think to myself, because, like, obviously I spend way too much time on dev social media, probably just like you do, and everyone's talking about how TanStack Start is so much better, and there's a lot of Next.js hate out there. Which, honestly, I get; Next.js does annoy me, and if I were to rewrite an app from scratch, I probably would use TanStack Start. But because of that, obviously, when my app was becoming really slow, the first thing I thought was: it's because of Next.js. It's not because of my tables, it's because of Next.js. I've got to rewrite everything to TanStack; that's going to make my app so much faster. When really, the root cause was my Postgres tables, and that is where a lot of the slowdown came from.
Now, that's not to say a TanStack rewrite is off the table. It's just not
immediately on the table. Obviously, I
have a lot of opinions about the whole
Next.js versus TanStack thing, but my general, really high-level overview thought on this is that Next.js is a framework that's really great if you have a lot of static pages. But if you have a really dynamic, interactive app or page, I think you're better off just using pure React with TanStack and Vite. You know, Vite is just so much faster than Turbopack and all of that. That's just my
take, but I'm not at that point right
now where I can just rewrite my entire
app off of Next.js into TanStack quite yet. Not off the table. If anyone's interested in actually doing that migration for me, let me know. I
could be interested. Reach out. Leave a
comment and let me know and I might
reach out to you. I got to get off
social media. I got to get off coding
and developer social media. It's so
toxic. Anyways, that is all I have for today's video. That is all of the optimizations that I did to my app, Yorby, after it 100xed in size in 30
days. It's been a crazy past 30 days.
Super fun, don't get me wrong, but
really crazy. And if you like this type
of content where I talk about my
learnings and what I'm doing, actually
building my startup, Yorby, make sure to
follow this channel. Make sure to
subscribe to the channel so you can get
an alert of everything that I'm building
and all the updates, the highs and the
lows, and everything that comes with
building an app. That's all I got for
today. Thanks for watching and I'll see
you in the next one. Peace.
The creator's app, Yorby, a social media marketing platform, experienced explosive growth, going from zero to over 10,000 users and $0 to $4,000 in revenue within 30 days. This rapid scaling exposed severe performance issues, as the app was not initially built for such high usage. To address this, the creator implemented several key optimizations: upgrading the Supabase instance, leveraging AI tools like Claude and Gemini to identify and optimize slow database queries and add missing indexes, and significantly improving caching by offloading heavy read operations to Redis and utilizing React's cache function for team-scoped data. The creator also considered, but ultimately decided against, a read-only PostgreSQL replica due to cost, and a full rewrite from Next.js to TanStack, realizing the root cause of the performance bottlenecks was primarily database-related.