AGI: Don't Know What It Is But Build It Anyway
This year, Mr. Altman said that AGI was imminent. Now, he says that the term AGI itself is not useful.
Every few months, someone in big tech
declares that we are on the edge of
artificial general intelligence. And
every time that line gets blurred, a few
more billions get raised. I have never
talked about AGI on my channel, but
lately it keeps showing up in the
comments. So, I thought, wouldn't it be
symbolic to share my thoughts and
research on the theme of AGI? Especially
now that we're approaching the end of
the year and I'm wrapping up my series
on the business economics of AI. I will not be dissecting the technical feasibility of AGI or when or whether it'll see the light of day. That is completely outside of my line of work; I am not an AI researcher. Instead, I would like to dig into the business of pretending that we're close to AGI and the economics of this narrative. Let's dive in.
It is generally considered that there are, so to say, three tiers in the realm of artificial intelligence: ANI, AGI, and ASI. ANI stands for artificial narrow intelligence. The purpose of ANI is to perform a single, specific task extremely well, the way Claude generates text, Siri recognizes voice to a certain extent, or your phone recognizes your face. This is the only type of AI that currently exists in 2025. AGI stands for artificial general intelligence.
AGI is a theoretical term, the operative word here being theoretical. AGI would supposedly match human-level intelligence in the ability to learn, reason, or transfer knowledge, and ideally solve novel problems without having to be retrained. The closest example of AGI that you might have seen is a human-like robot in the movies. And ASI, which stands for artificial superintelligence, is a purely hypothetical concept of AI that would surpass human intelligence across all domains. ASI would hypothetically be able to solve problems and learn far faster than any human alive and have the ability to improve itself autonomously, which is essentially the technological singularity, if I may. Let's come back to the topic of this conversation, which is AGI. The problem with AGI is that very few people
can define what it actually means. Even
prominent AI researchers and scientists
cannot land on a single definition of
this thing. But ironically, there are
quite a few people in the general
public, so to say, who would gladly tell
you the definition of AGI, thinking that
they're well aware of what it is and
what it's supposed to do. Now, if you
search for the definition of AGI, the
word that you're going to come across is
going to be consciousness. Something
like AI with consciousness or conscious
AI. So, some kind of magical AI entity
with cognition. But the question that
isn't really being answered is what is
consciousness?
The paradox is that the people who do PhD-level research speak about AGI with extreme caution and without any specific timelines, and they do so exactly because the definition of AGI is extremely vague and hardly quantifiable. But those who do speak freely and confidently about AGI usually cannot properly explain what it means, or what consciousness means in the context of artificial intelligence. Nevertheless, the noise they create is loud enough to induce anxiety in the part of the population that mistakes confidence for credibility.
You guys know that I work in product, and one discipline I do regularly is competitive research. I have to look through competitor product pages, pricing tables, and LinkedIn announcements, summarize it all, put it in an email, and send it to myself and my colleagues.
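Just to make the shape of that loop concrete, here is a minimal, hypothetical sketch in Python of what one run looks like without any tooling around it. The URLs, the summarize() stub, the email addresses, and the local mail relay are all placeholder assumptions, not my actual setup and not n8n's API.

```python
# Minimal sketch of one competitive-research run: fetch pages, summarize, email.
# Every URL, address, and the summarize() stub below is a placeholder.
import smtplib
from email.message import EmailMessage

import requests

COMPETITOR_PAGES = [
    "https://example.com/pricing",   # hypothetical competitor pricing page
    "https://example.com/product",   # hypothetical competitor product page
]

def summarize(text: str) -> str:
    """Placeholder: in a real workflow this step would call an LLM API."""
    return text[:500]  # naive stand-in: keep the first 500 characters

def build_digest() -> str:
    sections = []
    for url in COMPETITOR_PAGES:
        html = requests.get(url, timeout=30).text
        sections.append(f"{url}\n{summarize(html)}")
    return "\n\n".join(sections)

def send_digest(body: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Weekly competitive digest"
    msg["From"] = "me@example.com"           # placeholder sender
    msg["To"] = "team@example.com"           # placeholder recipients
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    send_digest(build_digest())
```

In practice a tool like n8n replaces this script with nodes and handles scheduling and credentials, which is exactly where the hosting and API costs come from.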
And I automate a lot of things for my business outside of work using n8n, which basically connects APIs, apps, and models into one system that runs automations end to end. And if you automate as much as I do, your workflows get really expensive really fast, because n8n needs to be hosted and I need to pay for API credits for the multiple apps that I have nodes for. Every automation has a cost, and that's fine when it's just one workflow, but what if you need 50?
The easiest and most efficient way to do this is to self-host in the cloud via a virtual private server. And Hostinger offers one of the best self-hosted n8n plans on the market. That plan can get you up and running in literally minutes. With Hostinger, you can host unlimited workflows, unlimited parallel executions, and plenty of ready-to-use n8n templates. And all of that at a fraction of the cost and with fantastic performance, because you can choose what type of server you need depending on your needs. All you've got to do is go to Hostinger's website and choose your plan. KVM 2 is actually going to cover the vast majority of your needs.
It gives you everything you need to get
started with a small number of workflows
and the ability to scale them
indefinitely. They're also running a Black Friday sale right now, which gives you 60% off. But even without the discount, Hostinger's plan allows you to use n8n itself for free and pay only for the VPS hosting. First of all, this saves you up to four times on automation expenses. And secondly, you get your own server. Go to the check
out page, pick the server location that
is closest to you and the application
you want to install. Plug in your coupon
code to get a discount and you're good
to go. And then once you're in the app,
simply click on manage app and log into
your n8n account directly. If you're a
content creator, a founder, a business
owner, large or small, and you're
serious about scaling automations or
just want to experiment without burning
your budget, this setup gives you the
best quality for your money. You don't
have to worry about security and privacy
because everything you do stays private.
And don't forget, self-hosted n8n is often required for businesses that must follow privacy rules like GDPR or SOC 2. You can check it out at
hostinger.com/tech10
and use the code tech10 at checkout to get an extra discount on any yearly VPS plan. It's a limited-time deal, so
don't miss it. And huge thanks to
Hostinger for sponsoring this part of
the video. Here's a chart of predictions from the most prominent names in AI research, who are far less optimistic, or certain, about the timeline for AGI, and the predictions they offer are very different from the media narrative that makes you believe AGI is imminent. Take the podcast episode that I referenced in previous videos, in which Dwarkesh Patel interviewed Andrej Karpathy. Karpathy said that we are at least 10 years away from AGI, and he clearly states that even that prediction is based on pure extrapolation and, really, speculation. I covered
extrapolation bias at length in one of
the previous episodes and in my opinion
the extrapolation bias is one of the
biggest contributors to the current
bubble that is present even among
professional investors. Sam Altman's now-famous statement, made while giving a talk at Stanford, summarizes the delusion of chasing AGI whether it makes sense or not. He said that he doesn't care if they burn $500 million or $5 billion or $50 billion a year: they're making AGI, and it's totally worth it. So the way AGI is being described today is LLMs, plus multimodality, plus some kind of magic, equals AGI. So we're definitely making AGI. We're burning billions and it's totally worth it. Except we don't know what it is and what it really takes. This framing turns an idea that would typically be labeled an unsustainable delusion into a vision.
OpenAI projects burning $115 billion through 2029, despite posting a $13.5 billion loss against $4.3 billion in revenue during the first half of 2025. And the AGI narrative justifies this.
This wouldn't fly for anybody else. But
when Altman says it, it's fine. And in my
view, there is another layer to this.
When we say that we must control AI for
safety,
this argument becomes justification for
market concentration.
Because how do you apply anti-monopoly rules when critics argue that safety is being used to justify monopoly, and proponents argue that concentration is necessary for safety? The argument follows a simple structure: AGI is inevitable; therefore, whoever gets there first will capture extraordinary value; therefore, current losses don't matter. Now map
this to the problem that I will once
again remind you of. We still cannot
define what AGI is. I found the following snippet in a Fortune article: "Among the biggest factors in AGI's sudden fall from grace seems to have been the rollout of OpenAI's GPT-5 model in early August, which landed with a thud." Yes, the release of GPT-5 had mixed reviews, to say the least, and I actually was among those who did not like it at all at first. And then there is research from the Yale Law & Policy Review on an anti-monopoly approach to AI governance, which documents concentrated control across the AI stack. The semiconductor market is dominated by Nvidia with 92% market share, and in cloud computing, AWS, Azure, and Google Cloud control 63% globally. What the AGI safety narrative does is help legitimize this concentration by framing it as protective rather than extractive. This purely
hypothetical conversation about the
imminence of AGI creates superficial
urgency and the urgency benefits
fundraising. And that is why it is being
lobbied so much. For example, Ilya Sutskever's Safe Superintelligence raised $2 billion at a $12 billion valuation without a working product or revenue. And Thinking Machines Lab secured billions purely on AGI promises.
I want to make a quick pause here
because I've been noticing lately that
people in the comments choose to hear
what they would like to hear and I want
to clarify the lens through which I
approach all of my content and all of my
research. I analyze every single topic
from a product and business perspective.
My interest is purely academic and I'm
studying how narratives translate into
business. The point I'm trying to make
is that this would never fly in normal
circumstances. This is not about Safe Superintelligence specifically. Whether I like it or not, I have immense respect for Mr. Sutskever and his work. What I'm highlighting here is that we're talking about billion-dollar investments justified by an idea that lacks a basic
definition. Another popular fear is how
AGI will affect the labor market. In
fact, I got inspired for this episode
after seeing the comments under my own
videos where people were saying, "You're
so cynical now, but wait till AGI comes
for your job." So, I was like, "You know
what? Maybe I'm the one delusional here.
Let me do the research." So, I did some
digging, found a bunch of AGI related
future of tech, future of job reports,
and after analyzing them for a week, I
stand by what I said. Let me show you.
Goldman Sachs and the IMF: 300 million jobs globally could be affected, which is about 9.1% of the global workforce. My next logical question is: is 300 million a lot compared to the major historical crises of the past? Yes, it is a lot. For example, the Great Depression: 15 million jobs lost, and that was only in the US. The unemployment rate was 25% at its peak. It took 4 years to reach that peak and then 10-plus years afterwards; it really required World War II mobilization to recover from the Great Depression. The 2008 financial crisis: 27 million jobs lost globally, with peak unemployment of 15 million people in the US. The recovery took 5 to 7 years, with wage stagnation. And lastly, COVID: a 33 million increase in global unemployment in one year, with unemployment peaking at 13% in the US and 6.5% globally. The recovery pattern was different; it was K-shaped: high earners gained jobs, but low earners lost them. If we step away from the
crisis analogy and consider normal annual churn, we're looking at 50 million job separations every single year in the US alone. But those are immediately replaced by new hires, whereas the prediction around AGI and its impact on employment is a net loss, not replacement.
And the more I was reading, the more I kept questioning the likelihood of this prediction fulfilling itself without AGI, because the entire research is based on the assumption that AGI exists. But it doesn't. It doesn't exist. The report analyzes how generative AI could impact 300 million jobs, but the displacement it describes requires AGI-level functionality: performing general cognitive work across entire occupations, fully replacing human workers at scale, operating without any sort of supervision. That is not ANI. Their research objective and methodology are solid; they analyzed 800-plus occupations for automation potential. But the predictions rest on the technological assumption that AGI exists, which, again, it doesn't. I'm not
implying that this research is not
substantive or that it isn't worth your
attention. All I'm trying to say is that
it assumes that AGI exists. And remember how I told you in the last video that a lot of people hear "almost works" and extrapolate that to "works"? As a result of this extrapolation, there is a lot of general anxiety about something that doesn't exist. Compared to the whole AGI narrative, the AI bubble, with its inflated expectations and deceptive marketing, pales into insignificance, because at least the technology exists. You can touch it, you can smell it, you can use it, you can see it.
People dramatically overestimate current
AI job displacement. The claims about AI
job displacements are highly
questionable, and I made a series of
videos dissecting layoff data. But the
problem is that the AGI narrative keeps
fueling that anxiety. There is a recent analysis from Brookings and the Yale Budget Lab, a great piece of work, by the way, and I highly recommend that you guys read it, which found no detectable labor market disruption from ChatGPT's release, which took place 33 months ago. There was no disruption at the economy-wide level. So, what this essentially implies is that there is no documented labor market disruption from AI,
but we're already worried about AGI. This anxiety stems from highly visible tech layoffs that dominate the headlines, and from the fact that a lot of blame is being put on AI without noting that traditional economic factors like inflation, interest rates, or restructuring drive business decisions a lot more than any AI-driven restructuring. And
the last angle that I want to dissect here is the financial angle, because as I was doing the research, I made an observation that the investment dynamics around AGI are a bit different from the ones associated with AI SaaS and inflated VC capital. OpenAI can burn $8.5 billion annually and raise $40 billion more.
Every AI SaaS company that you see in this list has a high multiple. As a refresher, a multiple is valuation divided by revenue. For Perplexity, for example, $18 billion divided by $300 million is 60 times revenue. This means that investors are paying $60 for every $1 of annual sales that Perplexity is generating. All of these AI SaaS companies operate on extraordinarily high multiples, but at least they have a product. Forget whether it brings ROI to the client or not; at least there is a product and, hopefully, a path to profitability.
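If it helps to see that arithmetic written out, here is a tiny sketch using the Perplexity figures quoted above; the revenue_multiple helper is just an illustration for this video, not anyone's real model.

```python
# Revenue multiple = valuation / annual revenue (figures as quoted above).
def revenue_multiple(valuation: float, annual_revenue: float) -> float:
    return valuation / annual_revenue

multiple = revenue_multiple(valuation=18e9, annual_revenue=300e6)
print(f"Perplexity trades at roughly {multiple:.0f}x revenue")  # -> 60x
# Equivalently: investors pay about $60 for every $1 of annual sales.
```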
But for AGI, there is no product. The AGI narrative isn't about getting premium multiples; AI SaaS companies get those automatically. AGI language buys permission. Permission to lose money at extraordinary scale. The real premium here is tolerance for loss.
Just so you can compare: OpenAI's revenue, $15 billion; annual losses, $8 billion, which is 53% of revenue; monthly burn, $78 million; profitability target, 2029, four years away; 2025 funding raised, $40 billion; valuation, $300 billion. And these are the numbers for Cursor and Midjourney: both profitable, both with small teams. Midjourney is actually bootstrapped. This is what I mean by the business of pretending we're close to AGI. Not because it's imminent, but because it pays off.
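Taking the figures quoted above at face value, here are the same kinds of quick checks; this is just illustrative arithmetic on the numbers mentioned in this video, not independent data.

```python
# OpenAI figures as quoted above (approximate, in US dollars).
revenue = 15e9        # annual revenue
annual_loss = 8e9     # annual losses
valuation = 300e9     # latest valuation

loss_share = annual_loss / revenue  # losses as a share of revenue
multiple = valuation / revenue      # same valuation-to-revenue multiple as before

print(f"Losses as a share of revenue: {loss_share:.0%}")  # ~53%
print(f"Valuation / revenue: {multiple:.0f}x")            # ~20x
```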
So where does this leave us? In my
humble opinion, AGI as it stands is
somewhere between magical thinking and
Terminator.
The irony is that the people closest to
the actual cutting edge research are the
most cautious about promises and
timelines, while the people furthest
from it get intimidated by the
narrative. Thirty-three months after the release of ChatGPT, we still haven't seen the labor market disruption, but a new definition of what AGI means, one that changes quarterly, is tied to another round of investment.
So, until someone can at least clearly
define what AGI is supposed to mean,
maybe it's time we all just collectively
chill. I mean, holidays are coming. Why
don't we worry a little bit less about
something that doesn't exist?
We hope this was helpful. We'll see you
next time. Bye.