Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It's Not What You Think)

A week or so ago in Davos, Switzerland, Elon Musk told the World Economic Forum that we're approaching "abundance for all." Ubiquitous AI, ubiquitous robotics, everything's going to be great: an explosion in the global economy "truly beyond all precedent." He recommended we not save for retirement. Meanwhile, Dario Amodei predicted half of white-collar jobs would disappear. But apparently that's good, because the abundance is just going to be everywhere. Look, the abundance narrative was everywhere at Davos. It echoed through every panel, every fireside chat, every après-ski conversation. But I want to suggest to you that the abundance economy is probably the wrong frame for most of us to think about the next few years. Instead, we should think about the bottleneck economy. It's much more practical, much more likely to get you employed, and much more likely to help you, as a builder or a company leader, find ways to succeed in the AI economy.

So, let's talk about bottlenecks. Cognizant released telling research on AI claiming that it could unlock ("could" is the key word) four and a half trillion dollars in US labor productivity. But there was a massive caveat in that research that no one paid attention to: the value will only materialize if "businesses can implement it effectively." That is the biggest asterisk I've ever seen. And most businesses, according to Cognizant's CEO, Ravi Kumar, have not yet done the hard work. I think that's very true. There it is. That's the gap between the abundance narrative that sounds so good in Switzerland and the reality. It's not about the capability of models. It's about implementation. It's about value capture. The AI already exists, but the trillion-dollar value that people like to talk about doesn't just show up and flow automatically.

This is not the fountain of youth. This is the story everyone is missing when they debate AGI narratives. The interesting question is really not whether AI creates abundance. It does. The interesting question is: where are the bottlenecks? Because that's where value concentrates. Of course AI is creating an unprecedented abundance of intelligence. But that just means the bottleneck flows downstream, and that's where the leverage lives, and that's where fortunes will be made or lost in the next decade. Abundance is super handwavy. I'm not interested in handwavy. Bottlenecks are specific, and specificity is where strategy happens. It's where careers happen, and it's where companies happen.

So first, definitions. A bottleneck is the binding constraint in a system. Not just any constraint: the high-leverage, binding constraint, the one that determines actual throughput in the system. If you improve anything else, you've accomplished nothing, because you didn't improve the bottleneck. But if you improve the bottleneck just a little bit, everything will move.
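
A minimal sketch makes that concrete (the stage names and rates below are my own illustrative assumptions, not numbers from the video): a serial pipeline's throughput is the minimum of its stage capacities, so extra capacity anywhere except the binding stage changes nothing.

```python
# Toy model of a serial pipeline: throughput is capped by the slowest stage.
# Stage names and rates (units/day) are illustrative assumptions.
stages = {"intake": 120, "review": 35, "approval": 80, "shipping": 95}

def throughput(stages: dict[str, float]) -> float:
    """A serial pipeline moves only as fast as its binding constraint."""
    return min(stages.values())

def bottleneck(stages: dict[str, float]) -> str:
    """Name of the stage with the lowest capacity."""
    return min(stages, key=stages.get)

print(throughput(stages), bottleneck(stages))   # 35 review

stages["intake"] += 50                          # improve a non-bottleneck...
print(throughput(stages))                       # ...still 35: nothing moved

stages["review"] += 10                          # improve the bottleneck...
print(throughput(stages))                       # ...45: the whole system moved
```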

Look, this is basic systems thinking, and it's also something that most people ignore. They optimize for whatever is visible, whatever is comfortable, whatever they're already good at. They work harder instead of differently. They add capacity where there's already lots of capacity in the system, and they ignore the choke point, because the choke point is really painful to view, consider, and address. So many organizations do this.

The history of the corporation, ironically, illustrates this perfectly. Every dominant organizational form emerged to dissolve a specific bottleneck. The Dutch East India Company solved the capital-lockup problem of multi-year oceanic voyages. Railroads cracked the energy constraint on overland transport. Banks emerged to allocate capital across time. Stock exchanges aggregated capital at scales that exceeded any private fortune. Walmart solved the information bottleneck in retail supply chains: just knowing what was selling where, and getting it there before stockouts. The pattern is consistent. Whoever solves the binding constraint captures disproportionate value. Everybody else participates in the abundance that's created.

The AI era absolutely has its own bottlenecks, and they're not the ones most people are watching. First, the binding constraint on AI capability is increasingly atoms, not bits. Jensen Huang told Davos that AI needs more energy, more land, more power, and more skilled trade workers. Contemporary hyperscale data centers consume 100+ megawatts. Training a single frontier model can require sustained exaflops of compute for weeks. The electricity demands are approaching those of small nations in some cases.
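
Back-of-envelope arithmetic shows why (every number below is an illustrative assumption for scale, not a figure from the video):

```python
# Back-of-envelope scale check for training energy. Every input below is an
# illustrative assumption, not a reported figure.
facility_power_mw = 100                 # a "100+ megawatt" hyperscale site
training_weeks = 8                      # hypothetical sustained run length

hours = training_weeks * 7 * 24         # 1,344 hours
energy_mwh = facility_power_mw * hours  # MW * h = MWh

# A US household uses roughly 10.5 MWh per year.
home_years = energy_mwh / 10.5

print(f"{energy_mwh:,.0f} MWh")         # 134,400 MWh
print(f"~{home_years:,.0f} household-years of electricity")  # ~12,800
```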

This matters because physical infrastructure operates on very different timelines than software. You can ship a new model in months if you have the compute, but building a data center to run it at scale? That takes moving atoms around. That takes time. Permitting alone can take years in some cases. Expanding grid capacity is even harder. Google recently shared that they are bottlenecked on the ability to establish connections to the grid. This is not the only bottleneck in the system, but it's a great example of all the specific upstream bottlenecks that are constraining the ability of hyperscalers to build right now. The result is a structural wedge between what's technically possible and what is deployable today: capability sprints ahead while infrastructure plods.

We're seeing this also with the memory crisis, where DRAM prices are skyrocketing because there's not enough memory to go around. A model can exist in potential, but the physical substrate to run it at scale is what's required to deliver value. Who captures value from this gap? Well, the joke is that it's always Jensen and Nvidia, and that's not entirely wrong, but it's also more than that. It's whoever can navigate the physical constraints faster: whoever can pick the better site, get faster permitting, build more efficiently, source energy more intelligently. This is not a temporary bottleneck. This is structural. The companies that understand this are securing power purchase agreements and advance purchase agreements for memory, locking up construction capacity, and building relationships with utilities years in advance. The companies that don't are assuming compute will magically appear.

The chip supply chain is even more constrained. TSMC and a handful of other fabs control the production of advanced semiconductors. Packaging, testing, and high-bandwidth memory all have their own separate bottlenecks. As I've called out before, Nvidia's market position isn't really about better chips. It's about having chips at all when everyone else is capacity constrained. The hardware advantage compounds, because access to compute determines who gets to train the next generation of models, who gets a seat at the table.

And yes, the physical layer creates an opportunity for an entirely different kind of company, one we normally don't think of as an AI business. Someone has to build these facilities. Someone has to provision the power. Someone has to manufacture the cooling systems, install the racks, connect the fiber. This is what Jensen is calling high-quality jobs, because he can't get enough of them, and neither can any of the other hyperscalers. He says trade-craft jobs in these kinds of spaces have salaries that have nearly doubled, and I'm not at all surprised. The abundance of AI at the application layer depends on scarcity being resolved at the physical layer, and that resolution means people. The geographic distribution matters too. Data centers need stable grids, friendly permitting environments, and access to cooling, whether through climate or water. This means certain regions effectively become strategic assets. It means local politics become unexpectedly relevant to the trajectory of AI. The infrastructure to build AI, the AI that we have in our pockets and assume is global, lives locally.

But that is not the only bottleneck. In fact, it might be the most well-known bottleneck, but there are a bunch of others that people talk about less often. The trust deficit is the next one. When Demis Hassabis spoke at Davos, his biggest concern wasn't technical. It was the loss of "meaning and purpose in a world where productivity is no longer the priority." He also worried that we lack "institutional reflection" about AI. Look, what he's really saying is that these are coordination problems, and coordination runs on trust, and he's worried about trust. Consider what happens when anyone can generate whatever sophisticated AI content they want at the touch of a button. Text, images, video, code: all become cheap to produce.

The cost of generation collapses, but the cost of trust doesn't get cheaper. If anything, trust gets harder, because the difference between synthetic and authentic is becoming indistinguishable. Every piece of content could be fabricated. Every credential could be gamed. Every piece of information might be generated to manipulate you. When you can't distinguish the signal from the noise, you're overwhelmed as a human, and you look for someone to trust. Trust is the infrastructure of coordination. When I trust that a counterparty will honor a commitment, I don't need to write every contingency into legal language. When I trust that a credential signals competence, I don't need to administer all of my own tests. When I trust that published information is accurate, I don't need to verify it independently. Trust reduces transaction costs. It's the trust in the system that makes coordination possible.

Now, imagine that trust degrading. You don't have to imagine it; you see it and you feel it. Transaction costs tend to rise across the entire economy. Deals take longer. Verification layers multiply. Everything gets harder. Who captures value here? Whoever can mediate trust: the institutions that can verify, authenticate, and certify; the platforms that develop reputations for signal in a world of noise; the networks where track records are visible and accountability actually exists. We're looking, in a way, for the trust banks of the 21st century: essential infrastructure that everyone can rely on, controlling a scarce resource that must be accumulated over time and that can be allocated across different uses. The parallels between trust and capital are definitely thought-provoking.
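
To make "verify and authenticate" slightly more concrete, here is a toy sketch of content attestation under an assumed shared-secret setup. Real provenance systems (C2PA, for example) use public-key signatures and certificate chains; this is only the simplest runnable illustration of the primitive.

```python
# Toy sketch of content attestation, the primitive behind "verify and
# authenticate." A shared-secret HMAC stands in for real public-key
# signing; the key and content are hypothetical.
import hashlib
import hmac

SECRET = b"issuer-held signing key"  # hypothetical key held by a trusted issuer

def attest(content: bytes) -> str:
    """Issuer signs content so anyone holding the tag can detect tampering."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check that content matches the attestation the issuer produced."""
    return hmac.compare_digest(attest(content), tag)

report = b"Quarterly report: revenue up 12%."
tag = attest(report)

print(verify(report, tag))                                  # True: authentic
print(verify(b"Quarterly report: revenue up 120%.", tag))   # False: altered
```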

Here's another bottleneck that people aren't talking about enough: the integration gap. Cognizant's research points to something specific. The value is conditional on implementation: $4.5 trillion sitting there, chained up, because organizations can't figure out how to use AI effectively. This is the integration bottleneck. AI has general capability but no specific context. And we know, after a couple of years of implementation at the corporate level, that a general capability is a tool that works well for individuals, and that without specific work on the part of the company, it just dies at the team level. It does not go anywhere. So yes, a general AI can write code, but it doesn't know your code base. A general AI can draft strategy, but it doesn't know your competitive dynamics. It can talk about board politics generally, but it doesn't know your board. It can talk about the product strategy of someone in your category, but it doesn't know you. The gap between "AI can do this" and "AI does this usefully right here" is $4.5 trillion.

Bridging it requires context that's often tacit, right? It's embedded in practices and relationships, not just in documents. The person who's been at the company for 20 years knows the things that aren't written down anywhere. The AI doesn't. This knowledge is not promptable. The interface between general AI capability and specific organizational reality is where value gets lost or captured. Some companies are going to figure out how to solve this integration problem and unlock massive productivity gains by tying AI into their workflows. Others are going to deploy AI tools on the side that sit unused, or worse, get actively misused, generating outputs that look deceptively productive and don't connect to anything that matters. The difference isn't the AI or the tool. The AI is increasingly a commodity, guys. The difference is the organizational capacity to integrate. Who builds that capacity? That's not obvious. Maybe it's a new category of consultancy that specializes in AI-to-organization fit. Maybe it's internal roles that don't exist yet: people whose job it is to translate between what the business needs and what AI can do. Maybe it's software that encodes organizational context in ways that make AI outputs more relevant. Whatever the form, this is a bottleneck, and bottlenecks are where value concentrates.
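
As a toy illustration of that last possibility, here is a minimal sketch of software that encodes organizational context (the knowledge base, the keyword retrieval, and the prompt shape are all hypothetical stand-ins, not any real product): retrieve the org-specific notes relevant to a request and prepend them, so a general model answers in your context.

```python
# Toy sketch of "software that encodes organizational context." The notes,
# the keyword retrieval, and the prompt shape are hypothetical stand-ins.
ORG_NOTES = {
    "deploy":  "Deploys freeze every Friday; releases need on-call sign-off.",
    "pricing": "Enterprise pricing is never quoted without legal review.",
    "board":   "The board reviews AI spend quarterly; the CFO owns the line item.",
}

def retrieve(request: str) -> list[str]:
    """Naive keyword retrieval over the org knowledge base."""
    text = request.lower()
    return [note for key, note in ORG_NOTES.items() if key in text]

def build_prompt(request: str) -> str:
    """Prepend org-specific context so a general model answers in context."""
    notes = retrieve(request) or ["(no internal context found)"]
    context = "\n".join(f"- {n}" for n in notes)
    return (
        "You are assisting employees of our company.\n"
        f"Relevant internal context:\n{context}\n\n"
        f"Request: {request}"
    )

print(build_prompt("Can we deploy the new pricing page this Friday?"))
```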

The coordination problem is broader than trust. AI doesn't magically dissolve the challenge of getting humans to agree. It doesn't make them align magically. It might make coordination even harder: when anyone can generate sophisticated arguments for any position, groups have even more trouble reaching consensus or alignment. Larry's warning at Davos was really pointed: if AI does to white-collar workers what globalization did to blue-collar workers, we need to confront that reality directly. It's very comforting for him to say that, isn't it, sitting in his little chalet in Davos? But he's describing a coordination problem: how do we actually share the gains from AI in ways that don't trigger social disruption? That's a question of human alignment. And really, no one at Davos has those answers, and everyone wanted to talk about them over a cocktail glass.

The IMF's managing director certainly had a quotable line, saying that a tsunami was hitting the labor market, that 40% of jobs globally would be affected, and that "we don't know how to make it inclusive." Well, honestly, the people who are closest to knowing how to put AI and jobs together aren't the ones going to Davos. They're the ones who are actually building workflows where AI and people work together. They don't get those invitations.

You know what's really interesting? I've spent the first part of this video talking about bottlenecks, and most of them seem like they apply to companies. But everything above applies to individuals too. The bottleneck principle is a fractal principle. You are also a system with binding constraints. Your output, your impact, and your leverage are functions of which bottleneck you're solving, and of whether you're optimizing the right constraint.

The old individual bottlenecks are dissolving. Access to information is abundant. Access to tools is cheap. Skill acquisition is rapidly getting easier. It used to take five years or more to become a proficient programmer. AI compresses or eliminates those runways. Dario Amodei noted at Davos that his own engineers no longer program from scratch; they supervise and edit the work of models. The same has come out of OpenAI, and we're hearing it over and over again from other extremely experienced engineers who now say they don't really touch code. This is disorienting if your identity was built around skills that are commoditizing, like programming. But disorientation is not a strategy, not for your career or mine.

The question is: where are the new individual bottlenecks? Honestly, I was not happy with what the Davos guys said. Again, I feel like they asked lots of questions and didn't have answers. Hassabis's advice to young people was to become incredibly competent with AI tools. That's a throwaway line. That's not a great line. I want to think more deliberately. Tool fluency is table stakes. The constraint shifts to what you do with those tools.

Taste and judgment become really critical. When generation is cheap because everyone has the tools, curation of what's good is expensive. Knowing what to make, when to stop, what's good enough versus what's actually good: these are capacities that still take a lot of time to learn. The AI can generate a hundred options, but knowing which option is right is still human terrain. The challenge is that taste develops slowly while AI devalues output. So if you spend three years developing good taste in design, and AI makes okay design a commodity before you can capitalize on your extra 10% or 20% of taste, you end up losing a race you didn't know you were running.

I feel and hear that frustration from a lot of early-career folks right now. The window to good taste is getting narrower, and the people who are surviving, thriving, and developing good taste are narrowing their focus earlier. It used to be that when you developed good taste, you started really broad and then discovered how to narrow over time as you learned. These days, the folks I see who have extraordinary taste are diving in super deeply on something, so they rapidly push to the frontier, past the edge where an AI's "good enough" is acceptable. Because we all know that, yes, AI in many ways has solved front-end design, but if you want extraordinary design, people still turn to humans who have extraordinary taste. That kind of dynamic is going to persist in a lot of different corners of the economy, and it's going to supply a lot of different jobs.

Here's another one: problem finding eclipses problem solving. AI solves well-specified problems with increasing fluency. But specifying the right problem and framing it right remains very, very human. What should we build? What is wrong here? Have I had time to think about it? What question, if answered, would unlock everything else? Our education system has largely optimized for problem solving, and the market is increasingly rewarding problem finding. If you are good at looking for problems, and good at talking about them and framing them, you're in good shape. The analyst who knows which questions to ask and which problems matter vastly outpaces the analyst who can answer any question. The skill increasingly is not execution; it's direction setting. It's a management skill.

Context and institutional knowledge are becoming moats for individuals in the way that data is becoming a moat for companies. AI is general; usefulness is specific. The person who understands why the organization really operates the way it does, and what the stakeholder actually wants beneath what they're saying: that tacit knowledge is very hard to replicate and increasingly valuable. And this creates a really strange dynamic. Juniors who would historically have accumulated context through years of apprenticeship now face a very compressed path. Why spend five years learning how the organization works when AI can help you skip the grunt work? But the grunt work was also where that context got absorbed, and the implicit knowledge that made senior people really valuable often came from thousands of little exposures that never happen if AI handles all the tasks. So how do you develop institutional knowledge without that slow accumulation? Honestly, I think it still takes slow accumulation, and the people who are trying to speedrun it are going to learn that the hard way. No one has a better answer yet. There is no fast-forward to 20 years of deep experience in a domain.

I'll give you one more. Execution and follow-through are emerging as a binding constraint for many. I know I said that solving problems was going out of style and finding problems was in style. Well, there's an element of follow-through that we still see as a bottleneck. AI can generate a lot of plans. It can generate a workout plan for me tomorrow, but I have to show up to the gym. Turning any of these AI-generated plans into reality requires a human to decide, to commit, to persist, to navigate politics, to hold people accountable, to keep going when things get hard.

Execution has always been underrated because it's much less legible than ideation. People love to ask about Steve's brilliant mind when he created the iPhone. They don't ask about Steve's relentless execution to get it done: calling Corning and making them produce the Gorilla Glass he knew was right for the iPhone. A brilliant strategy document is visible. It might get you a promotion in some companies. But the grinding work of implementation? Steve calling Google and saying, "The yellow in the O on Google looks terrible on the iPhone, and my engineers will be at your door to fix it." That's the grinding work of implementation. That's not a strategy document.

Tolerance for ambiguity separates those who thrive from those who freeze. The environment is shifting really fast, guys. Best practices are shifting all the time. People are desperate for stable ground in that world, and the constraint you face is actually your ability to metabolize change. How much uncertainty can you hold in a rapidly changing world without freezing, while continuing to execute and follow through on a longer-term perspective? People who are able to master that balancing act are in huge demand.

All of this adds up to a leverage shift. The old model of talent development was super linear: you acquired skills, you traded your time for money, and you let it accumulate slowly. The new model has a really different shape. Some individuals are discovering outsized leverage through AI augmentation, not because they work harder, but because they've identified their bottleneck and directly dissolved it. Maybe a developer was bottlenecked on boilerplate. Maybe a strategist was limited by analysis bandwidth. Whatever it is, they found the constraint, removed it, and unlocked capacity that was latent. Most of us are not finding that leverage for ourselves, because we are optimizing against the old, pre-AI constraints. We're still trying to prove we have the skills when the skills are commoditizing.

The diagnostic question for each of us is deeply personal: what is constraining my output right now? Not what I wish was constraining me, right? Not what was constraining me three years ago. Not the constraint I built my identity around solving so I can be proud of it. The actual binding constraint, today. Now, for some of us, it is tool fluency, because we haven't genuinely integrated AI into a workflow. For others, it's taste. Maybe it's problem finding for you. The bottleneck is going to be specific to you, and solving it requires, first, honesty about what's actually holding you back.

I keep going back to Davos and the abundance narrative that dominated there. It feels clangy. It feels out of touch. The conditional is doing a lot of work in these predictions. Yes, the capability might exist; I increasingly don't doubt that. But the value capture depends on solving bottlenecks that are organizational, institutional, physical, and social, not technical. And that is hard work. That is hard human work. I believe the businesses and people that are going to thrive in the next ten years are the ones that correctly identify where scarcity has run off to, where it has migrated: into physical infrastructure, into trust, into integration, into coordination. And then they will build systems and careers out of addressing those constraints.

Intelligence is getting cheaper. The promise of abundance is absolutely real. AI is going to keep getting smarter. Cognitive output is going to keep getting easier to produce, every single month, more abundant. But abundance doesn't create value directly. Abundance shifts where scarcity lives, and we haven't been honest about that. The question isn't whether to believe in the coming abundance as an article of faith. No, no, no. The question is: where are the bottlenecks, and are you positioning yourself and your business to solve them? That's really the only question that matters, and it doesn't get enough airtime.
