Why You Should Bet Your Career on Local AI

Transcript

0:00

Cloud AI and local AI sound like competing technologies, but one of them is creating a unique career opportunity that barely anyone understands right now. By the end of this video, you'll know exactly what that opportunity is and which local AI skills are worth investing in, based on my own career path to senior engineer and hundreds of hours of testing all kinds of AI models on my RTX 5090. Recently, I ranked 14 local AI use cases in my latest video, and only three matched or beat cloud alternatives. Agentic coding locally is nowhere near the same as Claude Code. Vibe coding with local models just flat out doesn't work. AI agents get confused the moment you give them more than a couple of tools. So, if local AI loses to cloud most of the time, why should you care about local AI? Because the job market doesn't need local to be better at everything. It needs someone who can run local AI on company hardware when the data cannot leave the building. And almost nobody knows how to do that properly.

0:57

Let me walk you through the numbers. Edge AI is a $25 billion market in 2025, projected to hit $143 billion by 2034 at a 21% growth rate. That's a hundred-billion-dollar trajectory, and multiple research firms independently came to the same conclusion. So when you're a hospital running AI on patient records, or a bank processing financial data, or a defense contractor working air-gapped, you need someone who can run models on your own infrastructure. And this kind of need is really not hypothetical. Google deployed an air-gapped AI appliance for the military in 2025. Siemens Healthineers engineers run AI for radiation treatment planning entirely at the edge, and these use cases are deployed in production right now. They all need engineers who understand local AI inference.
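As a quick sanity check on the market figures above, the growth math is internally consistent. Here's a small Python check using only the numbers quoted in the video:

```python
# Does a $25B market in 2025, growing ~21% per year, reach ~$143B by 2034?
start, end, years = 25.0, 143.0, 2034 - 2025  # 9 compounding years
implied_cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {implied_cagr:.1%}")            # ~21.4%
print(f"$25B at 21%/yr: {start * 1.21 ** years:.0f}") # ~139, close to 143
```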

1:49

So who's building out that local inference? Well, 84% of developers use AI tools, but only 18% are actually involved in building AI integrations, and three-quarters said they don't even plan to use AI for deployment and monitoring. Almost everyone just consumes AI through cloud APIs and codes with it. But barely anyone knows how to deploy a model, tune it for specific hardware, or run inference fully locally. And I personally was in that position too. Two years ago, I was using GitHub Copilot and other coding agents like everyone else, and I had no idea how any of it worked under the hood. Now I've spent hundreds of hours testing local models on my RTX 5090, and I realized that a lot of them fell short. I built a full-stack app with Claude Code pointed at local models through LM Studio. Local models work, but they choke on larger projects. The context window fills up, inference gets slow, and the model starts making mistakes. I was spending more time debugging the model's output than actually building the app. If you continue watching this video, I'll walk you through how you can avoid these mistakes and actually learn to work with local AI effectively very quickly.

2:58

After spending so much time, I found use cases that work perfectly locally. Speech-to-text is generally a solved problem. Using models like faster-whisper with Large-v3 Turbo, I process every single video on this channel. The raw transcript comes out of Whisper, and then I run it through a local LLM to clean up filler words and extract the key insights so I can more easily create my next video. That two-stage pipeline runs entirely on my hardware and gives me results that match any cloud service I've tried, while I still own my data.
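Here's a minimal sketch of that two-stage pipeline. It assumes faster-whisper is installed, LM Studio is serving its OpenAI-compatible API on localhost:1234, and the input file and chat model ID are placeholders for whatever you actually use:

```python
# Stage 1: local speech-to-text with faster-whisper.
from faster_whisper import WhisperModel
from openai import OpenAI

stt = WhisperModel("large-v3-turbo", device="cuda", compute_type="float16")
segments, _info = stt.transcribe("video.mp4")  # placeholder input file
raw_transcript = " ".join(seg.text.strip() for seg in segments)

# Stage 2: clean-up and insight extraction with a local LLM via LM Studio.
llm = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
response = llm.chat.completions.create(
    model="local-model",  # placeholder: use the model ID LM Studio reports
    messages=[
        {"role": "system",
         "content": "Remove filler words, fix punctuation, then list the key insights."},
        {"role": "user", "content": raw_transcript},
    ],
)
print(response.choices[0].message.content)
```

Everything runs on your own hardware; the only network hop is to localhost.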

3:29

Other use cases, like image generation and recognition, enable real applications, from your own home automation to enterprise camera systems.

3:39

Now, the pattern across all of the use cases that truly work is clear. A lot of them are boring, well-defined use cases, but they consistently outperform the flashy use cases that don't really work with local models. And these boring use cases happen to be exactly what enterprises need: transcription pipelines, document processing, image generation, code assistants that keep proprietary code off third-party servers. Almost half of all enterprises already use a hybrid cloud-edge architecture, and this hybrid model is where things are going. You use cloud models for the complex agentic work where you need frontier intelligence, and you use local models for the high-volume, privacy-sensitive tasks where they match or beat cloud anyway.
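To make that split concrete, here's a toy sketch of the routing decision. The task labels and model IDs are hypothetical, and it assumes a local OpenAI-compatible server such as LM Studio on localhost:1234 plus a cloud API key in the environment:

```python
from openai import OpenAI

local = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical labels for work that must stay on-premises.
PRIVATE_TASKS = {"transcription_cleanup", "pii_extraction", "doc_summary"}

def route(task_type: str, prompt: str) -> str:
    """Send privacy-sensitive tasks to the local model, the rest to cloud."""
    if task_type in PRIVATE_TASKS:
        client, model = local, "local-model"  # data never leaves the machine
    else:
        client, model = cloud, "gpt-4o"       # frontier model for hard tasks
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content

print(route("doc_summary", "Summarize this internal memo: ..."))
```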

4:21

So, how do you get started with this to capitalize on the career opportunity? Well, let's say you're a backend engineer and you already know Docker. Then you're closer to this than you think. You can just add a RAG system on top of your core knowledge and create a portfolio piece that shows you can deploy AI on private infrastructure, as sketched below.
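A minimal RAG sketch for that portfolio piece can be surprisingly small. This one assumes an OpenAI-compatible server (LM Studio or similar) on localhost:1234 with both an embedding model and a chat model loaded; the model IDs and documents are placeholders:

```python
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

docs = [
    "Deploys go through the staging cluster before production.",
    "Patient-facing services must keep all data inside the EU region.",
    "The VPN config lives in infra/vpn/ and is rotated quarterly.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts with a locally served embedding model, L2-normalized."""
    out = client.embeddings.create(model="embedding-model", input=texts)
    vecs = np.array([d.embedding for d in out.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

doc_vecs = embed(docs)

def answer(question: str) -> str:
    """Retrieve the most similar document, then answer with a local LLM."""
    q = embed([question])[0]
    best = docs[int(np.argmax(doc_vecs @ q))]  # cosine similarity, top-1
    reply = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user",
                   "content": f"Context: {best}\n\nQuestion: {question}"}],
    )
    return reply.choices[0].message.content

print(answer("Where is the VPN configuration stored?"))
```

Swap the in-memory list for a real vector store once it works; the point of the portfolio piece is that no document ever leaves your infrastructure.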

4:38

If you're a student or self-taught developer just getting started in AI, you can start with code autocomplete: install Continue.dev connected to a local Qwen model through LM Studio. I've got plenty of resources that I'll share with you in just a little bit, and you can use this while you code. This way, you're not going to match cloud models, but you will at the very least learn how local models behave and what their limitations are, and you'll have a self-hosted copilot setup that won't cost you anything.
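If you're curious what a plugin like Continue.dev sends under the hood, autocomplete is essentially a fill-in-the-middle (FIM) completion. Here's a minimal sketch against a local Qwen coder model via LM Studio; the model ID is a placeholder, and the FIM tokens are the ones the Qwen2.5-Coder documentation describes:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Code before and after the cursor, as an editor plugin would send it.
prefix = "def fibonacci(n: int) -> int:\n    "
suffix = "\n\nprint(fibonacci(10))"

# Qwen2.5-Coder's documented fill-in-the-middle prompt format.
fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

completion = client.completions.create(
    model="qwen2.5-coder-7b",  # placeholder: whatever ID LM Studio reports
    prompt=fim_prompt,
    max_tokens=128,
    temperature=0.2,
)
print(completion.choices[0].text)  # the code the model fills in at the cursor
```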

5:05

If you're already working in DevOps, MLOps, or cloud infrastructure, well, this is your fastest path into an AI role. You already understand deployment, monitoring, and scaling, and the companies that need edge AI deployment are looking for basically your background already.

5:19

Now, the great part is that universities haven't caught up to this opportunity yet. Developer surveys barely even track local AI deployment as a skill category. The market is growing quickly, and those with real local AI skills can earn much more. So if this career path seems interesting to you, I can help you get started. I have over 15 local AI projects you can get access to for free in the description down below. And I'll even give you two simple steps to accelerate your AI career. First, subscribe to this channel to keep yourself informed about the truth about AI careers. And of course, get those free projects from the description right now.

Summary

The video highlights a unique and underserved career opportunity in local AI, or Edge AI, despite cloud AI's general performance superiority. This niche is crucial for scenarios requiring data privacy and security, where information cannot leave company premises (e.g., healthcare, finance, defense). The Edge AI market is projected for massive growth, yet a significant skill gap exists as most developers use cloud AI APIs and lack local deployment expertise. The speaker identifies effective local AI use cases like speech-to-text, image recognition, and secure code assistance for proprietary data, emphasizing a future hybrid cloud-edge architecture. Practical advice is given for backend engineers, students, and DevOps professionals to capitalize on this market, with resources provided to get started.
