Cursor vs Codex vs Claude vs Zed vs Anti-Gravity (I Tested Them All)

Transcript

0:00

I spend way too much time playing around with all the different AI code editors and agentic coding solutions out there. In this video, I'm going to go through some of the biggest players in this space, some incumbents as well as some newcomers, give a little review of every single one of them and how they compare to each other, and then at the very end of the video give a couple of recommendations on what you should use. The code editors and AI coding solutions I'm going to compare today are Antigravity versus Cursor versus Zed versus Claude Code versus OpenAI Codex. Buckle in. This is going to be a long one. Let's get into it.
0:30

First, let's talk about Zed. There's actually so much lore and story behind Zed, and I know I'm going to butcher the details and get a lot of them wrong, but from a high-level point of view, the creators of Zed were the original creators of a lot of the really popular earlier text editors like VS Code and Atom. They realized that building Atom and VS Code on Electron created a lot of performance issues and a kind of slow and clunky editor, so they went out to create Zed as a really blazing-fast editor. I believe it's built in Rust, so it's really fast and really high performance. And I have to say, Zed really delivers on that.
1:03

Whenever I run multiple instances of Zed across multiple projects, with parallel worktrees and parallel agents, my laptop rarely begins overheating. I cannot say the same whenever I use a VS Code-based editor like Cursor or Antigravity, or even VS Code itself: with multiple VS Code windows open, my laptop sounds like it is about to explode. With Zed, I do not run into that issue. That's something I really do enjoy about Zed. It's a really performant editor, and I think that is the main selling point: super fast, really quick, really snappy. I love it. They also have a lot of really interesting collaboration features, which admittedly I don't use because I'm primarily a solo developer. Zed has almost Slack-esque communication built into it, with channels where you can call and literally conduct meetings, and I believe there's also live pair-programming support built directly into Zed. Admittedly, I don't really use it, just because I'm a solo developer and don't really have a team to lean on.
1:56

You can see it right here in this tab, the collab panel. Unfortunately, once again, I can't demo it, but it does seem like when you have your entire team, you can communicate, make sure nothing gets lost, and literally live inside of Zed to power your entire team's collaboration and communication. That's one of the big features I believe Zed originally launched with. But now, in the whole AI age, they've been leaning into the AI ecosystem a lot more by creating their own agentic system right here. One thing Zed created, which I really do like, is this thing called the Agent Client Protocol. It's basically similar to MCP but specifically designed for agents. What's nice about this protocol is that you can sign in with Gemini CLI, Codex CLI, or Claude Code CLI, and the protocol creates a standard way for agents to communicate with code and go through workflows within any other tool in the future. It's very similar to MCP, which, when you really think about it, is just a standard for AI and LLMs to communicate with API endpoints. What the ACP, the Agent Client Protocol, is doing is very similar to MCP but specifically for agentic coding work, trying to unify all of these coding agents under one common standard. Now, how
commonly is this adopted? I'm not really sure, and I'm not going to give an opinion on that, because I'm honestly not qualified to do so. What I will talk about is the AI coding experience specifically within Zed. What I do like about it is the fact that you can sign in with Claude Code, Codex CLI, or Gemini CLI, and I believe you can also add more agents through OpenRouter or any of these other models as well.
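To make that a bit more concrete, ACP is, as I understand it, newline-delimited JSON-RPC 2.0 spoken over the agent's stdin/stdout: the editor spawns the agent process (e.g. `gemini --experimental-acp`), negotiates capabilities, opens a session, and streams prompts. A rough sketch of the editor's side of the exchange; the method and field names here are my best reading of the protocol docs, so treat them as illustrative:

```shell
#!/bin/sh
# Illustrative only: an ACP editor pipes messages like these into the
# agent subprocess and reads responses/updates back on its stdout.

# 1) Editor -> agent: negotiate protocol version and capabilities.
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":1,"clientCapabilities":{"fs":{"readTextFile":true,"writeTextFile":true}}}}'

# 2) Editor -> agent: open a session rooted at the project directory.
printf '%s\n' '{"jsonrpc":"2.0","id":2,"method":"session/new","params":{"cwd":"/path/to/project"}}'

# 3) Editor -> agent: send the user prompt as content blocks.
printf '%s\n' '{"jsonrpc":"2.0","id":3,"method":"session/prompt","params":{"sessionId":"sess-1","prompt":[{"type":"text","text":"update client.ts with better logging"}]}}'
```

Because both sides speak this one wire format, any ACP-capable agent can plug into any ACP-capable editor, which is the same interoperability bet MCP makes for tools.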

3:22

Very nice. You get model choices, and that's great. But at this point, that's table stakes for any AI code editor in 2026, and unfortunately, I just don't think Zed has enough AI coding features to really push me over. They have AI tab-to-complete, as we've all grown accustomed to. But specifically with their agentic coding solution, right now they don't have the ability to run multiple threads at once. For example, let's say I say "update the client.ts with better logging." Great, this works using the Agent Client Protocol. But then, if I want to start another parallel workstream, I can't. It closes out the previous workflow and opens up a new one, completely erasing the previous instance. And right now in 2026, the
meta for AI coding, and programming in general, is definitely moving towards parallel workstreams, where you can work on multiple things at once. And that, quite frankly, is just not really supported directly within Zed. Obviously, there are workarounds: you can start one task here, then open up a CLI tool like Claude Code, Gemini CLI, or Codex and let it start going from there as well, and that's fine. So there are workarounds, and you still get a fast solution. But at the same time, I feel like it shouldn't be this way in 2026. I think Zed's biggest value proposition is the fact that it's fast.
4:32

That is it. Its biggest value proposition is still definitely not in the AI coding realm. I think its AI features are always going to lag behind bleeding-edge editors like Cursor or even Antigravity. And here's another thing: because Zed is built from scratch and isn't based on a VS Code fork like Cursor or Antigravity, the feature parity just isn't always there. Let me show you one very small example. Let's say I take a screenshot of this web page. I really like this, right? And I want to make a change to it. In a VS Code-based editor, I can take this image and drop it in here, and it would normally populate Claude Code. I'll even show you: let's take another screenshot right here. Maybe if it were real code from a real app, I could get it to edit it in some way. I'm going to drop it into my terminal, into Claude Code. It shows up right there: it registers the image from the temp directory. But if I try the same exact workflow within Zed, take the screenshot, drop it into Claude Code, nothing. Nothing gets inserted. Once again, there are workarounds: I can save the image to my desktop, then go to my desktop, get that screenshot, and paste it in. But once again, that's a workaround that I just feel shouldn't be necessary. And I think that's the biggest downside with Zed: because it's not based on a VS Code fork like Antigravity, Cursor, or VS Code itself, it has to build everything from scratch. The Zed team is then forced to take all the really popular coding tools, which treat VS Code as a first-class citizen, and rebuild those features themselves.
5:55

Another example: a Claude Code for VS Code plug-in exists that lets you run Claude Code in a nice graphical interface directly within your VS Code-based editor, for example here within Cursor. I can then open up Antigravity and once again have the same exact extension support immediately out of the box, because they are both based on the VS Code fork. Zed doesn't have that. That's why they had to create their own AI coding interface with their Agent Client Protocol, and they can only do one thread at a time; you can't work on multiple workstreams at once natively within Zed. So my overall opinion on Zed: I like it. I wanted to like it more, because I really do appreciate how fast and snappy it is. But because it's not part of the quote-unquote industry standard of the most popular AI code editors right now, which are VS Code forks, it's always lagging behind in features, it's always playing catch-up, and you never feel like you're on the bleeding edge. For some people that's not important, but it is something you need to know. And with any of these AI code editors, one of the most important parts of using these tools is getting context into your app and writing your prompts out as fast as you can. For Cursor and other VS Code-based editors, there's actually support for tagging files using a speech-to-text tool. For example, right here, you can say "update the personal content.ts file."
7:04

And you can see the file is automatically tagged just by speaking the prompt. I'm a huge advocate of using speech-to-text tools in your AI coding workflow, because typing long prompts is so slow and time-consuming. Because of this, I find myself cutting corners and leaving out important context compared to if I were talking about the feature with a co-worker, for example. And that's where Wispr Flow comes in as a really great tool for developers: it understands developer terminology, formats variables correctly, and even lets you tag files automatically inside Cursor and other VS Code editors, just like I showed you. Being able to speak your prompts and tag files significantly speeds up your developer workflow. They also have a brand-new style feature, a context-aware speech-to-text tool that can automatically format your messages to look more conversational in certain apps and more formal in others. For example, in iMessage I can say, "Hey, what's up? Do you want to get coffee tomorrow?" and you can see it's not capitalized; it's way more casual, much more conversational. But if I say the same exact message in an email, "Hey, what's up? Do you want to get coffee tomorrow?", it comes out with much more proper grammar and everything capitalized. Same sentence, just different context. And while they are the sponsor of today's video, you can see that I use them a lot; I've actually transcribed over 60,000 words myself using Wispr Flow. They also have this really useful tool called snippets, where you can create shorthands for phrases that map to certain outputs. So, for example, if I just say "my email address," instead of pasting the words "my email address," it'll paste in my actual email address. "You can reach out to me at my email address." Right there. Just say it once, and Flow drops in the fully formatted final version. I'll include a link in the description of this video if you want to try it out yourself, and you can use code YATB for an extra free month of Flow Pro. Highly recommend them. They're an awesome tool, and I use them all the time. Once again, thanks to the Wispr Flow team for sponsoring this portion of the video. All right, next
up, let's talk about Cursor. I don't think Cursor needs any introduction, because it is the de facto, probably most popular, AI coding IDE out there. It was the first one that came out, and it is incredibly popular. If you watched my recent video where I talk about my AI coding workflow in 2026, you will see that Cursor is the main editor I use. Cursor is super familiar to most people because it's VS Code-based, so it's a very familiar UI if you grew up using VS Code. But obviously the claim to fame is the fact that it has your whole AI coding chat panel over here. You have all the different models you can use: Composer 1 is the in-house model they built themselves, plus Opus, Sonnet, GPT, Gemini, everything. You get all the models you want directly there. And they have a really great tab-to-complete code prediction system, which you see right there. These are just table-stakes things we've gotten used to as the standard for AI coding tools, and a lot of that standard was probably created by Cursor, because they were the first and biggest competitor out there. Obviously, they have a lot of other features they've added to Cursor as well. For example, you can open up a browser. So when you start a local dev server, you can open it up directly within your browser tab. What makes that particularly interesting is that when you go over here, you can start making changes, you can highlight things, and it gets referenced in the chat. You can tell it "update this text to say hello world," and it makes all these changes for you. It has a really nice ecosystem. They also
have a lot of other tooling they've built out as well, such as agent review, a tool that, when you make changes, goes out and tries to find any potential bugs in your code, like a little code review tool. They also have other tools like Bugbot, Cursor CLI, and Cursor Cloud, where you can make changes directly within a cloud-hosted instance of your project run by Cursor. They also have a new UI mode called agent mode, an agent-first UI where you can have multiple parallel workstreams going at once. Now, the biggest pro of Cursor is the fact that it is kind of the industry-standard tool out there, and for any other AI plugin or AI coding tool, whether that be an MCP server or anything else, Cursor is pretty much always guaranteed to have first-class instructions, because it's treated as a first-class citizen in the AI coding world. That's the biggest pro. Another pro is that Cursor is kind of always going to be on the bleeding edge. They come out with new features all the time, and right now I don't see any AI coding editor that ships faster than Cursor. And if you are someone on a bit more of a budget, I do think Cursor is great, because you get the model picker, where you get access to so many different models for one price within one piece of software. But that is also going to lead into some of the biggest cons about
11:16

cursor. And the biggest con about cursor

11:18

is the fact that honestly it's going to

11:19

get pretty expensive. While they do have

11:21

their own in-house model with composer

11:23

1, they are also this middleman layer

11:25

for access and usage to Opus, Sonnet,

11:28

GPT, Gemini, whatever other models that

11:30

are out there. And I personally believe

11:31

if there is one particular model you

11:33

like the most, you are much better off

11:34

getting subscription straight from that

11:36

party provider because you will get more

11:38

access and usage directly from them than

11:40

you would with cursor. For example, if

11:42

you are a big Opus and anthropic fan in

11:44

general, you're better off getting a

11:45

cloud code subscription. If you're a big

11:47

OpenAI and GPT fan, you're better off

11:49

getting a codec subscription. So on and

11:51

so forth. But if you're someone that's

11:52

not super particular about which model

11:54

you want to use and just using one model

11:55

specifically and you want to use a bunch

11:57

of different models, cursor is a pretty

11:58

good solution to that. And also their

11:59

composer one model, I actually really do

12:01

like it. I think it's a good model. The

12:03

biggest difference with Composer 1

12:04

versus the other models is that Composer

12:06

is a much faster model compared to the

12:08

other GPT 5.2, to sonnet opus models and

12:11

it sacrifices a little bit of raw

12:13

intelligence and coding power but it

12:14

makes up for that with faster token

12:16

output and faster code changes so you

12:18

can stay in a little bit more of a flow

12:19

state but that's a general highle

12:21

overview of cursor and while I'm going

12:23

to transition now to talking about

12:24

anti-gravity we're definitely going to

12:25

come back to talk more about cursor as

12:27

well because we're not quite done yet

12:28

and we're always going to use it as a

12:30

comparison point too. All right, next up

12:32

let's talk about anti-gravity.

12:34

Antigravity. Boy, do I have a lot of thoughts about this one, because there's a lot I like and a lot that I don't like. Let's talk about it. Before getting into it, for those of you who don't know the kind of controversial history of Antigravity: Antigravity is an AI code editor built by Google to compete with the likes of Cursor. Google actually acquired the founders and a really small core group of the original team behind Windsurf, probably the second-place AI code editor behind Cursor a year or two ago, and probably today too. So while Windsurf still exists, Google essentially hired the founding team, some 20 to 30 people, took only those people over to Google, and created Antigravity, which is basically a fork of Windsurf, which itself is a fork of VS Code. So it's a fork of a fork, and there's a lot of controversy about this; you should read up on it, it's a pretty interesting story. But let's talk more about the code editor itself. Antigravity, like I said, is a VS Code fork, so it's a very familiar environment that we are all used to, and honestly, when you open it up, it looks more VS Code-esque than Cursor does.
13:29

Cursor has definitely tweaked the UI a bit more, whereas Antigravity looks more like the pure VS Code we are very much used to. Once again, they have all the table-stakes tools you need in any AI code editor in 2026. They have the tab-to-complete auto-prediction right there, right? Bang, did that. They have the chat panel over here. They have a model picker over here, so you can choose any model you want. Really great. But the thing I particularly like about Antigravity, which I find the most interesting, is its agentic coding inbox, this agent manager panel we're looking at right here. Essentially, you can create a bunch of different workspaces. So, for example, this is a project I'm working on, another small tool I've been cooking up on the side.
14:09

And this is the main product I'm working on, Yorby, which, you know, I'll plug right now. I'll give one plug per video as one of the founders and creators. Yorby is a social media marketing tool designed to help you find content inspiration to market your business on social media, as well as create that content way faster. The way we do that is with two primary features we want to shout out. Number one is the viral content database: a database of viral content that other businesses specifically have used to market themselves on social media, so you can get some ideas on what content to make for your own business. Then, let's say you find a piece of content you find interesting. You open up that piece of content in our content studio, and you can remix it to fit your brand, your niche, whatever you want, while still maintaining the same original format of the video. So, for example, in this conversation, you can see that I asked Yorby to recreate this video for a fictional dating app called Wingmates. It printed out an entire script specifically catered to the app I provided here, but it still maintains the same viral format and viral spirit of the original video right here. We're essentially trying to make the Cursor for marketing. Enough of the plug. Let's get back to the Antigravity review. And
you can see that I have two different instances of Yorby. I have this because they are basically Git worktrees of one another, so that I can work on multiple tasks within Yorby in complete isolation and the changes don't overlap with one another. From within each of these workspaces, you can kick off some type of agentic coding task, like I did right here, refactoring a transcription API. Then you chat with it all here, and you can basically manage every single one of your agents that's making changes in the code. When you want to focus in and see the actual code changes yourself, you have two options. Number one, you can click on review changes and just review the changes over here in this little lightweight code-reviewing tool. Or, if you press Command+E, you get transported into the full-blown editor experience of that repository, of that specific workspace. I really love this
UI. It's actually very similar to how Codex does their whole UI as well: with the whole workspace view, you have very siloed projects here and there, and from there you can open up one of these threads, one of these workstreams, directly within your editor of choice, like Cursor right here. But what's nice about Antigravity is the fact that they have this built directly into the editor. And that is the best thing about Antigravity: it's, I think, the best parallel-workstream experience out there. As you're working on different projects, you're like, "Okay, this is making some changes. Let me view the changes." It opens up a completely separate window. Then let's say I go to Yorby schema 2 and press Command+E again: it opens up its own completely separate window. Really, really great for that workflow specifically, just quickly switching through all the different projects and making changes there. And because Antigravity is owned by Google, Google definitely treats Gemini as the first-class experience, the first-class model, the first-class citizen directly within Antigravity. They have really cool things as well, where Antigravity automatically connects to Google Chrome: it can open up Chrome on its own, browse around, and interact with the page. And once again, because it's tied to Google, you can also generate images with Nano Banana and then use those images as background images or icons in your app. So it has that really tight Google integration, which is great. But the question then becomes: is that enough to actually convince you to use Antigravity? In my personal experience, when you're choosing any AI coding solution out there, the biggest benefit of choosing one over another is getting the most usage of that provider's models. For example, since Antigravity is owned by Google, what you are really paying for is the maximum amount of usage of the Gemini family of models, Gemini 3 Flash or Gemini 3 Pro.
17:35

Yes, you still get access to Claude Sonnet, Opus, GPT, any model within their model picker, but in my opinion, what you're really paying for is maximum usage of the Gemini models, of Google's offering. Similarly, when you're buying Codex, since you don't have a model picker, you're straight up purchasing it for the maximum amount of GPT usage compared to any other provider. And with Claude Code as well: you are purchasing Claude Code for maximum usage of the Claude models, Sonnet or Opus. And if I'm being honest, I have really tried to like Gemini for coding. I actually think Gemini is really good for UI design and creativity on that front, but man, for coding, I just don't think Gemini is that good, especially when GPT 5.3 and Opus 4.6 just came out. Gemini just pales in comparison. I really tried to use Gemini exclusively within Antigravity to make code changes, and it just felt like it would get so lost. It would take so much longer to complete certain tasks, and it was really frustrating. But whenever I make any UI changes, like front-end changes to make things look better, more intuitive, more natural, I still default to using Gemini, because I think that's where Gemini shines: UI design. But in terms of more complex coding work, I have had a bit of a subpar experience compared to GPT 5.3 as well as Opus 4.6. But I want to make one
more point about the multi-workspace tooling here. Like I said, I really love Antigravity's implementation of this multi-workspace workflow for having multiple parallel AI agents going at once. Now, does Cursor have that? Kind of yes, but kind of no. Within Cursor, you can actually open up multiple workspaces as well. If you go over to File, you can do Add Folder to Workspace and then import whatever other projects you want. So this is the Yorby schema 2 project; I could import Yorby schema 1 as well. Let's create the same setup as Antigravity.
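For context, a multi-root workspace in a VS Code-based editor is just a small JSON file on disk; opening it loads every listed folder side by side. A hypothetical sketch using the three folders from this demo (the paths are placeholders):

```shell
#!/bin/sh
# A multi-root workspace is a small JSON file; VS Code-based editors
# (Cursor, Antigravity) open every folder listed in it side by side.
# Folder names mirror the demo and are placeholders.
tmp=$(mktemp -d)
cat > "$tmp/demo.code-workspace" <<'EOF'
{
  "folders": [
    { "path": "yorby-schema-1" },
    { "path": "yorby-schema-2" },
    { "path": "monty" }
  ]
}
EOF
cat "$tmp/demo.code-workspace"
```

The editor treats the listed folders as one workspace, which is exactly why an agent pointed at that workspace can see all of them at once.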

19:18

Let's import the Monty directory as well, right? Now that I have all three of these separate workspaces directly within Cursor, technically, yes, I could go and make changes using Cursor's code editor, and with all three workspaces loaded, I could theoretically make code changes to any of the workspaces and directories I imported. But here's the thing: when I go to make code changes from the chat panel in Cursor across these multiple workspaces, the agent is not siloed to just modifying Monty files, or just Yorby schema 1 or Yorby schema 2 files. There are no guardrails in place to prevent it from making code changes to the other open workspaces as well. So even if I tell it to make changes only in the Monty directory, it can theoretically still access Yorby schema 1 and Yorby schema 2. Now, the chances of that happening are probably pretty slim if you're working on completely separate projects like Monty versus Yorby. But if I tell the editor to make changes across my Yorby project, because Yorby is in both the schema 1 and schema 2 directories, it could overwrite changes across both projects, and that is a little scary to think about.
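That risk is exactly what Git worktrees avoid: each parallel task gets its own full checkout of the same repository on its own branch, so an agent editing one checkout physically cannot touch files in the other. A minimal sketch (the repo and branch names are placeholders):

```shell
#!/bin/sh
# One repo, two isolated checkouts: one worktree per parallel agent/task.
set -e
base=$(mktemp -d)                      # scratch area for the demo
git init -q -b main "$base/yorby"
cd "$base/yorby"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Each worktree is a separate working directory on its own branch.
git worktree add -q -b task-a "$base/yorby-schema-1"
git worktree add -q -b task-b "$base/yorby-schema-2"
git worktree list                      # main checkout plus the two tasks
```

When a task is done, merge its branch back and remove the worktree with `git worktree remove`; the branches share one object store, so this costs far less disk than full clones.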

20:25

Obviously, not the worst thing in the

20:27

world. All the changes are undoable. So,

20:29

yes, there is still multi-workspace

20:31

support directly within cursor, but

20:33

anti-gravity does it better. So, that is

20:35

anti-gravity kind of in a nutshell.

20:36

really great UI in my opinion, but what

20:39

you're paying for once again is the best

20:41

maximum usage of Gemini model. And if

20:43

I'm being honest, if I had to choose

20:45

between anti-gravity versus cursor or

20:47

like a composer 1 model directly from

20:49

cursor, I honestly might just choose a

20:50

composer 1 model over the Gemini models

20:52

because then if I actually wanted to do

20:54

any UI design work, I could still

20:56

just use Gemini 3 Pro here. Less usage

20:58

compared to if I used it within

20:59

anti-gravity, but I still get access to

21:00

it with cursor. All right, now let's

21:02

talk about Codex. Now, Codex just came

21:05

out a couple of days ago, and it has

21:06

been getting glazed like crazy. People

21:09

are like, "Holy [ __ ], this is the future

21:10

of coding and software engineering." And I

21:12

don't disagree with it, but I also don't

21:15

want to circle jerk it too hard.

21:16

Like, it's good, but let me

21:18

explain. So, obviously, the app is pretty

21:21

[ __ ] sexy. Like I said earlier,

21:23

comparing it to anti-gravity, you get

21:24

all the different workspaces. You can

21:26

kick off different threads within here.

21:28

And what's nice is, from here, you

21:30

can actually do it within a local

21:31

project. So make code changes to your

21:33

local repository or you can do it in a

21:35

completely separate work tree and have its

21:36

own isolated code changes dedicated in

21:38

that specific work tree or you can also

21:40

just push it up to the Codex cloud

21:42

within OpenAI and let all the code

21:44

changes happen there. They also have

21:45

some really nice UI things as well such

21:47

as, if you go over here, we can look at

21:49

the changes directly from here. Oh,

21:52

if this was an actual real code change

21:53

you can quickly commit it, push it, and

21:55

create a PR. It has support for opening that

21:58

code change that you have directly on

22:00

any type of editor that you have. Really

22:02

great support there as well. And the UI

22:04

is just really gorgeous. It's so pretty.

22:07

Honestly, shout out to the OpenAI team.

22:09

You guys really cooked with the UI/UX

22:11

component of Codex. Obviously, there's

22:13

a lot of really sweaty stuff that you

22:14

can do as well, like installing new

22:16

skills. They also have support for MCPs

22:18

as well, and that's all nice. But what I

22:20

will say, like I said before in all the

22:22

previous code reviews that I have done

22:24

for all the editors out there, the

22:25

number one thing you are paying for with

22:27

Codex is the best and maximum usage of

22:30

GPT 5.2, GPT 5.3 for that particular

22:34

price point. Yes, the UI is good. Don't

22:37

get me wrong, I do agree with that take,

22:38

but once again, it's actually not all

22:41

that dissimilar from anti-gravity's

22:43

agent manager. It's really the same. You

22:45

have these separate workspaces where you

22:46

can make siloed changes directly within

22:49

here. Then when you want, you can then

22:50

go deeper in to look at the changes

22:52

manually in a full-blown text editor.

22:54

And Codex pretty much does the same

22:56

thing. You can make the changes here

22:58

locally. Then you can open up the code

23:00

editor to view the code changes in its

23:02

own dedicated editor instance. Very

23:03

similar workflow to anti-gravity. Now

23:06

yes, Codex does have a couple of

23:07

benefits as well such as the fact that

23:09

you can make the changes directly

23:10

within the cloud and not be isolated

23:13

only to your local environment. Like

23:14

anti-gravity for example, you can only

23:16

make changes to your local environment.

23:17

And I really think the biggest benefit,

23:19

the biggest upside from Codex is the

23:22

fact that the GPT models are really,

23:24

really [ __ ] good. I was kind of just

23:26

drinking the Claude Anthropic Kool-Aid

23:28

for the past year and a half or so. I

23:30

was like, GPT is trash, bro. Sucks at

23:32

coding. And I do think OpenAI coding

23:34

models like legitimately were trash for

23:35

a long time, but nowadays they're quite

23:37

good. I've been really impressed with

23:39

5.3 Codex. And if anything, I might

23:41

even argue that it's better than Claude's

23:42

models. Like I've gotten really

23:44

good results, better results from using

23:46

GPT 5.3 Codex over Opus 4.6 as of

23:49

late. Obviously, model performance

23:51

changes all the time. New models come

23:52

out all the time, but at least at the

23:53

time of recording this video, 5.3 Codex

23:55

has really impressed me with the work

23:57

that it can do. And I'm going to talk a

23:58

little bit more about this at the end of

24:00

the video for some recommendations that

24:01

I have in my opinion, like non-expert

24:03

opinion, just random dude on the

24:04

internet's opinion for what the best

24:06

coding setup is. But for a little bit of

24:08

a preview of that, I think the best

24:10

solution you're going to find is

24:11

picking one of the coding

24:14

editors, whether that be zed or cursor

24:16

or anti-gravity. Honestly, I'd probably

24:17

pick anti-gravity or cursor, not so much

24:19

zed, and then pairing that with a model

24:21

provider solution like Codex or

24:24

Claude Code because once again, the UI is

24:26

good, don't get me wrong. Really well

24:27

built, but the real game changer, the

24:29

real thing that you're paying for here

24:30

is just the most amount of GPT 5.3 usage

24:33

out of any other piece of software out

24:35

there for that particular price point.

24:36

And now kind of on that note, let's

24:38

switch over to Claude Code. All right,

24:39

so now let's talk about Claude Code. Now

24:41

Claude Code, I feel like it's kind of in

24:43

the same realm as cursor as being one of

24:45

the more de facto industry standard

24:47

tools out there. It's super super

24:48

popular, but Claude Code is essentially

24:50

very similar to OpenAI Codex, but

24:52

instead of OpenAI Codex models, you

24:53

get access to Anthropic Claude models

24:55

with Sonnet, Opus, and Haiku. I

24:57

primarily use Claude Code directly within

24:59

my code editor, whether that be just in

25:01

the terminal using the Claude Code

25:03

CLI tool or using the VS code extension

25:06

within anti-gravity or cursor. And once

25:08

again with Claude Code, what you're

25:09

paying for is getting the maximum amount

25:11

of Anthropic model usage: Sonnet or Opus,

25:14

right now Opus 4.6, the brand new model

25:16

that came out. So you're really paying

25:17

for the most Anthropic model usage out

25:19

of any other tool that you can get for

25:20

that particular price point. And Claude

25:22

Code has really evolved a lot as well

25:24

because now, within the Claude Code

25:25

extension, you can actually use the Claude

25:28

Code Chrome extension to open up a tab.

25:30

So you can pair Claude Code's

25:32

Chrome extension with the Claude

25:34

Code extension within VS Code to let the

25:37

Claude Code instance here have full

25:39

access to the actual web app that you're

25:42

working on right here. So you can see

25:43

right here this is indicating that

25:44

Claude actually has control over this

25:46

entire page. So this is really useful

25:48

when I'm trying to debug a certain

25:49

scenario or run test cases. You can tell

25:51

Claude Code, blah blah blah, make some

25:53

changes, test these changes with this

25:54

specific user flow on this browser

25:56

instance and see what the flow is like

25:58

and see any UX or UI improvements that

26:00

you can find. That's kind of one workflow

26:01

that I particularly have been using it

26:03

on, so this has been really great. So

26:04

Claude Code, once again, is kind of the

26:06

king of AI agent coding tools out there

26:09

using it in the CLI or using it as a VS

26:11

Code extension. But I think another

26:12

feature that a lot of people kind of

26:13

forget and don't talk about is the fact

26:15

that you can run Claude Code in the cloud

26:17

as well, directly from within the Claude

26:18

app. You know, the default is set to

26:20

make these changes in a cloud

26:21

environment. I particularly use this

26:23

when I'm just doing kind of smaller

26:24

changes, like less technically complex

26:26

changes, like for example, just changing

26:28

a constant variable from URL 1 to URL 2.

26:31

Just adding really simple changes. I

26:33

trust the cloud environment and then I

26:35

can just review the code changes within

26:36

GitHub. For some reason, I rarely

26:39

use Claude Code from the Claude app itself

26:41

to make any local changes. And the

26:42

reason for that is probably because I

26:44

just prefer doing that within Claude Code

26:46

here instead. Oh, I guess one thing I

26:48

didn't mention about the Codex app is

26:50

the fact that with Codex, you can also

26:51

download a Codex CLI tool similar to

26:53

how there's a Claude Code CLI tool and

26:55

you can just run Codex directly

26:57

within your text editor of choice.
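For reference, both CLIs are distributed through npm; the package names below are what I believe the official ones to be, so double-check each tool's docs before installing:

```shell
# Assumed npm package names -- verify against the official docs
npm install -g @openai/codex             # OpenAI's Codex CLI
npm install -g @anthropic-ai/claude-code # Anthropic's Claude Code CLI

# Then, from a project root in any editor's integrated terminal:
#   codex    -> launches the Codex agent
#   claude   -> launches the Claude Code agent
```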

26:59

But going back to Claude, I do think

27:00

Claude Code is still very very good. I

27:03

have actually been more impressed, like

27:04

I mentioned earlier, more impressed with

27:05

the GPT 5.3 Codex model than I have with

27:08

Opus 4.6. I do find 5.3 Codex to be a

27:11

slower model compared to Opus 4.6, but

27:13

it just performs a lot better. It's like

27:15

way more detail oriented. It applies

27:16

more scrutiny to the changes that it

27:18

makes compared to Opus 4.6. And Claude

27:20

Code, similar to Codex, also has work

27:23

tree support as well. So you can make

27:24

dedicated changes in its own git work

27:26

tree so that it isolates all the changes

27:28

into that work tree and it doesn't muddy

27:30

any of the code changes that you're

27:31

making throughout your application. Once

27:33

again, Codex and Claude Code are

27:34

basically the same tool. The only

27:36

difference is the UI of the application,

27:38

which I will personally say I think

27:39

Codex does have a better UI as well as

27:41

the strength of the model, which once

27:42

again right now I'm going to say Codex

27:44

is a better model for me and my workflow

27:46

compared to Opus 4.6. It varies on

27:48

everybody's experience. So you should

27:49

definitely test out both models and see

27:51

which one you like. For me right now,

27:52

I'm more swayed by Codex's UI as well

27:54

as their actual models. So that

27:56

is a quick overview of like all the code

27:58

models and coding tools out there. Now,

28:00

let's step back and actually do a deeper

28:02

dive into what my recommendations are

28:03

for most people on what tools to

28:05

use and purchase. Okay, now that I have

28:07

kind of gone over all of the AI coding

28:10

tools, let's talk about some

28:11

recommendations and what tools are best

28:13

for certain types of people. And once

28:15

again, I want to be very clear. I'm

28:16

literally not an expert. I'm a random

28:18

dude on the internet that codes a lot

28:20

and makes videos on the internet. Don't

28:21

trust my word as like the law of the

28:23

land. And this is just like a general

28:24

recommendation from like a

28:26

friend or something. So, don't take it

28:27

too seriously. I want to disclose that I

28:28

do not pay for cursor myself. I'm a part

28:30

of like a creator program where they

28:32

actually cover the cost of cursor for

28:34

me. I've also been in similar programs

28:36

for Claude and Anthropic. And I've also

28:38

been in similar programs for Gemini as

28:39

well. But I actually did pay for

28:41

anti-gravity usage straight up out of

28:43

pocket. And I'm no longer in that

28:44

program by Anthropic. So I also pay for

28:46

Claude Code out of pocket, but I still

28:47

am in that creator free-usage program

28:50

for cursor right now just because I have

28:52

a following on the internet. I've never

28:53

worked with Zed. I've never worked with

28:55

OpenAI. So, I just want to make that

28:56

very clear right now. Now that you have

28:58

that context, that's not going to change

28:59

any of my recommendations. Though, I'm

29:00

going to be brutally honest to say like

29:02

if cursor is trash, I will say cursor is

29:03

trash. If I think Claude is trash, I'll

29:05

say it's trash. So, just to be very

29:06

clear, that's what I'm saying. Okay, so

29:08

now let's get into some actual

29:09

recommendations. Honestly, I would say

29:10

that if you're someone that doesn't use

29:12

that much AI and you don't care that

29:14

much about like the bleeding edge of AI

29:16

or you're someone that just loves being

29:18

in the terminal and like you're a

29:19

terminal junkie, honestly, Zed might be

29:21

a good option for you. Like Zed is

29:23

really fast and I love that about Zed.

29:25

so much more performant than VS Code

29:27

forks. But I just personally couldn't

29:28

use it just cuz I think the AI first

29:30

class citizen support just wasn't quite

29:31

there. But if you're somebody that

29:32

doesn't care about the bleeding edge and

29:33

you just want to have a really good

29:35

coding experience and you love living in

29:36

the terminal and really comfortable with

29:38

CLI tools, I think Zed's actually a

29:40

really good option. I'm not the right

29:41

archetype for it, but I'm sure there are

29:42

plenty of you that exist out there. All

29:44

right, so scenario number one. If you

29:45

are the budget conscious person where

29:47

you can really only afford one

29:49

subscription at the lowest cheap tier,

29:50

which I think right now is hovering

29:52

around $20 a month, your choice is going

29:53

to be between anti-gravity or cursor.

29:55

And I'm going to be very upfront and say

29:57

there is such little difference between

29:59

the two of them. I think cursor has the

30:01

advantage in the fact that it's treated

30:02

as a first class citizen for a lot of AI

30:04

tooling out there like MCPs and all that

30:07

stuff. So you're always going to find

30:08

documentation on how to add certain tool

30:10

into cursor. Whereas with anti-gravity,

30:11

it's not going to be treated with nearly

30:13

as much first class citizen status as

30:15

cursor. But at the time of filming this

30:17

video right now, I do think the agent

30:19

manager panel of anti-gravity is really,

30:21

really good and something I desperately

30:23

wish Cursor had. I will say though,

30:24

because cursor moves really fast and

30:26

they ship bleeding edge stuff all the

30:27

time, I bet you by the end of this

30:29

month, at the end of February, Cursor is

30:31

going to come out with something that

30:32

has this new agent manager UI spin to

30:35

it. Because not only does anti-gravity

30:37

have it, but also Codeex has it. and

30:38

people are responding really positively

30:40

towards it. But with all that being

30:42

said, I would say I would probably pay

30:45

for cursor over anti-gravity if I

30:48

could only choose one just because of

30:50

the first class citizen support out

30:51

there. But if you don't care about that

30:53

and and you really love using Gemini as

30:55

a coding agent, then I would potentially

30:57

argue that anti-gravity is going to be a

30:58

better bang for your buck because you'll

30:59

probably get a lot more Gemini usage

31:01

than you would even get Composer 1

31:03

usage directly within Cursor. And then

31:04

you can also pick and choose from the

31:05

different models to use as well. Now in

31:07

terms of what my overall recommendation

31:08

is for the best setup, I would say just

31:10

you need to purchase two subscriptions.

31:12

One subscription to an AI code editor

31:14

like a cursor or an anti-gravity and

31:15

then one subscription to a model

31:17

provider like Codex or Claude Code,

31:19

directly from OpenAI or

31:21

Anthropic, so that you can get the

31:22

maximum usage of either a GPT model or a

31:25

Claude model from Anthropic. Like for

31:27

example, in my most recent video where I

31:28

talked about my AI coding workflow for

31:30

2026, I talked about how my go-to

31:32

workflow right now and for a long time

31:34

has been cursor with Claude Code. But

31:36

because things change so fast, like

31:38

literally in the span of 2 weeks of me

31:40

launching that video, Codex comes out

31:41

and it's [ __ ] good, way better than

31:43

Opus 4.6 in my personal usage. I would

31:46

probably actually switch my subscription

31:47

off of Claude and use Codex right now

31:50

instead. So, I would say never lock

31:52

yourself into any type of yearly plan.

31:54

Only subscribe to monthly because AI

31:56

changes so fast. You're always going to

31:57

be tweaking and changing your tools. So

31:59

basically once again pick one code

32:01

editor cursor or anti-gravity or even

32:03

zed if you want to and then just pick

32:04

one of either Codex or Claude. I think

32:07

that'll give you plenty of usage

32:08

across all the different models as well

32:10

as one like flagship model that you get

32:12

the most amount of usage for. And at the

32:14

time of filming this video, which is in

32:15

February of 2026, I would personally

32:17

probably if I could only pick one, I

32:19

would do probably Cursor and Codeex as

32:22

my go-to model provider and editor

32:24

solution. But once again, things change

32:25

and I'm sure Anthropic is going to come

32:27

out with a crazy banger model that's

32:28

going to be better than OpenAI's codeex

32:30

models very, very soon as well. All

32:31

right, so that is it for this video.

32:32

This was a doozy. This was a fat one. I

32:34

hope you enjoyed the video and if you

32:36

did, make sure to thumbs up the video,

32:37

like it, share with your friends, and if

32:39

you want to see more of my content, then

32:40

make sure to subscribe to this channel

32:41

as well. Let me know in the comments

32:43

down below if you think I'm right, if

32:44

you think I'm wrong, you agree, you

32:45

disagree with me. Let me know what your

32:47

coding setups are as well. I'm pretty

32:48

curious to see what other people are

32:49

using. But that is all I got for today.

32:50

Thanks so much for watching. I'll see

32:52

you in the next one. Peace.

Interactive Summary

The video provides an in-depth comparison of the top AI code editors and agentic coding solutions available in 2026. The speaker reviews Zed for its high-performance Rust base, Cursor for its industry-standard feature set and VS Code familiarity, Google's Anti-gravity for its superior parallel workspace management, and OpenAI's Codex for its exceptional UI and GPT model performance. The review concludes with a recommendation to pair a flexible code editor with a direct model provider subscription to maximize efficiency and stay current with rapidly evolving AI capabilities.
