
The Sunday Download | Week 3 | Feb 15, 2026


Transcript


0:00

everyone and welcome back to week three of the Sunday

0:02

download.

0:03

This past week has been incredible in the world of AI.

0:06

There have been a bunch of model drops.

0:09

There have been tons of things that are new in the world of

0:12

AI, and I'm going to be covering it in this video.

0:15

And I'm also going to be talking about what you guys can

0:17

expect in this upcoming week for the BridgeMind community.

0:20

We're going to be having a very, very event filled week,

0:23

but I want to start out by covering what has occurred in

0:26

the past week.

0:27

We're going to cover the Vibathon.

0:28

We're going to cover some of these new models.

0:31

But with that being said,

0:31

I do have a like goal of 100 likes on this video.

0:34

So if you guys haven't already liked,

0:35

subscribed, or joined the BridgeMind Discord community,

0:38

which is the fastest growing vibe coding community on the

0:41

internet, make sure you do so.

0:42

But the first thing I want to dive into is the newly

0:45

released GPT-5.3 Codex Spark.

0:48

So if you guys missed this,

0:49

this is a new model that released three days ago on

0:51

February 12th from OpenAI.

0:53

And this is their first model that is, you know,

0:56

their first model that is in partnership with Cerebras.

0:59

So they are able to deliver more than 1000 tokens per

1:03

second.

1:04

And they say,

1:04

while remaining highly capable for real world tasks.

1:08

Okay.

1:08

So this is very, very interesting.

1:11

As you guys know,

1:12

they did partner with Cerebras and we were expecting a

1:15

model to come out.

1:16

One thing I do want to note though,

1:17

is that this is not available in the API yet.

1:19

So you can only use this right now through Codex.

1:23

They did the same thing with 5.3 Codex,

1:24

which is very interesting.

1:25

I don't know why they haven't put it on OpenRouter yet

1:27

or given it API access. That's just interesting.

1:29

It's just weird.

1:30

But one thing I do want to cover is that it only has a 128,000-token

1:33

context window.

1:34

It's also text-only.

1:36

So it's not super capable.

1:38

When you look at the benchmarks,

1:39

they basically are making the point that it doesn't lose a

1:42

ton of intelligence.

1:43

But I'm going to show you guys something from

1:44

BridgeBench in a second that basically says otherwise.

1:46

But here's how it does on SWE-bench Pro.

1:48

You can see that, you know, Spark at extra high is at 51.5%.

1:52

Normal 5.3 Codex at extra high is 56%.

1:55

So you're talking about roughly a 4.5-point difference,

1:57

which is actually pretty big.

1:59

But in the grand scheme of things,

2:00

for a thousand tokens speed up, I mean, that's insane.

2:03

So it could be worth it, right?

2:04

Here's how it does on the Terminal-Bench 2.0 benchmark.

2:07

So they're basically making the point, hey,

2:08

we had this massive speed increase without losing a ton of

2:11

intelligence, which is the case.

2:13

But one thing I want to show you guys now,

2:14

and this also is one thing that is new this week with

2:17

BridgeMind.

2:17

So this past week,

2:18

during our live streams of vibe coding an app until I make

2:22

a million dollars, we're currently on day 134.

2:26

And you guys can see here that I created a benchmark, very,

2:30

very pinpointed at vibe coding.

2:32

So this is BridgeMind's official vibe coding benchmark.

2:35

And we were able to do a leaderboard.

2:37

And then we also came up with this creative HTML test.

2:40

And this is a benchmark where we give different models one

2:43

shot to be able to create a singular HTML file with styling

2:48

so that we can kind of get a visual representation of how

2:50

good the model is.
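
To make that setup concrete, here's a minimal sketch of what a one-shot harness like this could look like. The prompt wording, the model slugs, and the call_model() helper are illustrative assumptions, not the actual BridgeBench code: each model gets exactly one prompt, the returned HTML is saved to disk, and the files are judged by eye in a browser.

```python
# Minimal sketch of a one-shot "creative HTML" benchmark run (illustrative only).
from pathlib import Path

PROMPTS = {
    "lava_lamp": "Create a single self-contained HTML file (inline CSS/JS) that animates a lava lamp.",
    "neon_sign": "Create a single self-contained HTML file that shows a neon 'OPEN' sign flickering on.",
}

# Illustrative model identifiers; swap in whatever slugs your provider uses.
MODELS = ["gpt-5.3-codex-spark", "claude-opus-4.6", "glm-5", "minimax-m2.5"]

def call_model(model: str, prompt: str) -> str:
    """Placeholder: send a single prompt to `model` and return the raw completion text."""
    raise NotImplementedError("wire this up to OpenRouter or each vendor's SDK")

def run_one_shot(out_dir: str = "creative_html") -> None:
    for model in MODELS:
        for task, prompt in PROMPTS.items():
            html = call_model(model, prompt)              # exactly one attempt, no retries
            dest = Path(out_dir) / model / f"{task}.html"
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(html, encoding="utf-8")       # open these in a browser to eyeball them

if __name__ == "__main__":
    run_one_shot()
```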

2:51

And I'm using this as an example for 5.3 Codex Spark,

2:54

because let's just go to this lava lamp, for example,

2:57

right?

2:57

Every other model created a decent lava lamp except Minimax.

3:00

But look at what Codex did.

3:01

Codex Spark doesn't create the lava lamp, right?

3:04

And we can go to another one. What is it?

3:06

Let's go to the neon sign flickering on. That one's a

3:08

little bit better.

3:09

Another one to look at is the aquarium fish tank.

3:12

That one is okay.

3:14

Let's look at the solar system.

3:16

That one did a pretty decent job.

3:18

But the hot air balloon ride, you can see that this one,

3:21

there's no hot air balloon even in the HTML file,

3:25

right?

3:25

And if I go over to my X here, you're going to see this,

3:28

you know, the comparison here as well.

3:30

So here's Codex Spark, here's Opus 4.6, here's GLM 5,

3:33

and here's Minimax M2.5.

3:34

And you guys can see it's like, okay, 5.3 Codex Spark,

3:38

they had that speed up, which is great.

3:40

But know that this model is definitely going to be a little

3:42

bit more unreliable and more prone to hallucinations.

3:45

And in a lot of cases,

3:46

that's what we're seeing from models that are that fast,

3:49

right?

3:49

1000 tokens per second is just absolutely off the charts,

3:52

right?

3:52

But you know,

3:53

this is definitely something to take note of is that, hey,

3:55

58.4%.

3:56

I'm pretty sure that's better than GPT-5.

3:59

It's just that sometimes with the speed up,

4:01

even though it'll perform well on benchmarks in practice,

4:04

something about it speeding up to that point,

4:07

it just causes a lot of hallucinations.

4:08

So we'll see how this develops over time,

4:10

but definitely something to take note of is GPT-5.3 Codex

4:13

Spark.

4:14

You can use this in Codex now,

4:16

and I've tried it out myself.

4:17

It is very fast.

4:18

So take note of that.

4:20

Another model release that we got was GLM 5.

4:22

So this is a pretty big release.

4:25

So GLM 5 scored 77.8% on SWE-bench.

4:29

It's doing very well in LM Arena.

4:32

Let's actually go over to LM Arena.

4:34

And one thing I will say about this model is that it isn't

4:39

super reliable.

4:41

It will get more reliable.

4:42

It is sitting at number six.

4:44

It outperformed Gemini 3 Pro and Kimi K2.5.

4:47

So this is definitely a model you want to check out.

4:49

It's very cost affordable.

4:50

I think it's $1 per million on the input and $3.20 per

4:54

million on the output.

4:55

So very cost affordable.

4:56

A lot of people that are more like budget vibe coders

4:58

definitely want to take a look at this.

5:01

People are using it in opencode and are having very,

5:03

very good results.

5:05

Now, if we go to BridgeBench,

5:06

because it is a little bit more unreliable with this

5:09

launch, it actually scored very poorly on BridgeBench.

5:12

I'm going to put it through BridgeBench again,

5:14

once they get this model a little bit more stable, right?

5:17

It just released.

5:18

So we'll give them the benefit of the doubt,

5:19

but you can see here that it actually scored last on

5:22

BridgeBench at 41.5%,

5:23

and then it only completed 57.7% of the tasks, right?

5:27

So it didn't do great there.

5:28

Another model that released was Minimax M2.5.

5:32

So this model was a benchmark beast.

5:35

Look at this thing.

5:36

80.2% on SWE-bench Verified, 55.4% on SWE-bench Pro,

5:41

and it performed very, very well on BridgeBench.

5:45

You can see 59.7%, beating out GPT-5.2 Codex and coming in just

5:50

0.4% under Claude Opus 4.6.

5:53

Now,

5:53

one thing I want to touch on with this model is it's very,

5:56

very cheap.

5:57

And when it's actually put to the test inside of this creative

6:01

HTML, look at this, it actually didn't do that great.

6:05

Look at this.

6:05

That's the lava lamp.

6:06

You can see some of these other models.

6:07

Let's go over to the neon sign flickering on this one.

6:09

Another good example.

6:10

Look at this.

6:11

Look at Minimax.

6:12

It spelled 'open' wrong.

6:13

It spelled it

6:15

O-B-E-N and spaced it all out.

6:16

So very interesting that, you know, even though it's

6:19

benchmaxed, in practice with the creative HTML,

6:23

it did not do a very great job.

6:25

So let me know what you guys think in the comment section

6:27

down below.

6:28

The last model that got released,

6:30

and it's not really a new model for vibe coding,

6:32

but definitely something that you guys want to be like in

6:36

the know on is this new Gemini 3 Deep Think model.

6:40

Okay.

6:40

So let me go back over to the tweet.

6:41

Just as a quick cover,

6:42

I did subscribe to basically the Google AI Ultra plan.

6:48

And the reason I did that is because this model

6:50

released, you can't use it via the API.

6:53

You can only use this model inside of like the actual

6:56

Gemini app, right?

6:58

Like, almost like ChatGPT, but for Gemini, right?

7:00

You can't use this inside of Antigravity.

7:02

You can't use this via the API.

7:04

And one thing to note is that it scores absolutely off the

7:07

charts on Codeforces: 3,455,

7:10

while Claude Opus 4.6 only got a 2,352 score.

7:15

So this model is insane.

7:17

Now,

7:17

one thing to note is that if we go over to BridgeBench,

7:20

and this is the part that's just crazy.

7:23

If you look at this open sign,

7:24

this is what it created for our HTML open test, right?

7:28

Look at how unbelievable this is.

7:29

By far the highest quality output that we got was

7:35

this open sign.

7:36

You know, we can go over to another one.

7:38

Let's just go over to Claude Opus 4.6, for example,

7:40

and you guys can be the judge of this, but you know,

7:42

that one's pretty good.

7:43

But like, look at the E, right?

7:44

It's like, it looks a little bit off, right?

7:48

But Gemini 3 Deep Think just absolutely crushed it on this.

7:52

So it did take 20 minutes to create this HTML file while

7:56

Opus 4.6 produced it in like a couple seconds.

7:58

So that's one thing to note is that even though this,

8:01

this model is absolutely off the charts on benchmarks,

8:04

well, hey,

8:04

one of the reasons that it is is because you give it a

8:06

prompt and it thinks for like 20 to 40 minutes.

8:09

So definitely something to know about is that Google is

8:11

experimenting here with this model.

8:14

It's again, it's not in the API.

8:16

So don't go buy the Google AI Ultra plan

8:18

if you think that you're going to get this in Antigravity;

8:19

it's not available in Antigravity.

8:21

It's not available via the API.

8:23

So you can't really vibe code with it, right?

8:25

But it's definitely interesting to see a model that

8:28

performs this well on code forces.

8:31

The next thing I want to cover is the BridgeMind Vibathon.

8:35

So the Vibathon is now closed.

8:37

Submissions are closed.

8:38

You can see here,

8:39

but we got 67 submissions for the BridgeMind Vibathon.

8:44

This is absolutely incredible.

8:46

A couple,

8:46

you can actually check it out here at BridgeMind Vibathon.

8:48

Just click join Vibathon.

8:50

And we are going to have a very event filled week for the

8:53

BridgeMind Vibathon.

8:55

There are a ton of people that submitted.

8:57

Here's quad H.

8:58

This is going to be pretty cool to see what he's got.

9:03

But I'm really excited to see this from you guys.

9:06

We're going to have multiple events this week.

9:09

I'll be releasing that in the Discord here shortly about

9:11

what to expect this week.

9:13

But we are going to have a very event filled week for the

9:15

BridgeMind Vibathon.

9:17

We're going to be selecting and voting on winners this week.

9:20

And we're going to select who is going to get first place,

9:22

second place, third place.

9:23

Again, the respective prizes are $2,500, $1,500, and $1,000.

9:29

And the winners are going to be invited onto the BridgeMind

9:31

livestream to be able to demo their projects,

9:34

discuss the future of agentic development,

9:36

and share their stories.

9:37

So really excited to do that with you guys this week.

9:40

That's going to be a really cool thing.

9:42

Also, we do have a project sharing event on Tuesday,

9:44

but we're going to be also doing some other events on

9:47

stream this week with the BridgeMind Vibathon.

9:50

Now,

9:50

one thing I want to share with you guys is that basically

9:54

subscriptions to BridgeMind Pro are really taking off.

9:58

We actually already are over 100 paying Pro subscribers.

10:02

And I haven't even really tried to market this that hard,

10:05

right?

10:05

Like I haven't made any YouTube videos on it.

10:08

I haven't been putting out a bunch of tweets on it.

10:09

This is just happening organically.

10:11

So I appreciate all of you guys that have subscribed to

10:14

BridgeMind Pro.

10:15

Again,

10:16

right now it's 50% off because we have not yet gotten

10:20

BridgeVoice and BridgeSpace to stable releases,

10:22

on macOS, Windows, and Linux.

10:25

So, you know, right now,

10:26

if you guys want to take advantage,

10:27

you get 50% off your first three months.

10:30

But with that being said, this upcoming week,

10:32

something that you guys are going to see more of on

10:35

BridgeMind is that I have a couple good ideas for marketing

10:38

the BridgeMCP.

10:40

This is a task management MCP where AI agents are able to

10:43

collaborate with each other, right?

10:45

Because you're able to create tasks that have the

10:47

instructions and knowledge that are needed to be able to

10:50

pass it off to several different agents.

10:52

So for example, if I have a Claude bot that's working,

10:56

I can basically work from my Claude instance and then pass

10:59

it off to a Claude bot that will go do something for me and

11:02

then update the instructions, update the knowledge,

11:04

update the status.
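
To picture the kind of hand-off being described, here is a purely conceptual sketch of a task record that carries instructions, knowledge, and status from one agent to another. The field names and the hand_off() helper are hypothetical illustrations of the idea, not the real BridgeMCP schema or API.

```python
# Conceptual sketch of an agent-to-agent task hand-off (hypothetical, not BridgeMCP's API).
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    instructions: str                                   # what the next agent should do
    knowledge: list[str] = field(default_factory=list)  # context gathered so far
    status: str = "todo"                                 # e.g. todo -> in_progress -> done
    assignee: str | None = None                          # which agent currently owns it

def hand_off(task: Task, to_agent: str, new_knowledge: str | None = None) -> Task:
    """One agent passes the task to another, appending anything it learned along the way."""
    if new_knowledge:
        task.knowledge.append(new_knowledge)
    task.assignee = to_agent
    task.status = "in_progress"
    return task

# Example: a planning session creates a task, then hands it to a coding agent.
task = Task(title="Add billing page", instructions="Implement /billing using the existing layout.")
hand_off(task, to_agent="coding-agent", new_knowledge="Design tokens live in styles/tokens.css")
```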

11:05

And this you guys are going to be seeing more of.

11:07

I'm going to be really working hard on marketing and I'm

11:10

going to be marketing on X,

11:11

I'm going to be marketing on YouTube.

11:12

And I think that we're going to hopefully see a jump from

11:15

100 paying subscribers to 200 this week.

11:18

And like I said,

11:19

things are just really going well for BridgeMind right now.

11:23

So thank you to all of you guys that are supporting and

11:25

basically being beta testers for BridgeMind, right?

11:28

Because we don't have BridgeCode out yet.

11:30

We don't have BridgeSpace or BridgeVoice at stable

11:32

releases, but they are now out.

11:34

They are in production.

11:35

But again, if we go back to our goals,

11:37

our goal is by March 1st,

11:39

we want to get to stable releases of both BridgeSpace and

11:43

BridgeVoice.

11:44

So I'm really excited to be working on that more this week

11:46

in our series of vibe coding an app until I make a million

11:49

dollars.

11:50

So that's going to be great.

11:51

But that's really just everything that I want to cover.

11:54

This has been a crazy week in the world of AI.

11:56

All these new models are, you know,

11:58

we're kind of getting more of a feel for them, right?

12:01

Like it'll take a little bit of time to know, okay,

12:03

here's the place where you can use GLM 5 in your

12:05

workflow,

12:05

or Minimax is actually really good at this particular

12:08

thing.

12:08

But we just got a dump of models this past week and it's great to

12:13

be able to try them out.

12:14

It's great to have some more budget models.

12:16

I do think that this is going to push Opus and, you know,

12:21

the Codex models to be a little bit cheaper.

12:24

Because right now, hey,

12:26

if you're looking at OpenRouter and you're just doing a

12:29

quick cost comparison, right?

12:31

I mean,

12:31

these Chinese models are blowing Frontier American Labs out

12:36

of the park because it's like, I mean,

12:37

you're talking about 20 cents per million on the input and

12:40

a dollar per million on the output for Minimax M2.5.
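
Here's the back-of-the-envelope version of that comparison, using only the per-million-token rates quoted in this video (Minimax M2.5 and GLM 5); any other model is just another entry in the dict with its own rates.

```python
# Rough cost comparison from the per-million-token prices quoted in this video.
PRICES = {                      # (input $/1M tokens, output $/1M tokens)
    "minimax-m2.5": (0.20, 1.00),
    "glm-5": (1.00, 3.20),
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: a coding session that sends 2M input tokens and gets back 500k output tokens.
for model in PRICES:
    print(f"{model}: ${job_cost(model, 2_000_000, 500_000):.2f}")
# minimax-m2.5: $0.90
# glm-5: $3.60
```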

12:43

Now,

12:43

I will say this model is way worse than Claude Opus 4.6,

12:46

like by a long shot, right?

12:47

But, you know,

12:48

I hope that Anthropic can figure out

12:52

how to, you know,

12:53

hopefully they release a Sonnet model here soon that we get

12:55

that's, you know,

12:56

more cost affordable and faster and hopefully smarter too.

12:59

But, you know,

13:00

just good to see all the competition in the space because

13:02

at the end of the day,

13:03

that is better for us, the people using this technology:

13:07

the more competition,

13:08

the better because it's going to push Frontier Labs to push

13:11

out models sooner and push out better models,

13:13

make them faster because that's just better for the

13:15

consumer.

13:16

So that's going to wrap it up for this week.

13:18

Again,

13:19

this week is going to be full of events for the BridgeMind

13:21

Vibathon, over 67 entries,

13:23

which is crazy and really excited to be able to select

13:27

winners this week.

13:28

I'll release something on X as well as on Discord about what

13:30

the events are going to look like.

13:32

I haven't really decided yet,

13:33

but we'll talk about it probably on stream as well and

13:35

figure out, okay, what do you guys want to do?

13:37

But I'm super excited to be able to push BridgeMind

13:39

forward this week.

13:40

We're going to keep, again,

13:41

I'm going to target 200 paying subscribers this week and

13:44

let's just keep moving forward, guys.

13:46

It's going to be a great week.

13:47

And with that being said,

13:48

I will see you guys tomorrow on stream.

13:50

I'll see you guys in the future.

Summary

This video covers recent AI model releases and updates within the BridgeMind community. Key releases discussed include GPT-5.3 Codex Spark, GLM 5, Minimax M2.5, and Gemini 3 Deep Think. GPT-5.3 Codex Spark is noted for its speed but potential unreliability. GLM 5 shows promise but also has reliability issues. Minimax M2.5 is praised for its benchmark performance and affordability, though its practical performance on creative tasks is questioned. Gemini 3 Deep Think demonstrates exceptional performance, particularly in competitive coding, but is not available via the API. The video also highlights the BridgeMind Vibathon, with 67 submissions, and upcoming community events. Additionally, it touches on the growth of BridgeMind Pro subscriptions and future marketing plans for BridgeMCP, the task management MCP for AI agent collaboration. The presenter emphasizes the benefits of increased competition in the AI space for consumers.
