The Sunday Download | Week 3 | Feb 15, 2026
Welcome back, everyone, to week three of the Sunday Download.
This past week has been incredible in the world of AI.
There have been a bunch of model drops.
There have been tons of things that are new in the world of
AI, and I'm going to be covering it in this video.
And I'm also going to be talking about what you guys can expect in this upcoming week for the BridgeMind community.
We're going to be having a very, very event-filled week, but I want to start by covering what happened this past week.
We're going to cover the Vibathon.
We're going to cover some of these new models.
But with that being said,
I do have a light goal of 100 likes on this video.
So if you guys haven't already liked, subscribed, or joined the BridgeMind Discord community, which is the fastest-growing vibe coding community on the internet, make sure you do so.
But the first thing I want to dive into is the newly released GPT-5.3 Codex Spark.
So if you guys missed this,
this is a new model that released three days ago, on February 12th, from OpenAI. And this is their first model built in partnership with Cerebras. So they're able to deliver more than 1,000 tokens per second while, they say, remaining highly capable for real-world tasks.
Okay.
So this is very, very interesting.
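To put that claimed throughput in perspective, here's a quick back-of-the-envelope sketch. The 2,000-token response size and the 60 tokens-per-second baseline for a slower model are illustrative assumptions, not measured numbers; only the 1,000 tokens-per-second figure comes from the announcement.

```python
# Rough wall-clock time to stream a response at a given throughput.
# The response size and the slower baseline are assumptions for
# illustration; only the 1,000 tok/s claim comes from the announcement.

RESPONSE_TOKENS = 2_000  # assumed size of a typical code-edit response

def generation_seconds(tokens: int, tokens_per_sec: float) -> float:
    """Seconds to stream `tokens` at a steady throughput."""
    return tokens / tokens_per_sec

print(f"Codex Spark (claimed 1,000 tok/s): {generation_seconds(RESPONSE_TOKENS, 1_000):.0f}s")
print(f"Slower model (assumed 60 tok/s): {generation_seconds(RESPONSE_TOKENS, 60):.0f}s")
```

At that speed, even long agentic turns come back in a couple of seconds instead of half a minute, which is the trade-off being weighed against the benchmark drop discussed below.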
As you guys know, they did partner with Cerebras, and we were expecting a model to come out.
One thing I do want to note though,
is that this is not available in the API yet.
So you can only use this right now through Codex.
They did the same thing with 5.3 Codex,
which is very interesting.
I don't know why they haven't put it on OpenRouter yet or given it API access. It's just weird.
But one thing I do want to cover is that it only has a 128,000-token context window, and it's text-only. So it's not super capable.
When you look at the benchmarks,
they basically are making the point that it doesn't lose a
ton of intelligence.
But I'm going to show you guys something from the bridge
bench in a second that basically says otherwise.
But here's how it does on SWE-bench Pro. You can see that Spark at extra-high reasoning is at 51.5%, while normal 5.3 Codex at extra-high is 56%. So you're talking about roughly a five-point difference, which is actually pretty big.
But in the grand scheme of things,
for a thousand tokens speed up, I mean, that's insane.
So it could be worth it, right?
Here's how it does on the Terminal-Bench 2.0 benchmark.
So they're basically making the point, hey,
we had this massive speed increase without losing a ton of
intelligence, which is the case.
But there's one thing I want to show you guys now, and it's also something new this week with BridgeMind.
So this past week, during our live streams of vibe coding an app until I make a million dollars (we're currently on day 134), you guys can see here that I created a benchmark that's very, very pinpointed at vibe coding. This is BridgeMind's official vibe coding benchmark.
We were able to do a leaderboard, and then we also came up with this creative HTML test. This is a benchmark where we give different models one shot to create a single HTML file with styling, so we can get a visual representation of how good the model is.
And I'm using this as an example for 5.3 Codex Spark. Let's just go to this lava lamp, for example, right? Every other model created a decent lava lamp except Minimax. But look at what Codex did: Codex Spark doesn't create the lava lamp, right?
And we can go to another one. Let's go to the neon sign flickering on; that one's a little bit better.
Another one to look at is the aquarium fish tank.
That one is okay.
Let's look at the solar system.
That one did a pretty decent job.
But with the hot air balloon ride, you can see that there's no hot air balloon even in the HTML file, right?
And if I go over to my X here, you're going to see the comparison here as well. So here's Codex Spark, here's Opus 4.6, here's GLM 5, and here's Minimax M2.5.
And you guys can see it's like, okay, 5.3 Codex Spark,
they had that speed up, which is great.
But know that this model is definitely going to be a little
bit more unreliable and more prone to hallucinations.
And in a lot of cases,
that's what we're seeing from models that are that fast,
right?
1000 tokens per second is just absolutely off the charts,
right?
But you know, this is definitely something to take note of: 58.4%. I'm pretty sure that's better than GPT-5.
It's just that sometimes with the speed-up, even though it'll perform well on benchmarks, in practice something about running that fast just causes a lot of hallucinations.
So we'll see how this develops over time,
but definitely something to take note of is GPT-5.3 Codex
Spark.
You can use this in Codex now,
and I've tried it out myself.
It is very fast.
So take note of that.
Another model release that we got was GLM 5, and this is a pretty big release. GLM 5 scored 77.8% on SWE-bench, and it's doing very well in LM Arena. Let's actually go over to LM Arena.
And one thing I will say about this model is that it isn't super reliable yet; it will get more reliable. It's sitting at number six, and it outperformed Gemini 3 Pro and Kimi K2.5.
So this is definitely a model you want to check out, and it's very affordable: I think it's $1 per million tokens on the input and $3.20 per million on the output.
A lot of people that are more budget vibe coders definitely want to take a look at this. People are using it in opencode and are having very, very good results.
Now, if we go to the bridge bench: because it is a little bit more unreliable with this launch, it actually scored very poorly there. I'm going to put it through the bridge bench again once they get this model a little bit more stable, right? It just released.
So we'll give them the benefit of the doubt,
but you can see here that it actually scored last on the bridge bench at 41.5, and it only completed 57.7% of the tasks, right? So it didn't do great there.
Another model that released was Minimax M2.5.
So this model was a benchmark beast.
Look at this thing.
80.2% on SWE-bench Verified, 55.4% on SWE-bench Pro, and it performed very, very well on the bridge bench. You can see 59.7, beating out GPT-5.2 Codex and coming in just 0.4% under Claude Opus 4.6.
Now, one thing I want to touch on with this model is that it's very, very cheap. But when it's actually put to the test inside of this creative HTML test, look at this: it actually didn't do that great. That's the lava lamp.
You can see some of these other models. Let's go over to the neon sign flickering on; this is another good example. Look at Minimax: it spelled "open" wrong. It spelled it O-B-E-N, all spaced out. So it's very interesting that even though it's benchmaxed, in practice with the creative HTML it did not do a very great job.
So let me know what you guys think in the comment section
down below.
The last model that got released, and it's not really a new model for vibe coding but definitely something you guys want to be in the know on, is the new Gemini 3 Deep Think model.
Okay.
So let me go back over to the tweet.
Just as a quick cover, I did subscribe to the Google AI Ultra plan. The reason I did that is because when this model released, you couldn't use it via the API. You can only use this model inside of the actual Gemini app, right? Almost like ChatGPT, but for Gemini, right?
You can't use this inside of Antigravity, and you can't use it via the API.
And one thing to note is that it scores absolutely off the charts on Codeforces: 3,455, while Claude Opus 4.6 only got a 2,352 score.
So this model is insane.
Now, one thing to note: if we go over to the bridge bench, and this is the part that's just crazy, look at this open sign. This is what it created for our HTML open test, right? Look at how unbelievable this is. This is by far the highest quality output that we got, this open sign.
You know, we can go over to another one. Let's just go over to Claude Opus 4.6, for example, and you guys can be the judge of this. That one's pretty good, but look at the E, right? It looks a little bit off. Gemini 3 Deep Think just absolutely crushed it on this.
It did take 20 minutes to create this HTML file, while Opus 4.6 produced it in a couple of seconds. So that's one thing to note: even though this model is absolutely off the charts on benchmarks, one of the reasons it is is because you give it a prompt and it thinks for like 20 to 40 minutes.
So definitely something to know about is that Google is
experimenting here with this model.
Again, it's not in the API. So don't go buy the Google AI Ultra plan if you think you're going to get this in Antigravity; it's not available in Antigravity, and it's not available via the API. So you can't really vibe code with it, right?
But it's definitely interesting to see a model that performs this well on Codeforces.
The next thing I want to cover is the BridgeMind Vibathon.
So the Vibathon is now closed.
Submissions are closed, but you can see here that we got 67 submissions for the BridgeMind Vibathon. This is absolutely incredible. You can actually check it out at BridgeMind Vibathon; just click Join Vibathon.
And we are going to have a very event filled week for the
BridgeMind Vibathon.
There are a ton of people that submitted.
Here's quad H.
This is going to be pretty cool to see what he's got.
But I'm really excited to see this from you guys. We're going to have multiple events this week; I'll be releasing details in the Discord here shortly about what to expect.
We're going to be selecting and voting on winners this week, deciding who gets first place, second place, and third place. Again, the respective prizes are $2,500, $1,500, and $1,000.
And the winners are going to be invited onto the BridgeMind
livestream to be able to demo their projects,
discuss the future of agentic development,
and share their stories.
So really excited to do that with you guys this week.
That's going to be a really cool thing.
Also, we do have a project sharing event on Tuesday,
but we're going to be also doing some other events on
stream this week with the BridgeMind Vibathon.
Now, one thing I want to share with you guys is that subscriptions to BridgeMind Pro are really taking off. We're actually already over 100 paying Pro subscribers, and I haven't even really tried to market this that hard, right?
Like I haven't made any YouTube videos on it.
I haven't been putting out a bunch of tweets on it.
This is just happening organically.
So I appreciate all of you guys that have subscribed to
BridgeMind Pro.
Again, right now it's 50% off because we have not yet gotten BridgeVoice and BridgeSpace to stable releases on macOS, Windows, and Linux. So right now, if you guys want to take advantage, you get 50% off your first three months.
But with that being said, this upcoming week,
something that you guys are going to see more of on
BridgeMind is that I have a couple good ideas for marketing
the BridgeMCP.
This is a task management MCP where AI agents are able to collaborate with each other, because you're able to create tasks that carry the instructions and knowledge needed to pass work off to several different agents. So for example, if I have a Claude agent that's working, I can work from my own Claude instance, pass a task off to another Claude agent that will go do something for me, and then update the instructions, update the knowledge, and update the status.
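As a rough illustration of that idea (not BridgeMCP's actual schema; every field and method name here is an assumption), a shared task record like the one described might look something like this:

```python
# Hypothetical sketch of a shared task record: instructions plus
# accumulated knowledge plus a status, so one agent can hand work off
# to another. Field names are assumptions, not BridgeMCP's real API.

from dataclasses import dataclass, field

@dataclass
class AgentTask:
    title: str
    instructions: str                                   # what the next agent should do
    knowledge: list[str] = field(default_factory=list)  # context gathered so far
    status: str = "open"                                # open -> in_progress -> done

    def hand_off(self, learned: str, new_instructions: str) -> None:
        """Record what this agent learned and update the brief before passing it on."""
        self.knowledge.append(learned)
        self.instructions = new_instructions
        self.status = "in_progress"

task = AgentTask("Fix login bug", "Reproduce the 500 error on /login")
task.hand_off("Fails only when the session cookie is expired",
              "Patch the cookie-expiry check and add a regression test")
print(task.status)  # → in_progress
```

The point of the structure is that each agent leaves behind both what it learned and an updated brief, so the next agent can pick the task up cold.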
And you guys are going to be seeing more of this. I'm going to be working really hard on marketing, both on X and on YouTube, and I think we're hopefully going to see a jump from 100 paying subscribers to 200 this week.
And like I said,
things are just really going well for BridgeMind right now.
So thank you to all of you guys that are supporting and basically being beta testers for BridgeMind, right? Because we don't have BridgeCode out yet, and BridgeSpace and BridgeVoice aren't at stable releases yet, but they are out and in production.
But again, if we go back to our goals,
our goal is by March 1st,
we want to get to stable releases of both BridgeSpace and
BridgeVoice.
So I'm really excited to be working on that more this week in our series of vibe coding an app until I make a million dollars.
So that's going to be great.
But that's really just everything that I want to cover.
This has been a crazy week in the world of AI.
We're kind of getting more of a feel for all these new models, right? Like, it'll take a little bit of time to know, okay, here's the place where you can use GLM 5 in your workflow, or Minimax is actually really good at this particular thing.
But we just got dumped on this past week and it's great to
be able to try them out.
It's great to have some more budget models.
I do think this is going to push the Opus and Codex models to get a little bit cheaper.
Because right now, if you're looking at OpenRouter and you're just doing a quick cost comparison, these Chinese models are blowing the frontier American labs out of the water. I mean, you're talking about 20 cents per million tokens on the input and a dollar per million on the output for Minimax M2.5.
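For a rough sense of what those rates mean, here's the arithmetic for one hypothetical heavy session. The token counts are assumptions for illustration; only the per-million rates come from the pricing just quoted.

```python
# Cost of one hypothetical agentic session at the quoted Minimax M2.5
# rates: $0.20 per million input tokens, $1.00 per million output tokens.
# The 10M-in / 1M-out token counts are illustrative assumptions.

RATE_IN = 0.20 / 1_000_000   # dollars per input token
RATE_OUT = 1.00 / 1_000_000  # dollars per output token

def session_cost(tokens_in: int, tokens_out: int) -> float:
    """Total dollar cost for one session."""
    return tokens_in * RATE_IN + tokens_out * RATE_OUT

print(f"${session_cost(10_000_000, 1_000_000):.2f}")  # → $3.00
```

A few dollars for over ten million tokens is the kind of pricing that makes these models attractive to budget vibe coders, whatever the quality gap.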
Now,
I will say this model is way worse than Claude Opus 4.6,
like by a long shot, right?
But, you know, I hope Anthropic can figure this out; hopefully they release a Sonnet model here soon that's more affordable and faster, and hopefully smarter too.
But, you know, it's just good to see all the competition in the space, because at the end of the day, that's better for us as the people using this technology. The more competition, the better: it's going to push the frontier labs to push out models sooner, push out better models, and make them faster, and that's just better for the consumer.
So that's going to wrap it up for this week.
Again, this week is going to be full of events for the BridgeMind Vibathon, with 67 entries, which is crazy, and I'm really excited to be able to select winners this week.
I'll release something on X as well as in the Discord about what the events are going to look like. I haven't really decided yet, but we'll probably talk about it on stream as well and figure out, okay, what do you guys want to do?
But I'm super excited to push BridgeMind forward this week. Again, I'm going to target 200 paying subscribers this week, and let's just keep moving forward, guys. It's going to be a great week.
And with that being said,
I will see you guys tomorrow on stream.
I'll see you guys in the future.