Crimson Desert GPU Benchmarks, Bugs, Simulation Error Tests, & Intel Arc Gets Screwed
Turns out we got access to the buggy
mess that is Crimson Desert at the same
time as Intel's Arc GPU team. Crimson
Desert was a nightmare to test on launch
day. We didn't get early access. So, we
started right when the game came out,
which included a day zero patch that had some launch optimizations; that patch is included in our tests. We immediately ran into
problems across both Nvidia and AMD and
across three different test systems. We
experienced various game freezes, full
system shutdowns, crashes to desktop,
and whatever this is, available in both pink and yellow. Fortunately, unlike Nvidia and AMD, Intel didn't have any of these problems with its Arc GPUs. That's because you can't use them to play this game: Pearl Abyss refused to work with Intel throughout the development process, even with Intel offering Pearl Abyss early hardware and early drivers. We'll talk about that as well. So, Intel is probably working on a solution over the weekend right now, because they got the game when everyone else did. Beyond the bugs, draw distance issues, pop-in problems, and general noisiness in the game's graphics, we did actually manage to run 25 GPUs through testing in a span of just 10 hours. So, a huge thanks to my team for pushing through that with me.
It's also our first gaming benchmark
debut of our recently detailed
simulation time error testing
methodology. We're using Crimson Desert
to test for issues in game object motion
separately from frame rate and frame
time in charts like this. We had a lot to balance here between trying to turn this around as fast as possible, to help as many people as possible in time for the game's launch, while also wanting to benchmark a ton of GPUs, debut our simulation time error (previously known as animation error) methodology, and also test the different settings and the different areas of the game, all that stuff. Really super excited about the new methodology, by the way; these are the first kind of real charts that we've done for it. Now, again,
we didn't have early access, so this is
the launch version of the game,
including its first patch. We ended up
going with 25 GPUs, including the GTX
1080 Ti, 1070, and 1060. We ran three
resolutions, every settings option on a
couple GPUs, and then we tested across
several hours of gameplay, including
some distant towns and environments,
just to make sure we understood if the
game's performance changed later in the game. I'm not too familiar with the world of Pi, so I hope I'm getting that right; I'll make some Black Desert, now Crimson Desert, players very mad if I don't. So, we went later into the game as well and tested that. This is in addition to the new chart. We've got a lot to go through.
Let's get into it. Before that, this
video is brought to you by Arctic and
their MX7 thermal paste. Arctic's new
MX7 is a higher viscosity paste with
Arctic claiming that the increase in
viscosity from filler content and the
texture benefit improve its performance
while reducing the risk of the pumpout
effect over time. MX7 is intended for
longer endurance applications and to be left in use for an extended service life. Arctic also now has a counterfeit paste checking website because, believe it or not, knockoff paste from non-first-party sources is actually relatively common for the big brands to experience. So
they have a site to check the validity
of it. Get a tube of Arctic's MX7
thermal paste at the link in the
description below. So, for testing,
we're using our GPU reviews test bench.
It's a 9800 X3D. It's slightly
overclocked just to maintain a level
clock without any fluctuations. And then
we started as soon as the game came out, which means everything here reflects the game from launch onward. And I stayed up all night running benchmarks. Then we're playing Crimson Desert.
We don't know if this is where it gets the name Crimson from. This is, uh, this is AI NPC detection.
>> It's AI NPC detection.
>> Yeah. See this? This kind of looks like
Marathon actually.
>> DLSS5 looks really good.
>> The team came in in the morning and
relieved me of my duty for testing. So,
thank you to the team for that.
>> The game works really well.
>> Just die again.
>> Yeah.
this time. I got it though.
And then we got this together. Uh we're
going to start with some of the bugs we
ran into. Then we'll go into the charts.
We've got a bunch of research here as
well for how the game performs across
combat, non-combat situations, wandering
in the town, all kinds of stuff, so we can get a full picture of how this game performs. And if you want to talk about things like pop-in, we've got a little bit of that too. It's not really a graphics-focused deep dive, but there was some
stuff that we just couldn't help but
notice and talk about. As we got into
Crimson Desert testing, we ran into the
fun part. Tons of bugs that seemed to be
from the game rather than the drivers.
The issues occurred on both AMD and Nvidia, with one exception, which comes down to drivers: Intel's GPUs just don't work. Crimson Desert rejected the B580 when it was detected and told us it can't be used to play the game. We asked Intel for its driver support plans, and the company informed Gamers Nexus that Pearl Abyss did not work with or enable
Intel to work on drivers in advance of
the game's launch, meaning they got
access the same time as the public did.
Intel stated, quote, "We're aware that
Crimson Desert currently doesn't launch
on systems with the Intel GPUs, and
we're hugely disappointed that players
using Intel graphics hardware can't jump
into the world of Pi at launch. Getting
games running smoothly is a partnership
between developers and hardware makers.
Over the past several years, we've
reached out to Pearl Abyss many times to
help test, validate, and optimize
support for Intel Graphics, providing
early hardware, drivers, and engineering
resources across multiple generations,
including Alchemist, Battlemage, Meteor Lake, and Lunar Lake. Our teams are
deeply committed to helping all studios
deliver the best experience possible,
providing open tools, documentation, and
direct engineering support to make sure
their games run well for everyone,
including the tens of millions of
players using Intel GPUs. We remain
ready to assist Pearl Abyss however we
can. For details on the choice not to
enable Intel support at launch, please
reach out directly to Pearl Abyss." End
quote, which is about as direct as you
can get from Intel. We asked just one more clarifying question: whether Intel's between-the-lines comment here is that Pearl Abyss didn't grant Intel access before launch. Intel's representative stated, quote, "Correct. Pearl Abyss did not provide early access to game codes." End quote. So, Intel here is really getting screwed and boxed out, especially if they're providing hardware and drivers to Pearl Abyss; it seems kind of one-sided at that point. The fact that, based on what Intel has written here, several media publications had access to this game before Intel did is also bizarre. Either way, Intel is probably
working on this now, but it sounds like
they basically got a start at the same
time as we started benchmarking, which
would have been 6 PM on launch day
Eastern time. So, anyway, that's the
news on the Intel stuff, but there's
more. We also had issues with hard
crashes, and not just the desktop, we
had two machines that separately
experienced a complete shutdown of the
system from Crimson Desert. One was a
1080 Ti, and the other was an RX7800 XT.
It's possible that there was some other
issue on the 1080 Ti system. So, we
assumed it was unrelated. However, it
then happened again on the 7800 XT card
and in a completely different computer
that hasn't ever had any problems with
this before. This is the most stable bench platform we have. We also
had regular issues with all moving
objects in the scene being rendered
suddenly with pink boxes around them, resembling maybe hitboxes or something similar. We occasionally saw these with yellow boxes as well, but generally they'd randomly appear and disappear, and we saw them on both AMD and Nvidia. It wasn't every single time; I think it happened maybe four or five times during our testing of all these cards. On a few occasions, we had
the game lock up and freeze without
completely shutting down the system,
though. We could kill the game with task
manager. Still shouldn't happen. And all
that's just what we found in the test
session. The game seems half-baked and
not ready for launch as there should
never be any hard shutdowns or game
freezing issues. And that's just
sticking to the bugs. There's other
problems too, like one thing we noticed
was the loading in the game is painfully
slow. In some situations, you have to
sit through a minute or more of loading
animation. And this is on machines with
reasonably fast SSDs as well. So, it
does just seem to be slow with loading.
It's pretty painful when you're changing settings a lot, because we close and reopen the game for every settings change to make sure everything stays accurate, and that means a lot of loading. We won't get too into this part, but we also noticed that pop-in is extremely bad in this game. You can
see small elements like grass and plants
popping in just 5 ft in front of the
character. At that point, it's more
distracting to have the objects pop in
there than to just not have them at all.
The same is true for objects in the background and the midground, where even the highest-end systems, running at hundreds of frames per second with overhead available to draw more things farther away, aren't given the ability to extend the draw distance. We couldn't
find any options for this in the
settings. We weren't able to find a way
to do this in the configuration file
from just a quick look. It's possible
someone's found a way to do that by now, but this seems like a likely mod in the future. We're not really sure why they've limited the draw distance in this way, but the pop-in is just really distracting. And in general, the game has a lot of shimmery noise all over the place. There are ways to try and deal with the noise in the graphics settings, but there's nothing we could find to do about the pop-in. This next section goes
over our research for designating a test
area candidate. The point of this is to
find something that's representative of play at large without being something that's difficult to replicate, like combat would be. For this process, Patrick and I played the game for a couple hours each and wandered around, including to some outer areas near Deminis, the Circus Estate, and near the first town of Hernand. I hope I'm pronouncing these correctly; we don't test with headphones on. So, we manually captured these areas and assigned names to each, with the point being that we want to learn and
understand how the game performs in
different situations before committing
to a test scenario. Here's the result
with the 5060Ti 16 GB which we used
because we wanted something that was a
moderate performer, not too high-end,
not too low-end, uh, but also modern.
This is at 1440p, and like most of our
testing, we do not use upscaling of any
kind unless it's noted separately. The
range is huge for the lows where some
situations resulted in heavy hitching
and stuttering. Primarily, we noticed
these during what seemed like invisible
loading sequences when changing from one
outdoor cell to another, such as
crossing the bridge on the horse early
in the game when you first get the
horse. Lows dropped to 12.5 FPS during this brief sequence when we looked out over the environment around the area, leading to brief, temporary stuttering at 1440p ultra without RT enabled.
This is in spite of a good average at
nearly 70 FPS. And just as a note, the
gameplay capture we're showing you is separate; it is not the same thing. We isolate the gameplay capture from the benchmark capture so there's no influence on the data, and we don't run external capture during testing, for a lot of reasons. So, the footage you
see throughout this is captured
separately from the performance numbers.
We also observed this behavior in our
inner city roaming 3 test sequence where
we walked from the lowest part of the
Hernand Castle outer town to the castle
gates. We saw a drop to 21.6 FPS for the
lows despite a great average frame rate.
The 1% lows also suffer here at 40 FPS.
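As an aside for anyone unfamiliar with these figures: one common way to derive 1% and 0.1% lows from a frame time log is to isolate the slowest slice of frames and express it as FPS. This is a generic sketch of that idea, not necessarily GN's exact pipeline; the function name and approach are illustrative.

```python
# Generic sketch of how "1% low" / "0.1% low" style figures can be derived
# from a list of frame times; not necessarily GN's exact pipeline.

def low_fps(frame_times_ms: list[float], fraction: float = 0.01) -> float:
    """Average the slowest `fraction` of frames and convert back to FPS."""
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    count = max(1, int(len(worst) * fraction))     # e.g. slowest 1% of all frames
    avg_worst_ms = sum(worst[:count]) / count
    return 1000.0 / avg_worst_ms                   # FPS equivalent of the slow tail

# Example: low_fps(times, 0.01) for 1% lows, low_fps(times, 0.001) for 0.1% lows.
```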
The real problem is the disparity
between the average and the lows with
that gap being noticeable. Most of the
rest of the results are relatively
consistent. The 1% lows are still more
distant from the average than we see in
some other games, but the 0.1% lows and
1% lows are mostly near each other,
which is good. The range for average FPS
here was nearly 28 FPS average from 77
FPS in the fields to 49.8 FPS in one of
the fights. That's a large range. The
average of all scenes measured when
removing things like black screens or
menu pauses, things like that, spans a
few hours of gameplay and was 67 FPS.
Our final test candidate was In a City,
which averaged 70 FPS. This is higher
than the overall average, but it's the most consistent run-to-run for testing that we could find while still being real damn close to the overall average we had, which again was 67. Areas that are lower than this in any meaningful way often involve combat, which is not repeatable run-to-run. Here's a frame
time plot for that bridge scene where we
have long-range visibility. Everything looks fine if you truncate the scale to 50 milliseconds, with frame-to-frame consistency overall good. The frame rate itself is also okay; it's faster than 60 FPS. But it's really the consistency of the pacing that we care about. The problem emerges at the end, where we encountered honestly one of the most impressive frame time spikes I've ever seen, and not in a good way. The
chart now shows the worst frame time
spike that, at least to my memory, we've
plotted in years. It jumped up to 691
milliseconds for one frame. That means
that we were staring at the same frame
for nearly an entire second without any
kind of movement. Enough for you to
think that the game is going to
imminently crash. It recovered, but it
was a hard stutter here. Uh, this
doesn't repeat in the same spot every
time, but it does happen just generally
occasionally during play on some of
these GPUs. The 5060 Ti was holding a 13 to 15 millisecond frame time on average here, which again is faster than the 60 FPS mark that plots at 16.667 milliseconds.
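For reference, frame time in milliseconds and frame rate convert between each other as 1000 divided by the other value; a quick sketch of the arithmetic behind the 16.667 ms / 60 FPS figure (the 14 ms example below is illustrative):

```python
# Frame time <-> frame rate conversion used throughout these numbers.
def fps_from_frame_time(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

print(fps_from_frame_time(16.667))  # ~60 FPS
print(fps_from_frame_time(14.0))    # ~71 FPS, roughly where this pass sat on average
```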
So, it's not like the frame rate was
bad, but that spike definitely was. And
this isn't a one-off. This again, it
occurs just kind of randomly throughout
play. It seemed like it tended to be
when you maybe load or dump a cell as
you transition around the map or
something. Here's the same in-game
scaling chart for the 9070 XT. Now, the
range for average FPS is about 39 FPS
from 115 FPS to 77 or so. The 0.1% lows
range from 99 FPS to 19 or about 80 FPS
for the actual range top to bottom of
that at.1% which is crazy. The 970 XT
experienced the worst performance during
a loading screen to the campfire. But
fortunately, that's not an area where
you need responsive controls because
it's a loading screen. Technically, you
can move the mouse, but it doesn't really matter. In terms of actual play, we had
some hitches when walking to the cabin,
but otherwise, this card was much more
stable for its frame interval pacing at
1440p than the weaker 5060Ti. The
average for all tests for this card was
96 FPS across a couple hours of play.
The test candidate that we approved for
its reliability and consistency averaged
100 FPS. So, it's about 4 FPS higher
than the global average, which we think
is a representative match while still
being repeatable, unlike combat, which
is not repeatable run-to-run. Our next
section was also part of our early
research. For this, we went through and
tested various presets in a fixed test
path. Unless otherwise noted, we did
most of this without modifying any
graphics presets and while keeping upscalers off. The purpose of these quick tests is to understand what our scaling is and to help quickly choose a preset for baseline testing. We tested from minimum to max presets, plus RT toggled on and off with max, including cinematic mode, maximum lighting, ray reconstruction, and DLAA. Technically, max isn't its own preset, but that was us toggling the rest of the things all the way up. DLAA was not used for any other settings. Minimum looks like a
fuzzy, blurry, muddy mess and is hardly
playable. We still tested it, but it really doesn't look good. Cards that are otherwise insufficient can technically run this; it's just not a good experience. This, though, is how they can technically say on their sheets that they can accommodate something like a 1060: it's with this setting. Here's
the chart with the 5060 Ti with FPS
values shown. The 5060 Ti ran at 151 FPS
average with 1440p minimum, which looks
awful. That's a 65.7% improvement over the low settings' 91 FPS average. So, this is where you would gain the most
performance. But again, it's subjective. It's up to you; you can do whatever you want. Personally, I just wouldn't play this game on minimum. I'd rather not play it. It looks that bad to me. But if you can deal with it, then great. More power to
you. Medium ran at 74 FPS average here.
So, low is better by 24%, but obviously
looks much worse. High has a performance
cost about the same as medium with the
two indistinguishable in performance of
this test. Ultra without RT consistently
ran a couple frames per second average
faster than ultra with RT. That's why we
chose to disable RT for most of our tests. It's almost no impact on the frame rate performance, and we wanted to maintain maximum compatibility for older cards. The 1080 Ti was high on my list of cards I wanted to run for this, and disabling RT ensures we can do that without any issues. Since RT on versus off performed about the same on both AMD and Nvidia, we toggled it off to better accommodate those without hardware RT support. Cinematic is the next large
drop in performance down to 62 FPS
average. Ultra performs about 12% better
with RT on than that. Enabling max lighting, which is not part of any preset, drops cinematic massively in performance; it falls from 62 FPS to 37 FPS average. Finally, max lighting, the cinematic preset, and ray reconstruction with DLAA ran at 18.4 FPS. This is on a 5060 Ti, remember; we also ran this on a 5090, and it was playable there. Here's the
same for the 9070 XT, although we didn't
run minimum for this one. Top to bottom,
there's a total range of 38 FPS average
in these results in a like for like
test. That's about the same range as we
saw gamewide leaving this area. Medium
and high don't really offer much from
each other in terms of performance
differences, just like last time with
the 5060 Ti. And high and ultra are also
close together. Toggling RT off with ultra showed about a 3.6% uplift, ultra versus ultra. So, not enough to warrant leaving it on for benchmarking purposes, as again it would cause problems for cards like the 1080 Ti in like-for-like comparisons that we
really wanted to include. Up next is simulation time error. This is actually really cool. We previously ran a white paper piece doing a big research deep dive on this stuff. It's a new type of metric that we present with new types of testing. We used to call it animation error, but we've gone back to simulation time error just to reduce confusion. This is when there's a mismatch between the pace at which the frames are shown and the events that are shown in those frames. It's our attempt to quantify a feeling, basically: how the game feels. You can have a smooth frame rate on paper, but if the images you're seeing don't line up with that rate, the game may feel stuttery or laggy anyway, and it's not the same as frame times; this can be a different problem. The opposite is also true, where you could stagger frames chronologically to make movement look correct but end up with a bad frame rate number. Those are the theoretical extremes. In practice, a higher frame rate equals better the majority of the time, subjectively. We're adding these animation error charts for a little more nuance, though.
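To make the idea concrete, here's a minimal sketch of the concept, not GN's white paper implementation: if a log records both when each frame was displayed and the in-game simulation timestamp that frame depicts, the per-frame error can be treated as the gap between the displayed time step and the simulated time step. The column names are hypothetical.

```python
# Minimal sketch of the simulation time error concept; NOT GN's exact
# white paper implementation. Assumes a per-frame log with two columns:
#   present_ms - wall-clock time at which each frame was displayed
#   sim_ms     - the in-game simulation timestamp each frame depicts
# Both column names are hypothetical.

def simulation_time_error(present_ms: list[float], sim_ms: list[float]) -> list[float]:
    """Per-frame error: simulated time step minus displayed time step.
    Zero means the motion shown matches how long the frame was on screen."""
    errors = []
    for i in range(1, len(present_ms)):
        frame_time = present_ms[i] - present_ms[i - 1]  # how long we looked at the previous frame
        sim_step = sim_ms[i] - sim_ms[i - 1]            # how much game time the new frame advanced
        errors.append(sim_step - frame_time)            # mismatch in milliseconds
    return errors
```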
Let's get into it. First, here's an
example of what we'd consider healthy
frame times and simulation time error,
which we previously called animation
error. We've renamed it to reduce
confusion from popular demand for this
better name. This is a plot of the test
pass that we call Abyss Puzzles part one
on the 5060Ti. This logged for nearly 3
minutes straight at an average of 69
frames per second derived from an
average frame time of 14 1.5
milliseconds. This is during actual
gameplay as well. As you can see on the
plot, frame times never deviated
significantly from the 14 1/2 average,
which is good. Simulation time error
wasn't zero, which is technically ideal
to be zero that is, but it was rarely
greater than 4 milliseconds in either a
positive or negative direction. In our
experience thus far, this is not a bad
result. We see a few greater excursions
that align with the frame time spikes,
such as around the 5700 second mark,
where frame time spike to about 19
milliseconds and simulation error jumps
to plus or - 4 milliseconds. The frame
time dip around frame 8,000 to about 12
milliseconds aligns with an increase in
frame rate. And normally that would be
seen as a good thing. However, because
it's inconsistent with its immediate
neighbors, it would actually have been better to hold a worse frame rate, meaning a higher frame time, which would have resulted in a better, smoother experience. We can
also see this in the simulation time
error data. Simulation time error gets
more scattered as well towards the back
quarter of this chart, but so do frame
times. We've adjusted the vertical scale
for this plot of the inner city roaming
3 test pass on the same card. There's a
clear individual frame time spike up
above 60 milliseconds. So again, some
inconsistency issues. This is also
reflected in our usual 1% and 0.1% low calculations, alongside a predictable up-and-down simulation time error deviation here showing minus 82 milliseconds and plus 63 milliseconds at the same time. More
interestingly, there are some big
simulation time error deviations that
appear alongside smaller frame time
spikes. For example, the final frame
time spike is to 26 milliseconds, but
simulation time error bounces down to
minus 39 milliseconds and up to 19
milliseconds, followed by a couple of
rippling smaller deviations. These are
situations where the 1% and 0.1% numbers
might not look that bad in a bar chart,
but the subjective feel of the game is
significantly worse than the numbers
would indicate. In fact, they wouldn't
even look that bad in a frame time plot
other than the couple excursions here.
Situations like this are why we added simulation time error testing, defined in our methodology white paper under its previous name, animation error. Pretty cool
though to see it at work in one of our
first public use cases outside of that
white paper we published. Finally,
here's an example of an actual test pass
on the GTX 1070, a venerable card. The
frame times are high, but they're
consistent, which means we won't see a
wide gap between the average and the
lows on our usual FPS bar chart.
Simulation time error reflects what we
actually felt though as a player rather
than what we're told about frame time
pacing. What we felt was a laggy and
sluggish unresponsiveness to inputs.
Almost every frame has a significant
simulation time error value, and there
are consistently frames that deviate 30
milliseconds or more from zero. The
scattered nature of the simulation time
error dots and the wide variability in
the values makes it easy to spot just by
the chart that this is a bad experience.
Despite technically smooth frame time pacing, it just doesn't feel good when you're playing the game, even if the numbers otherwise look okay. It is a slow frame rate, but the pacing looked fine. This is another great example
though for when our simulation time
error testing methodology can highlight
something hidden by both frame rate and
even by frame times. For comparing simulation time error across GPUs, we have a couple of choices. The most straightforward is to take the total of all sim time errors for each frame, specifically the absolute values, and then divide that by the total of all frame times, which should always be almost exactly 40 seconds for these tests. That gives us a number for the average error per frame.
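Taken literally, that aggregation is only a couple of lines. This sketch follows the description above (absolute per-frame errors summed, divided by the total frame time of the roughly 40-second pass); it is not necessarily GN's exact script.

```python
# Sketch of the aggregate metric as described above; not necessarily GN's exact script.
def aggregate_sim_time_error(errors_ms: list[float], frame_times_ms: list[float]) -> float:
    total_error = sum(abs(e) for e in errors_ms)   # total absolute error across the pass
    total_time = sum(frame_times_ms)               # ~40,000 ms for these test passes
    return total_error / total_time                # error accumulated per unit of displayed time
```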
This chart doesn't exactly follow the frame rate performance stack, but it's in the ballpark. Cards like the 5090 and 7900 XTX have relatively low error per frame,
while the 2070 and 2060 KO are on the
other end of the chart. And the 1070 is
massively worse in a ridiculous way
here. In the most simplistic way, this
chart orders simulation time error from
best to worst. Many of the devices are
functionally tied, which is why you see
the 9070 XT, 4090, 5090, and 5070 all
the same. Unlike frame rate, where there's more of a sliding scale or gradient, this metric is typically either bad or not. However, during our discussions with Intel's Tom Petersen, we brainstormed a way to represent simulation time error per frame relative to frame times. The reasoning is that, for example, 1 millisecond of error is arguably more significant relative to a 6 millisecond average frame time than it is to a 12 millisecond average. This doesn't make for a very exciting chart at the top end, since on nearly every GPU we tested, the percent sim time error was between 2% and 3%. The 1070 was again a huge outlier at 11.9%, and the 5090 FE, likely because its extremely high performance carries more of a risk of occasionally bouncing off of other limitations, lands at 3.8%. Now, in some situations, this might be enough to cause some perceived stuttering or lag. But in this case, the 5090 averaged 186 FPS in this test, which is high enough to help offset the issues.
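A sketch of that normalization, again using hypothetical inputs: express the average error as a percentage of the average frame time, so the same millisecond of error counts for more on a faster card. Not necessarily GN's exact formula.

```python
# Sketch of the percentage framing discussed above; not necessarily GN's exact formula.
def percent_sim_time_error(errors_ms: list[float], frame_times_ms: list[float]) -> float:
    avg_error_ms = sum(abs(e) for e in errors_ms) / len(errors_ms)
    avg_frame_time_ms = sum(frame_times_ms) / len(frame_times_ms)
    return 100.0 * avg_error_ms / avg_frame_time_ms  # e.g. ~2-3% on most cards here, ~11.9% on the GTX 1070
```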
For the GPU comparison charts with standard FPS, 1% lows, and 0.1% lows, we'll start with 1080p ultra with RT disabled. Remember that ultra in this game is more like high in most games, as cinematic takes the place of ultra; high results are relatively close to ultra, and medium is even closer to high.
In the 10 hours that we tested this
before going into production, we managed
to test 25 GPUs across three resolutions
with two additional resolution and
settings tests, plus the simulation time
error testing and everything else you're
seeing here. Without early access, we
filled this chart in as best we could.
We'll start with the interesting numbers. The GTX 1060 6GB is technically supported by the game. For these settings, it's not playable, but nearly 20 FPS average isn't as bad as it sounds for the old 1060. You could play on lower settings. It's just not a good
experience. And because we're trying to
pack in as many cards as we can, we're
focusing on the directly comparable
numbers here rather than kind of tuning
for each individual card. The more
interesting way to look at this and the
1070 would be with relative scaling
numbers. The GTX 1070 leads the 1060 by
17%. Even more interesting is that
owners of the GTX 1080 Ti. 36 FPS for
ultra really isn't bad. You could drop
to low if you needed to and do some
settings tuning to make this workable.
It's also interesting since users who
sprung for the 1080 Ti could still
reasonably get some life out of it yet.
And comparing to 1060 and 1070 scaling,
this 1080 Ti has managed to hold on a
little bit longer. The 2060 KO outdoes
it though, benefiting from its more
modern hardware and running at 45 FPS
average. And to go through the 60-class cards: the 2060 to 3060 shows a 23% improvement, then the 3060 to 4060 posts a 19% uplift, and the 4060 to 5060 (82 FPS average) is a 24% uplift.
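Those generational percentages are simple relative scaling; a small sketch of the arithmetic, with illustrative numbers rather than the exact chart values:

```python
# Relative scaling between two results: how much faster A is than B, in percent.
def uplift_percent(fps_a: float, fps_b: float) -> float:
    return (fps_a / fps_b - 1.0) * 100.0

# Illustrative only: an ~82 FPS result over an ~66 FPS result is roughly a 24% uplift.
print(round(uplift_percent(82.0, 66.0)))  # ~24
```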
Other popular cards of the past include the RTX 3070, posting an 81 FPS average in this configuration and sitting between the 2080 Ti and 5060. The 3080 remains
popular and held 105 FPS average with
good lows. Actually, overall, the low
scaling is relatively consistent across
the stack. Looking at AMD now, the 9070 XT ran at 123 FPS average with the 9070 immediately below it. The 7900 XTX outdoes the 9070 XT in this title by 24%, sitting between the 5070 Ti and the 5080. The 9070 GRE that we found in China and recently reviewed in a separate video ran at 96.5 FPS average, about the same as the 5060 Ti. The 9070 GRE is a really interesting card. A lot of people didn't catch that review, but you should go check it out on the channel; it's a couple weeks back. It is an interesting one. As for the 9060
XT, predictably, that was just below the
9070 GRE at 88 FPS average. Let's move
on to the next chart. At 1440p, the 5090
has some of its performance shaved off
from the 208 FPS average at 1080p to 186
FPS average here, which shows that we
are GPU bound. That's useful data. Some
cards fall off the chart due to poor
performance from their age. The 1070 remains, now at 18 FPS average, which is really only useful as an academic point
for scaling purposes. The 1080 Ti isn't
quite hitting 30 FPS average here, so
that's struggling as well. You can
always play at lower settings, but 1080p
helps most for these cards in
particular. Up at the top, the 7900 XTX is the closest competition from AMD to Nvidia's 5070 Ti and 5080, followed by the 9070 XT at 100 FPS average. The 9070 XT is led by the 5070 Ti by about 16% here, and it leads the 9070 by about 7.5%. Lows for all these cards are
comparable in that neither brand has a
particular advantage in frame time
consistency over the other. Older cards
that may be interesting include the 3070 at 61 FPS average, about the same as the 4060 Ti, 2080 Ti, and 5060. On the AMD side, the 9060 XT is just ahead of that and alongside the 7700 XT. 4K ultra is
next. So, the top of the chart comes
down to 124 FPS average with the RTX 5090 FE now, and everything else gets trimmed below it. The 7900 XTX remains competitive from AMD. The 9070 XT is now almost tied with the 5070 Ti, but still just behind it, and the 9070 is about
the same as the 5070 here. Owners of the
3080 will see performance similar to a
5070 with the new 9070 GRE not far
behind. Things get worse from there
without any tuning, of course. The 3060 runs at about the level of the modern 5050, except that the 3060 was actually kind of a respectable card, and the 1080 Ti manages to hang in there at 18 FPS average. Not playable, but honestly, it's better than we'd expect of the Pascal flagship some 10-odd years on. Not bad for the 1080 Ti. This chart is with
ray tracing enabled and the cinematic graphics preset rather than ultra, at 4K, tested with the few cards that can kind of do it. We also threw our max graphics settings test with the 5090 on here just for comparison, to put it somewhere. That involved enabling DLAA and ray reconstruction alongside max lighting, so it's not comparable to any of the other cards here other than the 5090 entry, and that comparison only shows that maxing the graphics out at the next step costs about 70-plus percent of the performance. As for the comparable
items, the 5090 ran cinematic with RT at
111 FPS average, followed by the 5080 at 72. That means the 5090 has a lead of 56%. The 7900 XTX was right at 60 FPS average, the 9070 non-XT at 46, and the 5070 about the same as that. We also did this test at 1440p. The 5090 ran at 171 FPS average, leading the 5080 by 47%. The 7900 XTX was below the 5080 and led the 9070 XT by about 16% here. The 5070 Ti sits between them. One last note on RT on
versus off differences. So, as we said
earlier, for most of the cards, we were
not seeing a difference beyond a couple
FPS. The 90 series, 50 series, and 40
series, no real difference. RT on versus
off. It was like a couple frames worse
with it on. But one exception to that
was we did do a quick test on the 7900
XTX with RT on versus off. And in that
instance, we saw up to an almost 8% improvement by turning RT off. So, in that situation, you wouldn't really be able to compare results of testing with RT on versus RT off for some of these GPU generations. You shouldn't anyway, but in this case the scaling is greater with an older generation card like a 7900 XTX. We didn't go back to the 20 series or the 30 series; presumably, you might see more scaling there as well. But certainly for AMD, they had the largest gain in RT performance relative to themselves in the 9000 series versus the 7000 series.
So, that's it. That's as much as we
could get done inside the launch window
that we had. Sometimes we get early
access to this stuff. Sometimes we
don't. I kind of prefer not to, just because it seems like these companies always ship a day one patch, and there's always a concern of, has this changed anything performance-wise? I kind of doubt there were any serious changes. I mean, for those who did run tests early, probably not a lot has moved in terms of the frame rate, but Pearl Abyss, for whatever it's worth, did claim that they made some performance
optimizations. We don't have before data, but we do have the launch data; at least, that's what you've got. And as far as Intel, as soon as their Arc GPUs work in this game, we'll test them. This is the most direct I've ever seen Intel be in relation to another business, like a partner, where they just straight up said: yeah, Pearl Abyss didn't let us do anything. They wouldn't help us; we couldn't help them. Super direct from Intel. They said they didn't get access. They gave or offered Pearl Abyss all kinds of engineering resources, which would normally be programmers, shader programming, stuff like that, from their team, and then hardware and drivers, and it sounds like, for whatever reason, Pearl Abyss just did not work with them. Now, this is an AMD sponsored title, but obviously they work with Nvidia too, since Nvidia has 95% of the market, so I don't know. Nvidia also owns part of Intel though, so you'd think maybe through that chain. But anyway, Arc is still getting the short straw. That's what I'm saying. It's a lot of words to say Intel Arc continues to get picked on in the GPU market. In the CPUs, they deserve it.
All right. They stagnated for like eight
years or something. They deserve getting
picked on in the CPUs and bullied a
little bit, but in the GPUs, like come
on. What did Arc ever do to you, Pearl Abyss? Maybe they're really mad about Skylake still, 14nm+++. Yeah, the game's got... I don't know. I was reading some of the Steam
reviews to try and see how widespread
issues are. There were a lot of complaints about pop-in, so that seems to be consistent with how we felt about it. I saw some people also trying to tune it and saying that the settings file either didn't have the option, or was hashed, or the option was not obvious to them. And then I did see a
number of people complaining about
crashes as well online. And so that
seems like that wasn't exclusive just to
us. And just to be clear once again,
that was on three different computers
with two different GPU vendors. So, it's not a problem with one specific computer; it was consistent, and it was on different GPUs as well. So, I don't know. I think they were not ready for launch, but that seems to be the default state for video games these days. So, it's happening again. I don't know what card... those are the cards I've tested so far over there. I don't know what card this is, maybe the sixth or seventh. It's happened twice now. This is a 5080 this time. Last time it was a 60-class something or other. And, yeah,
looks pretty good.
I like the graphics.
I think that looks like the DLSS5 slop filter. Oh,
it went away. Would this be a valid test
run if I run it, or do I need to wait
another five minutes to load the
save file? Anyway, I don't know how fun
the game is or isn't. We're just here to
test it for this one. And uh what we
found is that medium, high, and ultra
settings don't differentiate too much.
Ray tracing on versus off was a very
slight performance cost. With it on, it
was like, I don't know, maybe 4% was the
most we saw. Cost, meaning performance
goes down with RT on, which you would expect. We did test with it off so that we could accommodate cards without hardware RT support without any issues. Maxing out the lighting setting costs a ton in performance. So, if you're the type of person who wants to just basically max all the settings and it's too heavy for performance, the first thing to turn down would be lighting. You leave everything else up, and that gains you, I mean, like 30 to 40% in some cases. So, it's a pretty big impact for
that particular setting. And otherwise, that's it for now. I mean, we can do more on this. I wanted to get something out relatively early with the limited time we had for testing. Did that just change color? You can post your comments below if you have anything
specific you want us to test otherwise
for this game. Um but let us know what
you think. The simulation time error charts, previously known as animation error: we're really excited about those. We've got a lot of work to do on them still; they are experimental charts, so we're still developing those. And yeah, subscribe for more. Go to store.gamersnexus.net to help out directly, or patreon.com/gamersnexus. And my thanks to Patrick on the team for coming in to help me develop the test plan and do the test design. He sat with me as we worked through the game and figured out what its performance is like. Thanks to Mike and Tannon for coming in and relieving me of the testing duties as soon as they got in in the morning; because I was up all night running the tests solo, they took over for me. So, thank you. And
then Vitali and Tim for pushing through
on the edit and getting that done so you
all have this great video. So, thanks
for watching. Subscribe for more. We'll
see you all next time.