AMD Ryzen 7 7800X3D vs. Intel Core i9-14900K: The Definitive Test

Intel should be banned for wasting so much energy.
We live in the parallel universe where they get government contracts instead. Regulations will probably attempt to control the consumer before they attempt to regulate a big corporation.


Update: the 7800X3D is selling at $339 @ Newegg, and the 7950X3D is as low as $489 @ B&H.
I saw the 7950X as low as $430 last week, but it went back up in price to $489.
The cheapest i9-14900K is currently $549.
 
If you only care about gaming, then right now only 4 products in the CPU market make sense. There's the 12400F and Ryzen 5600 at $110 and $120 respectively, the Ryzen 7600 at $190, and the 7800X3D at $340. Everything else you can just pretend doesn't exist, 14900K included.

I don't have high hopes for Arrow Lake at the high-end, but I'm interested to see what they bring in that $200-ish mid-range, which typically gives you 90% of the gaming performance of the high-end chips for half the price.
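To make that value argument concrete, here is a rough perf-per-dollar sketch in Python using the prices quoted in this thread; the relative gaming performance figures are illustrative placeholders based on the "90% of the performance" claim above, not benchmark results:

    # Hypothetical perf-per-dollar sketch using the prices quoted in this thread.
    # rel_perf values are illustrative placeholders, NOT measured results.
    cpus = {
        "Ryzen 5 7600":    {"price": 190, "rel_perf": 0.90},  # assumed ~90% of the X3D
        "Ryzen 7 7800X3D": {"price": 340, "rel_perf": 1.00},  # baseline
        "Core i9-14900K":  {"price": 549, "rel_perf": 1.00},  # assumed roughly tied in games
    }

    for name, c in cpus.items():
        value = c["rel_perf"] / c["price"] * 1000  # relative gaming perf per $1000
        print(f"{name:18s} ${c['price']:>3}  perf per $1000 = {value:.2f}")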
 
If you only care about gaming, then right now only 4 products in the CPU market make sense. …
If Arrow Lake has indeed removed Hyper-Threading just to stay competitive on an improved node, that will be sad, imo.
 
The article states 7800X3D with 6000MHz DDR5 is 1:1.
False, this is 2:1. The memory controller isn't running at 6000MHz.
 
The article states 7800X3D with 6000MHz DDR5 is 1:1.
False, this is 2:1. The memory controller isn't running at 6000MHz.
Guess the confusion is that with dual channel RAM, it's "effectively" 1:1....

Thing is, I'd have liked Techspot to try higher-clocked RAM with the AMD chip - just because 6000MHz is the sweet spot doesn't mean you can't still get a bit of an improvement with faster RAM.

I've got mine clocked at 6600, and it runs a bit faster - and if it's stable, why not?
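For anyone untangling the 1:1 vs 2:1 argument, the arithmetic below may help: "DDR5-6000" refers to a 6000 MT/s transfer rate, the underlying memory clock (MCLK) is half that, and on AM5 the "1:1" people quote is the memory controller clock (UCLK) matching MCLK, while the data rate is still twice the controller clock. A minimal sketch of the clock domains, assuming a typical 2000MHz Infinity Fabric (FCLK):

    # Minimal sketch of the AM5 clock domains for a DDR5-6000 kit.
    # DDR = double data rate, so the advertised figure is transfers/s, not MHz.
    transfer_rate_mts = 6000           # "DDR5-6000" = 6000 MT/s
    mclk_mhz = transfer_rate_mts / 2   # actual memory clock: 3000 MHz

    uclk_mhz = mclk_mhz                # "1:1" mode: memory controller (UCLK) = MCLK
    fclk_mhz = 2000                    # Infinity Fabric clock (assumed typical value)

    print(f"MCLK = {mclk_mhz:.0f} MHz, UCLK = {uclk_mhz:.0f} MHz -> UCLK:MCLK = 1:1")
    print(f"Versus the {transfer_rate_mts} MT/s transfer rate, the controller runs at "
          f"{transfer_rate_mts / uclk_mhz:.0f}:1 -- hence the '2:1' reading above")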
 
If you get an i9 strictly for gaming, you're either plain dumb or wasting money is not a problem for you. As pointed out above, an i5 or the Ryzens are cheaper and have the same or slightly better performance. Thing is, most people want to do more with their PC. I keep seeing these, imo, worthless gaming benchmarks on CPUs, where you get 5-10 more frames at 1080p using a 4090. Is this realistic enough for anyone to care? How about adding some productivity benchmarks, Techspot? You keep saying "oh, Intel is better there, but that's a different subject". No, it's not. Let's have the full picture.
 
Thing is, I'd have liked Techspot to try higher-clocked RAM with the AMD chip… I've got mine clocked at 6600, and it runs a bit faster - and if it's stable, why not?
I’ve heard faster RAM has even less effect on the X3D parts. I assume you might get at least some benefit though.

I’ve left mine on 6000MHz, not seen enough evidence I’d get much extra out of my 7950X3D pushing it further.
 
Guess the confusion is that with dual channel RAM, it's "effectively" 1:1.... …
Pretty sure AMD themselves said it's 2:1
 
Doesn't say much for Intel when 24 cores equal or trail just 8 of AMD's. Why Intel keeps adding more and more of those "E" cores I'll never know, as it's clearly not doing a great deal.
 
Doesn't say much for Intel when 24 cores equal or trail just 8 of AMD's. Why Intel keeps adding more and more of those "E" cores I'll never know, as it's clearly not doing a great deal.
They dominate Cinebench, as well as other specific workload-related tests and benchmarks, and if that's the only thing you care about (these people somehow exist) then Intel's E-cores are important. They also help with background tasks, freeing up the P-cores to stretch their legs and not get bogged down while gaming. Don't forget, power draw is of no importance either.
 
… I keep seeing these, imo, worthless gaming benchmarks on CPUs, where you get 5-10 more frames at 1080p using a 4090. Is this realistic enough for anyone to care? …
Realistic scenarios are of no importance here anymore, just like pairing a 4090 with an R5 3600 to prove a point that CPUs do matter for 4K gaming, and everyone who disagrees should just STFU.
 
"realistically, there's little reason not to use the advertised 'extreme' profile"
Yeah, the little reason is those few more frames, at 1080p settings that a 4090 is already wasted on.
The big reason not to use Extreme is the extra 100W of power it's costing you (and the environment) for every hour you play a game on it.
Played at GPU-limited settings, looking at average/1% low fps benchmarks, Extreme is a complete waste, unless a frame time analysis shows the outlier spikes are noticeably lower (but those are unlikely to be caused by insufficient CPU clock; they're probably IO related, and extra powah won't do a thing to help anyway).

Both the i9 and R9 out of the box use stupid amounts of extra power for a benefit that is subjectively unobservable and objectively within margin of error.
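To put that extra 100W in perspective, here is a quick back-of-the-envelope energy cost estimate; the weekly gaming hours and electricity price are assumptions for illustration:

    # Back-of-the-envelope cost of an extra 100W while gaming.
    # Hours per week and price per kWh are assumptions, adjust to your own usage.
    extra_watts = 100
    hours_per_week = 15        # assumed gaming time
    price_per_kwh = 0.30       # assumed electricity price, $/kWh

    kwh_per_year = extra_watts / 1000 * hours_per_week * 52
    cost_per_year = kwh_per_year * price_per_kwh
    print(f"~{kwh_per_year:.0f} kWh extra per year, roughly ${cost_per_year:.0f}/year")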
 
I don't get it, something is weird about this analysis, and I think Techspot's "AMD bias" is on again… During gaming a 13900K or a 14900K rarely goes above 100/120W, and only for some spikes, so the 125 or 253W PL1 limit shouldn't make any difference, because even in the "Performance" setting the CPU can boost to 253W for a few seconds. The 253W vs 125W PL1 setting can make a difference in rendering or video encoding benchmarks, where an all-core workload is sustained for a long time.
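For readers unfamiliar with what those limits actually do: the CPU may draw up to PL2 for a limited boost window (tau), after which sustained power is clamped to PL1, and a gaming load that sits below PL1 never hits either limit, which is the point above. The sketch below is a simplified model with typical profile numbers, not Intel's exact algorithm (which averages package power over the tau window):

    # Simplified model of Intel PL1/PL2 power limiting (not the exact algorithm,
    # which averages package power over the tau window rather than hard-switching).
    PL1_W, PL2_W, TAU_S = 125, 253, 56   # typical "Performance" profile values

    def allowed_power(requested_w, seconds_at_high_load):
        limit = PL2_W if seconds_at_high_load < TAU_S else PL1_W
        return min(requested_w, limit)

    print(allowed_power(100, 300))   # ~100W gaming load: never hits a limit -> 100
    print(allowed_power(280, 10))    # all-core render inside the boost window -> 253
    print(allowed_power(280, 120))   # sustained all-core load after tau -> 125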
 
It's funny how 10 years ago Intel were the cool and energy-efficient CPUs and AMD were the hot power hogs.
It's also funny how 6 years ago Intel were releasing low-core-count CPUs that were fast just for gaming and everyone was flaming them non-stop. Now AMD does the same and they treat it like the second coming, lol.
 
I don't get it, something is weird about this analysis, and I think Techspot's "AMD bias" is on again… During gaming a 13900K or a 14900K rarely goes above 100/120W, and only for some spikes. …
Not true at all. They do go above 125W a lot.
 
For "The Definitive Test" this article surely is lacking.

As one commenter mentioned already, such a write-up without productivity apps is rather one-sided. I doubt many people buy this processor strictly for gaming.

Secondly, the McDonaldization of TechSpot's articles seems to be at play again. I get it, it's much easier and quicker to write a feature using the same template as many previous ones, but it just seems like a wasted opportunity and of real interest only to a very narrow audience. OK, the fact that the difference between Extreme and Performance is small is useful to high-end Intel users, but did it really need 24 games tested to establish that?

If you cut that number down to something much smaller, say 5-6 representative games (different engines, CPU/GPU intensive, etc.), and instead added a GPU that is less halo and more down-to-earth than the 4090, then the results would be much more useful to a broader range of readers (maybe even use another lower-end processor for comparison). The fact that they could possibly be "boring", i.e. with small differentials, is irrelevant, because this should not be entertainment but factual writing.

But kudos for at least including 1440p / 4K results.
 
For "The Definitive Test" this article surely is lacking.

As one commenter mentioned already, such write-up without productivity apps is rather-one sided. I doubt many people buy this processor strictly for gaming.

Secondly, the McDonaldization of TechSpot's articles seems to be at play again. I get it it's much easier and quicker to write a feature using the same template as for many previous ones, but it just seems like a wasted opportunity and of real interest only to very narrow audience. Ok, the fact that the difference of Extreme vs Performance is small is useful to high-end Intel users, but did it really need 24 games tested to establish that?

If you cut down that number to much smaller, say 5-6 representative games (different engines, CPU/GPU intensive, etc), and instead added a GPU that is less halo and more down to earth than 4090, then the results would be much more useful to a broader range of readers (maybe even use another lower end processor for comparison). The fact they could be possibly "boring", ie with small differentials, is irrelevant because this should not be entertainment but factual writing.

But kudos for at least including 1440p / 4K results.
Sorry, but why would they cut down the GPU in a CPU test? You can extrapolate the results of a smaller GPU by just pretending that, for example, the 4K/1440p numbers are for 1440p/1080p.

As for the game samples, fewer titles just means there is a much higher chance that one outlier game can influence the final results. It can also exclude part of the "broader audience" (as you put it), since a smaller sample may not include their favorite games or games they want to buy in the future.

You want productivity? Here are the general results from the many known reviews: the 14900K is slightly better than the 7950X/7950X3D by a few percent, generally trading blows depending on the task. If you want the best workstation CPU (in the consumer market, not HEDT), without caring about power draw, upgradability or ECC memory, then you go with the 14900K.
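The extrapolation point can be put in concrete terms with a simple bottleneck model: the frame rate you actually get is roughly the lower of what the CPU can feed and what the GPU can render, which is why CPU reviews pair every chip with the fastest GPU available. The fps numbers below are placeholders for illustration, not measurements:

    # Simple bottleneck model: delivered fps is roughly min(CPU-limited, GPU-limited).
    # All fps numbers are placeholders for illustration, not benchmark data.
    def delivered_fps(cpu_fps, gpu_fps):
        return min(cpu_fps, gpu_fps)

    cpu_cap = 180                                # what the CPU could feed (isolated with a 4090)
    print(delivered_fps(cpu_cap, gpu_fps=250))   # top-end GPU: CPU-bound -> 180
    print(delivered_fps(cpu_cap, gpu_fps=120))   # mid-range GPU: GPU-bound -> 120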
 
Sorry, but why would they cut down the GPU in a CPU test? You can extrapolate the results…
Ah see, the reason I just visit these comment sections rather than joining in these days is that way too many people do not understand why the tests are done the way they are, or how to extrapolate the data they need.

I legitimately feel sorry for Techspot on these articles. They're really good, usually my go-to source, then you click on the comments and find loads of people who just didn't understand what they just read.
 