It's Unnecessary, But AMD Is Basically Lying About CPU Performance

Maybe AMD used a low-end motherboard, and those limit the current, boost clock, and boost time. On the other hand, is it OK to advertise Intel CPU performance on super-expensive motherboards?
 
AMD is right here. It makes no sense to get a $300 CPU and then spend double that on a GPU. A $300 CPU and an RX 6600 makes perfect sense.

To be more precise, running games in a GPU-limited scenario is the only thing that actually makes sense IRL. Low-resolution benchmarks represent situations that almost never occur in real life.
 
AMD is right here. It makes no sense to get a $300 CPU and then spend double that on a GPU. A $300 CPU and an RX 6600 makes perfect sense.

To be more precise, running games in a GPU-limited scenario is the only thing that actually makes sense IRL. Low-resolution benchmarks represent situations that almost never occur in real life.
Testing CPUs in a GPU-limited scenario makes zero sense.

If two CPUs cost the same, how would you know which one is faster, and therefore which one to buy, if the tests performed are GPU-bound?
 
Testing CPUs in a GPU-limited scenario makes zero sense.

If two CPUs cost the same, how would you know which one is faster, and therefore which one to buy, if the tests performed are GPU-bound?
It also makes no sense to run a non-GPU-limited scenario. I tend to maximize GPU settings in games so that the game looks as good as possible, not lower settings to make it run at 1000+ FPS.

AMD seems to be faster while GPU-bound. We have seen before that there are speed differences between systems even in GPU-limited scenarios.
 
Guess they have been doing it for so long they have forgotten how to tell the truth, even when it's to their advantage! LOL
 
It also makes no sense to run a non-GPU-limited scenario. I tend to maximize GPU settings in games so that the game looks as good as possible, not lower settings to make it run at 1000+ FPS.

AMD seems to be faster while GPU-bound. We have seen before that there are speed differences between systems even in GPU-limited scenarios.
No. Just no.

Benchmarks have to be done with CPU-bound settings. Period. Just stop.
 
Testing CPUs in a GPU-limited scenario makes zero sense.

If two CPUs cost the same, how would you know which one is faster, and therefore which one to buy, if the tests performed are GPU-bound?

-Agreed, but it's also the reason I wish more reviewers would test "real world" gaming setups more often, as opposed to always min/maxing the CPU/GPU combo.

Might make a fun article to review $500/$1000/$1500 Intel/AMD/NV systems and see if there is really much difference between them in raster performance, and how much difference there is in RT...

People buy parts based on the min/max numbers, but often pair their GPU/CPU with a far less powerful CPU/GPU than the review used, and could in theory have the exact same performance regardless of which vendor they went with.
 
-Agreed, but it's also the reason I wish more reviewers would test "real world" gaming setups more often, as opposed to always min/maxing the CPU/GPU combo.

Might make a fun article to review $500/$1000/$1500 Intel/AMD/NV systems and see if there is really much difference between them in raster performance, and how much difference there is in RT...

People buy parts based on the min/max numbers, but often pair their GPU/CPU with a far less powerful CPU/GPU than the review used, and could in theory have the exact same performance regardless of which vendor they went with.
There already are CPU reviews and GPU reviews. If the GPU you have gets 70 FPS at the resolution you are gaming at, and the CPU can get 100 FPS with a 4090, then it can also deliver the 70 FPS that your current GPU does.

You don't need to cross-test like you are suggesting.
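
To put the reasoning in a rough sketch (the numbers are invented for illustration, not taken from any review): the frame rate you actually see is capped by whichever part is the bottleneck, so a CPU-bound CPU review plus a GPU review at your resolution already tell you what the combination will do.

```python
# Minimal sketch: observed FPS is roughly the lower of the two caps.
# All figures below are hypothetical, purely to illustrate the argument.

def effective_fps(cpu_fps_cap: float, gpu_fps_cap: float) -> float:
    """The FPS you actually get is limited by the slower component."""
    return min(cpu_fps_cap, gpu_fps_cap)

cpu_cap = 100  # what the CPU reached in a CPU-bound review with a 4090
gpu_cap = 70   # what your own GPU reaches at your resolution and settings

print(effective_fps(cpu_cap, gpu_cap))  # 70 -> your GPU sets the limit, not the CPU
```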
 
Like it or not, their chart is accurate... but downright misleading...

It is funny, because Intel has been doing that for years, and they are right now in the middle of MANY controversies of their own, and nobody is making a fuss about it. It is just a fair fight at this point...
---------------------------------------------------------------------------------------------------------------------------

- Intel denies reports that it identified a root cause for Core i9 crashing issues — investigation continues

- Intel slapped with class action lawsuit over foundry revenues — litigants allege securities fraud

- Intel Hits Back At AMD’s Data Center AI Performance Claims: Says 5th Gen Xeon Faster Than AMD EPYC Turin Using Proper Optimizations

"At Computex, AMD highlighted the leadership performance of 'Turin' on AI workloads compared to 5th gen Intel Xeon processors using the most up-to-date, publicly available software available at the time for both AMD and our competitor. The data for “Emerald Rapids” that Intel highlighted in their blog uses a software library that was released on June 7, after our Computex event. It is also important to note the new software stack was only used for the Intel platform and the performance optimizations were not applied to the AMD platform. 4th Gen EPYC CPUs continue to be the performance leader, and we expect “Turin” to remain deliver leadership performance across a broad range of workloads when we launch later this year.

AMD PR to Wccftech"
 
Testing CPUs in a GPU-limited scenario makes zero sense.

If two CPUs cost the same, how would you know which one is faster, and therefore which one to buy, if the tests performed are GPU-bound?
That's not what AMD said... they just said that if you are in a GPU-bound scenario, buying an Intel CPU would not make any difference... and they are right.
 
It consumes way more than a 7800X3D or a 5800X3D, and probably even a 5950X.
Are we going to keep making things up, or just look at the data? The 13600K at 1080p with a 4090 consumes 73 W. The 5950X consumes 85 W. So no, it doesn't consume more than the 5950X, but it is a lot faster. Now move on.
 
That's not what AMD said... they just said that if you are in a GPU-bound scenario, buying an Intel CPU would not make any difference... and they are right.
Buying an AMD CPU won't make a difference either. That's just some Captain Obvious stuff.
 
-Agreed, but it's also the reason I wish more reviewers would test "real world" gaming setups more often, as opposed to always min/maxing the CPU/GPU combo.

Might make a fun article to review $500/$1000/$1500 Intel/AMD/NV systems and see if there is really much difference between them in raster performance, and how much difference there is in RT...

People buy parts based on the min/max numbers, but often pair their GPU/CPU with a far less powerful CPU/GPU than the review used, and could in theory have the exact same performance regardless of which vendor they went with.
I think you are right. If you tested with the GPUs people actually buy, then we would quickly see that we really just need whatever is on offer for $200. At the moment a 12600K is remarkably well priced with a motherboard; just get that. These X3D chips are an absolute rip-off if you ask me. They suffer in productivity to give you maybe 10% more performance in games, which you won't notice unless you are running something like a watercooled 4090. AND they want more money for them. And it's similar with the xx900 models from Intel. Just pretend they don't exist.

It used to be that people would want the best and pay for it. But these days you get so little back for doing that, I don't think people are doing it as much anymore.
 
Regardless of the main subject here, the Ryzen 5000 series proved to be really good value over time. The 5800X being only 13% slower than a 13600K (despite the RAM speed difference) tells a good story of how well AM4 has aged. AM4 is still going strong, it seems.
 
You are all aware that those test scenarios almost never reflect real-life experience, right? Also, come on, who pairs a mid-range or mid-high CPU with a 4090...
All those benchmarks matter to... only the benchmarking guys.
 
You are all aware that those test scenarios almost never reflect real-life experience, right? Also, come on, who pairs a mid-range or mid-high CPU with a 4090...
All those benchmarks matter to... only the benchmarking guys.
Two CPUs have the exact same price.

CPU A is much faster in gaming than CPU B, but I don't know that, because reviewers, instead of testing with a 4090, are testing with a 6600. So I buy CPU B, and down the line, when I upgrade my GPU, I realize my CPU is holding it back.

So I bought the wrong CPU for my use case, because the CPU review, instead of testing the actual CPU, was testing the 6600.

Got it now?
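
To make the point concrete with a rough sketch (all figures are made up for illustration; none come from a real review): under a weak GPU both CPUs hit the same GPU-imposed ceiling, and only a fast GPU reveals the gap between them.

```python
# Hypothetical CPU-limited frame rates, invented purely to illustrate the argument.
cpu_fps_cap = {"CPU A": 160, "CPU B": 110}

# Hypothetical GPU-limited ceilings for a slow and a fast graphics card.
gpu_fps_cap = {"RX 6600": 75, "RTX 4090": 200}

for gpu, gpu_cap in gpu_fps_cap.items():
    for cpu, cpu_cap in cpu_fps_cap.items():
        # Observed FPS is limited by whichever component is slower.
        print(f"{cpu} + {gpu}: {min(cpu_cap, gpu_cap)} FPS")

# With the RX 6600 both CPUs show 75 FPS, so a GPU-bound test can't tell them apart;
# with the RTX 4090 the real gap (160 vs 110) finally becomes visible.
```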
 
The 13600K consumes 300 watts? In games? Okay, bud.

Because I am a nice guy, I didn't even mention accelerated degradation due to electromigration, as Intel disregarded their own safety limits and allowed the chips to run at ridiculous voltages and temperatures indefinitely, just to gain the edge over AMD.

As for the 74 W consumption figure, nice try but no banana. Maybe if you play StarCraft I at 1024x768, frame capped.
 
It also makes no sense to run a non-GPU-limited scenario. I tend to maximize GPU settings in games so that the game looks as good as possible, not lower settings to make it run at 1000+ FPS.

AMD seems to be faster while GPU-bound. We have seen before that there are speed differences between systems even in GPU-limited scenarios.

1: Not everyone focuses on having every setting at maximum values; there are a lot of us who want a CPU that will handle 1440p for the next 4-5 years.

2: There will always be *some* noise even in GPU-bound situations.
 