9900KS...

Soldato
Joined
6 Feb 2019
Posts
17,634
$513 5GHz Special Edition CPU - Intel 9900KS Review
https://www.youtube.com/watch?v=xPhlL0dZ64M

Paul's Hardware reporting the cost at $513....

Can someone explain this?

The 3900x gets a higher CPU score and yet it also gets a lower GPU and lower Overall score.


[attached benchmark result screenshot]
 
Soldato
Joined
15 Jun 2005
Posts
2,751
Location
Edinburgh
Can someone explain this?

The 3900x gets a higher CPU score and yet it also gets a lower GPU and lower Overall score.
The overall score is calculated using a weighted algorithm, which evaluates how well balanced the system is. If the GPU score is lower it doesn’t help so much to keep increasing the CPU beyond a certain point. This is intended to reflect how CPU and GPU performance relate when playing a real game.
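
For intuition, here's a minimal Python sketch of a weighted harmonic mean of that kind (the 0.85/0.15 weights are Time Spy's published GPU/CPU split; Fire Strike uses its own weights, and the scores below are made up):

# Minimal sketch of a weighted harmonic mean overall score.
# The 0.85/0.15 defaults are Time Spy's published GPU/CPU weights;
# other 3DMark tests use different weights.
def overall(gpu_score, cpu_score, w_gpu=0.85, w_cpu=0.15):
    return 1 / (w_gpu / gpu_score + w_cpu / cpu_score)

# Illustrative numbers: a higher CPU score doesn't offset a lower GPU score.
print(round(overall(gpu_score=21000, cpu_score=24000)))  # -> 21401
print(round(overall(gpu_score=20000, cpu_score=25000)))  # -> 20619, lower overall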
 
Soldato
Joined
6 Feb 2019
Posts
17,634
The overall score is calculated using a weighted algorithm, which evaluates how well balanced the system is. If the GPU score is lower it doesn’t help so much to keep increasing the CPU beyond a certain point. This is intended to reflect how CPU and GPU performance relate when playing a real game.


But they used the same GPU.

This could just be an issue with the benchmark.

Thinking about it in isolation, it doesn't make sense to have the same GPU and yet the CPU that seems faster produces a lower GPU score.

There is obviously a bottleneck somewhere causing the actual game performance to be lower, and that's unfortunately not picked up in the CPU tests and scoring.

I'm not too familiar with Fire Strike as I usually use Time Spy, but is it safe to assume that if they used the same GPU but threw in a 32-core Threadripper, the CPU score would be even higher and the overall score even worse?
 
Soldato
Joined
15 Jun 2005
Posts
2,751
Location
Edinburgh
Thinking about it in isolation, it doesn't make sense to have the same GPU and yet the CPU that seems faster produces a lower GPU score.
The CPU score comes from a physics-based test and so responds well to multi-threading. The GPU score still requires an element of CPU, but this responds better to IPC and frequency.

A Threadripper system may indeed score lower overall, but then it would also perform worse in a real game.
 
Soldato
Joined
6 Feb 2019
Posts
17,634
The CPU score comes from a physics-based test and so responds well to multi-threading. The GPU score still requires an element of CPU, but this responds better to IPC and frequency.

A Threadripper system may indeed score lower overall, but then it would also perform worse in a real game.

Thanks, it raises another question though: if only the overall score is important, why did they reference the results by CPU score, or why even put a 3DMark benchmark in a CPU review?
 
Soldato
Joined
15 Jun 2005
Posts
2,751
Location
Edinburgh
Thanks, it raises another question though: if only the overall score is important, why did they reference the results by CPU score, or why even put a 3DMark benchmark in a CPU review?
3DMark is intended to mimic a typical gaming workload. How well it does this is hotly debated, as with many synthetic benchmarks.
 
Associate
Joined
28 Sep 2018
Posts
2,276
Thanks, it raises another question though: if only the overall score is important, why did they reference the results by CPU score, or why even put a 3DMark benchmark in a CPU review?

3DMark's CPU test is a good measure of CPU performance, whereas the overall test blends GPU and CPU.

Take a look at der8auer's review, where he breaks out the GT1/GT2/CPU tests in Time Spy and the power load they put on the CPU.
 
Associate
Joined
28 Sep 2018
Posts
2,276
3DMark is intended to mimic a typical gaming workload. How well it does this is hotly debated, as with many synthetic benchmarks.

It's more of a high-stress benchmark than a typical workload, especially Time Spy and Time Spy Extreme.

Games, in truth, aren't that stressful. However, they are very sensitive to unstable overclocks of the core/uncore/memory/GPU.

Here's a 90-minute snip of BFV I did earlier, and you can see its lack of stress. This is with a bunch of tabs and things open in the background.

https://cdn.discordapp.com/attachments/305981997816348683/639220611448832010/unknown.png

You get the occasional blip, hence the maximums, but the average is really mild.
 
Soldato
Joined
15 Jun 2005
Posts
2,751
Location
Edinburgh
A real game is much more variable and spiky, which is why longer and multiple benchmark runs are required to get a full picture. Synthetics like 3DMark are just a quick, consistent way to make comparisons between systems/components.
 
Associate
Joined
9 May 2007
Posts
1,284
The 3DMark Time Spy CPU test is not high stress; it's one of the easier ones to bench. An overclock that will restart the machine in Cinebench can still pass a Time Spy CPU run, mainly because the test uses SSSE3 instructions.

Time Spy CPU test scoring
The CPU test consists of three increasingly heavy levels, each of which has a ten second timeline. The third, and heaviest, level produces a raw performance result in frames per second (FPS), which is multiplied by a scaling constant to give a CPU score ($S_{cpu}$) as follows:

$$S_{cpu} = K_{cpu} \times F_{cpu}$$

The scaling constant $K_{cpu}$ is used to bring the score in line with traditional 3DMark score levels.

Time Spy Extreme CPU test scoring

In the Extreme CPU test we only measure the time taken to complete the simulation work. The rendering work in each frame is done before the simulation and does not affect the score. The CPU score ($S_{cpu}$) is calculated from the average simulation time per frame ($t_{avg}$), reported in milliseconds:

$$t_{avg} = \frac{t_{total}}{N_{frames}} \qquad S_{cpu} = \frac{K_{cpu}}{t_{avg}}$$

The scaling constants are used to bring the scores in line with traditional 3DMark score levels.
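
Side by side, the two rules look like this (a minimal Python sketch; both K values are placeholder scaling constants, as UL's actual values aren't quoted here):

# Minimal sketch of the two CPU scoring rules described above.
# Both K values are placeholder scaling constants (assumptions),
# not official UL figures.
def timespy_cpu_score(fps_heaviest_level, K):
    # Time Spy: score scales directly with the fps of the third,
    # heaviest CPU test level.
    return K * fps_heaviest_level

def extreme_cpu_score(avg_sim_time_ms, K):
    # Time Spy Extreme: score scales inversely with the average
    # simulation time per frame (in milliseconds).
    return K / avg_sim_time_ms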

The main issue with the Time Spy CPU tests is the heavy use of SSSE3. This really helps Intel CPUs, and even when they score higher, an AMD CPU can equal the performance in many games despite a lower score. This suggests the balance of SSSE3 instructions is not right: SSSE3 should be 15-30% of the workload, with AVX doing the rest. If that happened, AMD CPUs would most likely score much better, and overclocked Intel CPUs might see reduced performance (see AVX offset). It would be worth investigating.

Really, given the way my AMD CPU is boosting in games, Time Spy is a heavier load than most games. Also, in some games (all the ones I have tested) I am beating an overclocked 9900K at 5GHz all cores with decent RAM (3200 and sometimes higher, stock timings from what I can tell). This implies there is something about the CPU test that does not benefit AMD CPUs (see the overuse of SSSE3 instructions). Remember the average 9900K does 11k in the Time Spy CPU test; I do at most 11400-11500, which is faster than most 9900Ks. Not bad for a 3800X.

 
Soldato
Joined
22 Nov 2009
Posts
13,252
Location
Under the hot sun.
Can someone explain this?

The 3900x gets a higher CPU score and yet it also gets a lower GPU and lower Overall score.


[attached benchmark result screenshot]

Too many cogs moving around on the 3900X until AGESA 1004 lands, when gaming will automatically be pushed to CCD0, which clocks to 4.6GHz, instead of CCD1. Some benchmarks already show 10% higher fps at 1080p with that BIOS.

A better indication would be a review using a 2080S & 5700XT (and a 2080 Ti, of course) with the 3700X/3800X & 9900KS, all of them with 3600C16 RAM.

You will be surprised how close the 3700/3800 are to the 9900KS with the 2080S, and that there is no difference with the 5700XT at 1080p.

Because someone would cry about lack of sources....
9900KS review

1004 AGESA on 3800X
https://www.reddit.com/r/Amd/comments/dpf037/3800x_1004_beta_bios/

Game benchmark

https://forums.overclockers.co.uk/posts/33122484/
 
Associate
Joined
9 May 2007
Posts
1,284
Too many cogs moving around on the 3900X until AGESA 1004 lands, when gaming will automatically be pushed to CCD0, which clocks to 4.6GHz, instead of CCD1. Some benchmarks already show 10% higher fps at 1080p with that BIOS.

A better indication would be a review using a 2080S & 5700XT (and a 2080 Ti, of course) with the 3700X/3800X & 9900KS, all of them with 3600C16 RAM.

You will be surprised how close the 3700/3800 are to the 9900KS with the 2080S, and that there is no difference with the 5700XT at 1080p.

Because someone would cry about lack of sources....
9900KS review

1004 AGESA on 3800X
https://www.reddit.com/r/Amd/comments/dpf037/3800x_1004_beta_bios/

Game benchmark

https://forums.overclockers.co.uk/posts/33122484/

It's because that BIOS is pushing much higher all-core boosts as well; the 3800X is gaining 50-80MHz on all cores. I have not tested it myself, but this would push a 3800X from 4249MHz to 4324MHz. On water I can boost to 4249-4299MHz in Time Spy on ABBA.

I am getting 4249-4299MHz in CPU-Z for all cores. The 9900KS looks good so far, but most won't get past 5.1GHz because you can't cool it. The 5700XT is easy to bottleneck. Hopefully this means that a 3800X with a really good RAM overclock will hit 11500 in the Time Spy CPU test; people already have that with RAM overclocks and core clocks at 4.4GHz. Note the average Time Spy CPU score for the 9900K is 11k. Core speed won't give a big uplift in Time Spy because it uses SSSE3 instructions.

As you are pushing FPS using SSSE3, you have much higher latency. Run the Time Spy Extreme CPU test on an AMD CPU using AVX512 and watch the frame time really drop. This is the issue with the graphics score.

Graphics test scoring

Each Graphics test produces a raw performance result in frames per second (FPS). We take a harmonic mean of these raw results and multiply it by a scaling constant to reach a Graphics score ($S_{graphics}$) as follows:

$$S_{graphics} = K_{graphics} \times \frac{2}{\frac{1}{F_{GT1}} + \frac{1}{F_{GT2}}}$$

The scaling constant is used to bring the score in line with traditional 3DMark score levels.

Final score

$$S_{final} = \frac{1}{\frac{W_{graphics}}{S_{graphics}} + \frac{W_{cpu}}{S_{cpu}}}$$

where the Time Spy weights are $W_{graphics} = 0.85$ and $W_{cpu} = 0.15$.
So with an $S_{graphics}$ of 12541 and an $S_{cpu}$ of 11534, we get $1/((0.85/12541) + (0.15/11534)) \approx 12379$; see here: https://www.3dmark.com/3dm/40441662?
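
For anyone who wants to check the arithmetic, here's a minimal Python sketch of those two formulas (K_graphics and the GT1/GT2 fps are placeholders, since only the finished sub-scores are quoted here):

# Minimal sketch of the Graphics and Final score formulas above.
# K_graphics is a placeholder; only the finished sub-scores are quoted
# in this post.
def graphics_score(fps_gt1, fps_gt2, K_graphics):
    # Harmonic mean of the two graphics test results, scaled.
    return K_graphics * 2 / (1 / fps_gt1 + 1 / fps_gt2)

def final_score(s_graphics, s_cpu, w_graphics=0.85, w_cpu=0.15):
    # Weighted harmonic mean of the sub-scores (Time Spy weights).
    return 1 / (w_graphics / s_graphics + w_cpu / s_cpu)

print(round(final_score(12541, 11534)))  # -> 12379, matching the linked run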

CPU instruction sets

In the Time Spy test, the boids simulation is implemented with SSSE3. In the Extreme CPU test, half of the boids systems can use more advanced CPU instruction sets, up to AVX2 if supported by the processor. The remaining half use the SSSE3 code path. The split makes the test more realistic, since games typically have several types of simulation or similar tasks running at once and would be unlikely to use a single instruction set for all of them.

Custom run

With Custom run settings, you can choose which CPU instruction set to use, up to AVX512. The selected set will be used for all boid systems, provided it is supported by the processor under test. You can evaluate the performance gains of different instruction sets by comparing custom run scores, but note that the choice of set doesn't affect the physics simulations, which always use SSSE3 and are 15-30% of the workload.

If you look at the CPU instruction-set results for the 3800X, first SSSE3 and then AVX512, we can see:

SSSE3 + AVX2
Average simulation time per frame 71.6 ms
https://www.3dmark.com/spy/9072540
CPU Score 4887

15-30% SSSE3 and AVX512
Average simulation time per frame 50.6 ms
https://www.3dmark.com/3dm/40587711?
CPU Score 6922
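
Both runs are consistent with the inverse-time formula above; the implied scaling constant of roughly 350,000 is inferred from these two results, not an official figure:

# Cross-check of the two runs against S_cpu = K / t_avg.
# K ~= 350,000 is inferred from these results (an assumption).
K = 350_000
print(round(K / 71.6))  # -> 4888, vs the reported 4887
print(round(K / 50.6))  # -> 6917, vs the reported 6922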

That's a massive jump in performance and a big reduction in the average simulation time per frame. This could be the reason for reduced $S_{graphics}$ scores in Time Spy for AMD CPUs. Some games have removed the need for SSE4.1 and SSSE3 to support certain CPUs; No Man's Sky, for example ("We are currently testing fixes for older AMD Phenom CPUs"): https://twitter.com/NoMansSky/status/764427751357120512
 
Soldato
Joined
6 Feb 2019
Posts
17,634
This thread has nearly the same number of replies as the 3900X thread.

Amazing how much traction Intel chips gain when you have German retailers saying most sales are for Ryzen.
 
Soldato
Joined
18 Oct 2002
Posts
4,333
It's hard to tell; a lot of reviewers have no idea what they are doing and aren't using the Intel-spec 1.60 mOhm LLC and Intel-spec 1.60 mOhm AC/DC load line in their reviews, so it's impossible to compare online reviews. Silicon lottery comes into it too, of course.

For example, KitGuru said their sample used 1.38V at 5GHz. Is that a bad sample, or is it bad default settings?

I will post details on mine when it comes in a few days..

Cheers bud, this is why I usually only read two sites, AnandTech/TechReport, because they give you the bottom line. Some sites have posted reviews with engineering samples, which is not great.

KitGuru seem to be more concerned with 5.2GHz AVX clocks and have posted a voltage of 1.392V at that speed, which is not far off my own chip.

I pull 90-100W at 5.3GHz at a similar voltage when running games. If running an AIO I would TDP-limit the chip, just to ensure things don't get out of hand when usage spikes.
 