2990X Threadripper Performance Regression FIXED on Windows*

Though that is more about the multi-package AMD CPUs, I think I've been seeing symptoms of this going back to the early days of Ryzen and the 1700/1800 even. With certain compile tools - ones that can use as many threads as you can throw at them - Ryzen CPUs were significantly behind my 4820K GHz for GHz, i.e. an 8 core / 16 thread chip was 50% slower than my 4/8 i7 unless I disabled SMT or messed with CPU affinity, at which point it jumped up to the same speed as my i7. I had to go to something like a 16/32 AMD CPU to see actual gains. At the time I put it down to legacy software that had been heavily optimised for Intel Hyper-Threading, but that video would explain it if Windows generally has a poor time dealing with threads on AMD CPUs.
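For anyone wanting to try the same affinity workaround, here's a minimal sketch of restricting a process to one logical CPU per SMT pair. It uses `os.sched_setaffinity`, which is the Linux API (on Windows you'd use Task Manager or `start /affinity` instead), and it assumes the common layout where the two SMT siblings of a core are numbered adjacently - that's an assumption, so check the real topology with `lscpu` first:

```python
import os

# CPUs the scheduler currently allows this process to run on.
all_cpus = sorted(os.sched_getaffinity(0))

# Keep every other logical CPU, assuming sibling threads of one
# physical core are numbered adjacently (0/1, 2/3, ...). This is a
# sketch -- layouts vary, so verify with lscpu before relying on it.
physical_only = set(all_cpus[::2])

# Restrict the current process (pid 0 = self) to that subset.
os.sched_setaffinity(0, physical_only)
print(sorted(os.sched_getaffinity(0)))
```

Disabling SMT in the BIOS achieves much the same thing system-wide; the affinity route lets you test a single tool without rebooting.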

https://forums.overclockers.co.uk/posts/31682069

Funny thing is yet again I got **** from the AMD fanboys for saying it :rolleyes:
 
Pretty sure you or sideways misquoted each other there :p - it was oldbanana who said it was crap.

Can't remember which instance it was that ended up attracting a load of ****** comments in reply. I have several people in that thread blocked and I'm not really interested in checking back through it.
 
I wonder if they'll even bother; something like this could well fly under the radar considering they ignore even their own Feedback Hub.
Either they fix it or risk people switching to Linux. Besides, Microsoft works closely with all the major hardware vendors on things like DirectX, so you would think AMD has a contact or two they can work with to get this fixed.
 
I find it hard to believe they didn't know about it, tbh. Testing their server SKUs would have shown issues at least.

Since when have M$ given a hoot about a small sample of hardware problems (related to their kernel), especially when the average person doesn't know how to prove it? They have had enough problems with their own patches. This is good, but unless there's a big enough song and dance over it, it's gonna blow over.
 
On the desktop, yes, I agree, but the server space is a different ball game, is it not?
 
EPYC is server, right? It would be just inconvenient to involve that market, so I guess you're right, but we know that if it was just AMD's desktop flavour it would go to the bottom of the to-do list.
 
That's assuming people are aware it's a problem with Windows and not TR. My guess is most people will see a performance deficit and say AMD's CPUs aren't as good as Intel's. It will be wrong, but that won't stop them from assuming that's where the problem is - it can't be Microsoft's fault as they're soooo 'great', right? :rolleyes:

Changes to a scheduler aren't like working with hardware vendors on things like DirectX. IIRC it took Linux devs almost a decade to get NUMA working properly/efficiently, and they had much more motivation to get it working correctly, as Linux, I think, is used in more environments where NUMA is a thing.

I suspect AMD has already tried working with Microsoft on this and either got blank stares or was told "Nope, we ain't fixing that". The reason I suspect that is because AMD has spent time and money on developing software that attempts to assign threads to the best cores.
 
Yes, unfortunately I'd have to agree. Maybe with them not being on the server scene until recently, they just CBA to fix it.
 
Ryzen is a great chip, but some people don't understand that hardware can be great and acknowledged as such, yet still suffer from poor software, or simply have workloads it doesn't do quite as well in. Once "you" have made a purchase, it apparently means your opinion of the product has to be 100% positive for that purchase to make sense to these people. I switched from a 4790K to a 2600 non-X and I am happy with the performance - I have seen gains, but also ties and even losses in peak performance in some rare cases. Though compared to my old i7, the 2600 is a lot harder to tilt and can take a harder beating before tripping up in multitasking scenarios, so I got what I was after.
 
One area where I noticed the 2600 seems quite good is Windows boot times - it is one of the few general areas (rather than specific applications that like lots of cores, etc.) where I notice it stand out compared to the older i7s. It also seems better than the 1600/1700 at that, for some reason.
 
I can give you another. While max and even avg fps was lower on my Ryzen 2600 compared to the older i7 4790K, the frametimes I exported were much better on the Ryzen chip in GTA 5, with fewer and less severe spikes. Now I predict someone will be triggered by this comment, but that is the internet for you.
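To make "fewer and less severe spikes" concrete: average fps hides pacing problems, so frametime comparisons usually look at a high percentile and a spike count instead. A small sketch - the two captures here are made-up numbers, not taken from any real log:

```python
def frametime_stats(frametimes_ms):
    """Average, rough 99th-percentile frametime, and spike count."""
    n = len(frametimes_ms)
    ordered = sorted(frametimes_ms)
    avg = sum(ordered) / n
    # Rough percentile by rank; fine for a sketch like this.
    p99 = ordered[min(n - 1, int(n * 0.99))]
    # Count frames taking more than twice the average as "spikes".
    spikes = sum(1 for t in frametimes_ms if t > 2 * avg)
    return avg, p99, spikes

# Two hypothetical captures with near-identical averages but very
# different pacing: one smooth, one with a couple of big stutters.
smooth = [16.7] * 98 + [20.0, 21.0]
spiky  = [15.0] * 98 + [60.0, 120.0]
print(frametime_stats(smooth))
print(frametime_stats(spiky))
```

Both runs would report similar average fps, but the percentile and spike count make the stutter in the second capture obvious.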
 
To be fair, the systems I built were for non-gaming people, so I've not had extensive opportunity to test gaming-wise, but I didn't actually see any difference at all in the games I did test with my 1070 compared to my 4820K - I mean literally the same framerate (give or take 1-2 fps) and the same smoothness, as far as I could tell, in any of the bits I'm familiar with from playing the games a lot.

EDIT: I should add that the 2600s weren't overclocked other than enabling the performance enhancements that had them boosting to around 4GHz, while my 4820K is slightly overclocked.
 
Problem is, there are also a lot of issues right now with Windows 10 on the consumer side, so it's difficult to get a clear and true picture of actual performance and experience. If only I had known about the EmptyStandbyList fix a lot sooner, it would have saved a few headaches for sure. While I stand by my numbers and statements, I can fully understand why someone else may have had a different experience and/or different numbers.

On a side note, overclocking my 2600 is impractical due to a **** Gigabyte BIOS. The gains are not worth it compared to the increased power draw. Vdroop on this board is horrendous.
 