AMD announce EPYC

Intel's slides for their new Xeons focus heavily on trying to talk down Zen, using insanely unfair comparisons, like comparing one Xeon to an 1800X desktop chip clocked at 2.2GHz, and mentioning in every slide that EPYC uses a desktop die... while at the same time highlighting the coherent links that give multiple 50GB/s connections from one die to another, something not remotely used on desktop in any way, shape or form, and not even used in single-socket servers, only in 2S systems. In previous years they wouldn't even mention AMD; now they mention EPYC in almost every slide and misrepresent it in almost every single one.

An interesting thing is that Intel criticise EPYC's L3 latency as being high on average, but the local L3 latency is extremely low, whereas Intel went with a much smaller L3 (they've upped the L2 but had to dump a lot of L3 to fit everything onto one die), which sends misses out to memory with massively worse latency than AMD's local L3. So when software can keep the majority of the data a CPU needs in its local L3, the latency is drastically lower than on an Intel chip with a neutered L3 that has to go out to memory more often as a result.

Again, this is where the many-die strategy works. Intel, with a massive die for the 28-core part, can't afford more features and had to cut back in multiple areas. AMD could afford more cache, more memory channels, more PCIe and more cores precisely because adding 25mm^2 of extra logic to a 170mm^2 die is easy, whereas adding 4x 25mm^2 worth of features to a single 680mm^2 die, taking it up to 780mm^2, would have been the difference between getting 10 working CPUs per wafer and 1, and 10 would already be a disastrously bad yield at 680mm^2.
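
To put rough numbers on the yield argument, here's a back-of-envelope sketch using a simple Poisson yield model. The die sizes are the ones mentioned above; the defect density and the gross-die approximation are illustrative assumptions, not real 14nm figures, so only the trend matters:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross die count on a round wafer (ignores scribe lines and partial edge dies)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def yield_fraction(die_area_mm2, defects_per_mm2):
    """Poisson yield model: probability a die has zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.003  # assumed defects per mm^2 -- illustrative only

for area_mm2 in (170, 680, 780):
    gross = dies_per_wafer(area_mm2)
    good = gross * yield_fraction(area_mm2, D0)
    print(f"{area_mm2} mm^2: ~{gross} gross dies, ~{good:.0f} good dies per wafer")
```

The exact counts depend entirely on the assumed defect density, but the shape of the result doesn't: good dies per wafer fall off roughly exponentially with die area, which is the whole point of stitching four small dies together instead of building one huge one.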

As for one of the main things they criticise EPYC for, L3 latency: the real-world worst-case L3 latency won't be hit often, while Intel having just over half as much L3 capacity will be a problem much, much more often.
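
A crude way to see why capacity can matter more than the headline latency figure is to weight the two paths by how often they're taken. The latencies and hit rates below are assumed round numbers purely for illustration, not measurements of either chip:

```python
def avg_latency_ns(l3_hit_rate, l3_latency_ns, dram_latency_ns):
    """Expected latency for accesses that miss L2: either hit L3 or go out to DRAM."""
    return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

# Bigger local L3 -> higher hit rate, even if the occasional remote hit is slower.
print(avg_latency_ns(0.90, 15, 90))  # large local L3: ~22.5 ns on average
print(avg_latency_ns(0.70, 12, 90))  # smaller L3: faster hits, far more DRAM trips, ~35.4 ns
```

Even with a slightly slower average L3 hit, the chip that misses less ends up ahead once DRAM latency is factored in.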

Semiaccurate say they have confirmation that Intel's way of chopping up the line-up with neutered features has cost them customers who will go with EPYC instead. Great move by Intel... for AMD, that is; it's bad for Intel. The whole launch seems terrible from Intel. Why are they even saying AMD just reused a desktop die? If AMD reused a desktop die and still provided better server features with EPYC, what exactly does that say about Intel, who spend billions in R&D on server chips? They made something dedicated and it's a bit faster in some things, slower in others, and has some significant drawbacks, compared to a chip AMD supposedly threw together from desktop parts? That just makes Zen sound amazing and Intel sound stupid. That messaging, along with angering customers by artificially neutering features and demanding literally thousands extra just not to cripple a feature like memory capacity, is just crazy.

The messaging of "hey, AMD had 5 years to work on this one architecture, we've worked on this for a year since Broadwell-E, and look at these benchmarks where we still win" would sound so much better, but they went for the absolute worst message possible: "AMD barely even tried and basically matched us."
 
At the end of the day, drunkenmaster, Intel isn't going to fool anyone that actually matters. The big players in the server/data centre market have already been trialling EPYC for over a year and have already made commitments to AMD going into the future. All Intel will succeed in doing is making themselves look like a baby chucking its dummy out of the pram. Blatantly lying about EPYC to professional IT experts who already know what EPYC is capable of doing for them is only ever going to end one way: a hell of a lot of Intel hardware and chips flooding the second-hand market.
 
Not sure what they even mean by calling EPYC a "reused desktop die"; aren't Intel's HEDT parts just higher-clocked Xeons with features removed? What's the difference? Intel are absolutely correct when they say software optimisations will be needed for Naples and Ryzen, but when AMD's chips often outperform Intel's already, what does that say about Intel's chips? It's also not a great PR move to say "our monopoly has meant all software is optimised for our chips, so just stick with us rather than drive innovation".

The whole integrated vs segmented argument they put forward is also pretty stupid, since both approaches have advantages and disadvantages, and AMD's approach seems pretty symbolic of the way monolithic software architectures are being replaced with multi-threaded, layered architectures that are more manageable. Of course they're just trying to sell their product, but it seems like they're struggling for evidence if all they have is "SMT is better at the same clock speeds, even though AMD's chips are clocked higher anyway for the same TDP and are thus more efficient, plus a hell of a lot cheaper" and "AMD's architecture is less well known and weird, be scared of it".
 
In Intel's case it used to be that Xeons were literally no different from desktop chips except that features got disabled. Then they did move to separate dies, though usually with the same architecture, and it was really more about core count, die size and cost than designing something differently, even if more cores required a slightly different way to connect them. Arguably, up until the very latest chip they were mostly trying to reuse the same ring bus for server in a not particularly consistent or brilliantly designed way. They've finally moved to a full mesh, and I'd almost consider Skylake-SP to be one of their first genuine server designs: it's got a different cache layout and a vastly different fabric for pushing data around the chip. Before this they had the same architecture and same cache design as all their desktop stuff.
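
As a rough illustration of why the ring stops making sense as core counts climb, here's a toy comparison of average hop counts. The approximations (a bidirectional ring averages about N/4 stops, an r x c mesh averages about (r + c) / 3 hops) and the grid shapes are illustrative assumptions, not the actual Skylake-SP floorplan:

```python
def ring_avg_hops(n_stops):
    """Approximate average distance between two stops on a bidirectional ring."""
    return n_stops / 4

def mesh_avg_hops(rows, cols):
    """Approximate average Manhattan distance on an r x c mesh."""
    return (rows + cols) / 3

for cores, (rows, cols) in [(10, (2, 5)), (18, (3, 6)), (28, (4, 7))]:
    print(f"{cores} cores: ring ~{ring_avg_hops(cores):.1f} hops, "
          f"mesh ~{mesh_avg_hops(rows, cols):.1f} hops")
```

On a handful of cores the ring is simpler and good enough; by 28 cores the average trip around a ring is roughly double that of a mesh, which is presumably part of why Intel finally changed the fabric.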

Then Intel pushed their server stuff down onto HEDT. AMD are using basically one die for everything, but they've really only done the same thing as Intel: designed something with the fabric and inter-chip interconnect specifically built to work for server, then pushed that design down to HEDT and desktop.

Hell, if anything you could say Intel designs a cheaper architecture for desktop while AMD provides you with full server-quality chips on desktop. It would be quite funny if AMD actually went that route with their marketing just to pee off Intel :p


In terms of software, any really big, valuable customers write their own, and optimising it for a different architecture is, in effect, child's play. A company of Facebook's size will have hundreds if not thousands of software engineers; they'll get samples and try to optimise their code for them to get an idea of which way they'll go moving forwards. They'll have to optimise for Skylake-SP as well, because not doing so would be mental. Even just putting a team on it for a couple of weeks to see what kind of performance improvements are possible lets them make decisions with more experience of the code. They'll end up with a rough idea that, say, 10 weeks with the whole team gets performance up by roughly another 40%; that information goes upstairs and someone makes a decision on what is best financially, EPYC or Skylake-SP.
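
That decision really is just arithmetic once the estimates exist. A minimal sketch of the sort of back-of-envelope sum involved, with every number below made up purely for illustration:

```python
def optimisation_payoff(fleet_size, cost_per_server, speedup, team_weeks, weekly_team_cost):
    """Hardware savings from a software speedup, minus the engineering cost of getting it."""
    servers_saved = fleet_size - fleet_size / (1 + speedup)
    hardware_saving = servers_saved * cost_per_server
    engineering_cost = team_weeks * weekly_team_cost
    return hardware_saving - engineering_cost

# e.g. a 10,000-server fleet at $10k per box, a ~40% speedup after 10 team-weeks at $100k/week
print(f"${optimisation_payoff(10_000, 10_000, 0.40, 10, 100_000):,.0f}")
```

At that scale even a few weeks of engineering to characterise a new platform pays for itself many times over, which is why the big customers will do the work for both vendors before committing.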

Hell, when it comes to Skylake-X, even with all this software support there are many games that, just as with Zen, didn't work brilliantly out of the box on the new architecture and will take some time to optimise for.
 
AT benchmarks Epyc:

http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/3

It's a work in progress, but it looks to be relatively competitive.

In productivity workloads EPYC crushes the Xeon CPUs measuring performance alone. It needs a bit more work in data centre workloads, but even there it's not bad at all, and the cost-to-performance ratio makes the Intel chip look bad no matter what the workload.
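
For the price/performance point, the comparison is simple to do yourself from the review's numbers. A minimal sketch; the benchmark scores and list prices in the example call are placeholders to be filled in from the article, not figures quoted from it:

```python
def perf_per_dollar(score, list_price_usd):
    """Benchmark score per dollar of CPU list price (higher is better)."""
    return score / list_price_usd

# Placeholder values -- substitute the actual scores and launch list prices from the review.
epyc = perf_per_dollar(score=100, list_price_usd=4_200)
xeon = perf_per_dollar(score=110, list_price_usd=8_700)
print(f"EPYC is ~{epyc / xeon:.1f}x the performance per dollar in this made-up example")
```

Even when the Xeon wins a benchmark outright, the value calculation can still go AMD's way if the price gap is big enough.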

Impressive stuff.
 
AT benchmarks Epyc:

http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/3

It's a work in progress, but it looks to be relatively competitive.
The database test was a weird one. The whole database they tested could be stored in L3 cache; how realistic is that in the real world? How would an EPYC system handle a large database running Oracle E-Business Suite for hundreds of employees? Surely that's the type of workload they need to be testing and benchmarking.
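
The objection is easy to quantify: compare the working set to the aggregate L3 on each chip. The cache sizes below are the published figures (EPYC 7601: 8 x 8 MB CCX slices; Xeon Platinum 8176: 28 x 1.375 MB slices); the working-set sizes in the example calls are placeholders rather than the actual size of the database AnandTech used:

```python
# Aggregate L3 capacity in MB, from the published specs of the two flagship parts.
L3_MB = {"EPYC 7601": 8 * 8, "Xeon Platinum 8176": 28 * 1.375}

def fits_in_l3(working_set_mb):
    """Which chips could hold this hot working set entirely in L3?"""
    return {cpu: working_set_mb <= capacity for cpu, capacity in L3_MB.items()}

print(fits_in_l3(50))       # a small test database: fits in EPYC's L3, not quite in the Xeon's
print(fits_in_l3(500_000))  # a ~500 GB ERP database: nowhere close on either chip
```

Once the working set dwarfs the cache on both sides, memory bandwidth, capacity and I/O decide the result, which is exactly why a cache-resident database test says little about a big ERP deployment.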
 
New video by Adored. One thing I found interesting was that when AMD launched the Opteron (which eventually went on to claim 25% of the server market), Intel barely acknowledged AMD. Considering their press slide deck, they are either not making the same mistake again, or AMD is about to do some serious damage to Intel's market share (25+%). This is going to be interesting.
 
Intel might not be able to rip us off like they have been for much longer if AMD show us a better chip range at a better price too.
But even if they are not quite as good as Intel, their price will make up for that, I'm sure. GO AMD!!!
P.S. I'm not a fanboy of either, I just like to see greed stamped out! :D
 
New video by Adored. One thing I found interesting was that when AMD launched the Opteron (which eventually went on to claim 25% of the server market), Intel barely acknowledged AMD. Considering their press slide deck, they are either not making the same mistake again, or AMD is about to do some serious damage to Intel's market share (25+%). This is going to be interesting.
Hopefully this time the EU and US will both give them a much bigger fine if they try dirty tricks again.
 
It's really impressive what AMD have achieved so far, especially considering the R&D difference between it and Intel. Heck AMD isn't even in the top 10 of R&D spenders in the world and Intel is number 1.
 
Intel compared their chip to an underclocked Threadripper at 2.2GHz for the clock-for-clock comparisons.
LOL! OK, let's see how much you can push them and get out of them, then put it on a big flag to wave :D
 