
Intel to Cut Prices of its Desktop Processors by 15% in Response to Ryzen 3000

Do you know how long it takes to optimise a codebase for a new CPU architecture? Most software never gets updated, let alone updated within a short timeframe (say a few months). It's a bit much to expect AMD to give game developers access to chips months before launch and for those studios to prioritise performance on new AMD CPUs at all. To be honest I think we've done well to get several high profile games updated since Zen 1's launch.

I would not bank on patches improving performance at this point. If you game at 1080p, don't do much else with your PC, and have spare budget after maxing out your GPU, just get an i7-9700K and forget about it. Zen wasn't for you and Zen 2 isn't for you either, let's stop all the fighting about it. At lower price brackets and for more diverse users, AMD is extremely competitive.

Last sentence of your posts sums it up perfectly :)
 
I'd only give it until Comet Lake, tbh.

I'd give it a lot more time than that. How many years has Intel had the same old Skylake? Since 2015, I think, and they're still using it, and if the leaks are true it will be the basis of their 10th-gen CPUs. AMD are very close to Intel in every way at the moment. Yes, Intel can be faster at gaming with the right setup, but AMD has closed the gap quite a lot, so Skylake could be six years old by the time they finally release a 10nm Ice Lake CPU.

When Ice Lake does come out, do you really expect Intel to get monster i9 and i7 CPUs right out of the box? Do you think their 10nm will hit 5GHz easily? Do you think it will yield a massive IPC boost out of the gate? Intel are currently at a crossroads over what to do: another 14nm Skylake-type CPU keeps the only real advantage they have over AMD, which is pure clock speed. If Intel release a new die shrink they could lose that, and that would hand the upper hand to AMD for a while. Intel's last die shrink didn't exactly go according to plan, but that was OK back then, since they only had the most amazing CPU to compete with: the 5GHz AMD FX Bulldozer chip :) (joke). So they could afford to lose some clock speed, run higher temps, etc.

They can't afford that now. Ryzen has pushed Intel into a corner and Intel have to respond, and by all accounts they will reply with another 14nm chip rather than a 10nm one, leaving 10nm until the end of 2020 or even 2021.

Love or hate AMD or Intel, it's been a very long time since we've had this much choice across the whole range of CPUs.
 
Same old Intel vs AMD argument that dogs this forum, though. AMD are amazing, yes, but then you point out the chips don't actually outperform Intel and the argument turns to cost/power/heat. As an enthusiast, none of those really matters, does it? You want the best performance regardless.
No. If that were true, given this is an enthusiast forum, every single poster would have an i9-9900K and 2080 Ti. We don't.

Also, as far as anyone knows, Comet Lake is going to be another Skylake rebrand, probably with 10 cores. We might see the new "Sunny Cove" cores in Ice Lake (or whatever they've renamed it to now) in mobile parts next year, but probably not on desktop for a while, because clock speeds are going to suck compared to 14nm+++.
 
No. If that were true, given this is an enthusiast forum, every single poster would have an i9-9900K and 2080 Ti. We don't.

Yup. For me, a Ryzen Threadripper 2990WX (32C/64T) and a Tesla V100 PCIe 16GB HBM2, with an 815mm² GPU and 21.1B transistors :D

What kind of enthusiast would you be if you settled for an 8C/16T CPU? :D
 
No. If that were true, given this is an enthusiast forum, every single poster would have an i9-9900K and 2080 Ti. We don't.

Those who can, often do. Those who can't buy as good as they possibly can, which is why AMD are doing so well: the new offerings are very good for the money. But then you still get some people who buy the AMD product and will justify it until they're blue in the face as to why their money was well spent, rather than simply admit they compromised on performance due to cost.

Having said that, from what I've seen the cost sounds like it is levelling out with X570 boards, but I haven't done any real reading as I don't feel the need to investigate upgrades (yet).

Please don't interpret me as an AMD hater or Intel fanboy. I went 3770K > 1700X > 8700K and was poised to go 3700X, but will likely now wait until the next iteration to upgrade.
 
If you game at 1080p, don't do much else with your PC, and have spare budget after maxing out your GPU, just get an i7-9700K and forget about it. Zen wasn't for you and Zen 2 isn't for you either, let's stop all the fighting about it.

For maximum framerate, true.
Strictly for gaming, Intel offers the better value: you get a $374 CPU that outperforms both the $400 and $500 AMD offerings, which suggests AMD's pricing right now is very wrong.

Also, these particular scores lack the cheaper i5 and i3 parts; I don't know why AMD's entire stack is compared to only two Intel products.
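The value claim above can be made concrete with a quick performance-per-dollar calculation. A minimal sketch, assuming illustrative launch prices and normalised gaming-performance figures (these numbers are placeholders, not the chart's actual data):

```python
# Hypothetical launch prices and normalised gaming performance
# (i7-9700K = 100). Illustrative assumptions, not measured results.
cpus = {
    "i7-9700K":      {"price": 374, "perf": 100},
    "Ryzen 7 3800X": {"price": 399, "perf": 96},
    "Ryzen 9 3900X": {"price": 499, "perf": 97},
}

# Rank by gaming performance per dollar, best value first.
ranked = sorted(cpus.items(),
                key=lambda kv: kv[1]["perf"] / kv[1]["price"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: {c['perf'] / c['price'] * 100:.1f} perf per $100")
```

On these assumed numbers the cheaper chip tops the perf-per-dollar ranking even with a small raw-performance lead, which is the shape of the argument being made, whatever the exact figures turn out to be.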

 
It always surprises me how the case is only ever made with FPS. I'd like to see someone do a decent double blind subjective test on a range of systems to see exactly what difference end users experience. I suspect this hasn't been done as there would likely be little to no practical difference between them.

I did, however, hear that Google ran user tests and went with AMD for image quality. Whether that's true I don't know.
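The double-blind test suggested above could be organised along these lines. A rough sketch; the rig names, trial count and helper function are purely illustrative:

```python
import random

# Hypothetical pool of test rigs. In a double-blind run, neither the
# participant nor the person conducting the session knows which opaque
# station label maps to which hardware.
systems = ["3900X rig", "9900K rig"]

def blind_schedule(n_trials, seed=None):
    """Return (hidden assignment, opaque labels shown to both parties)."""
    rng = random.Random(seed)
    hidden = [rng.choice(systems) for _ in range(n_trials)]
    labels = [f"Station {chr(65 + i)}" for i in range(n_trials)]
    return hidden, labels

hidden, labels = blind_schedule(6, seed=42)
# Participants rate perceived smoothness per station; only after every
# session is finished are the labels unblinded and compared to `hidden`.
```

The point of the hidden mapping is that any reported difference has to survive the participants not knowing which box they were playing on.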
 
What storage do you use for your ESXi server: local, iSCSI or NFS? And what do you use the ESXi server for?

I have many in datacentres for commercial use, but the Ryzen 2 one at home uses local storage. I use it to host test *nix servers, e.g. to try configurations before deploying to live servers. I also have a Windows VM I use to run certain tools for personal use, and I test ESXi-specific things on it as well, though the plan is to migrate to Proxmox as I prefer Proxmox.

All of my datacentre deployments are in the process of being migrated to AMD.

Datacentre storage is usually local as well, but some is iSCSI for the clients who pay more.

For many of the machines I pay more for power than for bandwidth, to give you an idea of that side of things.
 
And this, sir, is why I'm eagerly awaiting my Rome servers. I was sent a list of Rome SKUs last week, so I'm assuming my suppliers are on the case pricing up all the kit for me.

I currently have six servers, each with 2x Xeon X5690 and 192GB RAM. Half the sockets and double the cores, for much lower power draw and better performance in the majority of workloads, is a win for us.
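The consolidation arithmetic behind "half the sockets, double the cores" can be sketched numerically. The X5690 specs below (6 cores, ~130W TDP) are real; the Rome core count and TDP are assumed figures for illustration, not quoted SKU specs:

```python
# Current estate: 6 dual-socket servers with Xeon X5690 (6C/12T, ~130 W TDP each).
old_servers, old_sockets_per, old_cores_per_socket, old_tdp_w = 6, 2, 6, 130
old_cores = old_servers * old_sockets_per * old_cores_per_socket
old_cpu_tdp_w = old_servers * old_sockets_per * old_tdp_w

# "Half the sockets, double the cores": hypothetical single-socket Rome
# parts with 24 cores at an assumed ~155 W each (illustrative figures).
new_servers, new_cores_per, new_tdp_w = 6, 24, 155
new_cores = new_servers * new_cores_per
new_cpu_tdp_w = new_servers * new_tdp_w

print(f"cores: {old_cores} -> {new_cores}")
print(f"CPU TDP: {old_cpu_tdp_w} W -> {new_cpu_tdp_w} W")
```

Even on these rough numbers, core count doubles while the total CPU power budget drops well below the old dual-socket estate, which is the win being described.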

yep 100%
 

As it stands today, I think Intel are over-egging it if they think they will only lose 20% market share. As I see it, Rome has the server and datacentre space sewn up. Aside from application-specific reasons, there is very little reason to choose an Intel server; even with massive discounts, Rome is just more compelling.
 

Link to jump to gaming average/conclusion:
https://youtu.be/CGQY9yJqfMs?t=500

To all the posters saying the gap is not 5%: HardwareUnboxed (again) just did a mega 36-game benchmark of the 3900X vs the 'gaming king' 9900K, and guess what, they found the 3900X is just 6% slower. So, as I and others were saying, there is a tiny gaming performance gap between the two, while the 3900X wipes the floor with the Intel processor in productivity and multi-threaded tasks, drawing less power doing it despite four more cores! It's a smackdown.

Now that we've established how close these two are (when you go searching for any difference using a 2080 Ti and running tests at 1080p only), virtually neck and neck in gaming for many titles (within 4% of each other if you look at the graph), the goalposts have moved completely. Suddenly stock results don't count, and the power-hungry, inefficient 9900K needs to be overclocked to 5+GHz for gaming tests to be fair! What a joke. For starters, most 9900Ks can't reach 5GHz without huge voltages resulting in massive heat and power draw. Secondly, to overclock a chip that's already running at its limit with a 4.7GHz all-core out of the box, you will need the most expensive AIOs out there or a custom loop.
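For anyone unsure how a multi-game average like that is typically computed: a minimal sketch, using made-up placeholder FPS figures rather than HardwareUnboxed's actual data:

```python
from statistics import geometric_mean

# Hypothetical per-game average FPS; placeholder numbers for illustration,
# not HardwareUnboxed's results.
fps = {
    "Game A": {"9900K": 144, "3900X": 138},
    "Game B": {"9900K": 200, "3900X": 185},
    "Game C": {"9900K": 96,  "3900X": 95},
}

# Ratio of 3900X to 9900K per game, then the geometric mean across titles;
# a geomean stops one outlier title from dominating the average.
ratios = [g["3900X"] / g["9900K"] for g in fps.values()]
overall = geometric_mean(ratios)
print(f"3900X is {(1 - overall) * 100:.1f}% slower on average")
```

A small average gap like this can coexist with individual titles showing much larger per-game differences, which is exactly why both sides of the thread can point at the same chart.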
 
https://youtu.be/CGQY9yJqfMs?t=500

To all the posters saying the gap is not 5%: HardwareUnboxed (again) just did a mega 36-game benchmark of the 3900X vs the 'gaming king' 9900K, and guess what, they found the 3900X is just 6% slower. So, as I and others were saying, there is a tiny gaming performance gap between the two, while the 3900X wipes the floor with the Intel processor in productivity and multi-threaded tasks, drawing less power doing it despite four more cores! It's a smackdown.

Now that we've established how close these two are (when you go searching for any difference using a 2080 Ti and running tests at 1080p only), virtually neck and neck in gaming for many titles (within 4% of each other if you look at the graph), the goalposts have moved completely. Suddenly stock results don't count, and the power-hungry, inefficient 9900K needs to be overclocked to 5+GHz for gaming tests to be fair! What a joke. For starters, most 9900Ks can't reach 5GHz without huge voltages resulting in massive heat and power draw. Secondly, to overclock a chip that's already running at its limit with a 4.7GHz all-core out of the box, you will need the most expensive AIOs out there or a custom loop.

That's just one more vulnerability (and its mitigation) and that gap is gone, even before the issue with boost is resolved.
 
As it stands today, I think Intel are over-egging it if they think they will only lose 20% market share. As I see it, Rome has the server and datacentre space sewn up. Aside from application-specific reasons, there is very little reason to choose an Intel server; even with massive discounts, Rome is just more compelling.

Yeah, really it's just to serve clients who insist on Intel for their own reasons, or for application compatibility issues. I see no performance/cost reason to use Intel in a datacentre now.
 
It always surprises me how the case is only ever made with FPS. I'd like to see someone do a decent double blind subjective test on a range of systems to see exactly what difference end users experience. I suspect this hasn't been done as there would likely be little to no practical difference between them.

I did, however, hear that Google ran user tests and went with AMD for image quality. Whether that's true I don't know.

It would be very interesting to know how a Ryzen CPU is supposed to influence the image quality produced by a GPU?
 

Link to jump to gaming average/conclusion:
https://youtu.be/CGQY9yJqfMs?t=500

To all the posters saying the gap is not 5%: HardwareUnboxed (again) just did a mega 36-game benchmark of the 3900X vs the 'gaming king' 9900K, and guess what, they found the 3900X is just 6% slower. So, as I and others were saying, there is a tiny gaming performance gap between the two, while the 3900X wipes the floor with the Intel processor in productivity and multi-threaded tasks, drawing less power doing it despite four more cores! It's a smackdown.

Now that we've established how close these two are (when you go searching for any difference using a 2080 Ti and running tests at 1080p only), virtually neck and neck in gaming for many titles (within 4% of each other if you look at the graph), the goalposts have moved completely. Suddenly stock results don't count, and the power-hungry, inefficient 9900K needs to be overclocked to 5+GHz for gaming tests to be fair! What a joke. For starters, most 9900Ks can't reach 5GHz without huge voltages resulting in massive heat and power draw. Secondly, to overclock a chip that's already running at its limit with a 4.7GHz all-core out of the box, you will need the most expensive AIOs out there or a custom loop.

There are 20 games where the 9900K is 5% to 16% faster, and 7 where it's 10%+ faster. The 3900X plays all these games excellently, but if you are just gaming I don't understand why people have such a problem with the 9900K being the best chip. At this point it just seems like raving Intel hate rather than any measured opinion. Also, you missed the bit where the 9900K is a bit faster even before you overclock to 5GHz. Odd, too, that in the individual benchmark he stated the 9900K as being 17% faster in SOTTR, but the chart says 15%.

Ryzen is an amazing product, but it has taken three generations to get where it is, and the gains are still not enough to beat the Intel CPUs people love to rip apart. Good on them, but it's not the second coming people desperately want to portray it as.
 
There are 20 games where the 9900K is 5% to 16% faster, and 7 where it's 10%+ faster. The 3900X plays all these games excellently, but if you are just gaming I don't understand why people have such a problem with the 9900K being the best chip. At this point it just seems like raving Intel hate rather than any measured opinion. Also, you missed the bit where the 9900K is a bit faster even before you overclock to 5GHz. Odd, too, that in the individual benchmark he stated the 9900K as being 17% faster in SOTTR, but the chart says 15%.

Ryzen is an amazing product, but it has taken three generations to get where it is, and the gains are still not enough to beat the Intel CPUs people love to rip apart. Good on them, but it's not the second coming people desperately want to portray it as.

^^^^^ This.
When Intel had the opportunity to demolish the AMD lineup, they didn't get greedy but priced accordingly. Nowadays, AMD is driven by greed alone.

https://www.anandtech.com/show/2045/2




"Final Words
Intel's Core 2 Extreme X6800 didn't lose a single benchmark in our comparison; not a single one. In many cases, the $183 Core 2 Duo E6300 actually outperformed Intel's previous champ: the Pentium Extreme Edition 965. In one day, Intel has made its entire Pentium D lineup of processors obsolete. Intel's Core 2 processors offer the sort of next-generation micro-architecture performance leap that we honestly haven't seen from Intel since the introduction of the P6.

Compared to AMD's Athlon 64 X2 the situation gets a lot more competitive, but AMD still doesn't stand a chance. The Core 2 Extreme X6800, Core 2 Duo E6700 and E6600 were pretty consistently in the top 3 or 4 spots in each benchmark, with the E6600 offering better performance than AMD's FX-62 flagship in the vast majority of benchmarks. Another way of looking at it is that Intel's Core 2 Duo E6600 is effectively a $316 FX-62, which doesn't sound bad at all.

We're still waiting to get our hands on the E6400 as it may end up being the best bang for your buck, but even the slower E6300 is quite competitive with AMD's X2 4200+ and X2 3800+. If AMD drops the price on those two parts even more than we're expecting, then it may be able to hold on to the lower end of the performance mainstream market as the E6300 is not nearly as fast as the E6600."
https://www.anandtech.com/show/2045/19
 
@4K8KW10

Greed is good for AMD currently! They need to maximise profit on the consumer side before Intel drops their next lineup and prices fall accordingly. AMD did what they needed to on pricing with the first two generations, and this is right for the time being with the third series. Prices will come down in the near future, but why drop them now, when chips are out of stock, demand is high and maximum profit is being generated?

On the server side they are killing Intel on pricing, and it still makes sense not to raise prices as much as on desktop, as they still need to capture more of the market and push more companies onto their platform.

Hopefully they can follow the trend and increase R&D, as they'll need to for Intel's 10nm; manufacturing-wise, 7nm EUV should be on par with it. The key question is: AMD weren't able to increase clock speed much with the die shrink to half the size, so can Intel pull that off, with an IPC increase as well?
 