
*** AMD "Zen" thread (inc AM4/APU discussion) ***

SiDeards73;30488702 said:
Intel in panic mode? Funny how they can wheel out a +15% improvement in one move when they have been dishing out marginal improvements for generations... That pretty much says all you need to know about Intel: we're happy to rinse your wallet for minimal upgrades while we have no competition... oh wait, competition - here's a decent performance boost, but it's gonna cost ya!

Funny this leaks now as well. Even though I'm inclined to say it's probably 1000% rubbish, as it's from WTFtech, it's funny it leaks just when the Ryzen info is coming thick and fast. Conspiracy theorists would say it's a well-placed, knee-jerk leak to put doubt in potential AMD customers' minds...

Problem is, I bet a lot of people will look at this, if there's any truth in it, and ask why they couldn't bring those improvements earlier on Skylake/Kaby Lake? Oh wait, maybe they only found this extra performance once Kaby Lake was released, yeah, that'll be it.

It's not, and I'm not sure why people are surprised. What is Coffee Lake? Oh right, it's adding a hex-core chip: a 15% performance increase with 50% more cores on an i7 8700K... ouch. That suggests that to fit into the TDP they'd likely be pushing clocks down a fair amount.

They are banging on about slimmer finfets and the like, but that's all their talk amounts to: wow, our amazing advancements are going to allow a bigger chip on 14nm. Only trouble is, Broadwell-E is ~350mm^2, a Skylake/Kaby quad is a 122mm^2 chip, and a hex-core is only going to be around 150mm^2.


We all knew hex-core was coming to 14nm; the biggest problem for Intel in the mainstream is wanting a lower TDP while putting in more cores. I somewhat suspect it will have a much lower base clock but a very high turbo, and Intel will probably hope users don't notice it won't hit those turbo speeds anywhere near as often.
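The die-size arithmetic above can be sketched quickly. The 122mm^2 quad-core figure is from the post; the per-core increment below is an assumed value chosen to land near the ~150mm^2 hex-core estimate, not an Intel figure:

```python
# Back-of-the-envelope die-area sketch using the figures quoted above.
# QUAD_DIE_MM2 comes from the post; AREA_PER_CORE_MM2 is an assumption
# (one core plus its slice of L3) picked to match the ~150 mm^2 estimate.
QUAD_DIE_MM2 = 122.0       # Skylake/Kaby quad-core die, from the post
AREA_PER_CORE_MM2 = 14.0   # assumed area added per extra core

def estimated_die_area(cores: int) -> float:
    """Estimate mainstream die area by scaling out from the quad-core die."""
    return QUAD_DIE_MM2 + (cores - 4) * AREA_PER_CORE_MM2

hex_core = estimated_die_area(6)   # ~150 mm^2, still far below Broadwell-E's ~350
```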
 
drunkenmaster;30489629 said:
t-topology is a very, VERY basic technology and has absolutely nothing at all to do with reverse engineering anything.

The more you say, the dafter it sounds. T-topology is about making the trace length the same distance between the front and back DIMM slots... i.e. they added some wiggly lines to the nearer DIMM slots' traces to make them as long as the traces to the back slots.

The memory controller then has less work to do in staggering signals, as the trace lengths are all equal, and as such it is more stable at higher clock speeds.

They made copper traces the same length.... reverse engineering... lol.

Also, for the record, this HAD to be done on many systems throughout the history of electronics. For instance, GDDR5 initially had to have equal trace lengths; in a later revision support for unequal trace lengths was added to memory controllers, which made PCB design easier. All Asus did was realise that equal trace length is more stable, precisely because they used to do it on graphics cards, and when they stopped they would have found memory overclocking less stable, realising it's more work for the memory controller. I.e. all they did was essentially revert to the older way of doing things, via experience from changing it on other products in the past.

There is no magic here, it's common sense: whoa, we took advantage of those unequal lengths and got worse overclocking... what's the reason? Easy to deduce - if we equalise trace lengths on another platform, maybe that will increase memory stability... it did. Literally nothing to do with reverse engineering.


T-Topology has absolutely nothing to do with GDDR5, so by making that reference you’ve already shown you’re veering into the realm of the unknown (to you).

It differs from Intel’s design, that’s why the example was used. Perhaps the OC Socket would have been more apt.

By example I used T-Topology, as all other vendors are still using a typical daisy chain layout…
I can only imagine if you knew this, you wouldn’t have made the post you just did…
 
Beren;30489115 said:
No arguments from me on that one - though you have to remember that their job is to return the biggest rewards possible to their shareholders, not just charge a 20% margin on top of their costs. I also hope that a lot of the profit they have been making has been piled into R&D; you can chuck an awful lot of money down blind alleys looking for the next thing when you have a great and secure revenue stream. They just needed a push to bring some of those toys out earlier.

Remember that in 2015 Intel spent $12 billion on R&D - that's what, 4 times AMD's total revenue?

That is not a company being completely lazy. I suspect it may be a company that is a little scared that the Moore's Law they have lived by is ending, and is looking for what comes next - but they are not some Evil MegaCorp® that is taking us all for a ride.

At the same time AMD have taken some massive risks, borrowed heavily, and gone back to the drawing board to come up with Ryzen. Which I hope is epic and lifts AMD to new levels of success - because I am really impressed with how they do things.

Bottom line: Intel might have been a little lazy, but you need to look at what they have done in Core M and SSD tech over the last 7 years, as well as heaps of other areas, not just CPUs.

It's worth noting that 70-80% of R&D spend effectively goes on staff wages - the people doing the work are your biggest R&D cost. From a lot of rumours from a lot of people, Intel has been starting R&D projects left, right and centre and cancelling them before they get anywhere.

Optane sounds like... well, the performance claims are now between one and roughly three orders of magnitude lower than they were (of the various performance claims, all but one have tanked, to differing degrees), and it has been delayed repeatedly. There is a real question of whether the original goals will ever be achieved, hence the seemingly best case has been scaled back so far. Its performance in its first incarnation won't necessarily be the best case (think GDDR5X being talked about at 14Gbps but launching at 10Gbps), and we don't even know when the first incarnation will truly be available.

It's an incredibly interesting technology (depending on price), but it's looking increasingly like one that was talked about way too early, long before it was concrete, and it might turn into vapourware.

Another thing to point out would be AMD. People muttered terrible things about them cutting so many staff in general, including so many supposed big names. But they cut that staff - those supposed key (and highly paid) people - cutting down wages and with them R&D costs, yet have come up with Zen and Vega on much less money. If $12 billion a year is going on a lot of bad staff and a lot of dead-end projects, then you basically can't quantify at all how much of it turns into worthwhile end results.

One last thing: AMD didn't borrow heavily to do Zen. They had more debt before Zen R&D started than they do today; they've paid off a lot of debt, their share value is much higher, and they have far, far lower overheads. They haven't taken on more debt in general. They streamlined the company massively under Rory Read, they diversified into things other than PC like semi-custom/consoles, and they've not made big cash losses in the meantime thanks to that streamlining. A lot of the 'losses' they made were on paper. If the company is stated to be worth $5 billion one year and $4 billion the next, you take a $1 billion paper loss on the books, with no cash loss. The vast majority of their losses are paper losses - otherwise their debt wouldn't be in the (much, much healthier) situation it is. If AMD had made huge cash losses, their debt would have increased correspondingly.
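The paper-loss point is simple arithmetic; the figures below are invented purely to illustrate it, not AMD's actual accounts:

```python
# Toy illustration (invented numbers): writing down a company's stated value
# produces an accounting loss, but cash and debt are untouched by it.
stated_value_y1 = 5_000_000_000   # "worth 5 billion one year"
stated_value_y2 = 4_000_000_000   # "4 billion the next"
cash_before, debt_before = 1_000_000_000, 2_000_000_000

paper_loss = stated_value_y1 - stated_value_y2   # 1 billion, on paper only

# The writedown changes book value, not the balance of cash or debt:
cash_after, debt_after = cash_before, debt_before
```

A cash loss of the same size would instead show up as a drop in cash or a rise in debt, which is the distinction being drawn above.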
 
Silent_Scone;30489698 said:
T-Topology has absolutely nothing to do with GDDR5, so by making that reference you’ve already shown you’re veering into the realm of the unknown (to you).

No - take a minute to reread the post. At no point does it say T-Topology has anything to do with GDDR5 - just that it uses a similar technique, in that it is about maintaining equal trace length to avoid timing delays or skew. I.e. it was a technique that had already been used before, in a different application - no reverse engineering was necessary.


Silent_Scone;30489698 said:
By example I used T-Topology, as all other vendors are still using a typical daisy chain layout…

Or maybe other vendors don't consider it worth their while for the theoretical gains?


Silent_Scone;30489698 said:
I can only imagine if you knew this, you wouldn’t have made the post you just did…

Honestly, just give it a rest. Any of us can read marketing blurbs, and yet none of us actually has the means to prove that the physics of e.g. T-Topology actually works.
 
Silent_Scone;30489698 said:
T-Topology has absolutely nothing to do with GDDR5, so by making that reference you’ve already shown you’re veering into the realm of the unknown (to you).

It differs from Intel’s design, that’s why the example was used. Perhaps the OC Socket would have been more apt.

By example I used T-Topology, as all other vendors are still using a typical daisy chain layout…
I can only imagine if you knew this, you wouldn’t have made the post you just did…

Good for you, you didn't understand, thus proving you really don't know as much as you claimed.

I never stated t-topology has anything to do with GDDR5; I likened it to it.

I'll try again: in graphics cards, the first GDDR5 cards HAD to have equal-length traces; later on the spec was improved and memory controllers were designed to adjust the timings to compensate for different-length traces.

This is common - it's happened with many memory types in history. As you can easily tell, even with no specialist knowledge, this is more work for the memory controller; that is common sense.

Now on a motherboard you have Asus, who decided, with t-topology, to equalise the length of the traces to the DIMMs. It's no different at all: it's common sense that with the same trace lengths it's one less thing for the memory controller to do - it's less work, it's easier, nothing more or less.

They did NOT reverse engineer an AMD memory controller; they went, hey, equal-length traces are less work and allow for higher overclocks.
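The skew argument above can be put in rough numbers. The propagation-delay figure below is a generic FR4 rule of thumb, not a measured value for any specific motherboard:

```python
# Why equal trace lengths matter: timing skew, in picoseconds, caused by a
# length mismatch between two traces. ~7 ps/mm is a generic rule of thumb
# for signal propagation in FR4, not a value for any real board.
PROP_DELAY_PS_PER_MM = 7.0

def skew_ps(len_a_mm: float, len_b_mm: float) -> float:
    """Timing skew between two traces of unequal length."""
    return abs(len_a_mm - len_b_mm) * PROP_DELAY_PS_PER_MM

# A hypothetical 20 mm mismatch between near and far DIMM slots:
mismatch_ps = skew_ps(80.0, 100.0)   # 140 ps of skew
# At DDR4-3200 (double data rate on a 1600 MHz clock) one bit period is
# ~312 ps, so the controller would have to absorb nearly half a bit time.
bit_period_ps = 1e12 / 3.2e9
```

Equalising the lengths (by adding serpentine sections to the shorter traces) drives that skew toward zero, which is the "less work for the memory controller" point.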
 
TaKeN;30489335 said:
Ryzen 7 1700 (YD1700BBAEBOX) with SpecSheet Appeared at HardwareSchotte.de

Translates to..

• Energy-efficient AMD Ryzen 8-core processor with ZEN architecture with quiet Wraith cooler
• This CPU has no fixed multiplier, similar to the AMD Black Edition.
• 8 cores with 16 threads at up to 3.7 GHz clock
• AMD Ryzen processors have no fixed turbo clock - the maximum turbo clock is dependent on the cooling. < This I find interesting!

Stolen from reddit again :p


8 cores with 16 threads at up to 3.7 GHz clock

So no maximum boost clock, but it lists an 'up to'? I'm sure these leaks are more or less spot on now, but some of the info is a little conflicting.
 
There is little reason to believe the leaks are spot on. Hard facts on the architecture are known because AMD have talked about the architecture, and hard numbers from samples are mostly reliable. Prices and specs change, though; I'll point out again that Gibbo has got pricing wrong on cards before, because AMD told everyone one price leading up to launch and changed it the day before. By that I mean Gibbo said it would be higher than various guesses, then it wasn't; his information was accurate at the time, but it was intentionally incorrect information AMD gave out. I can't for the life of me remember exactly what product it was - 99% sure it was a GPU, 5870/50 maybe?

So even if, say, that image of the trade-site prices was accurate at the time, it doesn't necessarily reflect what the prices will be at launch, because such information can change.
 
drunkenmaster;30489780 said:
Good for you, you didn't understand, thus proving you really don't know as much as you claimed.

I never stated t-topology has anything to do with GDDR5; I likened it to it.

I'll try again: in graphics cards, the first GDDR5 cards HAD to have equal-length traces; later on the spec was improved and memory controllers were designed to adjust the timings to compensate for different-length traces.

This is common - it's happened with many memory types in history. As you can easily tell, even with no specialist knowledge, this is more work for the memory controller; that is common sense.

Now on a motherboard you have Asus, who decided, with t-topology, to equalise the length of the traces to the DIMMs. It's no different at all: it's common sense that with the same trace lengths it's one less thing for the memory controller to do - it's less work, it's easier, nothing more or less.

They did NOT reverse engineer an AMD memory controller; they went, hey, equal-length traces are less work and allow for higher overclocks.




Let’s work on the assumption that you are better equipped to tackle these things than the engineers, for the sake of humour.

Both of the topologies have a trade-off, but I’m sure you already knew that. With dual-channel systems it allows for higher memory overclocking with more banks, and the engineering comes from balancing these trade-offs versus doing things in serial. These are the reverse engineering aspects I was alluding to before you needlessly dug it up.

By all means, if these feats are meaningless, perhaps you can go and work for them and show them blueprints of a motherboard of your own design.
There are other examples that could have been used.
 
They've not explained how the clock thing works yet, which is very frustrating. My assumption is that it will work similarly to Nvidia's Pascal cards; e.g. the 1080 has a listed boost clock of 1733MHz, but it actually boosts higher than that so long as the cooling and power are there. My guess is that the "up to" numbers we're seeing are what it's aiming for, but it will go higher if it's within the set parameters, and likely lower if it's running hot.

We really have a whole bunch of unanswered questions still. Hopefully all will become clearer in the next few weeks.
 
mrsteve1982;30489817 said:
8 cores with 16 threads at up to 3.7 GHz clock

So no maximum boost clock, but it lists an 'up to'? I'm sure these leaks are more or less spot on now, but some of the info is a little conflicting.

(warning speculation)

3.7GHz is the max turbo on a non-"X" chipset and/or when set to keep within the advertised TDP - aka "Precision Boost".

On an "X" chipset you can enable XFR (Extended Frequency Range). Here the overclock can go above the advertised max turbo. It sounds like this will be automatically governed by your cooling, and perhaps other factors like a configurable TDP, etc.

I think overclock = turbo and they become one and the same on an X platform.

9ebkNNs.jpg
 
Silent_Scone;30489834 said:
Let’s work on the assumption that you are better equipped to tackle these things than the engineers, for the sake of humour.

Both of the topologies have a trade-off, but I’m sure you already knew that. With dual-channel systems it allows for higher memory overclocking with more banks, and the engineering comes from balancing these trade-offs versus doing things in serial. These are the reverse engineering aspects I was alluding to before you needlessly dug it up.

By all means, if these feats are meaningless, perhaps you can go and work for them and show them blueprints of a motherboard of your own design.
There are other examples that could have been used.

Let's make the assumption I'm better equipped to tackle these things... sorry, but quote where in my other two posts I said this, or implied anything like it?

You are implying that they had to reverse engineer AMD chips to be able to do this. All I was pointing out is: A, this is wiggly traces; B, I already stated the trade-off and why it was done. Non-equal traces allow for far easier and cheaper design, potentially fewer PCB layers and easier routing - that is the trade-off. A little work done in the memory controller in exchange for cheaper PCBs and cards.

So, Mr "thinks you're so smart" for pointing out there are trade-offs... if you read more carefully you'd see I had already pointed them out.

Asus decided to do it to enable better overclocking; it comes at a cost, so they put it on more expensive boards - that is the trade-off. The comparison to GDDR5 is that this is well known; it's happened with LOTS of previous memory types. Equal trace lengths and timing/clock skew are things engineers have been tackling for the past 50 years, so no, Asus didn't reverse engineer an AMD CPU to come up with this brand-new idea.

No, I never claimed I would be able to implement the design, nor manufacture a board; no, I didn't claim to be better than Asus engineers. What I can do is highlight when you're talking rubbish and make the basic claim that something like equal trace lengths is not a new or difficult idea for a motherboard design engineer to come up with.

This is a case of ENGINEERING, not REVERSE ENGINEERING, the definition of which is

the reproduction of another manufacturer's product following detailed examination of its construction or composition.

You wanted to sound really clever - you've been doing this "I know more than all of you" shtick throughout this thread, particularly the last week or so. You thought "reverse engineering" sounded complex and cool, like you knew something others didn't... but you're completely misusing it.


With dual-channel systems it allows for higher memory overclocking with more banks, and the engineering comes from balancing these trade-offs versus doing things in serial. These are the reverse engineering aspects I was alluding to before you needlessly dug it up.

This part specifically has absolutely nothing to do with reverse engineering; that is... making a design choice. Do we go with a cheaper PCB with shorter traces and fewer layers, or do we enable better overclocking at greater cost, possibly adding layers to the board and a little more fine-tuning and expense in design? Engineering comes in the IMPLEMENTATION of EITHER of those options; making the choice itself is simply a question of what the product is. Cheaper low-end board: the cheaper, shorter traces. Higher-end enthusiast board: the more expensive option. The engineering comes after that choice is made.
 
The fact is, if you understood what T-Topology really did, we wouldn’t be having this conversation.
The trace length is only one factor here. The snake-like nature of the traces (past the via), versus a straighter trace, adds inductance and capacitance.

So even your length angle is incorrect.


Anyway, it's only marginally relevant to the topic, seeing as we don't know what implementation is used on AM4 boards - best to let it be.
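For a rough sense of the parasitics point above, here is a hedged sketch; the per-millimetre inductance and capacitance values are generic microstrip rules of thumb, not figures for any real board or for Asus's T-Topology layout:

```python
# Rough parasitics added by lengthening a trace with a serpentine section.
# ~0.5 nH/mm and ~0.1 pF/mm are generic microstrip rules of thumb used
# purely for illustration, not measured values for any motherboard.
IND_NH_PER_MM = 0.5
CAP_PF_PER_MM = 0.1

def added_parasitics(extra_len_mm: float) -> tuple:
    """Extra inductance (nH) and capacitance (pF) from a length-matching detour."""
    return (extra_len_mm * IND_NH_PER_MM, extra_len_mm * CAP_PF_PER_MM)

extra_l_nh, extra_c_pf = added_parasitics(20.0)   # a hypothetical 20 mm detour
```

So equalising length trades timing skew for a small amount of added inductance and capacitance on the lengthened trace, which is the trade-off being argued about here.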
 
Pants;30489845 said:
(warning speculation)

3.7GHz is the max turbo on a non-"X" chipset and/or when set to keep within the advertised TDP - aka "Precision Boost".

On an "X" chipset you can enable XFR (Extended Frequency Range). Here the overclock can go above the advertised max turbo. It sounds like this will be automatically governed by your cooling, and perhaps other factors like a configurable TDP, etc.

I think overclock = turbo and they become one and the same on an X platform.

9ebkNNs.jpg

I can't see how overclock and turbo become the same thing on the X platform. Almost anything automatic like this has to work within a TDP, or you can't really sell it with a TDP.

I think what we'll see is, let's say a chip has a 3.6GHz base and 4GHz turbo: the chip will try to go to 4GHz with all cores and will stay there as long as the load on those cores doesn't break the TDP limit; if it does, it might go anywhere between 4GHz and 3.6GHz, which it should be able to maintain at its given TDP under any load.

What XFR sounds like it will do is, say you are running 4 threads: if you have a stock cooler maybe it will push to 4.1GHz before being temp limited; then if you stick on a giant air cooler you get 4.3GHz; water cooling, 4.5GHz. So when you're not at the TDP limit, XFR will keep increasing clock speeds to a stable limit as long as it stays under a certain temp, but when it hits the TDP it would stop.

A real overclock blows past the TDP completely. They may work in unison: maybe overclocking becomes a case of changing the TDP limit from 95W to 150W, whacking on watercooling and keeping XFR enabled, and with the higher TDP limit, as long as temps stay fine, it will clock/voltage up to the highest it can.

But we'll have to see how good that would be; it might be more stable to set a flat 4.3GHz on all cores with lower voltage and XFR off.
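The speculated behaviour above can be sketched as a toy governor; every threshold and the linear scaling below are invented for illustration, not AMD's actual algorithm:

```python
# Toy model of the speculated Precision Boost / XFR behaviour: the clock
# rises while power stays inside the TDP budget, and XFR merely raises the
# ceiling while there is thermal headroom. All numbers are invented.
def boost_clock_mhz(power_w: float, temp_c: float, xfr: bool,
                    base: int = 3600, turbo: int = 4000, xfr_cap: int = 4500,
                    tdp_w: float = 95.0, temp_limit_c: float = 75.0) -> int:
    """Pick a clock from the current power draw and temperature."""
    if power_w > tdp_w:
        return base                      # over the TDP budget: fall back to base
    # XFR only lifts the ceiling while the chip stays under the temp limit
    cap = xfr_cap if (xfr and temp_c < temp_limit_c) else turbo
    headroom = 1.0 - power_w / tdp_w     # unused fraction of the TDP budget
    return min(cap, int(base + (cap - base) * (0.5 + headroom)))

light_load = boost_clock_mhz(power_w=10, temp_c=60, xfr=True)    # hits XFR cap
heavy_load = boost_clock_mhz(power_w=120, temp_c=60, xfr=True)   # TDP-bound
```

Raising `tdp_w` in this sketch is exactly the "change the TDP limit from 95W to 150W" idea: the same governor then settles at higher clocks without being switched off.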
 
Pants;30489845 said:
(warning speculation)

3.7GHz is the max turbo on a non-"X" chipset and/or when set to keep within the advertised TDP - aka "Precision Boost".

On an "X" chipset you can enable XFR (Extended Frequency Range). Here the overclock can go above the advertised max turbo. It sounds like this will be automatically governed by your cooling, and perhaps other factors like a configurable TDP, etc.

I think overclock = turbo and they become one and the same on an X platform.

9ebkNNs.jpg

I kinda figured this would be how it worked as well: you can run it in a stock out-of-the-box setup, where it will boost to a set frequency, or you can enable the XFR boost thing, which allows it to go even further if the cooling is there to allow it.

Manually OC'ing the CPU will, I guess, disable this auto-boost thing.
 
It's probably going to be the same as with Nvidia graphics cards, so you have a defined "official boost range" which is kind of guaranteed.

If you have better cooling and a better motherboard, you can enable XFR and it can boost past the "official boost range".
 
CAT-THE-FIFTH;30490076 said:
It's probably going to be the same as with Nvidia graphics cards, so you have a defined "official boost range" which is kind of guaranteed.

If you have better cooling and a better motherboard, you can enable XFR and it can boost past the "official boost range".

Hope not, because I can just see the Intel fanboys coming out with throttling arguments :o
 
SiDeards73;30490016 said:
I kinda figured this would be how it worked as well: you can run it in a stock out-of-the-box setup, where it will boost to a set frequency, or you can enable the XFR boost thing, which allows it to go even further if the cooling is there to allow it.

Manually OC'ing the CPU will, I guess, disable this auto-boost thing.

Hopefully there are multiple options. I currently have turbo enabled on my, errm, whatever the hell I have, 5820K(?), because ultimately I don't want to be running at 4.2GHz all day long, only when the load requires it.

So it would be nice if you could keep all the turbo things enabled (normal all-core turbo and XFR maybe being different things to be enabled separately?), then up the TDP limit to, say, 130W and see the clocks scale a bit higher. Really, single-core clocks shouldn't go up if you upped the TDP, as with only a single core active it would take insane voltage to go beyond 95W - but what is single-threaded these days? SuperPi? Worthless.

Most modern games will use at least 3 or 4 threads, if not 8; it would be nice to see 4-6 thread scaling go up just by upping the TDP limit, while keeping intact the ability to downclock massively at idle and only clock up as high as the current load needs. If you're watching a video or something, you don't need 130W and the whole chip at max clocks. That is where turbo, rather than a fixed overclock, really works best.

So yeah, I hope overclocking doesn't disable the turbo/XFR features, though as before it would be nice to have that option. If you want to do a benchmark run (it's not for me personally) then sticking to a single clock is usually more stable.
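The "a single core can't fill the TDP" point follows from the classic dynamic-power approximation P ≈ C·V²·f; the effective-capacitance constant and the voltage/clock pairs below are invented purely for illustration:

```python
# Classic CMOS dynamic-power approximation, P = C_eff * V^2 * f, per core.
# c_eff and the voltage/frequency figures are invented for illustration.
def core_power_w(freq_ghz: float, volts: float, c_eff: float = 5.0) -> float:
    """Approximate dynamic power of one core (c_eff is an assumed constant)."""
    return c_eff * volts ** 2 * freq_ghz

# Four cores at 3.6 GHz / 1.0 V nearly fill a 95 W budget; one core at
# 4.2 GHz even on raised voltage gets nowhere near it.
four_cores = 4 * core_power_w(3.6, 1.0)   # 72 W
one_core = core_power_w(4.2, 1.2)         # ~30 W
```

Which is why raising the TDP limit mostly buys multi-threaded clocks: the single-core case was never power-limited in the first place.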
 
CAT-THE-FIFTH;30490076 said:
It's probably going to be the same as with Nvidia graphics cards, so you have a defined "official boost range" which is kind of guaranteed.

If you have better cooling and a better motherboard, you can enable XFR and it can boost past the "official boost range".

So you can disable XFR if you don't want it doing that, then?
 