
AMD Polaris architecture – GCN 4.0

Nope.

GPU architecture takes many years to design. Hawaii was released in 2013. That means AMD purposely chose to include hardware adaptive sync years before that 2013 release date - before Gsync was even a thing.

The more sensible thing to debate is why NVIDIA didn't have the foresight to include hardware adaptive sync in Maxwell, since, as we keep getting reminded here, Maxwell is a much newer architecture than Hawaii.

I'm guessing NVIDIA had the choice of adding it via hardware, but saw that they could make good profits on these Gsync modules instead.

These are really good points. Hawaii (GCN 1.1) was probably laid down once GCN 1.0 was finished, so around mid to late 2011; they must have had the foresight to build Adaptive Sync support into its display hardware as far back as that.

Given that Nvidia GPUs (including Maxwell) have no such hardware on the GPU, Nvidia with its external add-on looks more like it was reacting to what it had got wind of AMD doing.

Exactly.
Tearing is a bad experience for me personally, but it seems the majority accept varying amounts of it. If they manage to solve it without Vsync, then that will be the new thing on the block, with what came before being put forward as unplayable and a flat-out bad experience, and reviews asked to focus on it depending on who gets there first.
https://forums.overclockers.co.uk/showpost.php?p=24199463&postcount=100

I said that on 29th Apr 2013, and Nvidia announced G-Sync on 18th Oct 2013, so anyone thinking it was not being thought about before G-Sync is just wrong.
 
AMD are the ones with the more elegant, hardware based solution, which results in cheaper variable refresh rate for all.

Sucks more juice and runs hotter :rolleyes:

Either way I hope AMD put up a decent fight this time around. I would happily take an AMD GPU with a FreeSync monitor if it met my needs.
 
Sucks more juice and runs hotter :rolleyes:

Either way I hope AMD put up a decent fight this time around. I would happily take an AMD GPU with a FreeSync monitor if it met my needs.

Unless you have a high refresh rate screen, in which case Nvidia just lies and doesn't bother telling you their idle power usage is terrible, negating any power saving made under load, you mean?

Also, 90% of the reason Maxwell uses less power under load is that they went ahead and stripped out almost all compute functionality and removed the hardware scheduler, so that scheduling now uses power on the CPU instead. Both of those appear to be going back into the GPU for Pascal. It's not elegant to strip out a load of functionality and performance and offload a bunch of processing onto the CPU instead.
 
Seems like the truth hurts, Gregster, hence your childish response.

Why would it hurt? I have no allegiances either way and am happy using AMD or Nvidia. Bit of a strange thing to say really.

The fact of the matter is, variable refresh rate was built into AMD hardware long before Gsync was even available.

Yer, the first capable card was the 290X iirc.

To say that AMD reacted to Gsync is not true - they were obviously planning it long ago, as it's supported in hardware.

Why did they only ask VESA to add it after Nvidia came up with G-Sync modules?

I'm assuming you have no idea how long it takes to design complex GPU architectures - as I mentioned in my previous post, AMD must have included this years before Hawaii was actually released.

I am quite aware of how long it takes to design a GPU; I have been building computers and been interested in them since the Spectrum 48K days.

The fact that Hawaii released in 2013 means that AMD decided to equip their GPUs with variable refresh rate technology years before this.

Again, why did they only ask VESA to add Adaptive Sync after G-Sync came to light?

We're all aware that Gsync launched first, though saying AMD just reacted to Gsync and decided to try and copy it is a downright lie.

If you have some evidence of this - anything that mentions Adaptive Sync prior to Nvidia mentioning G-Sync back in November 2013 (iirc) - I am happy to look into it and give you credit, but it is just making no sense. You are basing this on assumption, with no strength in your argument.

TBH it's rather embarrassing for NVIDIA to have to conjure up an expensive PCB (installed in monitors) to support Gsync, as their hardware isn't capable of it. They have a much higher R&D budget than AMD - but AMD are the ones with the more elegant, hardware-based solution, which results in cheaper variable refresh rate for all.

Well, with Nvidia having a bigger share of the GPU market, I would say it is more embarrassing for AMD to have taken so long. Like I said, I am happy buying either way, and hearing that there was a way to stop tearing and stutter, I was keen as mustard to try it. Of course G-Sync came first, and by quite some time, but I'm not really fussed, as VRR is available to everyone. It comes across as a desperate fanboy rant to me, Dave, if I am honest, and you should look forward to the future. Both now have the tech and you are not forced into one vendor or another; if you want VRR, go for a Freesync monitor - they have something like a 2:1 choice over G-Sync capable monitors, so a real plus for AMD there. I don't get why you are so adamant, and my previous reply was purely because it is nonsense and there is no substance in your "I'm guessing" statement.

Now whilst we are at it, I am really hoping that AMD can bring out a beast in Polaris, and I am also hoping Nvidia can bring out a beast in Pascal. I have been following all the rumours and look forward to jumping on at some point (and probably jumping on both at some point - money dependent of course). I really enjoy my hobby and hope that my fellow gamers have no issues regardless of brand. I certainly don't get tied up in the politics of GPU buying, as it is bought for fun. Anyways, if you have anything to back up your claims, I of course will look through it and give you some credit, but if you can't, sorry bud, but it is just your thoughts. :)
 
Now whilst we are at it, I am really hoping that AMD can bring out a beast in Polaris, and I am also hoping Nvidia can bring out a beast in Pascal. I have been following all the rumours and look forward to jumping on at some point (and probably jumping on both at some point - money dependent of course). I really enjoy my hobby and hope that my fellow gamers have no issues regardless of brand. I certainly don't get tied up in the politics of GPU buying, as it is bought for fun. Anyways, if you have anything to back up your claims, I of course will look through it and give you some credit, but if you can't, sorry bud, but it is just your thoughts. :)

Just buy the best for your needs/money - simple ;)
 
Now whilst we are at it, I am really hoping that AMD can bring out a beast in Polaris, and I am also hoping Nvidia can bring out a beast in Pascal. I have been following all the rumours and look forward to jumping on at some point (and probably jumping on both at some point - money dependent of course). I really enjoy my hobby and hope that my fellow gamers have no issues regardless of brand. I certainly don't get tied up in the politics of GPU buying, as it is bought for fun. Anyways, if you have anything to back up your claims, I of course will look through it and give you some credit, but if you can't, sorry bud, but it is just your thoughts. :)

Why did they only ask VESA after? Read my response - why did AMD spend years on Mantle before giving it to the OpenGL group? An open standard doesn't go "hey, we had this idea, we'll submit it tomorrow and see what happens". AMD had HSA under their control for a while before handing it off to a group. This is how it works: you can't form an open standard without input from other people who would use it. You don't propose Adaptive Sync as a standard without first talking with multiple monitor manufacturers and giving them time to think it through, speak with their own various departments, and look into implementation and costs. This all takes time; before a proposal for something to be introduced, months, maybe years, of thinking, meetings, planning and changes occur.

For this to be submitted a couple of months after G-sync was announced probably means a 99% chance it was being talked about with monitor manufacturers 6+ months before that date.

Why did Nvidia announce G-sync when they did, then announce a super rushed "upgraded" version? Why did the first official monitor manufactured WITH the G-sync module ship long after G-sync was announced and Adaptive Sync was submitted to VESA? Why didn't Nvidia announce the release of proper G-sync screens at CES or GDC, with non-bodge-job screens being used for show and final screens with the module included as standard shipping as-is? Everything Nvidia did - the entirely ghetto and entirely unprecedented nature of pushing users to open up screens and install a G-sync module - screams time to market and making an announcement ahead of a KNOWN upcoming adaptive sync product coming to market.

Literally everything about the situation screams as loud as possible that this was coming regardless, that Nvidia knew about it, and that rather than just use an industry standard good for everyone, they took a shortcut that screwed customers but let them take the credit for it.
 
Well, AMD also did a super rushed demo of a windmill running on a laptop long after Nvidia announced G-Sync was coming. Wouldn't it also be the same for G-Sync in terms of time spans then, or do you think that someone on the Monday thought "hey, I have an idea" and it was all over the press on Tuesday? Seriously though, does it really matter? Like I say, AMD have Freesync capable screens that outnumber G-Sync capable screens by a 2:1 margin, so surely this is a plus for AMD. And add in that they are cheaper as well, which again is another plus for AMD.
 
"We are pleased to start production of our industry-leading, 2nd generation 14nm FinFET process technology that delivers the highest level of performance and power efficiency" said Charlie Bae, executive vice president of sales and marketing for System LSI Business at Samsung Electronics. "Samsung will continue to offer derivative processes of its advanced 14nm FinFET technology to maintain our technology leadership."

rest of the article


So Samsung has started mass production of the 14nm process. I don't know if it's just ARM chips or AMD chips as well, because if it is the latter, that would put the first small Polaris 2-3 months from now.
 
Well, AMD also did a super rushed demo of a windmill running on a laptop long after Nvidia announced G-Sync was coming. Wouldn't it also be the same for G-Sync in terms of time spans then, or do you think that someone on the Monday thought "hey, I have an idea" and it was all over the press on Tuesday? Seriously though, does it really matter? Like I say, AMD have Freesync capable screens that outnumber G-Sync capable screens by a 2:1 margin, so surely this is a plus for AMD. And add in that they are cheaper as well, which again is another plus for AMD.

Literally, do you not read at all?

Industry standards involve talking with a significant number of parties, submitting proposals, adapting hardware AND waiting for a new cycle of monitors and scalers. Taping out a full silicon chip, even a simple one, takes longer.

G-sync is done with FPGAs - simply put, a large amount of effectively programmable transistors. For the same performance they cost dramatically more, use a lot more power and are far larger than the same logic done as dedicated hardware designed for a single usage. But being programmable, you can buy them off the shelf near enough.

FPGA time to market is a tiny fraction of that for a full silicon tape-out of a specific chip. Going with a single initial manufacturer and no industry standard takes a fraction of the time it takes to do the same thing properly with full support of the industry.

The entire point is you can absolutely do G-sync within a few months easily. You couldn't do G-sync within a few months if you used a proper standard scaler that went through a full design cycle and silicon tape-out, nor if it had to adhere to industry standards.

There is a reason why the initial offer was to jam a G-sync module into a single model of existing screen. There is a reason that until very recently you could only have one input. Precisely because G-sync was incredibly rushed.

If AMD/the industry wasn't going variable refresh rate... why did G-sync rush to market with a huge number of BIG downsides and in a ghetto, mod-your-screen fashion? If Nvidia caused everyone else to respond, they then had plenty of time to simply make a final, full G-sync module that accepted more inputs and could launch in proper monitors. An FPGA costs a hell of a lot more per unit than a full dedicated chip version.
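
To put some illustrative, entirely made-up numbers on that FPGA-vs-dedicated-chip trade-off (none of these figures are from Nvidia or any scaler vendor):

fpga_unit_cost = 80.0       # guess: per-unit cost of an off-the-shelf FPGA module
asic_unit_cost = 10.0       # guess: per-unit cost of a dedicated scaler chip
asic_nre_cost = 5_000_000   # guess: one-off design/tape-out/mask cost for the dedicated chip
break_even_units = asic_nre_cost / (fpga_unit_cost - asic_unit_cost)
print(f"Dedicated chip becomes cheaper beyond ~{break_even_units:,.0f} units")
# ~71,000 units with these guesses - the FPGA only wins on up-front time and cost,
# not on per-unit cost, which is exactly the trade-off described above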
 
If AS/FS was in the planning stage for so long, how come it was way behind Gsync in terms of functionality at launch, with AMD only catching up on some software features right now, and still behind, as shown by certain LCD panels in Gsync displays offering higher refresh rates than the equivalent AS/FS models? Surely something in the planning and design stage for so long should have hit the market running, not have software issues and problems with LCD overshoot.

Fact is, nobody knows exactly what went on, and this will just be yet another tedious point scoring **** fest that will never be resolved. Knock yourselves out.
 
"We are pleased to start production of our industry-leading, 2nd generation 14nm FinFET process technology that delivers the highest level of performance and power efficiency" said Charlie Bae, executive vice president of sales and marketing for System LSI Business at Samsung Electronics. "Samsung will continue to offer derivative processes of its advanced 14nm FinFET technology to maintain our technology leadership."

rest of the article


So Samsung has started mass production of the 14nm process. I don't know if it's just ARM chips or AMD chips as well, because if it is the latter, that would put the first small Polaris 2-3 months from now.

Samsung has been in mass production of 14nm for a long time (far longer than TSMC); this is talking about their second-gen FinFET. TSMC are nearing their second-gen FinFET as well but are noticeably behind Samsung.

The Polaris samples people have seen are 14nm LPP, the second-gen process; the first gen is 14nm LPE. TSMC's equivalents are 16nm FF and 16nm FF+.

There is really nothing second-gen about them; they use the same equipment, materials, everything. Effectively, on a new process you have a given transistor size, in this case 14/16nm, and a given metal pitch - effectively the spacing of the transistors - of about 64nm. But if you stuck to these rules you'd get lower yields. So the first-gen process is just conservative: rather than telling customers to use 14nm/64nm for their design, they say use 15nm/70nm. Then, as they learn more, find the mistakes and iron out the problems, they introduce a '2nd gen', which is really just the point where they say "hey, you can use the 14nm/64nm numbers now". This basically means the first gen is about 10% off the theoretical maximum of the process, and the second gen is simply at the maximum the equipment was designed to achieve.
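
As a rough back-of-envelope (treating chip area as scaling with the metal pitch squared; the 64nm/70nm pitch numbers are the illustrative ones above, not official foundry specs):

first_gen_pitch_nm = 70.0    # conservative first-gen style rules ("15nm/70nm")
second_gen_pitch_nm = 64.0   # full second-gen rules ("14nm/64nm")
area_ratio = (second_gen_pitch_nm / first_gen_pitch_nm) ** 2
print(f"Same design on 2nd-gen rules: ~{area_ratio:.2f}x the area, "
      f"~{(1 - area_ratio) * 100:.0f}% smaller")
# prints ~0.84x / ~16% smaller - the same ballpark as the 10-15% figures in this post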

Either way, getting to the 2nd-gen point means yields are decent, they've found the reasons for most of the problems, and they are now confident enough to make denser and faster chips.

The samples of Polaris shown were on this '2nd gen' version of the process. With a big chip, being 15% smaller or having 15% higher performance makes a huge difference. Both Nvidia and AMD have waited for their respective foundry's best version of the process before going into production.

I suspect small/medium Polaris entered production this quarter, having taped out and gotten samples back in previous months. Could be a launch in 2-3 months. Keep in mind that one wafer takes 4-6 weeks to finish being made, so if you went into production today you wouldn't get a single chip back till the end of Feb. To launch you'd want a minimum of 2 weeks, maybe up to 6 weeks, of production (depending on yields/capacity of the fab) to be able to launch with stock in the hundreds of thousands. At the low end you probably want 200k+ for launch; midrange, maybe 50-150k; high end, you might get away with 10-50k; ultra stupid high end, 5-15k.
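
Rough numbers on how that stacks up (the wafer cycle time and the 200k stock target come from the post above; the dies per wafer and wafer starts per week are pure guesses on my part, not anything AMD or the foundries have disclosed):

wafer_cycle_weeks = 5            # "one wafer takes 4-6 weeks to finish being made"
target_stock = 200_000           # low-end launch stock figure from above
good_dies_per_wafer = 250        # guess: smallish die at decent yield on a 300mm wafer
wafer_starts_per_week = 200      # guess: fab allocation for this part
chips_per_week = good_dies_per_wafer * wafer_starts_per_week   # 50,000 chips/week once wafers flow
build_weeks = target_stock / chips_per_week                    # ~4 weeks of output
total_weeks = wafer_cycle_weeks + build_weeks
print(f"~{total_weeks:.0f} weeks from first wafer starts to launch-sized stock")
# prints ~9 weeks, which lines up with the 2-3 month window suggested above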
 
Fact is, nobody knows exactly what went on, and this will just be yet another tedious point scoring **** fest that will never be resolved. Knock yourselves out.

Exactly. I genuinely don't care who was first, who did this, who did that. I have a G-Sync monitor, like others do, and others have Freesync monitors, and we all seem happy with them, so why does it really matter? I am certainly not out for point scoring, but if DM and others want to believe Freesync was first, that's cool with me :)
 
If AS/FS was in the planning stage for so long, how come it was way behind Gsync in terms of functionality at launch, with AMD only catching up on some software features right now, and still behind, as shown by certain LCD panels in Gsync displays offering higher refresh rates than the equivalent AS/FS models? Surely something in the planning and design stage for so long should have hit the market running, not have software issues and problems with LCD overshoot.

Fact is, nobody knows exactly what went on, and this will just be yet another tedious point scoring **** fest that will never be resolved. Knock yourselves out.

It was behind in functionality at launch by having WAY more functionality? Trying to recall the FS panel with a single input... can't.

AMD didn't make the panels, and as pointed out, just because something was in the planning stage for a long time doesn't make the final silicon any less new when it's finished. New chips and new screens have problems.

Honestly, the monitor industry is a joke: 120Hz should have been the minimum standard for the past 5 years, they shouldn't still sell 60Hz screens, the 28"+ 1080p panels are a joke for DPI/image quality, and every other new screen has major problems - bleed, dodgy firmware, bad colour, massive QA problems. Two screens had a problem with overshoot when FS was on, and both monitors got a firmware update to fix it. BenQ and Acer ****** up, nothing more or less. They both probably rushed to market themselves and missed out some testing, which seems to be a near-standard problem with most panels these days. Every other screen gets massively hyped, finally launches, and then you get threads about awful bleeding or every other panel failing and multiple replacements.

The thing I find worst about it all is that monitor makers have started charging extortionate prices for cheap panels (in part thanks to G-sync being used to ramp pricing massively) at the same time as quality has dropped through the floor. Expensive and with no quality assurance is laughable.
 
Arguing about who was first, wow this sub-forum gets even better :D

If you want to go point-whoring about Free/Adaptive Sync, all you need to say is:

- cheaper "overall"
- much better/larger choice of panels, from more monitor manufacturers, despite Gsync being out for over a year longer
- here to stay for good

Likewise for Gsync, if you want to point-whore:

- universally better ranges
 
It's amazing, it's the Mantle/DX12 argument all over again.

Just remember AMD can do no wrong and the future will be so bright it'll burn your retinas out.
 