AMD Polaris architecture – GCN 4.0

Well, they could be. They could also be holding a gun to monitor manufacturers' heads to make them sell at such a high price, or maybe they are subsidising the FreeSync monitors to make the G-Sync stuff look more expensive.
Who knows; just remember, kids: Nvidia = bad :rolleyes:

Very childish, tbh. I was merely stating that Nvidia charge for the modules and none of us know how much. This affects the price of the monitor, or are you disputing that? So Nvidia do have a part in the extra price of G-Sync monitors; we just don't know how much. Back to playschool with you.
 
I swore the 7000 series got video FreeSync, but not gaming, as the 7000 series could only do video whereas newer GCN could do it all. I'm at work so I can't really look it up and I need more coffee :D
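For what it's worth, here's the support split as I remember it, written out as a quick lookup; treat the generation-to-card mapping as illustrative rather than gospel (plain Python, nothing official):

```python
# Rough FreeSync capability split by GCN generation, as I remember
# AMD describing it around launch. Card examples are illustrative only.

FREESYNC_SUPPORT = {
    "GCN 1.0": "video playback only",   # 7000 series (e.g. HD 7970)
    "GCN 1.1": "video and gaming",      # e.g. R7 260X, R9 290/290X (Hawaii)
    "GCN 1.2": "video and gaming",      # e.g. R9 285 (Tonga)
}

def freesync_capability(generation: str) -> str:
    """Return the FreeSync capability for a GCN generation, if known."""
    return FREESYNC_SUPPORT.get(generation, "unknown / not supported")

print(freesync_capability("GCN 1.0"))   # video playback only
print(freesync_capability("GCN 1.1"))   # video and gaming
```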

Well yeah, it's completely reasonable to believe that, because all the FreeSync news, slides etc. included support for GCN 1.0. The dropping of that was done a lot more quietly.

It was going to be the same with VSR: 'support is going to be added later', then 'the hardware can't support it', then a few months later it was included in the drivers as a bit of a surprise.
 
I applaud Nvidia for it personally. Without them doing it, AMD wouldn't have A-Sync screens either, so AMD users with the tech should also applaud them, and they can be grateful it is cheaper for them... Happy days, no?


Nope.

GPU architecture takes many years to design. Hawaii was released in 2013, which means AMD made the decision to include adaptive-sync hardware years before that release date, before G-Sync was even a thing.

The more sensible thing to debate is why NVIDIA didn't have the foresight to build adaptive-sync support into Maxwell, since, as we keep getting reminded here, Maxwell is a much newer architecture than Hawaii.

I'm guessing NVIDIA had the option of adding it in hardware, but saw that they could make good profits on these G-Sync modules instead.
 
These are really good points. Hawaii (GCN 1.1) was probably laid down once GCN 1.0 was done, so around mid-to-late 2011; they must have had the foresight to include an Adaptive-Sync-capable scaler as far back as that.

Given that Nvidia GPUs have no such hardware on the die (including Maxwell), Nvidia with its external add-on looks more like a reaction to whatever they got wind of AMD doing.
 
It's probably more likely that an existing mobile IO block was repurposed due to tightening R&D constraints. It just happened to have the necessary eDP protocol support for an approximation of G-Sync to be put forward. Don't forget that Nvidia mobile parts have the necessary protocol support to work with adaptive-sync displays; whether NV politics will allow that to happen... yeah.

If you spend a bit of time googling, the very first mention of anything remotely related to what FS/AS has become came well after G-Sync was available in the marketplace.
 
The V-Blank scalers in mobile parts are different; the discrete parts' V-Blank handling is more sophisticated. That takes extra R&D compared with sticking to what you already have, as Nvidia did, and you don't save money by spending R&D to make new scalers; that makes no sense.

The fact that there was no talk about desktop V-Blank sync is irrelevant, as it applies to both sides.
 
The problem for your argument is that AMD had nothing to show until well after Nvidia had demoed G-Sync, and they were then very late to market behind Nvidia. The argument that Nvidia got wind of a secret AMD project and were then able to go away, design a very advanced LCD controller and scaler (advanced enough to allow some panels higher refresh rates than the standard AS/FS versions), and get all of that to market in a timescale that beat the competition by almost a year is somewhat far-fetched. Surely the established scaler manufacturers should have been able to get something out sooner than they did if it had been in the design pipeline for that long.

Even the launch of AS/FS was less than smooth: late to market, constantly slipping release dates, rushed driver support, all kinds of display overshoot problems, and questions about which products actually support which features not nailed down until well after the announcement. Those are the signs of a rushed project.

AMD have since nailed those issues, and FS displays offer a compelling experience for their owners, but nothing about the announcement and eventual launch gives the impression of anything other than a rushed, reactionary product.
 
This is really pretty simple: there are pretty smart guys at both Nvidia and AMD. It's more than likely that both companies began to think about sync technology after the release of eDP 1.3, whose spec included the panel self refresh feature and a framebuffer; both are needed for VRR tech.
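
Since the eDP angle keeps coming up, here is a minimal sketch (Python, with made-up refresh numbers and function names; not anyone's actual driver code) of the decision a VBLANK-based VRR scheme has to make each frame: hold the blanking interval until the next frame is ready, but only within the panel's supported refresh window, and fall back to re-scanning the stored frame when a frame arrives too late, which is where the self refresh / framebuffer piece matters.

```python
# Hypothetical sketch of variable-refresh-rate pacing over a VBLANK-extension
# scheme (Adaptive-Sync / G-Sync style). All names and numbers are
# illustrative, not taken from any real driver.

PANEL_MIN_HZ = 40          # slowest refresh the panel tolerates
PANEL_MAX_HZ = 144         # fastest refresh the panel supports

MAX_FRAME_TIME = 1.0 / PANEL_MIN_HZ   # hold VBLANK no longer than this
MIN_FRAME_TIME = 1.0 / PANEL_MAX_HZ   # don't scan out faster than this

def next_scanout_delay(render_time_s: float) -> tuple[float, bool]:
    """Given how long the GPU took to render a frame, decide when to start
    the next scan-out and whether the previous frame must be re-displayed.

    Returns (delay_before_scanout, repeat_last_frame).
    """
    if render_time_s < MIN_FRAME_TIME:
        # Frame finished faster than the panel's max refresh:
        # wait out the remainder, then scan out the new frame.
        return MIN_FRAME_TIME - render_time_s, False
    if render_time_s <= MAX_FRAME_TIME:
        # Within the variable window: end VBLANK immediately,
        # the new frame drives the refresh.
        return 0.0, False
    # Frame is late: the panel cannot wait any longer, so refresh now from
    # the stored frame buffer (the repeat / self-refresh case) and send the
    # new frame on a later refresh.
    return 0.0, True

# Example: a 5 ms frame on a 144 Hz-max panel waits ~1.9 ms before scan-out,
# while a 30 ms frame forces a repeat of the previous image.
print(next_scanout_delay(0.005))
print(next_scanout_delay(0.030))
```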

Nvidia got their solution to market first, kudos to them. But to say that AMD didn't think about sync tech until then is wrong: AMD had obviously been thinking about it, as they included the necessary hardware for VRR in their desktop cards with the release of the 290/290X, which would have had to be decided months in advance.

So, yeah, they probably arrived at sync tech independently of one another. Both solutions are similar, and is that a surprise? Both are based on the same source.

And is somebody really suggesting that AMD used mobile parts in their desktop cards because of a lack of funds and just got lucky? Haha, really?
 
No. I said AMD probably repurposed a display IO block from an existing part due to constraints on R&D. Having support for eDP, especially the earlier version in older GCN parts, and its ancillary features in a desktop design is wasted die space. NV had the luxury of designing a purpose-built part, whilst AMD took the view that reusing an existing design would save them more (IMO, of course). Given that HDMI 2.0 will not be supported out of the box until Polaris, I have some confidence that AMD are willing to reuse existing block designs for as long as possible.

Feel free to take that as you wish.
 
From what I remember reading, AMD has supported the full eDP spec since the 5000 series. It didn't see much use at the time, but their chips have had V-blank support since then, just not in the form needed for Adaptive-Sync.

AMD tend to be quicker at supporting open standards than Nvidia.
 
It would be very easy to think that AMD were only swung over to VRR once they heard about Nvidia's G-Sync, suddenly finding that they could do a similar thing with the eDP stuff. But that would only hold up if the GPUs in question were also used for mobile parts; the thing is, the Hawaii parts (290/290X) were never going to be mobile (at least I don't think they were), and they have the necessary hardware included. Why would AMD put that hardware on a chip that was already quite large if it wasn't going to be used? In my opinion this alone gives credence to AMD at least thinking about VRR before G-Sync was announced.
 

Seems like the truth hurts, Gregster, hence your childish response.

The fact of the matter is, variable refresh rate was built into AMD hardware long before Gsync was even available.

To say that AMD reacted to Gsync is not true - they were obviously planning it long ago, as it's supported in hardware.

I'm assuming you have no idea how long it takes to design complex GPU architectures - as I mentioned in my previous post, AMD must have included this years before Hawaii was actually released.

The fact that Hawaii was released in 2013 means that AMD decided to equip their GPUs with variable refresh rate technology years before that.

We're all aware that Gsync launched first, though saying AMD just reacted to Gsync and decided to try and copy it is a downright lie.

TBH it's rather embarrassing for NVIDIA to have to conjure up an expensive PCB (installed in the monitor) to support Gsync because their hardware isn't capable of it. They have a much bigger R&D budget than AMD, yet AMD are the ones with the more elegant, hardware-based solution, which results in cheaper variable refresh rate for everyone.
 
In general, industry standards take a long time. You don't put hardware in place, spend time and money coming up with a standard, write software, or add hardware without discussions with the other parties. Industry standards take so long precisely because you need multiple meetings with multiple other companies, who then go back and have their own meetings with various departments, give input, and go back and forth on how to do it, what would work best, what suits which company, and so on.

For Adaptive-Sync to be put forward to VESA early in the year, work on it would have started long before G-Sync came out.

Nvidia saw an opportunity to shortcut to market with a programmable FPGA chip without needing to bother with industry standards. Talk to Asus, get them on board with massively marked-up screens, order a bunch of FPGAs, spend a couple of months programming the FPGA and ironing out bugs, and you pretty much have G-Sync done. They saw something coming and decided to beat Adaptive-Sync to market and screw their customers rather than do what was best for everyone. It's the Nvidia way.
 