Workaround: FreeSync on nVidia GPUs

IIRC the reason Nvidia decided to push for adaptive sync on desktops and eventually build G-Sync is because they added adaptive sync to some laptops or laptop prototypes, and the test feedback they received was pretty much "this is amazing, we want it on desktops".

Have you got a link for that please? Gsync on laptops didn't come until a couple of years after the launch of Gsync on desktops.
 
Not going to argue with you. But you are wrong. AMD were on the road to VRR before Gsync was launched. Were Nvidia first out of the door? Yes, because they had the resources to bypass all the red tape of the VESA certification process.

But only after nVidia started showing it off - it is pretty obvious from the state the two launched in that nVidia had a massive head start. AMD had zero intention of pursuing VRR until people started showing a lot of interest in G-Sync. They didn't need VESA certification as DisplayPort already allowed for manipulation of the vblank interval.
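
For anyone following the vblank point being argued here, a rough sketch of the idea (in Python, with hypothetical panel numbers - not any vendor's actual implementation): variable refresh boils down to holding the panel in its vertical blanking period until the next frame is ready, within whatever minimum/maximum refresh range the panel and its scaler support.

    # Simplified illustration of vblank-based VRR timing.
    # The panel limits below are assumed for the example (a 30-144Hz display).

    PANEL_MAX_HZ = 144.0                 # fastest the panel can refresh
    PANEL_MIN_HZ = 30.0                  # slowest refresh before it must redraw anyway

    MIN_FRAME_TIME = 1.0 / PANEL_MAX_HZ  # ~6.94 ms
    MAX_FRAME_TIME = 1.0 / PANEL_MIN_HZ  # ~33.3 ms

    def effective_refresh_hz(render_time_s: float) -> float:
        """Refresh rate the panel ends up running at for one frame.

        The scanout itself takes roughly MIN_FRAME_TIME; the variable part is
        how far the vertical blanking interval is stretched while waiting for
        the GPU to finish the next frame, clamped to the panel's range.
        """
        frame_time = min(max(render_time_s, MIN_FRAME_TIME), MAX_FRAME_TIME)
        return 1.0 / frame_time

    # A frame that took 12.5 ms to render is scanned out at ~80 Hz, i.e. the
    # vblank is stretched by roughly 5.6 ms beyond the minimum frame time.
    print(round(effective_refresh_hz(0.0125), 1))   # 80.0

The clamp is also why the refresh ranges discussed later in the thread matter: below the panel's minimum, something else (repeating frames on the GPU or module side) has to take over.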

Have you got a link for that please? Gsync on laptops didn't come until a couple of years after the launch of Gsync on desktops.

The first demo of "G-Sync", IIRC, was using eDP with a cannibalised laptop display - Tom Petersen spun some story along those lines.

EDIT: The first press demo was using the ASUS VG248QE - it's hard to find a lot of the initial information from back then.
 
But only after nVidia started showing it off - it is pretty obvious from the state the two launched in that nVidia had a massive head start. AMD had zero intention of pursuing VRR until people started showing a lot of interest in G-Sync.



The first demo of "G-Sync", IIRC, was using eDP with a cannibalised laptop display - Tom Petersen spun some story along those lines.

I might be remembering this wrong, but I'm pretty sure gsync was announced to big fanfare with proper demos, and then it was AMD, about 2-3 months later in a booth at a show, that demoed VRR on a laptop. So it was AMD that kludged freesync together from eDP, using the power-saving feature that slows down screen refreshes.
 
I might be remembering this wrong, but I'm pretty sure gsync was announced to big fanfare with proper demos, and then it was AMD, about 2-3 months later in a booth at a show, that demoed VRR on a laptop. So it was AMD that kludged freesync together from eDP, using the power-saving feature that slows down screen refreshes.

The first time it was shown off to the press they used the Asus monitor, but there was a video with Tom Petersen where he talked about showing it off to partners, with some story about eDP (or at least a laptop) - I can't find the video now.

The only comments I can find on it now are:

To his knowledge, no scaler ASIC with variable refresh capability exists—and if it did, he said, "we would know." Nvidia's intent in building the G-Sync module was to enable this capability and thus to nudge the industry in the right direction.
...
My sense is that AMD will likely work with the existing scaler ASIC makers and monitor makers, attempting to persuade them to support dynamic refresh rates in their hardware. Now that Nvidia has made a splash with G-Sync, AMD could find this path easier simply because monitor makers may be more willing to add a feature with obvious consumer appeal. We'll have to see how long it takes for "free sync" solutions to come to market. We've seen a number of G-Sync-compatible monitors announced here at CES, and most of them are expected to hit store shelves in the second quarter of 2014.
 
But only after nVidia started showing it off - it is pretty obvious from the state the two launched in that nVidia had a massive head start. AMD had zero intention of pursuing VRR until people started showing a lot of interest in G-Sync. They didn't need VESA certification as DisplayPort already allowed for manipulation of the vblank interval.



The first demo of "G-Sync", IIRC, was using eDP with a cannibalised laptop display - Tom Petersen spun some story along those lines.

EDIT: The first press demo was using the ASUS VG248QE - it's hard to find a lot of the initial information from back then.

I am not denying that Nvidia had it out first. What I am saying is that AMD were working on a VRR solution before Gsync was released. AMD were obviously caught on the hop by Gsync and rushed out a demo at CES in January 2014 using laptops, but they had been putting things into place for VRR before the Gsync demo. The first Gsync monitor was August 2014, the first Freesync monitor was April 2015. (I am not counting the BenQ monitor that released in March, as they jumped the gun before the drivers were ready and it needed a firmware update.)

They did have to get DisplayPort certification - you are confusing eDP with DisplayPort. Do you not remember? AMD had to put a proposal to VESA and that didn't get accepted until May 2014, and even then it only became an optional part of the DisplayPort standard. So AMD had to wait until after that to start working on the manufacturers.

AMD had the hardware in place to connect to Adaptive Sync monitors before the Gsync demo even took place. They submitted the proposal to VESA in early November 2013. Now, maybe AMD managed to come up with an alternative to Gsync, put a proposal together and send it to VESA in less than a month - I don't believe that for a second. But combine that with having the necessary hardware to connect to a DP 1.2a monitor on desktop cards released before the Gsync demo, and it suggests that AMD were working on a VRR solution.

Besides, I asked this when they did the open Q&A session here on Freesync. They confirmed that they were working on Adaptive Sync during the development of the Hawaii and Bonaire cards.

So yes, Nvidia did release first, and did put the skids under AMD. But, AMD were working on a VRR solution before that.
 
I've tried to find the information online a couple of times when it came up recently, but it seems to have dropped off the map. It definitely happened - I saw it myself at the time.

I do agree, though it is kind of moot, that it isn't cutting edge - VRR has been used in professional displays and signage going back possibly to the 70s.


The industry was gearing up for VRR before Nvidia announced g-sync. There is just no question. The difference is that hardware scalers take a lot longer than an FPGA, and moreover industry standards take longer than doing things yourself.

It was literally within about six weeks or so of the g-sync announcement that AMD had a demo of freesync - at CES I think it was, or Computex, I forget which one is in January.

Everything about the sequence of events implies Nvidia pre-empted something that was already coming along and that they couldn't lock in, in order to get their own lock-in first.

Adaptive sync gets announced as coming in 6-12 months and g-sync is dead. You know the announcement is coming, so you go the utterly, horrendously awful, anti-consumer route of an expensive FPGA just so you can beat it to market and get your customers locked in without looking like the bad guy - and boom, that is how g-sync happened.

Look at when 144Hz 4K screens were made possible - with DP 1.3 or DP 1.4, I forget which now. It's literally 2-3 years later by the time they appeared. The cycle on things like industry standards for cables and signals, in which everyone can get together and agree on a set of things to work towards, takes years, not months. If AMD and the industry at large had had no intention of supporting VRR, freesync would at best have arrived after a year in custom monitors, also with FPGAs and a non-standard adaptive sync spec, and it would have been another year+ for the industry to make compatible scalers, agree on some basic standards and get the framework in place.

It simply doesn't happen as fast as it did unless, without question, VRR was well down the pipeline before Nvidia decided to screw their own customers out of way, way more money. If they had gone the route of designing a custom scaler chip rather than an FPGA, they'd have saved themselves literally millions upon millions in the long run. The only reason to use an absurdly expensive FPGA instead of properly designed, taped-out and manufactured dedicated hardware is singular: time to market. They had a time-to-market issue. If AMD and the rest of the industry were actively refusing, Nvidia had absolutely zero reason to rush. The sole explanation for rushing to market and beating dedicated hardware to market with an ultra-expensive FPGA... they were trying to beat something to market.
 
They did have to get DisplayPort certification - you are confusing eDP with DisplayPort. Do you not remember? AMD had to put a proposal to VESA and that didn't get accepted until May 2014, and even then it only became an optional part of the DisplayPort standard. So AMD had to wait until after that to start working on the manufacturers.

AMD had to - nVidia didn't, see the link Andy posted - the G-Sync module works within the VESA standard via the allowed ability to vary the vblank interval.

The industry was gearing up for VRR before Nvidia announced g-sync.

That is rubbish and people trying to re-write history - I doubt, if it had been down to VESA and AMD, we'd even have VRR now - it isn't like it is new technology at all - it's been used in professional VDUs going back probably 30 years before G-Sync.
 
AMD had to - nVidia didn't, see the link Andy posted - the G-Sync module works within the VESA standard via the allowed ability to vary the vblank interval.

No it doesn't. The Gsync module doesn't need any approval by VESA. It doesn't need any extra hardware at the DisplayPort end - as long as the DisplayPort supports the resolution, it will work with Gsync. That's why it supported much older cards than adaptive sync. The Gsync module is the scaler, frame buffer and timing controller all rolled into one.
 
No it doesn't. The Gsync module doesn't need any approval by VESA. It doesn't need any extra hardware at the DisplayPort end - as long as the DisplayPort supports the resolution, it will work with Gsync. That's why it supported much older cards than adaptive sync. The Gsync module is the scaler, frame buffer and timing controller all rolled into one.

Which is what I said - though I think I mistook what you said in post #60 as I assumed you were saying nVidia had an easier ride getting VESA certification rather than bypassing any need for it.
 
That is rubbish and people trying to re-write history - I doubt, if it had been down to VESA and AMD, we'd even have VRR now - it isn't like it is new technology at all - it's been used in professional VDUs going back probably 30 years before G-Sync.


No, it's people ignoring obvious truths. It takes longer than a year to get a new standard, new scalers designed, taped out, produced and tested, and into new panels which have gone through a design cycle - it's that simple. It takes longer than that. Making an FPGA version would take literally a tenth of the time. For AMD to have freesync screens out and working within a year of g-sync makes it certain this was in the works long before g-sync was announced. There is literally no other option. Even a small, basic chip such as is required for a monitor still takes time to design, test, tape out and actually produce; you can't just do it in a few months because you want to, it takes much longer than that - and before you design a chip, you actually have to have a reason to do it. Which means talks about something that requires such chips start a minimum of a few months before design on such a chip starts.

Something like VRR, freesync and having panels to market takes probably at least a couple of years, maybe longer.

As said, there is a single reason to go FPGA instead of vastly cheaper dedicated hardware: time to market. Nvidia were trying to beat 'something' to market - I wonder what that something was. According to you, AMD and everyone else were actively saying no to it and only reacted after g-sync was made. Ignoring again the massive time to market that such a standard, chip design and monitor design cycle takes: if they had only decided to do this after g-sync went public, then it didn't matter when g-sync went public - had it come a year later, with chips that cost a tenth as much, it still wouldn't have mattered.

I literally said, in the days around the g-sync reveal/launch, that I was 100% certain an industry standard was obviously due to be announced in the next couple of months and that Nvidia were just trying to beat it to the punch to lock in their hardware users. I remember long threads of the same Nvidia people telling me that what Nvidia were doing was unique and super difficult, that AMD couldn't do it, nor would they do it with an industry standard because it's too complex, etc. - then freesync/variable refresh rate got announced.

It was patently obvious from the very second Nvidia did this. When there is a better, cheaper, compatible-for-all method of doing this and out of the blue one company makes an absurdly overpriced version whose only benefit is time to market... it's exceptionally obvious why they've done it: if an industry-standard free method gets announced first, Nvidia can't come along second with a proprietary version and lock their customers in to being ripped off further.
 
It takes longer than a year to get a new standard, new scalers designed, taped out, produced and tested, and into new panels which have gone through a design cycle - it's that simple. It takes longer than that.

If you were talking about a completely new design, sure, but we aren't. The scalers used in this case are barely more than firmware-updated versions of existing chips, which is part of the reason there has been a bit of a bumpy road with regard to refresh ranges, etc. - those limitations, despite the 9-240Hz in theory, exist precisely because it was based on the scalers available at the time.*

The reason nVidia went with an FPGA was due to the lack of interest from anyone else in implementing the feature - until they actually lit the fire - again the explanation you are pushing is re-writing history.

Also the FPGA approach has some benefits - with additional buffers and reprogrammability to work with, you have far more options when it comes to things like supporting variants of windowed modes and other non-exclusive fullscreen approaches, low pixel-persistence modes, and more flexible approaches to overdrive, which can be an issue with adaptive sync (in general, for both G-Sync and any other VRR tech). It is quite obvious when you look at the technology at a lower level why nVidia took this approach, and it certainly isn't about beating anyone to market - the costs and R&D time for some of the features, like working around the problems with overdrive, just don't stack up with that.


* I've taken this from Reddit, but it was updated with information from AMD:

  • G-Sync officially works within a range of 30hz to max refresh, as allowed by the display technology that the G-Sync module is installed in.

  • FreeSync on paper supports a range of 9-240hz, but is limited by the currently available scalars. This is a situation that is QUICKLY improving and the discrepancy between G-Sync and FreeSync is quickly closing. Initially, G-Sync 144hz displays had 30-144hz ranges, whereas the first FreeSync implementation was 35-90hz. This quickly improved with multiple 30-144hz FreeSync displays hitting retail. The first 240hz displays so far have ranges of 30-240 (G-Sync) and 48-240 (FreeSync). While AMD is clearly narrowing the gap here, there's still a ways to go before a majority of their displays have the consistent range found in equivalent G-Sync displays.

  • The discrepancy in this is due to the use of hardware scalars that were not originally designed for variable refresh rate (VRR). There are three companies that manufacture and sell scalars, and they are rapidly advancing the technology to support VRR tech. Nvidia bypassed this to do their own thing and, so far anyway, it's worked out well for them.
 
Nvidia can make their own scaler chip - that is the issue here. Exactly what they do with their FPGA can be done, with a longer design cycle, in a smaller, vastly cheaper dedicated design. It's got nothing to do with what everyone else wants to implement. If the industry wasn't interested and there was no time-frame limitation, they would have made their own scaler that did everything the FPGA does at a tiny fraction of the price - it would just have taken longer. I'm not talking about them getting Samsung or someone else to make it; they could easily have made a dedicated hardware version of their FPGA and, with the same cost increases they pass on, would have made a lot more profit in the long term... but they didn't. Why? Because they had a time limit.
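
To put rough numbers on that trade-off (entirely made-up figures, purely to show the shape of the FPGA-versus-dedicated-chip economics, not actual NRE or component prices):

    # Back-of-the-envelope break-even sketch with invented numbers, only to
    # illustrate the one-off design cost vs per-unit cost trade-off above.

    NRE_ASIC = 5_000_000   # assumed one-off cost to design/validate/tape out a dedicated scaler
    UNIT_ASIC = 10         # assumed per-chip cost of that dedicated scaler
    UNIT_FPGA = 100        # assumed per-unit cost of an off-the-shelf FPGA plus its memory

    def total_cost(units: int, nre: int, per_unit: int) -> int:
        """Total cost of shipping a given number of modules with one approach."""
        return nre + units * per_unit

    # Volume at which the dedicated chip's one-off cost has paid for itself.
    break_even_units = NRE_ASIC // (UNIT_FPGA - UNIT_ASIC)
    print(break_even_units)                          # 55555

    # At half a million monitors the gap is enormous (again, invented numbers).
    print(total_cost(500_000, NRE_ASIC, UNIT_ASIC))  # 10000000
    print(total_cost(500_000, 0, UNIT_FPGA))         # 50000000

With numbers anywhere in that ballpark the dedicated chip wins easily at volume; the only thing the FPGA buys is not having to wait for a design and tape-out cycle, which is the time-to-market point being made above.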


If the scalers had had no changes in hardware, they could have launched in weeks. They took existing designs and added in VRR at a hardware level, and they cheaped out as they so often do; that doesn't mean it's not dedicated hardware and it doesn't mean it can't improve. It just means that a lot of companies didn't want to make a completely new chip from the ground up on the design side - they just wanted to do a small add-on. Add-ons still require a full tape-out and production cycle. You don't take an existing mask set and just add on a few more transistors for something. It's just less work on the design end to add a few new functions to an existing design than to start from scratch.
 
Here is a link that somewhat supports what you are saying... and also blows it out of the water...

https://www.amd.com/en-us/press-releases/Pages/support-for-freesync-2014sep18.aspx

Also some monitors, like random Korean models using older scalers, are firmware-updatable to support FreeSync, albeit with limited ranges like 42-60Hz.

As per the FPGA-versus-traditional-scaler point - as I said, some things like overdrive and low pixel persistence are far more effectively handled with the capabilities of the kind of FPGA nVidia is using versus a traditional scaler - hence why things like adaptive overdrive with FreeSync are currently not well supported.
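
To illustrate what adaptive overdrive means in practice, here is a hypothetical sketch with invented table values (real monitors use per-transition lookup tables tuned by the vendor, not a single gain figure): overdrive tuned for one fixed refresh rate over- or under-shoots once the frame time varies, so the hardware has to re-derive the setting per frame, for example by interpolating between per-refresh-rate tunings.

    # Hypothetical sketch of refresh-rate-aware overdrive under VRR.
    # The (refresh_hz, gain) pairs below are invented example tunings.

    from bisect import bisect_left

    OD_TABLE = [(48, 0.35), (60, 0.45), (100, 0.70), (144, 1.00)]

    def overdrive_gain(current_hz: float) -> float:
        """Linearly interpolate an overdrive strength for the current refresh rate."""
        rates = [r for r, _ in OD_TABLE]
        if current_hz <= rates[0]:
            return OD_TABLE[0][1]
        if current_hz >= rates[-1]:
            return OD_TABLE[-1][1]
        i = bisect_left(rates, current_hz)
        (r0, g0), (r1, g1) = OD_TABLE[i - 1], OD_TABLE[i]
        t = (current_hz - r0) / (r1 - r0)
        return g0 + t * (g1 - g0)

    # At 80 Hz the gain lands halfway between the 60 Hz and 100 Hz tunings.
    print(overdrive_gain(80.0))   # ~0.575

Linear interpolation between a handful of tuned points is only one way to do it; the point is just that the setting has to track the instantaneous refresh rate rather than being a single factory-fixed value.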
 
Here is a link that somewhat supports what you are saying... and also blows it out of the water...

https://www.amd.com/en-us/press-releases/Pages/support-for-freesync-2014sep18.aspx

Also some monitors, like random Korean models using older scalers, are firmware-updatable to support FreeSync, albeit with limited ranges like 42-60Hz.

As per the FPGA-versus-traditional-scaler point - as I said, some things like overdrive and low pixel persistence are far more effectively handled with the capabilities of the kind of FPGA nVidia is using versus a traditional scaler - hence why things like adaptive overdrive with FreeSync are currently not well supported.

Yes, there were some monitors with scalers that could have been updated to support Freesync. Remember, Iiyama had one monitor that did have an upgradeable scaler - Gibbo made a post in the monitor section saying that you could return the monitor to Iiyama, they would upgrade it and send it back, but Iiyama decided to cancel this. Nixeus, whose monitor was used in the second Freesync demo, claimed that some scalers could be upgradeable, but not all would be - not even all of the same model. So rather than have people brick their monitors with firmware upgrades, monitor manufacturers just started fresh.

The Nixeus guy made a post on one of the forums regarding this - sorry, I can't find it anywhere. He was also the one who started the whole "monitors could be upgradeable" thing, so I guess he had to explain himself.
 
Here is a link that somewhat supports what you are saying... and also blows it out of the water...

https://www.amd.com/en-us/press-releases/Pages/support-for-freesync-2014sep18.aspx

Also some monitors, like random Korean models using older scalers, are firmware-updatable to support FreeSync, albeit with limited ranges like 42-60Hz.

As per the FPGA-versus-traditional-scaler point - as I said, some things like overdrive and low pixel persistence are far more effectively handled with the capabilities of the kind of FPGA nVidia is using versus a traditional scaler - hence why things like adaptive overdrive with FreeSync are currently not well supported.

Dear lord, stop talking about a traditional scaler. There is nothing an FPGA can do that Nvidia can't design dedicated hardware to do. Nvidia have several options for implementing what they want: adapt something that exists; use a non-dedicated FPGA and program what they want (relatively speaking horribly inefficient in die size, power and profitability, but buying an existing FPGA takes a tiny fraction of the time); or design a new chip and go through the time it takes to tape out and manufacture. They aren't tied to existing hardware if they make their own chip.

FPGAs aren't special chips that enable you to do things traditional chips can't; they are just dramatically less specialised, with a huge amount of logic that lets you use them for a wide array of uses. Nvidia's requirements aren't for an FPGA with a wide array of uses - they want something with an extremely specific usage that is drastically better done on dedicated hardware. The only downside to dedicated hardware is the initial design cost and the time to get the chips back... and the initial design cost more than pays its way back over time, such that in the long run it's going to be dramatically more profitable, and Nvidia, with a huge stack of cash, has no reason to go the cheap-upfront, hurt-the-back-end-profits route. There was a single viable reason for Nvidia to go the quick FPGA route: time to market.

As for the 'somewhat supports but blows it out of the water'... except it doesn't blow it out of the water. Old scalers that didn't support freesync don't support a wide enough range through firmware updates alone to work the way any freesync panels, including the first ones, did. Meaning the first panels with a wide range used updated hardware, not firmware updates, as simply updating firmware didn't give the required results at all.
 
That is rubbish and people trying to re-write history - I doubt, if it had been down to VESA and AMD, we'd even have VRR now - it isn't like it is new technology at all - it's been used in professional VDUs going back probably 30 years before G-Sync.

I don't know about the rest of the industry, but the one thing I am 100% sure about is that AMD were working on VRR long before Gsync was released. How long it would have taken them to get it to market is a different story, and one that can never be answered because we will never know.
 
Dear lord, stop talking about a traditional scaler. There is nothing an FPGA can do that Nvidia can't design dedicated hardware to do. Nvidia have several options for implementing what they want: adapt something that exists; use a non-dedicated FPGA and program what they want (relatively speaking horribly inefficient in die size, power and profitability, but buying an existing FPGA takes a tiny fraction of the time); or design a new chip and go through the time it takes to tape out and manufacture. They aren't tied to existing hardware if they make their own chip.

FPGAs aren't special chips that enable you to do things traditional chips can't; they are just dramatically less specialised, with a huge amount of logic that lets you use them for a wide array of uses. Nvidia's requirements aren't for an FPGA with a wide array of uses - they want something with an extremely specific usage that is drastically better done on dedicated hardware. The only downside to dedicated hardware is the initial design cost and the time to get the chips back... and the initial design cost more than pays its way back over time, such that in the long run it's going to be dramatically more profitable, and Nvidia, with a huge stack of cash, has no reason to go the cheap-upfront, hurt-the-back-end-profits route. There was a single viable reason for Nvidia to go the quick FPGA route: time to market.

As for the 'somewhat supports but blows it out of the water'... except it doesn't blow it out of the water. Old scalers that didn't support freesync don't support a wide enough range through firmware updates alone to work the way any freesync panels, including the first ones, did. Meaning the first panels with a wide range used updated hardware, not firmware updates, as simply updating firmware didn't give the required results at all.

LOL, you are explaining to me what an FPGA is?... after all my previous posts on related topics? Give it a rest.

That link, combined with the state of the original release, clearly shows AMD didn't go through the kind of long design and implementation process that you said would be necessary for a completely new design - the kind that would suggest they'd been working on it for a while.
 
LOL, you are explaining to me what an FPGA is?... after all my previous posts on related topics? Give it a rest.

That link, combined with the state of the original release, clearly shows AMD didn't go through the kind of long design and implementation process that you said would be necessary for a completely new design - the kind that would suggest they'd been working on it for a while.


as I said, some things like overdrive and low pixel persistence are far more effectively handled with the capabilities of the kind of FPGA

When you say things like that, you make it sound like you have absolutely no idea what an FPGA is. If you don't want to be mistaken for someone who doesn't know, maybe stop making silly statements about them?

You keep going back to 'traditional scalers this', 'something-else scalers that', when I started off by saying that, in your theory where they had all the time in the world, they could have made dedicated hardware instead of an FPGA. It's that simple. Also, 90% of the functionality Nvidia provide through their FPGA will be identical to a 'traditional' scaler. FPGAs don't do anything better; they do things worse in general, and dedicated hardware will always beat them out, full stop.

If Nvidia had no time constraints there is not one sensible reason they would go the ultra-expensive, off-the-shelf FPGA route they have done over dedicated hardware - literally not one. Your theory involves them having no time frame because everyone else didn't want it.

That link doesn't say anything like what you're implying it does. If the existing scalers didn't have the capability, with updated firmware, to provide the freesync range that the first freesync panels had, the only logical conclusion is... they made new scalers. A new scaler with even slight changes still requires full validation, full testing and a full tape-out; only the design has an advanced starting point and needs less work. Implying otherwise is incredibly disingenuous. Even the most basic chip with the most basic changes takes a long while to go through the various required stages.

Wait, is what you're trying to say... are you trying to say that, because an announcement was made in September for panels launching in Q1 the next year, they made new scalers and had them in products within six months? Because the public announcement of a partnership is the day they start work on it? Like how the panels Nvidia showed at their g-sync launch with Asus - that was the day they started working on the collaboration with Asus, despite the products working in front of them at the time? Announcements to the public generally mean: we've done X amount of work, we know the product is ready and it's coming at such and such a time, so this is when to announce. That announcement probably indicates they went into early production or started the tape-out.
 
Wait, is what you're trying to say... are you trying to say that, because an announcement was made in September for panels launching in Q1 the next year, they made new scalers and had them in products within six months? Because the public announcement of a partnership is the day they start work on it? Like how the panels Nvidia showed at their g-sync launch with Asus - that was the day they started working on the collaboration with Asus, despite the products working in front of them at the time? Announcements to the public generally mean: we've done X amount of work, we know the product is ready and it's coming at such and such a time, so this is when to announce. That announcement probably indicates they went into early production or started the tape-out.

I specifically said 'in combination', not just going by the date of an announcement alone, and I'm clearly using 'traditional scalers' to contrast what nVidia is doing with how it is normally done - nothing more, nothing less. The FPGA has many benefits in a tech like G-Sync, which needs to work around quite a few potential stumbling blocks that more fixed-function hardware would potentially need significant revisions for - such as the handling of windowed modes, which neither G-Sync nor FreeSync have nailed down entirely without various bugs and issues, but which FreeSync is struggling with a lot more (slightly alleviated by the redesign of the WDDM and compositor in Windows 10).
 