AMD freesync coming soon, no extra costs.... shocker

Again, I ended it with 'LOL'. Although AMD are still making use of the standard freely so it's still true to a degree. You're getting hung up on that one particular thing and stalling.

If it's not a possibility that they're using VESA standards, then you yourself implied that they are, by comparing it directly to Freesync and saying "I told you so".


I'm not interested in what other patents you know about lol. Infuriating

Actually, you ended the line about it being free with "lol", thus not implying it had anything to do with the Toshiba tech statement.

Second, I'm getting hung up on nothing; YOU are. It was YOUR argument, and I'm pointing that out to you once again. I don't care whose screen it is; YOUR argument was based on it, you went back to it, and you brought up the Toshiba panel in another post, not me.

I also didn't claim they aren't using Vesa standards, so show me where I did. I said it would be impossible for them to PATENT a Vesa standard for themselves, and I didn't bring up other patents at all... so it's hard to see how you've found something I never did. Infuriating.

Please show me where my argument said Nvidia was using the Vesa standard, or that it was impossible for them to use it, or where I said using it was the basis of my comparison, or explain why I can't compare g-sync and freesync unless they use the same standard......

What precisely am I stalling on? When you make up weird new crap in every post, is it really surprising I haven't already addressed it?
 
Silly question: in the first quote you seem to be implying that the monitor manufacturers will put a price premium on panels that support this feature, and in the second quote you seem to be saying that it will be free and there won't be a premium. So which do you really believe will be the case? Myself, I'm sure there will be a premium for such a feature, although I wouldn't like to put a price on it.

On a separate note, there has been a lot of mention of G-Sync being locked into Nvidia (needing the hardware in the monitor), but if this new FreeSync needs the built-in hardware in the AMD GPUs, how is this really any different? It is locked into AMD in the same fashion.
Before anyone says "oh yes, but Nvidia can build the hardware into their next chips", yes they can, but just the same as AMD can support the Nvidia implementation. I know that it will all come down to cost, but that is not the point I'm trying to make; I find it odd that one is being said to be open and one closed when in reality they are both locked to their own vendors.

Put simply, in the first post I wasn't implying what you think I was.

Take a £200 screen that doesn't support freesync, and a £200 screen with identical features that has been updated to support freesync. If you have neither and need a screen, it's £200 either way. If you already have a screen, and most people do, then there is little financial incentive to add the feature to the existing screen when, by not doing so, you generate more sales for a new £200 screen.

That is a very far cry from having two screens: one costing £200 without freesync, and another that supports it, costs absolutely nothing more to produce, but is priced at £300.


Personally I don't believe "freesync" is anything Nvidia can't do. I believe strongly that they've used an overly expensive FPGA to short-cut the time to market of "normal" chips that were already coming and will add the same feature. FPGAs are essentially programmable chips; a more normal fixed-function chip doing the same job will, 99% of the time, be hugely smaller, hugely more efficient and hugely cheaper. An FPGA's advantage is basically time to market and R&D cost (you just add some programming to an FPGA you buy off the shelf; designing your own chip would be much, much harder).

The only real reason to make an FPGA version, with all the downsides, the increased cost, and the loss of extra inputs on the monitor, is time to market. There is no way in hell it will remain an FPGA solution into the future; even if AMD could never do freesync, it would benefit Nvidia/Asus/whoever financially to replace it with final fixed-function chips.

Everything about it screams: this is standard, but we did it this way to short-cut to market and make it ours. Nothing particularly wrong with that, except the cost to end users, which is my only problem with g-sync.
 
So has it been explained yet why Freesync is being compared to G-sync? Because the way I see it:

Adaptive V-sync
Smooths out <60fps
Stops tearing but suffers input lag >60fps due to v-sync.

Freesync
Smooths out <60fps
Stops tearing but suffers input lag >60fps due to v-sync.
More power efficient - useful in battery powered devices.

G-Sync
Smooths out <60fps
Stops tearing >60fps and no v-sync related input lag.

This is a lot of hype over nothing (aside from potential power savings in certain screens).

All of these articles comparing it to G-Sync are neglecting to mention that the primary reason for G-Sync is to give the v-sync-less experience without tearing above 60fps, Freesync will have input lag due to v-sync being enabled.

Outside of you saying (I assume) freesync doesn't work above 60fps and so it's just v-sync..... I've seen no other claim remotely like this.

Secondly, adaptive v-sync does not change the refresh rate of the panel, AT ALL. G-sync's main feature is variable refresh rate; freesync's main feature is variable refresh rate.

Variable refresh rate: g-sync and freesync both do this. Adaptive v-sync has nothing at all to do with changing the monitor's refresh rate, so when freesync is 100% clearly in the same category as g-sync and can't even remotely be compared to adaptive v-sync, why are you pushing that line of thinking?

Another point: Nvidia's own patents talk AT LENGTH about the power saving features of adaptable refresh rate... it's where the idea stemmed from in the first place and IS a major feature of g-sync too (you will see this when you get g-sync enabled mobile devices; they WILL advertise it as a feature).

Lastly, why do none of your lists suggest that g-sync, v-sync or freesync reduce tearing below 60fps?

Tearing is when a frame updates as the screen is being refreshed. It's a timing issue; it isn't caused by frame rate itself, though tearing is significantly more noticeable at lower frame rates.

A HUGE feature of g-sync is getting rid of tearing below 60fps, I just find it odd you missed it out in all scenarios.
 
If it needs the hardware that is present in AMD GPU's then it wont work on non AMD GPU's.

Didn't the original article say it was a proposed VESA standard? or have I got confused during the reading of this thread?

It's not about the hardware needed on the GPU, the lock in is on the screens. Nvidia can say "though screen x supports xxx-sync, and we could choose to support that, we won't, you can only use the screens we approve".

If the monitor supports a feature that ANY hardware is freely capable of supporting, the specific hardware implementation from Nvidia, AMD or Intel locks no one else in/out from that screen.

Nvidia, AMD, Intel, Qualcomm, Apple and Samsung can each make an active decision to support something that would become industry standard, or choose not to support it. Most companies choose to support these things; Nvidia do for most industry standards.

It's when you go an alternative route to specifically lock in your own users, and charge for it, that people have a problem.

Everyone's hardware solution will be slightly different; it's knowing that it's a standard screen that matters, and that it would work with any gpu you used it with (except maybe Nvidia), that makes it not locked in.
Vblanking is an option on that laptop. If you put that screen in another laptop with an Intel cpu/gpu, the screen hasn't changed and the vblanking option is still there. The only thing stopping it working would be Intel refusing to support it, which they're fully entitled to do, but AMD couldn't in any way prevent Intel from using that vblanking method of doing freesync.

As with g-sync, AMD will likely give it a name, freesync or something more professional, and trademark it, like 3dvision or anything else, and that trademarked name would encompass their own entire package: hardware, driver and output. But it stops there. Intel might call their hardware and driver "latesync"; it might do everything in hardware/software completely differently, it might be better or worse, send out better or worse timed frames, but it hits a standard cable, gets to a standard monitor, and the monitor does its own thing no matter where those frames came from.
 
Well that was a surprise. I knew AMD were going to hit back with something but I didn't expect something this soon. The only reason I went to Nvidia on my recent 780 purchase was for Gsync, but I am returning the 780 using DSR as I just don't like Nvidia drivers and the card just feels **** in general.

Cannot wait to get my 290 now at the end of the week hopefully.

Does anyone know if this is likely to work on a Samsung 700D 120hz monitor? As I understand it, the hardware stuff is in the GCN cards not the monitor?

The monitor needs to support the Vblanking feature which, I don't know for certain, but I'm guessing very few screens currently support. It's a new proposed standard; it has made its way into new screens already (the Toshiba laptop supports it, but you wouldn't want to game on that :p ). I strongly suspect most current monitors won't work with it. I also suspect that certain gaming-oriented monitor manufacturers have been persuaded, or dissuaded, from pushing to release monitors with Vblanking as soon as they had perhaps planned.

I have a 700d also, but I really think a new monitor will be required for it. I'm in no particular rush because the difference gets much smaller the higher the framerate you have in games. The benefit of g-sync/freesync will probably follow a roughly exponential curve, making the biggest difference at 30fps, much less difference at 60fps, and by circa 90fps it will be very small.
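As a rough back-of-envelope illustration of why the benefit shrinks at higher frame rates (my own numbers, not anything from AMD or Nvidia): the worst extra wait you can suffer is roughly one frame period, and that period shrinks quickly as fps rises.

[CODE]
# Frame period (ms) at various frame rates; the worst-case extra wait for a
# missed refresh is roughly one of these periods, which is why variable
# refresh matters far more at 30fps than at 120fps.
for fps in (30, 45, 60, 90, 120, 144):
    print(f"{fps:3d} fps -> {1000 / fps:6.2f} ms per frame")
[/CODE]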

Based on the quality of my 700d, I think I'd quite happily wait for a Samsung replacement. They seem to have been quite neutral, not going 3dvision, no sign they went g-sync, so hopefully we'll get a screen for it sooner rather than later.

Basically I'd see any current screens supporting it as a massive bonus, but expect to be looking for a new screen.
 
It seems possible that it's part of the display port spec but monitor makers don't actually have to pay any attention to it, so display port is set up for it already but monitors may not have put anything in place to use it.

I think we'll find out relatively soon, Anand or one of 50 other guys will ask AMD, vesa, someone about it and get an answer.

My guess: it's part of the display port spec but not yet required by Vesa, so many screens have never used it; most new screens will want to do it for power saving reasons. We'll get 120+hz gaming screens with whatever is required included monitor-side and we'll all get it. Changeable refresh rate for power saving has been on the cards for several years, so it's likely everyone was building up to this and someone decided to jump the gun with a non-standard version...


Something more interesting is whether this would come to TVs soon, and whether both consoles support it, which is likely given that Kabini supports it. With TV panels being so much bigger and using so much more power, the push for greener TVs will probably see it pushed forward.
 
You dismiss the needed GPU hardware very easily.:rolleyes:
The bottom line is that the AMD GPU's have supported this for three generations, so they have the hardware necessary.

Until we get more info the rest is all just speculation.

I dismissed nothing; don't put words in my mouth, thanks. The GPU-side hardware is NOT relevant to the monitor-side hardware, and this is not speculation. If a monitor is using a soon-to-be industry standard, AMD could not lock out another Vesa member using it for free, OR a non-member using it for a small cost. They couldn't use AMD's hardware implementation, but they can use the monitor's implementation; it's up to everyone else to decide how to output the signals however they want.

AMD can't take a stranglehold on a Vesa standard, everything up to the output of the gpu is AMD, everything beyond that has to adhere to industry standards.

AMD/Nvidia/Intel all have different hardware but can all output a 120hz signal.... how AMD does it doesn't have to be identical to how Nvidia or Intel do it, that doesn't mean they can stop a screen using an industry standard method from working with other hardware.
 
I know that using comments on a tech article is a pretty poor source (especially since they are translated from another language), but if what they are saying is true then it will make freesync virtually useless for gaming, because it doesn't work very well with a constantly varying refresh rate (which would explain why AMD went with a demo that had a perfectly constant framerate):
http://be.hardware.info/nieuws/38467/ces-amd-toont-gratis-alternatief-voor-g-sync-freesync



I don't pretend to understand the technical details of how vsync and gsync work, and I also freely admit that the source I'm getting this from is a bit crap, but if this is actually the case then it would certainly explain why nvidia went with the gsync route when they must have known about the vesa standard that was in development that freesync is using.

If this guy is talking from his behind then do please carry on as you were :)

He isn't, but he doesn't realise this is what g-sync must do.

If you watch the pendulum demo the key to smoothness isn't variable framerate but the smooth change in frame rate.

Going 60 to 30 to 60fps, means frame times of 16.67 to 33.33ms, to 16.67ms.

Now think v-sync: it drops below 60fps, hits 30fps.... then goes back to 60fps. This is precisely the stutter g-sync attempts to eliminate. It CAN'T do this just by changing the refresh rate; it has to smooth the frame rate. The pendulum demo goes from 60 to 59 to 58fps and so on. This means 16.67ms, then roughly 16.9ms, then 17.2ms between each frame. With a smooth change, that tiny difference (sub-1ms) between frames is what produces the smoothness.
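A quick sanity check of those numbers (just 1000 divided by the frame rate):

[CODE]
# Frame times for a pendulum-demo style ramp: each step between consecutive
# rates is well under 1ms.
for fps in (60, 59, 58, 57):
    print(f"{fps} fps -> {1000 / fps:.2f} ms")   # 16.67, 16.95, 17.24, 17.54
[/CODE]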

You literally can't explain this as possible unless g-sync is predicting the next frame rate. If 10 frames in a row arrive 16.67ms apart, but the next one is at ~30fps and is going to come 32ms later, how do you make that smooth?

Well, you have hardware keeping track: it knows the previous frames came 16.67ms apart, so the only way to maintain smoothness is to decide the biggest gap beyond 16.67ms that is acceptable to stay smooth, say 2ms max, and refresh with the same frame again. So it repeats the refresh 18ms later. Now say the next frame comes in 32ms after the original frame, which is (32 - 18 =) 14ms after that repeat refresh. The hardware has this frame, but it knows the last refresh interval was 18ms... and it knows the last frame took 32ms to produce. So it says: we need it smooth, but we know frames are currently being produced 32ms apart. So it holds this frame until, say, 20ms after the repeat refresh, then shows it, because it wants to stay smooth relative to the previous 18ms interval while working towards that 32ms frame time one small step at a time. When the frame rate increases it's just as important to smooth the frame rate the other way, though I suspect it will take quicker steps there, as even with bigger frame time differences the faster frame rate itself compensates for that.
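To make that concrete, here's a minimal Python sketch of the kind of interval smoothing I'm describing. It's purely my own illustration, not Nvidia's actual algorithm; the 2ms maximum step and the 16.67ms starting interval are assumptions.

[CODE]
def smooth_refreshes(frame_times_ms, max_step_ms=2.0, start_interval_ms=16.67):
    """Toy model: schedule refreshes so the refresh interval never changes by
    more than max_step_ms per refresh.

    frame_times_ms -- absolute times (ms) at which the GPU finishes each frame
    Returns a list of (refresh_time_ms, 'new' or 'repeat').
    """
    pending = list(frame_times_ms)
    interval = start_interval_ms
    t = pending.pop(0)                    # show the first frame as it arrives
    refreshes = [(t, "new")]

    while pending:
        deadline = t + interval + max_step_ms
        if pending[0] <= deadline:
            # New frame in time: never show it earlier than interval - step,
            # so a sudden speed-up is smoothed as well.
            new_t = max(pending.pop(0), t + interval - max_step_ms)
            interval, t = new_t - t, new_t
            refreshes.append((t, "new"))
        else:
            # No frame yet: stretch the interval a little and repeat the
            # previous frame, creeping towards the slower frame rate.
            interval += max_step_ms
            t += interval
            refreshes.append((t, "repeat"))
    return refreshes


# Steady 60fps, then one frame that takes ~33ms, as in the example above.
times = [16.67 * i for i in range(1, 6)] + [16.67 * 5 + 33.3]
for when, kind in smooth_refreshes(times):
    print(f"{when:7.2f} ms  {kind}")
[/CODE]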

Frame smoothing is the absolute main feature of g-sync, not variable frame rate. Frame prediction/tracking will be a monumental part of this, for both companies. Without frame smoothing, g-sync is only as smooth as the frame rate change, which could be perfect or absolutely awful. I'm literally 100% certain g-sync has to do frame smoothing, and it will be doing loads of its own prediction to tell the gpu when it's best to send the next frame.

There will consistently be lots of marketing bull crap from both companies about it, because explaining it simply is what 99.9999% of users want; most won't read what I posted, let alone if Nvidia/AMD tried to explain it to most people. It's significantly more complex than what I've stated, but it's essentially impossible to produce g-sync or an AMD equivalent without a huge amount of calculating and tracking. I think/guess Nvidia has done this in hardware and maybe only since Kepler, likely as an addition to their frame pacing features. Ultimately this is like 80% frame pacing tech (monitoring frame rates and keeping them smooth) with 20% more work matching those changes to refresh rates in the smoothest way possible.

If it needs the hardware that is present in AMD GPU's then it wont work on non AMD GPU's

This is the original post you were responding to and your response

Quote:
Originally Posted by weldon855
its a vesa standard please tell me how you think AMD plan to lock this?
If it needs the hardware that is present in AMD GPU's then it wont work on non AMD GPU's.

Didn't the original article say it was a proposed VESA standard? or have I got confused during the reading of this thread?

It was specifically in reference to how AMD would lock this in to them, and your insistence that if AMD use hardware to make this work then it can't work on non-AMD gpus. This is wrong, as I pointed out.

It's fairly obvious as well; there are pretty much thousands of standards that gpus, cpus, apus, memory and hdds all adhere to, which allow them to communicate with the same messages and enable the same things, yet the hardware that generates those signals is completely different.

I can plug any of a hundred hdds into my computer; many of them will use different algorithms, memory types, controllers and buses, yet they all send the same messages to my mobo and it can read them all despite the hardware in those devices being different. Likewise, one hdd can send and receive data the same way with an Intel or an AMD motherboard, which have entirely different hardware for doing the same task.

If the monitor is using an industry standard, it does not matter how AMD generates their signal, how they determine what frame rate they want, Intel will be able to generate the same signal from their own hardware any way they please and use the same monitor with the same modes.

It's the entire reason for industry standards: to do precisely what you are saying they won't do. So no, you aren't right; AMD wouldn't be able to lock it in by any info we've heard, and no, a monitor with this option wouldn't need an AMD gpu.

Back to hdds again: how can AMD read and send data to an industry standard SATA 3 hdd, and Intel do the same, using different hardware?

No wonder this industry is in such a mess when standards body (ie VESA) is basically run by no one else but those MAD guys (amd). Sometimes someone has to take a charge and make things happen for REAL

Yup, the more things change the more they stay the same.

They're all insane, greed plays a huge part in it.
HDMI usage costs a company $10k a year and $0.04 per device. Sell 100mil monitors with hdmi ports and that is $4million in payments, in one year. Add up every laptop, every monitor, every console, every bluray player, every tv that uses HDMI, multiply by the ports on those devices, and realise what the HDMI guys (I forget who else is in it except Sony, there are others) are taking in, and you realise there are basically hundreds of millions of dollars at stake each year over which format moves forward to become the de facto industry standard. Then you realise why Display Port, which is royalty free (yes, there's a fee to join, but it's not insane for big businesses and is numerous millions less, effectively free versus using something like HDMI), has encountered such fierce opposition and slow adoption.
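For what it's worth, the arithmetic there is easy to check (the $0.04 per device is the figure quoted above; real royalty tiers vary):

[CODE]
# Rough royalty estimate: annual fee plus a per-device charge.
annual_fee = 10_000        # dollars per year (figure quoted above)
per_device = 0.04          # dollars per device (figure quoted above)
devices = 100_000_000      # e.g. 100 million monitors with HDMI ports

print(f"${annual_fee + per_device * devices:,.0f} per year")   # -> $4,010,000
[/CODE]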

AFAIK certification of display port devices has increased by something like 63% in the past year. It's finally getting there: Intel finally threw its support behind it, and Apple is one of the biggest pushers of it outside AMD. It's pretty much the only viable option for 4k (afaik still the only current single-cable connection that can push 4k above 30hz).
 
Frame Pacing, drunkenmaster :) What you're describing in hardware frame tracking I think is Frame Pacing.

Yup, I mentioned it if you read far enough :p

I would be very shocked if both AMD/Nvidia weren't using their implementations of frame pacing monitoring hardware to do most of the work. It will be slightly more complex as it's got to take into account refresh rate to determine which frames to drop, which to push, when to send them, etc. But frame pacing will be the foundation of it for sure.

Someone asked why, if AMD's frame pacing was fine, they'd put it in hardware. I'm absolutely not saying this is why, I just found it funny. Anything will run faster in fixed function than in software. With frame pacing, the tuning would be a pain, but the actual calculations, keeping track of a rate of change and so on, are really pretty simple; it's not hardware intensive, but it could certainly be latency sensitive, and having everything done in hardware on the gpu, without having to come back to the CPU, would certainly save some clocks. Ultimately AMD/Nvidia will always put anything worthwhile into fixed-function hardware; the entire industry has gone that way for anything they can offload.

But people really do need to realise that variable refresh rate on its own can act exactly like bog-standard rubbish v-sync: if a game goes 60-30-60-30 then g-sync would perform exactly the same as v-sync, which it doesn't, and that is the absolute worst case scenario, which is actually where g-sync is at its best.

Frame smoothing is the fundamental tech behind g-sync/freesync; variable framerate is essentially the next step in frame pacing. Frame pacing produces the smooth frame rates, and variable refresh rate monitors allow it to show its full value.

The entire concept of frame pacing for any company is based around monitoring, calculating and prediction, so anyone suggesting this as a downside for AMD doesn't know what they are talking about.

Nvidia's patents linked to in the other thread are precisely about using some fixed function hardware to track the rate of frame rate change to calculate when to update refresh rate.
 
You've missed his point again. He is saying that IF AMD have added hardware and Nvidia haven't, then Nvidia cards will still not be compatible with freesync; the end result will still be that AMD will only work with freesync and Nvidia will only work with gsync. That one is an "open standard" becomes irrelevant IF Nvidia cards lack the hardware.

Well firstly, he was talking about AMD locking this tech in, due to having this "hardware" in their own gpus making it NOT WORK on non-AMD gpus. So that isn't what he was saying, as he was quite specific. Second, Intel already support the same option in hardware that AMD do, by all accounts (I get the gist that if you support eDP, which is currently a superset of DP 1.2, then it's almost certain the hardware supports the feature). There is nothing preventing Nvidia adding this hardware capability to future hardware.

There is EVERYTHING preventing AMD adding the capability to use g-sync into future hardware.

So does the open standard become irrelevant? No, because that is the point of an open standard: it's open. Nvidia can choose to support it whenever the hell they like; choosing not to support something and not letting it work is NOT down to AMD letting or not letting them. It's a free choice for Nvidia, hence, open standard. It is NOT an option for AMD to freely choose to use the g-sync module and Nvidia driver to use g-sync, hence not an open standard.
 
I wasn't aware Laptop LCDs were void of scalers. Makes sense though.

Completely different technology then. Well that's no surprise. Might be to some though.

eDP is a superset of dp 1.2 and doesn't do much extra; it's absolutely not entirely different technology. Think about the source, which also goes on to say dynamic refresh rate is possible today on current display port cables with no update to the vesa standard..... so it is possible then, Mr Nvidia?

30hz @ 33.3ms. The scaler then holds the frame in memory so that it can be compared to the incoming frame. If the frame doesn't arrive in time it simply redraws the last frame. You're working on the assumption that G-Sync isn't fast enough, but then that is why we've got an upgraded scaler surely. One you keep pointing out we don't need.

Your hate is strong.

Seriously, just what? Please explain the hate for Nvidia in saying frame smoothing is what makes g-sync good?

I'm not making any assumption that g-sync isn't fast enough; I literally said nothing close to it, and nothing that can be interpreted as such. Second, explain what happens when the frame rate dips from 60fps to 30fps in your scenario.
G-sync is going along at a 16.67ms frame interval, then the next frame doesn't arrive for 33ms, so it just refreshes the same frame at 33ms, right..... then what, another frame comes along and it refreshes 16.67ms later?

Let me explain what would happen here with v-sync........

(I stopped implying it was the same thing, because what the first part was describing was basically v-sync, and it wouldn't be "g-sync" smooth).

The ENTIRE point of g-sync is that if the next frame appeared after 34ms on a 60hz screen, in your scenario it would refresh at 33ms and then have to wait roughly another 16ms to show that new frame.

If you attempt to smooth out the frame rate by saying "the last 10 frames were 16.67ms apart, so to keep it smooth the next one has to be no later than 18ms", then when no new frame appears in that time you refresh the last one.

So how long after the original frame can it show this new frame now, based on predicting and smoothing the frame rate?

First scenario: frames 16.67ms apart, the same frame refreshed again at 33.33ms, the new frame arrives at 34ms, and the earliest it can be drawn is roughly 16ms later.

In the g-sync/smoothed frame rate scenario: frames at 16.67ms intervals, the timer limit for a missing frame triggers a repeat refresh at 18ms to keep the frame-to-frame difference small. The new frame comes in at 34ms and can be shown 16.67ms after that last refresh, i.e. at about 34.67ms. The gap between the last refresh and the new frame's arrival, which without smoothing was under 1ms, is now 16ms, so the new frame can actually be shown about 0.67ms after it arrives instead of roughly 16ms later.
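Putting rough numbers on those two scenarios (my own toy figures, assuming the panel can refresh again no sooner than 16.67ms after the last refresh):

[CODE]
MIN_REFRESH_MS = 16.67      # assumed fastest the panel can refresh again
frame_arrives = 34.0        # the late frame, measured from the last new frame

# Without interval smoothing: the old frame was only repeated at 33.33ms, so
# the late frame has to wait for the next allowed refresh.
repeat_at = 33.33
shown_at = max(frame_arrives, repeat_at + MIN_REFRESH_MS)
print(f"no smoothing: shown {shown_at - frame_arrives:.2f} ms after arrival")

# With smoothing: the old frame was repeated early, at ~18ms, so the next
# allowed refresh lands just after the late frame arrives.
repeat_at = 18.0
shown_at = max(frame_arrives, repeat_at + MIN_REFRESH_MS)
print(f"smoothed:     shown {shown_at - frame_arrives:.2f} ms after arrival")
[/CODE]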

You can't do g-sync without a hell of a lot of tracking and monitoring to determine when to update and when not to. The entire point of it is bringing frame-pacing style smoothing to the screen as well. There has to be some mechanism for smoothing out the refreshes, because the stutter on screen comes from large changes in frame time, which, when you can control the refresh rate, show up as big differences in refresh times.

Watch the pendulum demo and see the exceptionally smooth frame rate change; that is the entire point. Jumping from 60 to 30 to 60 to 30 is what v-sync does, and with no smoothing g-sync could only do the same; smoothing is the very difference between the two technologies.
 
Why are you depicting milliseconds as if you'd be able to notice the difference between them? (Apart from worst case maybe). Literally you're debunking the technology without seeing it first hand. Its pacing is clearly going to be much better than using V-Sync, given people's reactions from seeing it first hand.

This is me giving you the last word chap.

lol, please ask Nvidia that same question on twitter. Apparently your argument is that milliseconds don't matter because we can't see them; that is golden. Also, in what way am I debunking the technology at all?

Read ANY review on AMD/Nvidia frame pacing and realise the entire thing about it is the same millisecond differences.


It's actually the perceived difference of rates that is important rather than the rates themselves, but you will find that realistically the very biggest ms difference between g-sync and v-sync will be waiting one full frame refresh.

So you made something up I didn't say, again, decided it was pure hate when I said it was smart and good, then decided to base your argument on being unable to see such small ms differences........ despite g-sync being incapable of ever showing a much bigger difference than one frame. It's the smoothness of said frames that it's all about. The reduced latency over v-sync comes from that sub-1ms vs roughly 16ms wait to get the newest drawn frame onto the screen; the smoothness of 16.67ms - 18ms - 16.67ms is actually what "makes" g-sync.

It's the entire point of it. Go and ask about frame time differences on any other forum, Nvidia's own included, see what they say, read some frame pacing reviews on Nvidia, read Nvidia's interviews on frame pacing, and then come back and say a few ms don't matter at all.

Again, because you don't seem to get it: v-sync jumping constantly from 60-30-60fps is going 16.67-33-16.67ms. That is v-sync; the time differences are obviously very small, and apparently... you can see them. The change in frame time IS the stutter; that entire ~16.67ms jump IS the problem. Can't see it? It's the entire thing g-sync is out to change, that 16.67ms change in frame time, and YOU are the one debunking g-sync by suggesting this change in frame time is not perceivable.
 
Why are you depicting milliseconds as if you'd be able to notice the difference between them? (Apart from worst case maybe). Literally you're debunking the technology without seeing it first hand. Its pacing is clearly going to be much better than using V-Sync, given people's reactions from seeing it first hand.

This is me giving you the last word chap.

I'll put this more succinctly for you and others.

Worst case for v-sync is 60fps to 30fps to 60fps; this is the absolute worst case possible for stutter (officially not, but it's the worst case Nvidia has showcased at any g-sync demo for v-sync), with frame times going 16.67 - 33 - 16.67ms.

MjFrosty believes g-sync is crap because you can't see such small times, you can't, he's debunked it, it's all a joke, you can't notice the difference at all.

I'm apparently pro Nvidia because I said the technology is good, but MjFrosty flat out insists v-sync is perfect because you can't notice this difference at all.............

Yup the hate is strong, from one of us, it's pretty clear which one that is.
 
According to this article, it's a fuss about nothing as it can NOT be done on desktop screens using current hardware nor software. Forget Vblank etc.

Gsync module is the only way of doing what it says on the tin outside of laptops. So there is nothing similar actually coming from AMD ever, as it cannot be done, they just want to take the attention out of Gsync with a purposely non-informative demo so people/fanboys can fill in the speculation and not buy a Gsync monitor.

For everything else, there's Masterca.. I mean, for laptops, there's Freesync, but I am also expecting to see gaming laptops with built-in Gsync module.

This is hilarious. Let me quote something from the article that you've based these claims on, shall I.

...generally based on standards like LVDS or eDP (embedded DisplayPort). Desktop monitors use other interfaces, like HDMI and DisplayPort, and typically have a scaler chip situated in the path between the GPU and the panel. As a result, a feature like variable refresh is nearly impossible to implement on a desktop monitor as things now stand.

I don't know if this really needs explaining for you, I'll bold a few words for you.

Do I really need to go further? Okay: "typically have a scaler"... so not all do, okay. "Nearly impossible"... another term for this is.... NOT impossible, i.e. POSSIBLE. And "as things now stand".

As things stood 2 months ago, nothing in the world worked with g-sync. As things stand today isn't particularly relevant to how things stand tomorrow.

eDP/DP are almost identical, and both (afaik) support Vblanking.

But based on an article that DOESN'T claim it's impossible NOW, certainly doesn't say it can't be widely done in the future, and actually has the Nvidia guy SPECIFICALLY SAY that they hope g-sync pushes the industry towards variable refresh rate scalers....... you claim it will never work on desktop?

I think I know which side you're on ;)


Here's a hint, laptop screen, uses eDP, amd support eDP, it uses display port cables. So if you took a "laptop" screen, put it on a stand and used the existing cable connection......errm.... nope, that would work, quite obviously.
 
The real question is how G-Sync is smoothing between pacing, because people around the world are noticing a difference. You, on the other hand, were trying to depict how it's doing it within a ms time frame and how it's holding it between delayed frames, but nobody knows exactly how. If they did, I'm sure certain people would implement something similar. (That's your cue to tell me how AMD could have implemented it years ago and how we don't need it, as you were earlier.)

You're once again clutching at straws and slowly going full circle to say you're now pro G-Sync lol. :rolleyes:

Carry on :)

It's funny because your argument before said nothing of the sort; in fact you specifically said we couldn't notice those kinds of differences.... backing away from that and pretending it was never said, are we, because it was ridiculous nonsense?

"The real question is how G-Sync is smoothing between pacing"

What? Is this supposed to be gibberish pretending to be knowledgeable? Pacing IS smoothing. How are they smoothing between the smoothing? I don't know; the real question is surely how they are smoothing between the smoothing BETWEEN the smoothing, isn't it?

Frame pacing is a pretty well explained field, from both sides, and roughly speaking we know how both do it, because there is really only one rough way to do it... by, you know, smoothing.... the frames......

I wonder if you can be honest. If you had a bunch of frames that were, I dunno, 25ms, 45ms, 30ms, 16ms, 55ms, 25ms, 30ms.... how would you "smooth" these so you got a smooth set of frames rather than a constantly changing rate? Would you attempt to smooth the difference by keeping track of a rough average, holding frames a couple of ms here and dropping a frame there?

I would, because what else would you do, add in 10 duplicate frames at one point and drop 30 frames elsewhere? There is only one goal in frame pacing... pacing the frames. Nvidia/AMD are on record everywhere explaining the goal; how they achieve it doesn't really matter, the goal is what matters. I was explaining the goal, not the method (though due to the Nvidia patents I have a very, very good idea of Nvidia's general method, if not the algorithm or parameters they would use).
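As a hypothetical sketch of that "rough average" idea (my own toy numbers, not AMD's or Nvidia's actual algorithm): keep a paced interval and only let it move a few ms towards each raw frame time, holding fast frames and letting slow ones slip.

[CODE]
def paced_intervals(raw_ms, max_step=3.0):
    # Clamp the change in the presented interval to +/- max_step per frame,
    # instead of jumping to every raw frame time.
    paced = [raw_ms[0]]
    for sample in raw_ms[1:]:
        prev = paced[-1]
        step = max(-max_step, min(max_step, sample - prev))
        paced.append(prev + step)
    return paced

raw = [25, 45, 30, 16, 55, 25, 30]   # the frame times from the post above
print(paced_intervals(raw))          # -> [25, 28.0, 30.0, 27.0, 30.0, 27.0, 30.0]
[/CODE]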

I'm not pro g-sync, and I'm not clutching at straws. You claimed I was debunking it and now you're claiming I'm pro g-sync; you change your argument ENTIRELY every other post to suit yourself, and it's all been gibberish.

My stance hasn't changed since I first saw it. It's got potential to do a lot from 30-60fps, but the benefit will be greatly lessened on a 120-144hz screen (every single review agreed on this point); it will do relatively little above 60fps and almost nothing above 90fps.

It's better to have, than have not, depending on your hardware and screen you may see almost no benefit, or loads.

I don't much care about freesync or g-sync in use, but I like the technology, is that very much okay with you? I have 2x 290s and a 120hz screen; freesync will, outside of the odd insanely demanding game or something that won't run xfire, do very, very little for me, well, nothing, considering the screen I have almost certainly doesn't support it. The technology simply interests me, is that allowed?
 
Like me, I don't think you realised that internal scaling on eDP isn't possible on desktops. I believe what NV is saying is true and that there is no way to manipulate it other than with panels that are VESA eDP 1.3 compliant at notebook level.

Carry on :)

Actually I didn't, but the main point is that Nvidia didn't say "no way", they said nearly impossible as things stand, nothing more or less. G-sync doesn't work on current monitors either... so I'm not sure what the point is.

What do people think a laptop screen is? That it will only work off battery power, or uses different pixels? Package any existing freesync-supporting screen in a desktop stand and you have a freesync monitor..... with AMD hardware capable of running and supporting either...

The options and extra validation aren't there in desktop monitors because no one bothered; this doesn't mean it can't be done, in the slightest. Maybe it will happen entirely because Nvidia pushed them, or maybe it was coming anyway (which I believe is what Vesa/AMD were surprised by), but this is all stuff that is painfully, ridiculously easy to package up as a desktop monitor.

It's a panel, :o, and a controller :o and a cable :o and AMD hardware which supports the cable and output and standard.

My theory a couple of months ago was that it would become industry standard once the monitor makers had a chance to refresh their chips with the required options; now it turns out the controllers and chips exist, and no one has bothered to make a desktop screen out of them. It's easier than I thought it was going to be, not harder or impossible, but easier. They can merely repurpose existing technology rather than come up with new technology.

EDIT: more to the point, freesync/g-sync benefit those with lower frame rates on lower-end hardware the most.

Laptops are where g-sync/freesync will be MOST effective to begin with, and that is where it already works with AMD cards and already released laptops, no Nvidia laptops support g-sync. Funny that.
 
So which other current GPUs will this work on then? If it needs the hardware that is on the AMD GPUs.....oh, would that be none....so my original statement at this point in time is factually correct. Now, if I had written it in 5 months' time, who knows if it would be correct or not, but right now it is correct.

Intel supports the eDP standard required and is likely using the same method; they've demo'd it before. No matter how much you try to keep rephrasing the argument, you were implying AMD could "lock" this in. The key word is lock: AMD could lock no one out of using a free standard on a monitor, while Nvidia CAN lock out whoever they want from g-sync.

On a separate note, there has been a lot of mention of G-Sync being locked into Nvidia (needing the hardware in the monitor), but if this new FreeSync needs the built-in hardware in the AMD GPUs, how is this really any different? It is locked into AMD in the same fashion.
Before anyone says "oh yes, but Nvidia can build the hardware into their next chips", yes they can, but just the same as AMD can support the Nvidia implementation.

You and Greg are now making excuses for why AMD guys are calling the Nvidia version locked, and asking whether the AMD method isn't locked in also.

This is exactly what you said, then repeated in another post. There is no misinterpretation.

It is different. AMD can NOT implement what Nvidia have done without Nvidia allowing it, Nvidia CAN implement what AMD have done without question, with absolutely no limits from AMD, or Intel or anyone else.


AMD's implementation uses an industry standard on the monitor; anyone, at any time they so choose, can access this feature of the monitor. It is not locked to being a feature only workable by AMD gpus, and AMD have no say in who accesses the feature on the monitor. It is absolutely, categorically NOT locked to AMD.

Your original question was patently clear, and your next statement was factually incorrect, as the option to change refresh rate on the monitor does NOT require AMD hardware at all. Your responses to your questions being answered were instantly hostile, and Greg's defence, via claiming everyone misunderstood, has a patently clear agenda as usual.
 
Yes, except that is all complete nonsense. There is literally no question Nvidia has to do a lot of heavy prediction and monitoring work... they already do this, it's called frame pacing.

AMD will need the same, they already have it, it's called frame pacing.

The "you will still see tearing" is COMPLETE rubbish, using vblank, despite guessing at the next frame rate you still have COMPLETE control over when you send your own frame, so it will either update the existing image in the frame buffer or you can choose to send a new one. It will eliminate tearing completely. as for the occasional stutter, it will of course have some, but to claim g-sync eliminates it is also stupid. If you're spinning around at 30fps in a game... you're going to get stutter even with perfect syncing, there is more than one reason for stutter, low fps is a very large reason for stutter during more dramatic movement(jumping/spinning), while running in a straight line will induce inherently little as the image will change much less.


There doesn't seem to be much point in trying to explain it; you all decide for yourselves that Nvidia needs no prediction and will happily jump from 60 to 30fps with no stutter because the frame time change makes absolutely no difference....... even though that behaviour is identical to v-sync.

Nvidia HAS to have frame smoothing, there are no two ways about this, and that means they HAVE to be predicting when the next frame is coming; and predicting frame rate is, mathematically, simple.

We're not talking complex maths, just keeping track of a few averages. The entire point of g-sync is that it limits the RATE of frame time change. This absolutely lends itself very well to frame rate prediction.

If your current frame times are 16.67ms, and you already know that anything beyond 18ms wouldn't be smooth, then you already know you're not going to refresh later than, say, 18.5ms after the previous frame; without any actual calculation you already know the very narrow window you will be in.
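In sketch form (again just an illustration, with made-up tolerance values):

[CODE]
# Given recent frame times, the next refresh is already confined to a narrow
# window around their average; the tolerance values here are invented.
def next_refresh_window(recent_ms, early_tol=1.0, late_tol=1.8):
    avg = sum(recent_ms) / len(recent_ms)
    return (round(avg - early_tol, 2), round(avg + late_tol, 2))

print(next_refresh_window([16.67, 16.67, 16.67]))   # -> (15.67, 18.47)
[/CODE]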

This is IF AMD/monitor makers don't implement even more control than current vblanking allows. I know particular users don't like to think about the future, and the ability to change things.

Just because they used Vblanking in the demo does NOT mean they are limited to this method for ever.

As for a controller, Vblanking is something very basic; it's not a $30 controller, it's one minor little feature in some existing controllers. A monitor needs some kind of controller to, well, control it; no screen has zero control logic, otherwise it wouldn't work. New features are added to screens all the time; this doesn't suddenly cause the controller to cost $50 more, nor does supporting one tiny feature mean adding an entire extra chip.

In terms of software overhead, it's laughable; we're talking about keeping track of some basic frame times and doing some pretty basic calculations, and most of this will be done through frame pacing hardware (if not all of it).

The reality is that not having absolute control of the frame rate isn't perfect, but it's not a huge issue anyway; it can still be made significantly smoother than 30-60fps is on a current screen. Second, I said at the time that the pendulum demo shows an entirely unrealistic frame rate change. When was the last game you saw where it dropped from 60 to 30fps one frame at a time, then went back to 60fps at exactly the same slow, steady rate? Answer: never.

In reality, freesync/g-sync will meet in the middle. In a situation like the pendulum demo g-sync would offer an advantage; how much would frankly come down to AMD's programming. Where the frame rate changes fast, Nvidia will have to drop/delay frames to keep that frame rate change smooth, which is a VERY GOOD THING, because without it you wouldn't have smoothness. But it also means that missing the odd frame here and there will happen for AMD too, and I would think freesync/g-sync will meet in the middle in more real game situations. And when the frame rate is steady as hell... then both should prevent all tearing at a steady frame rate and offer essentially any frame rate, rather than being limited via v-sync and without the lag, so they'll be dead even there as well.
 
It's worth noting that while Intel have some of their own lock-in stuff and try to cash in on things like Thunderbolt (is that the Intel one, or Apple? God knows; it's daft as crap though), Intel works on MANY open standards, and Apple has too. Nvidia is consistently the only big-name player who refuses to play ball, and it slows down gaming advancement, as game devs and hardware companies don't like to support proprietary standards in general.


Nvidia always seem to miss the general point: if in 6 months I have a freesync screen and they are refusing to support anything but g-sync, then if I wanted an Nvidia card I'd have to get a new monitor as well (or would potentially want to), whereas if they simply got together and helped push "freesync" forward in the first place, it would have happened quicker, potentially work better, and be of huge benefit to ALL gamers, including all Nvidia card buyers.

Intel and AMD work on loads of open standards and have a very long history of doing so, Apple made openCL for the industry and handed it off.
 
What are you talking about MjFrosty? they were rolling out when it was ready to be manufactured? Seriously what on earth are you rabbiting on about?

AMD CO-DEVELOPED GDDR5 and RELEASED IT to be made available to all? Bull statement; no, they MADE GDDR5 from the design up along with memory companies, and they could full well have held on to it. They ALSO did GDDR3 and 4, which they let Nvidia freely use.

Likewise, again, Intel, Apple and ARM all work on open industry standards. They have closed, proprietary things as well, but when it's right for the industry they are happy to support open standards. They frequently contribute, move things forward, and provide R&D and backing to foundations working on industry standards; Nvidia invariably do not, and in general get in everyone's way, which hurts every gamer, Nvidia, AMD and Intel alike, to benefit their own pocket.
 