
AMD freesync coming soon, no extra costs.... shocker

It was rancid from the baiting OP (which he happily claims he did), and it was never going to be a good thread after he got all angry like that, but at least most have tried to remain civil in spite of the OP flaming nVidia users.

No it wasn't, I didn't and you can't claim as such just because you want to.

In the G-Sync thread I got absolutely attacked by everyone for describing, roughly speaking, what G-Sync does and why AMD would support a similar feature, and I pretty much predicted everything that has happened up to this stage. I got nothing but grief, bull crap and lies from the Nvidia lot.

I said this thread was a "haha, I told you so" at THOSE people and NOT at Nvidia. As usual, the Nvidia guys came in here and couldn't keep their mouths shut, and it was all "you hate Nvidia", "troll", "why do you hate Nvidia".

I merely pointed out that this was gloating about being right at THOSE people who said I was definitely wrong, not at Nvidia. I said precisely nothing about baiting, trolling or anything else. You have claimed that two or three times in this thread, once again purposefully misinterpreting my words first and then deciding to add your own words to them.

For instance, take my claim that Nvidia couldn't remotely patent the idea of dynamic refresh (not least because the idea wasn't new). Andy kept trying to bait me with "hey, I've linked this patent I don't understand that has the words refresh and Nvidia in it, therefore you're wrong". He posted this repeatedly, and when I explained said patents to him (which very clearly aren't what he thought), he decided not to read the explanation and claimed I was talking about another patent.

This is standard for around 10 or so Nvidia guys around here: talk crap about something they don't understand, post a link containing some words that also appear in their argument, and when it's explained to them that the link in question doesn't mean remotely what they think it does, they turn tail and claim they never said whatever it was.
 
OK, you feel there is nothing to debate and I am in agreement, as you totally ignored my answer, which shows nVidia were backing NGOHQ.com to support ATI and take on Intel. I like how people take anything AMD says as gospel but anything else is wrong. At least give some consideration to the idea that nVidia wanted to take on Intel.

It DOESN'T show that. It shows a website with no quotes from anyone at Nvidia, no link to any quote from Nvidia, and no name of any Nvidia person quoted. It is one website (because we've NEVER seen a pro-Nvidia site make crap up, right?) that CLAIMED they thought AMD would help them reverse engineer a technology that would see them sued, and that CLAIMED Nvidia offered to help them...

Nvidia wanted to take on Intel......... and?

Nvidia love their PR, and they love love love their proprietary tech, but we're to believe that Nvidia wanted some random website to get help from AMD to reverse engineer their tech? The idea simply doesn't sound remotely like Nvidia; there was no business reason to do this. They supposedly wanted AMD to reverse engineer their tech rather than GIVE AMD the tech... yeah, that is a great way to make money and licence a product: not giving someone the code and having them steal it instead. That sounds truly and absolutely believable.

You also seem to miss most of the intricacies of such a deal. If AMD paid to licence Physx and put it in all their games, and two years later Nvidia changed Physx or wouldn't renew the licence (much as Intel wouldn't renew Nvidia's chipset licence), then AMD would suddenly have loads of games with Physx in them that won't work, and that screws them completely.

No company in their right mind would licence that kind of tech from another company and become completely reliant on them for support, because once you become dependent on that support it can go away and screw you.

I don't think any company would licence Physx as it was from any company that wasn't at least somewhat independent, and Nvidia would be rock bottom of the list of companies you might licence it from.

There is no case where they licence it and it's a done deal; there are a million different ways for Nvidia to screw them at a later date. I fully don't expect Nvidia to take on or ask for support from Mantle; I DO expect them to support Mantle IF AMD (as with SO many other things) pass it off to an independent foundation at some point in the future. Neither you nor anyone else expects Nvidia to take on Mantle while it's controlled by AMD, and almost every "Nvidia" guy on this forum has said as much, but you think it's a great idea for AMD to put themselves in the same position with Physx? It was NEVER going to happen whatever PR guff Nvidia came out with, NEVER; it would be insane.
 
Dramatise much? You don't help yourself by attacking nVidia and its users at every opportunity. I want to see FreeSync become a reality, the same as I want to see open standards, but FreeSync clearly won't be free (unless AMD cover the cost of the module in each monitor), and it requires DP1.3, which isn't finalised yet, so no guarantees there.

As for "dramatise much": you accused me of rancid baiting, which is frankly pretty bad to start with, then you actually claimed I admitted it... you didn't say it in a jokey way, you simply stated it as fact.

As for attacking Nvidia at every opportunity: no. MjFrosty in particular has accused me of it loads of times, but then in his next post he will say I'm pro G-Sync, when in the previous post he was claiming the exact opposite.

I dislike certain things Nvidia does, that's it; it's not attacking Nvidia to point out things they are doing. As for attacking Nvidia users... I think it's incredibly clear which "team" is in here doing the attacking; in fact, it could not be clearer.

As for the screens, there doesn't need to be an extra module; every screen HAS a module/controller of some sort, and these get updated over the years to support more features, so there is unlikely to be an additional cost. You don't have to have a finalised standard to produce a chip that adheres to the currently proposed standard, or even to a standard that hasn't been proposed yet; it doesn't happen frequently, but it certainly does happen.

The standard is due to be finalised in the next 60-90 days as it is, so there isn't much reason they can't be in the process of producing chips already. Likewise, there is no particular reason you can't release a desktop screen with an existing eDP controller that already supports the feature.
 

Just to point out, as with the article that comes from the same site: where does it say "official"? It links to the same guys SAYING Nvidia said they'd help...

Just think about what you're saying: Nvidia want to help port not Physx, but CUDA, to AMD hardware... really? It's fanboyism/trolling at its best. No one from AMD or Nvidia commented, just a guy saying Nvidia was trying to help port its API to AMD, which would, errm, help AMD compete against them in, for instance, the professional market, Nvidia's highest-margin market.

This is the thing: the idea is utterly daft, and Nvidia has a history of basically lying to everyone, particularly if it can look like the good guy.

Be honest: do you really think Nvidia wanted CUDA ported to AMD hardware? Because that is how they were going to port Physx, by porting CUDA, and frankly the idea is just stupid.

As for Nvidia offering AMD Physx... well, I'm offering you a free pizza; the only clause is that at some time in the future I can charge you whatever I want for that pizza... but it's free now.

Offering, and sincerely offering with the intention of going through with it, aren't the same thing. There is no reason to believe Nvidia was pulling anything other than a PR stunt; they flat out would never want CUDA on AMD cards, there is no question of this. If you never planned to seriously offer your tech, but you could look good by telling everyone you did, it's a bit of a no-brainer.

Secondly, why is it you/most of the Nvidia guys think it's insane for Nvidia to jump on board Mantle while AMD controls it, but you think it's completely logical for AMD to jump on board Physx while Nvidia controls it?

The difference is hypocrisy. I in no way expect Nvidia to support Mantle while AMD run it; if AMD work on it and then pass it off as an industry standard, I think Nvidia should support it and shouldn't have any issue doing so.

AMD has a history of developing things like this and then making them open; Nvidia has no history of this AT ALL. And considering Nvidia went out of their way to disable Physx on Nvidia hardware if the main card was an AMD card, I think it's patently clear to anyone even slightly level-headed how much BS Nvidia pushed out over their intention to share Physx... they wouldn't even share it with people who had bought Nvidia cards if the setup didn't meet their conditions.
 
@tommybhoy

Good refresher on past events. It's amazing how such conclusive details can be forgotten over time, and years later others put forth claims that would have people believe otherwise, counting on that lack of memory.

Forgotten, awwww.... sweet, :p

I believe someone, I think it may have been Greg, said something about, let's see, not understanding the level of denial shown sometimes?

It's funny how the "Nvidia offered AMD Physx" argument in particular gets brought up every couple of months without fail to bash AMD over the head with, when it's such a blatantly stupid situation. A couple of Nvidia people just say they offered it (and they might have: "here AMD, we'll licence Physx to you for 20 billion dollars...", pinky to the corner of the mouth, Dr Evil style), and so it's obviously true, and obviously AMD's fault.

Nvidia's entire history of locking in everything like Physx, the fact that they went out and bought Ageia in the first place, the very idea that they would help someone port CUDA to AMD architecture... it's all completely ridiculous, but some people push it around like fact. AMD wouldn't have offered Nvidia Physx if the roles were reversed, and they are a nicer company; if the roles were reversed they would have opened it up and asked Nvidia to join some group to push it forward.

AMD trusted Intel more with Havok than Nvidia with Physx, that is how bad Nvidia are, ffs... Again, that's because Intel has a long history of working with standards to help the industry; sure, they have a long history of horrendously dodgy behaviour too, but they are fairly honest about where and how they'll cheat you :p

Perhaps we need a sticky in the graphics forum, debunked Nvidia arguments to make these threads shorter ;)
 
So I'm curious about the statement where shifts from say 60 to 30 FPS will be done incrementally. Is there anything documented to say this? Because I find myself puzzled by said scenario.

No documentation but I am a self proclaimed genius :p

Seriously though, they haven't (and maybe won't) talk about how they do it, but essentially the smoothness can only come from frame smoothing. Remember that Nvidia have been talking about frame pacing and adding more and more hardware to monitor and control it for what, 3-4 years now. Both AMD and Nvidia, with frame pacing, hold frames and drop frames when it means a smoother experience. The converse of this was AMD's Crossfire pushing out eleventy billion frames, plenty of which weren't actually being seen, and the image was less smooth. Now we see fewer frames, drop quite a few, but we get a much smoother game from it.

It comes from Nvidia patents that talk quite specifically about monitoring the rate of change of the frame rate to determine what refresh rate to use, and from just coming to a logical conclusion: if the frame rate was jumping around 60-30-60fps (just about the worst-case scenario for G-Sync and "normal" for v-sync), then without frame smoothing you would expect both to look identical, because both would be updating frames at exactly the same times (with one extra refresh in the middle that has no effect, in the case of v-sync). That sudden jump in frame times is where the stutter comes from with v-sync.
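To make that concrete, here's a minimal sketch (my own illustration, nothing from Nvidia's code or the patents) of why the displayed frame times jump around under v-sync when render times hover near the 60fps budget:

```python
import math

# With v-sync on a fixed 60Hz panel a frame can only appear on a vblank, so
# any render time over 16.7ms snaps the *displayed* frame time to 33.3ms.
# That sudden 16.7ms jump in frame-to-frame timing is the v-sync stutter.

REFRESH_MS = 1000 / 60  # 16.67ms per refresh at 60Hz

def displayed_frame_time(render_ms: float) -> float:
    """Round a render time up to the next whole refresh interval."""
    refreshes = max(1, math.ceil(render_ms / REFRESH_MS))
    return refreshes * REFRESH_MS

# Render times hovering around the 60fps budget of 16.67ms...
for render_ms in [15.0, 16.0, 17.5, 16.2, 18.0, 15.5]:
    shown = displayed_frame_time(render_ms)
    print(f"rendered in {render_ms:5.1f}ms -> shown after {shown:5.2f}ms")
# The displayed times flip between ~16.67ms and ~33.33ms, a 2x swing frame to
# frame, even though the render times only varied by a couple of milliseconds.
```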


If your game is running at 60 FPS (or any number for that matter), then it has a sudden drop to 30 FPS - what you're implying is that G-Sync will reduce the refresh rate slowly (to maintain smoothness), but if content is being rendered at 30 FPS, how does one display at 59Hz and so on?
My obvious question then is, wouldn't this introduce input lag, which is the very thing G-Sync aims to eliminate?

Every company likes to make bold claims, they all do it, and most companies don't like explaining things in great detail to average users. But claiming it eliminates latency is a bit daft; it makes latency MUCH smaller than triple-buffered v-sync, for sure. Latency won't be an issue: there will be some, but less than basically most/all other methods, which is a non-issue. G-Sync really isn't there to eliminate latency, to be honest; it's 98% about smoothness in situations where smoothness is usually an issue.

Without being good at making pretty graphs/tables it's really hard to describe, but essentially: dropping frames.

Realistically, the demo they showed was a very slow frame rate change, but the reality is that the 16.67ms frame time change that induces stutter with v-sync is noticeable, while the probably 0.2-0.5ms frame time changes in the pendulum demo weren't. There will be a sweet spot in there that allows for a faster frame rate change with 99% of the smoothness. I.e. 0.2ms awesome, 2ms only a fraction worse, 4ms noticeably worse but still decent, 8ms pretty meh, 10ms crappy, and 16.67ms woeful.

So if the frame rate went from 60 to 30fps, I'd expect a quicker change than in the demo, around some sweet spot where you still get 95% of the smoothness, but where the quicker change reduces the number of dropped/delayed frames and gets to the new frame rate sooner.

I mean, going from 60 to 30fps in 0.2ms frame-time steps is around 84 frames, or roughly two seconds. If you could barely tell the difference at 4ms frame-time changes, then you'd be adjusted in about five frames, well under a third of a second.
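If anyone wants to sanity-check that arithmetic, here's a rough sketch; the fixed per-frame step is purely my assumption about how a smoothed transition could work, not anything Nvidia have documented:

```python
# Step the frame time from the 60fps value (16.67ms) towards the 30fps value
# (33.33ms) in fixed increments and count how many frames / how long the
# transition takes for different step sizes.

def transition(start_fps: float, end_fps: float, step_ms: float):
    frame_time = 1000 / start_fps
    target = 1000 / end_fps
    frames, elapsed_ms = 0, 0.0
    while frame_time < target:
        frame_time = min(frame_time + step_ms, target)
        frames += 1
        elapsed_ms += frame_time
    return frames, elapsed_ms / 1000

for step in [0.2, 2.0, 4.0, 8.0]:
    frames, secs = transition(60, 30, step)
    print(f"{step:4.1f}ms steps: {frames:3d} frames, ~{secs:.2f}s to settle at 30fps")
# 0.2ms steps take ~84 frames (~2.1s); 4ms steps take 5 frames (~0.14s).
```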

If it does indeed work this way then I am puzzled, G-Sync does not have to smooth out frame rate transitioning for it to be smooth (which is basically no input lag and no tearing/stutter) - they could simply drop to 30FPS and display at 30Hz?

As above, people really need to understand where the stutter from v-sync comes from: the stutter they're eliminating is the change in frame times. It's what they've been fighting with frame pacing for years; it is THE key to smoothness. V-sync already drops to 30fps instantly, and that is the fundamental problem with it.


Oh and I know they showed the pendulum demo with FPS slowly dropping unrealistically, but it was my impression that scenario was there to demo the tearing-free animation and nothing more. (In fact, if I remember correctly, that's exactly what they were pointing out as the FPS began to fall, but feel free to correct me if otherwise.)

G-Sync would be tear-free full stop, no matter when it updates; it is by design able to prevent the screen buffer updating midway through the screen refreshing, which is how you get tearing. The slow change in frame rate didn't have any effect on tearing; a significantly faster change in frame rate would not tear under G-Sync and would (probably) tear even worse without v-sync.

The key to the demo was the smooth way the frame rate changed.
 
In terms of which cards will support gaming FreeSync, this likely has something to do with hardware-implemented frame pacing and possibly the XDMA implementation. Don't forget that if every GPU from here forward has hardware frame pacing, and the people dumping money on new monitors will mostly be on newer GPUs (particularly with the shift towards higher-res screens finally happening), then ultimately programming for current and future hardware becomes sensible, and writing something new to support old hardware that needs different code becomes hard to justify.

Look at how much effort MS put into the XO chip to enable very low-latency processing for Kinect: latency-dependent things are often down to the hardware, and thus implementing good frame pacing (which is ENTIRELY prediction-based for Nvidia and AMD) is hard to do in software due to latency problems.

Anyway, software isn't always viable for extremely latency-sensitive situations like frame pacing; if it takes 3ms longer, that is the difference between missing a frame update and not. It's make or break.

AMD said that variable refresh has been doable for a while, and it has because... it's ridiculously easy.

Monitor side, we are talking about updating the screen once the frame buffer is full, or counting down a specified time before updating again; both insanely simple.

Cable side, DP simply has an extra channel, a simple piece of copper that can send a signal; that's it, nothing complex. It's just a case of monitors committing to listening to that signal. GPU side, the variable refresh itself is literally just a case of having that extra channel hooked up to the GPU. It's the same way some GPUs have a DVI output that is missing the analogue pass-through: the pins/holes are there, they just aren't connected. So AMD, some generations ago, made their GPUs capable of sending the required message down that cable; this is trivial stuff.
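As a toy illustration of that monitor-side behaviour (my own sketch, not anyone's firmware; the 144Hz and 30Hz panel limits are just assumed numbers):

```python
MIN_INTERVAL_MS = 1000 / 144   # assumed fastest refresh the panel allows
MAX_INTERVAL_MS = 1000 / 30    # assumed slowest refresh before it must repeat a frame

def panel_wait(next_frame_in_ms: float) -> float:
    """How long the panel waits before refreshing, given when the next frame arrives."""
    if next_frame_in_ms <= MIN_INTERVAL_MS:
        return MIN_INTERVAL_MS   # frame arrived too fast: hold it until the panel can refresh
    if next_frame_in_ms >= MAX_INTERVAL_MS:
        return MAX_INTERVAL_MS   # no new frame in time: refresh again with the old one
    return next_frame_in_ms      # otherwise refresh the moment the frame lands

for arrival in [5.0, 12.0, 20.0, 45.0]:
    print(f"next frame in {arrival:5.1f}ms -> panel refreshes after {panel_wait(arrival):5.2f}ms")
```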

Variable refresh rate itself does not provide smooth gaming; frame pacing is the MUCH more complex side of this, and Nvidia/AMD can do it in hardware, or do it less well in software.

Nvidia has multiple patents, linked in the original G-Sync threads with the incorrect claim that they were patents on variable refresh; they aren't. They are patents on Nvidia's method for monitoring the rate of change of the frame rate. Not the current frame rate, but how big the change in frame times is. That's critical to smooth gaming and to G-Sync/FreeSync.

By the looks of things, and judging by which GPUs AMD is doing gaming FreeSync for, it's coupled to which GPUs have hardware frame pacing. On Nvidia this means some dedicated transistors on die running hardware-accelerated algorithms: from the patents, they basically compare one frame to the last, pixel by pixel, and judge how fast the rate of change is by how much of the image has changed, then keep a buffer of the previous X frames and a running guess at the next frame time.
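A minimal sketch of what that running guess might look like; the class name, buffer size and the simple average-plus-trend estimate are my own assumptions rather than anything lifted from the patents:

```python
from collections import deque

class FrameTimePredictor:
    """Keep a short history of frame times and guess the next one."""

    def __init__(self, history: int = 8):
        self.times = deque(maxlen=history)   # last N frame times in ms

    def record(self, frame_ms: float) -> None:
        self.times.append(frame_ms)

    def predict(self) -> float:
        """Recent average plus the average frame-to-frame change (the trend)."""
        if not self.times:
            return 1000 / 60                  # assume 60fps until we know better
        avg = sum(self.times) / len(self.times)
        trend = 0.0
        if len(self.times) >= 2:
            pairs = list(self.times)
            deltas = [b - a for a, b in zip(pairs, pairs[1:])]
            trend = sum(deltas) / len(deltas)
        return avg + trend

predictor = FrameTimePredictor()
for t in [16.7, 16.9, 17.4, 18.3, 19.8, 22.0]:   # frame times creeping upwards
    predictor.record(t)
print(f"predicted next frame time: {predictor.predict():.2f}ms")
```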

Anyone that thinks frame pacing for either company isn't massively and deeply involved in prediction of frame time is incredibly badly mistaken.

It's a shame older-gen cards don't support it yet. It's possible that in the future they can improve the software enough to get the latency to where it needs to be, but software-run comparisons will NEVER beat a piece of dedicated circuitry that doesn't need to go back to the software to be checked on. It may get low enough to be fine, but hardware will always be better.
 
Everywhere I look I seem to see reports of replacement Swifts. In the AnandTech review, which basically said the screen flickers below the 40-45fps/Hz range, a lot of the comments are from people who have gone through anything from two to five screens. People on OcUK are reporting lots of issues as well.

Issues with new tech on a sensibly priced screen are one thing; a huge number of issues on a premium model where they are gouging the living hell out of everyone is unacceptable. AnandTech points out that the same size/res will get you an IPS panel at half the price and that the $800 cost is ridiculous.

It really is about users who will just pay what is asked rather than be sensible. If people simply said no to the Swift in the £600-700 range, the price would drop. More and more, companies just chance a big price at launch and see if anyone will pay it anyway; they do, and it ruins it for everyone, as the company then finds no reason to drop the price. That screen absolutely doesn't cost anywhere near that to make; they are making a disturbing margin on it, all because some people have no self-control.

The Titan Z was Nvidia taking their Titan pricing another step further; at least people finally baulked at the price... Nvidia, not long after, dropped it by what, $1000... we have absolute proof that if you tell a company where to stick their stupid prices they will back down and drop them, yet people still just give in.

The Swift should have been in the £400-500 bracket, absolute max, AND the apparent quality-control issues shouldn't have been nearly as bad as they were. Even down to the backlight not being particularly uniform: for an outrageously priced premium screen, it's offering the out-of-the-box calibration of a crappy £200 TN panel.
 