Fermi NDA ends today

It was funny, for a time, in that one thread. There's no reason to drag it out, though. Sure, there have been a couple of tongue-in-cheek comments in this thread, but they came across as really obnoxious in a conversation that for once was largely about graphics cards rather than the people who own them, and it was a pleasant change, for almost a whole page!

I'd disagree; there have already been quite a few nVidiots having a moan in this thread, telling people they're talking crap because it's not positive about nVidia.
 
It was funny, for a time, in that one thread. There's no reason to drag it out, though. Sure, there have been a couple of tongue-in-cheek comments in this thread, but they came across as really obnoxious in a conversation that for once was largely about graphics cards rather than the people who own them, and it was a pleasant change, for almost a whole page!

+1

Why do you continuously fight Nvidia's corner tooth and nail, even during the whole debacle that was the last 4 months? :confused: ;)

At least I tend to fight a corner based on a point of view.
 
And how does this relate to the Fermi NDA at all?

It doesn't. It's about getting one up on one another.

There was a time when you could come into this section of the forum and find the likes of webbo, fornowagain(sp), jokester and many other notable posters all having discussions about the various graphics cards they owned at the time.

There was banter and it was all in good humour, but it never got out of hand. I can't remember a time when it ever descended into the kind of attitude or type of posts being made in this thread.

For the most part I enjoy Mr. Drunkenmaster's posts. They are among the very few worth reading in this part of the forum, probably because he makes very good points in support of his point of view. That is no disrespect to any other frequent poster in this section, but atm his posts stand out, at least to me.

The real graphics card battle is not being fought by Nvidia or AMD/ATI but rather by the enthusiastic supporters of each camp.

Every time a new card is released or specs are revealed, you get the same posters coming in. And time after time, it's the same people whose discussions end up turning into arguments.

Whether this Fermi card is really as good as people are led to believe remains to be seen. But even if it is, whatever people end up buying, you've still got a good card in the shape of the 5870, which I would hardly call slow in the first place.

And if the price is right, which I am hoping, you will get a good card in the shape of Fermi too, so you will have options and not be forced to buy from one manufacturer.

I always fail to see why so many members here don't want this card to be as good as the 5870, if not better, so that it drives ATi to release a card which is better than Fermi and keeps pushing graphics card technology forward, which hopefully benefits us end users.

Sometimes when I read the posts it comes across as 'If the manufacturer of my card can't make a faster gfx card then I don't want the other manufacturer to beat it'.


What Kylew did was bad enough; what Rroff did, in questioning why a fellow member was not banned, was worse still.
 
Wait, have you actually read anything on Fermi? Apart from the clock speeds, the specifications for the potential highest-end core are now very clear:

512 shader cores
64 texture fetch/256 texture filter
48 ROPs
16 'Polymorph' geometry units (each of which includes a tessellation unit)*
384-bit memory bus.

See here:
http://anandtech.com/video/showdoc.aspx?i=3721&p=2

*Given the number of units and the maximal performance increase over Cypress in tessellation (approximately 6x), you can probably guess what inspired them to architect this kind of setup. It will probably work in Nvidia's favour, so good on them; hopefully AMD will think of something to counter it in their next generation.
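
Since the unit counts are known and only the clocks are missing, you can parameterise the clocks and see what the rest of the spec implies. A back-of-the-envelope sketch in Python; every clock value below is a pure guess on my part, not a leak:

```python
# Back-of-the-envelope throughput estimates for the rumoured top Fermi part.
# Unit counts are from the AnandTech piece above; every clock below is a
# GUESS, since Nvidia hasn't confirmed any frequencies.

SHADER_CORES = 512
ROPS = 48
MEM_BUS_BITS = 384

core_clock_mhz = 650       # guess
shader_clock_mhz = 1300    # guess (hot clock, 2x core)
mem_clock_eff_mhz = 4000   # guess (GDDR5 effective)

gflops = SHADER_CORES * 2 * shader_clock_mhz / 1000           # FMA = 2 FLOPs/core/clock
pixel_fill_gps = ROPS * core_clock_mhz / 1000                 # Gpixels/s
bandwidth_gbs = MEM_BUS_BITS / 8 * mem_clock_eff_mhz / 1000   # GB/s

print(f"~{gflops:.0f} GFLOPS, ~{pixel_fill_gps:.0f} Gpix/s, ~{bandwidth_gbs:.0f} GB/s")
```

Swap in whatever clocks you believe and everything scales linearly with them, which is exactly why the final frequencies matter so much.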

I should have said clock speeds really, but still, if those specs are from a card that no one's actually going to be able to buy, then I know I'm not interested; don't know about anyone else.
 
What Kylew did was bad enough; what Rroff did, in questioning why a fellow member was not banned, was worse still.

Sorry, but every time I come onto the forums lately I find a whole load of new, generally obnoxious posts making fun of me or, to a lesser extent, certain other individuals, generally in response to serious posts I've been making... 100% unnecessary and disruptive, and quite often from kylew or instigated by him... it's got beyond a joke.

I tend to be a reactive person, dealing with each individual on their own terms, so if you see me responding to an obnoxious person from a third-person perspective you may get a somewhat one-sided view of me.

I certainly don't go out to make trouble or to make fun of other people... but I will not just lie down and take crap either.
 
 
I should have said clock speeds really, but still, if those specs are from a card that no one's actually going to be able to buy, then I know I'm not interested; don't know about anyone else.

Ah, okay, that's fair enough. I'm not even sure Nvidia knows what clock speeds the cards will be shipping at. :p
 
I'm starting to wonder: the demo cards were supposedly (not confirmed) GTX "360" cards with 448SP and ~1200MHz shader clocks... whereas the 360 was supposedly going to be somewhere around 384 or 416SP with higher (~1600MHz) shader clocks.
 
As everyone's said a million times, without card specs we have NO idea about anything. There's nothing to suggest we'll see a 512SP card at whatever clocks were used in those benchies at all.

I think the problem with non-fixed-function tessellation comes in the devs not easily knowing how much they can add. AMD's implementation pretty much lets the devs know to an exact degree how much tessellation every single card in the series can handle, and therefore they can optimise games knowing exactly how much they can add without harming performance elsewhere. With a variable output ability, and a changing amount of power from one card to another, it will be far harder to scale tessellation.

A game on AMD cards might find it can do all characters to X depth and buildings too, but leave the ground flat for this generation; it will work on most hardware and won't give you changing framerates depending on what area of the game you're in.

This is the problem: it will be very hard, power-wise, for any card to just tessellate every last thing like in the Unigine demo. A fixed level to work to should make it fairly easy to implement smoothly.
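
To make the "fixed level to work to" idea concrete, here's a minimal sketch of how a dev might budget tessellation against a known per-frame triangle limit. All the numbers, names and the greedy priority scheme are invented for illustration; no engine actually works this way:

```python
# Hypothetical sketch: budgeting tessellation against a fixed per-frame
# triangle budget - the situation devs have on fixed-function hardware,
# where every card in the series has the same tessellation power.

BASE_TRIS = {"character": 5_000, "building": 20_000, "ground": 50_000}

def tessellated_tris(base_tris: int, level: int) -> int:
    """Each tessellation level roughly quadruples the triangle count."""
    return base_tris * 4 ** level

def pick_levels(budget: int) -> dict:
    """Raise detail on high-priority object classes first, leaving the
    rest flat - e.g. characters to X depth, ground untouched."""
    levels = {name: 0 for name in BASE_TRIS}
    for name in ("character", "building", "ground"):  # priority order
        while levels[name] < 3:  # cap depth at an arbitrary maximum
            trial = dict(levels, **{name: levels[name] + 1})
            cost = sum(tessellated_tris(BASE_TRIS[n], lv) for n, lv in trial.items())
            if cost > budget:
                break
            levels = trial
    return levels

# A made-up budget every card in the series can hit:
print(pick_levels(budget=500_000))  # {'character': 3, 'building': 1, 'ground': 0}
```

On hardware where tessellation throughput varies from card to card, that single budget constant stops existing, which is exactly the scaling headache described above.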

But as I've said before, it will be great if tessellation becomes a massively used thing, and it's definitely not going to be completely unused as in AMD's case since the 2900XT. Next gen they can know they want tessellation, that all game devs want it (which seems to be the case), and so dedicating an extra X amount of transistors isn't a huge risk at all, while this gen it was.

If both companies move up to 28nm next rather than 32nm, there's going to be a HUGEEE increase in the number of transistors they can stuff in while still ending up with tiny cores compared to this generation. We should be back to good yields, tiny cores and low prices, which simply aren't possible on 40nm like they were at 55nm. Even with a vastly enlarged tessellator unit and a huge bump in raw shader power next gen, they'll be small cores if they skip 32nm.
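
The scaling argument is basically just geometry: ideally, die area goes with the square of the feature-size ratio. A quick sanity check (ideal scaling only; real processes never quite get there):

```python
# Ideal-scaling sanity check: area scales with the square of the
# feature-size ratio. Shows why skipping 32nm is attractive.

def area_scale(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

print(f"40nm -> 32nm: {area_scale(40, 32):.2f}x area")  # ~0.64x
print(f"40nm -> 28nm: {area_scale(40, 28):.2f}x area")  # ~0.49x
```

So the same transistor budget takes roughly half the die area at 28nm, or you can double the transistor count in the same area, minus whatever the process actually delivers.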

Yup, I totally agree.

The two different approaches to tessellation will probably cause lots of problems for developers, and I do hope that they can strike a good balance to get the best from each system.

TSMC have completely ruined the whole 40nm era. I certainly hope that the next node to be used will be 28nm, for as you say that will open up lots of possibilities for both camps.

I would also like to apologise to you personally, drunkenmaster, if you took my earlier response to your comments on duff man's post as a personal attack. It certainly wasn't meant that way, but I'm fairly sure you are sensible enough to see what I was trying to say.
 
Or just ignore it? It's hard but it will help. Someone has to be the bigger man in the end.

You try putting up with a constant barrage of posts like this over the last few days:

http://forums.overclockers.co.uk/showpost.php?p=15760743&postcount=50

http://forums.overclockers.co.uk/showpost.php?p=15758196&postcount=36

http://forums.overclockers.co.uk/showpost.php?p=15690602&postcount=252

http://forums.overclockers.co.uk/showpost.php?p=15756872&postcount=21

That's just a small snippet... 1-2 posts like that can be funny, but it gets very old, childish and tiring after a while...
 
Yup, I totally agree.

The two different approaches to tessellation will probably cause lots of problems for developers, and I do hope that they can strike a good balance to get the best from each system.

TSMC have completely ruined the whole 40nm era. I certainly hope that the next node to be used will be 28nm, for as you say that will open up lots of possibilities for both camps.

I would also like to apologise to you personally, drunkenmaster, if you took my earlier response to your comments on duff man's post as a personal attack. It certainly wasn't meant that way, but I'm fairly sure you are sensible enough to see what I was trying to say.

It shouldn't be too much of a problem for developers, as DirectX is there to standardise this sort of thing. Game code > DirectX 11 > graphics card, and in reverse.
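
That layering is the classic interface/driver split: the game codes against one standard interface and each vendor implements it underneath however its hardware likes. A toy sketch of the idea in Python; the class names are invented and this is obviously not the real Direct3D 11 API:

```python
# Toy model of API-level standardisation: game code sees one interface;
# each vendor's driver implements it its own way. Names are invented.
from abc import ABC, abstractmethod

class D3D11Device(ABC):              # the "DirectX 11" layer
    @abstractmethod
    def draw_tessellated(self, mesh: str, level: int) -> None: ...

class GeForceDriver(D3D11Device):    # hypothetical vendor implementation
    def draw_tessellated(self, mesh, level):
        print(f"[nv] spreading {mesh} @ level {level} across geometry units")

class RadeonDriver(D3D11Device):     # hypothetical vendor implementation
    def draw_tessellated(self, mesh, level):
        print(f"[ati] feeding {mesh} @ level {level} to the fixed-function tessellator")

def render_frame(device: D3D11Device) -> None:
    # "Game code" only ever talks to the standard interface.
    device.draw_tessellated("hero", level=3)

for driver in (GeForceDriver(), RadeonDriver()):
    render_frame(driver)
```

The API guarantees the call works everywhere; what it can't standardise is how fast each implementation runs it, which is where the budgeting problem above comes back in.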

Whoever said the graphics card forum was devolving into simple bickering and fanboyism wasn't wrong. Out of my 5,000 posts, I did about 2,000 of them just in the graphics card forum over around two years, then "that" started, so I've been a casual observer ever since lol.

I'm not sure why, but people tend to take the graphics card jousts between AMD (ATI) and Nvidia far too seriously :confused:
 
You try putting up with a constant barrage of posts like this over the last few days:

http://forums.overclockers.co.uk/showpost.php?p=15756872&postcount=21

That's just a small snippet... 1-2 posts like that can be funny, but it gets very old, childish and tiring after a while...

:p Glad you liked it, I was disappointed when you didn't respond!
But seriously, I was just messing about and having a bit of fun!

And it's only fun because you BITE!

In all honesty though, sorry if it caused offence as you seem a pretty decent chap.
 
:p Glad you liked it, I was disappointed when you didn't respond!
But seriously, I was just messing about and having a bit of fun!

And it's only fun because you BITE!

In all honesty though, sorry if it caused offence as you seem a pretty decent chap.

I actually thought it was funny, but when I come on the forums in the morning and see 7 similar posts, some purely obnoxious, it loses any humour. I didn't reply because I was so tired of it all and would have replied in a manner disproportionate to the post.
 
I actually thought it was funny, but when I come on the forums in the morning and see 7 similar posts, some purely obnoxious, it loses any humour. I didn't reply because I was so tired of it all and would have replied in a manner disproportionate to the post.

Oh get over it you big baby.

You seem to forget the responses that you give. It's all fun and banter at the time, but when you look back on it, you only see what people have said, not what you've said before.

You say things just like that yourself.

Stop thinking I'm trying to do you in and that I don't like you; frankly, I'd rather talk to you than a lot of other members here who blatantly have mental problems.

To put it simply, you can give it but you can't really take it.

Also, please call me "Kyle"; my username's only kylew because "kyle" was taken. Kylew isn't my actual name, the w is just an initial.

As I keep saying anyway, as much as I dislike nVidia, I am looking forward to them getting FERMI out; it'll mean a lot of good things for the whole add-in graphics card sector.

With nVidia on board, we'll get more DX11 games, and hopefully a nice boom in OpenCL as well as DirectCompute.

Then you've got their triple-monitor gaming support, which guarantees more and better surround gaming support for new and upcoming games. And really, with ATi having Eyefinity, nVidia couldn't simply let them have it without competing in some way, especially with it being such an easy feature to add, as I'm sure they've already had something very similar for their Quadro cards.
 
I personally can't wait for Nvidia to get their new cards out, as it should keep ATI honest and bring down the prices of their cards.

Okay, they will be more expensive. From a "surround gaming" point of view there are a few good points about Nvidia's system compared with ATI's. One is that you don't need a monitor with an active DisplayPort, but you need two Fermi cards and they can only support 3 screens, which is too bad :|
 
I personally can't wait for Nvidia to get their new cards out, as it should keep ATI honest and bring down the prices of their cards.

Okay, they will be more expensive. From a "surround gaming" point of view there are a few good points about Nvidia's system compared with ATI's. One is that you don't need a monitor with an active DisplayPort, but you need two Fermi cards and they can only support 3 screens, which is too bad :|

Nah, there are supposed to be FERMI cards with 3 outputs: 2x DVI, 1x HDMI.

One bad aspect is the resolution you're limited to though, 3x 1920x1080.
 
That's better news than what I'd heard here, that you needed two Fermi cards, which is worse than having to buy a DP screen.

Saying that, who says one has to buy a 24-inch screen with DP?
 
That's better news than what I'd heard here, that you needed two Fermi cards, which is worse than having to buy a DP screen.

Saying that, who says one has to buy a 24-inch screen with DP?

I think the nVidia versus ATi surround gaming comparison comes down to a few main points.

nVidia pros:

Don't need an expensive DP>DVI adapter, great if you've got 3 1080p 120Hz screens.

nVidia cons:

Only supports up to 1920x1080x3

ATi pros:

Supports 2560x1600x3

ATi cons:

Requires at least one DP monitor or an expensive DP>DVI adapter, seems to have issues with 120Hz.

I think it's a shame that the nVidia cards can "only" do triple 1080p. Realistically, though, how many more people are likely to run triple 1920x1080 than triple 2560x1600?
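
For a sense of scale, the raw pixel counts work out like this (simple arithmetic, nothing vendor-specific):

```python
# Raw pixel counts for the two surround setups being compared.
setups = {"3x 1920x1080": 3 * 1920 * 1080, "3x 2560x1600": 3 * 2560 * 1600}
for name, pixels in setups.items():
    print(f"{name}: {pixels / 1e6:.1f} Mpix")
# 3x 1920x1080: 6.2 Mpix
# 3x 2560x1600: 12.3 Mpix
```

Triple 2560x1600 is roughly double the pixels to shade every frame, which is arguably multi-card territory anyway.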

Also, I wonder: if you drop the refresh rate back to 60Hz, will the nVidia cards support more than triple 1080p?

Personally, I don't see *that* much of an issue with nVidia's cards "only" doing triple 1080p; the positives outweigh the negatives anyway, and it's a huge increase in support for surround gaming.

All we need now is a nice open-source 3D implementation, as well as physics.

I can't see it being far off. Even if it's just ATi implementing their own 3D for the time being, it'd obviously be better for everyone if there were an industry standard not controlled by any GPU maker.
 