Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
It was funny, for a time, in that one thread. There's no reason to drag it out, though. Sure, there have been a couple of tongue-in-cheek comments in this thread, but they came across as really obnoxious in a conversation that for once was largely about graphics cards rather than those who own those graphics cards, and it was a pleasant change, for almost like, a whole page!
Why do you continuously fight Nvidia's corner tooth and nail, even during the whole debacle that was the last 4 months?
And how does this relate to Fermi NDA at all?
Wait, have you actually read anything on Fermi? Apart from the clock speeds the specifications are very clear now for the potential highest-end core:
512 shader cores
64 texture fetch/256 texture filter
48 ROPs
16 'Polymorph' geometry units (each of which includes a tessellation unit)*
384-bit memory bus.
See here:
http://anandtech.com/video/showdoc.aspx?i=3721&p=2
*Given the number of units and the maximal performance increase over Cypress in tessellation (approximately 6x), you can probably guess what inspired them to architect this kind of setup. It will probably work in Nvidia's favour, so good on them, hopefully AMD will think of something to counter this in their next generation.
What Kylew did was bad enough; what Rroff did, questioning why a fellow member was not banned, was worse still.
I should have said clock speeds really, but still, if those specs are from a card that no one's actually going to be able to buy, then I know I'm not interested; don't know about anyone else.
Snip.
As everyone's said a million times, without card specs we have NO idea about anything. There's nothing to suggest we'll see a 512SP card at whatever clocks were used in those benchmarks at all.
I think the problem with non-fixed-function tessellation is that devs can't easily know how much they can add. AMD's implementation pretty much lets devs know to an exact degree how much tessellation every single card in the series can handle, so they can optimise games knowing exactly how much they can add without harming performance elsewhere. With a variable output ability, and a changing amount of power from one card to another, it will be far harder to scale tessellation.
A game on AMD cards might tessellate all characters to X depth, and buildings too, but leave the ground flat for this generation; it will work on most hardware and won't give you framerates that change depending on what area of the game you're in.
This is the problem: it will be very hard, power-wise, for any card to just tessellate every last thing like in the Unigine demo, whereas a fixed level to work to should make it fairly easy to implement smoothly.
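To make the point concrete, the "fixed budget" idea above could be sketched roughly like this. All names and numbers here are purely illustrative (there's no real engine or driver API being quoted), just to show why a guaranteed per-card tessellation level makes the dev's job a simple clamp rather than per-hardware tuning:

```python
# Hypothetical sketch of the fixed-tessellation-budget idea discussed above.
# The budget value and function names are made up for illustration.

def choose_tess_level(desired_level: int, hw_budget: int) -> int:
    """Clamp the artist's desired tessellation level to the level
    every card in the series is guaranteed to handle at full speed."""
    return min(desired_level, hw_budget)

# With a fixed, known budget, the dev picks levels per object class up
# front and knows performance won't swing from one scene to the next.
GUARANTEED_BUDGET = 8  # e.g. every card in the range handles level 8

characters = choose_tess_level(16, GUARANTEED_BUDGET)  # capped at 8
buildings  = choose_tess_level(8,  GUARANTEED_BUDGET)  # fits the budget
terrain    = choose_tess_level(1,  GUARANTEED_BUDGET)  # left flat
```

With a variable-throughput design you'd instead have to profile each card (or each shader configuration) to find its effective budget, which is exactly the extra work being described.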
But as I've said before, it will be great if tessellation becomes a massively used thing. It's definitely not going to go completely unused, as it did in AMD's case since the 2900 XT, so next gen they'll know they want tessellation, all game devs want it (which seems to be the case), and dedicating an extra X amount of transistors isn't a huge risk at all, while this gen it was.
If both companies move up to 28nm next rather than 32nm, there's going to be a HUGE increase in the number of transistors they can stuff in while still ending up with tiny cores compared to this generation. We should be back to good yields, tiny cores and low prices, which simply aren't possible on 40nm like they were at 55nm. Even with a vastly increased tessellator unit and a huge bump in raw shader power next gen, they'll be small cores if they skip 32nm.
Or just ignore it? It's hard but it will help. Someone has to be the bigger man in the end.
Yup I totally agree.
The two different approaches to tessellation will probably cause lots of problems for developers, and I do hope that they can strike a good balance to get the best from each system.
TSMC have completely ruined the whole 40nm era; I certainly hope that the next node to be used will be 28nm, for as you say that will open up lots of possibilities for both camps.
I would also like to apologise to you personally, drunkenmaster, if you took my earlier response to your comments on duff man's post as a personal attack. It certainly wasn't meant that way, but I'm fairly sure you are sensible enough to see what I was trying to say.
You try putting up with a constant barrage of posts like this over the last few days:
http://forums.overclockers.co.uk/showpost.php?p=15756872&postcount=21
That's just a small snippet... 1-2 posts like that can be funny, but it gets very old, childish and tiring after a while...
Glad you liked it, I was disappointed when you didn't respond!
But seriously, I was just messing about and having a bit of fun!
And it's only fun because you BITE!
In all honesty though, sorry if it caused offence as you seem a pretty decent chap.
I actually thought it was funny - but when I come on the forums in the morning and see 7 similar posts - some purely obnoxious - it loses any humour. I didn't reply because I was so tired of it all and would have replied in a manner disproportionate to the post.
I personally can't wait for Nvidia to get their new cards out, as it should keep ATI honest and bring down the prices of their cards.
Okay, they will be more expensive. From a "surround gaming" point of view there are a few good points about Nvidia's system compared with ATI's. One is that you don't need a monitor with an active DisplayPort, but you do need two Fermi cards and they can only support 3 screens, which is too bad :|
That's better news than what I'd read here, that you needed two Fermi cards, which is worse than having to buy a DP screen.
Saying that, who says one has to buy a 24-inch screen with DP?