
Fermi delayed till May

You do realize that the forum's users are actual real people, and that Nvidia is a faceless corporation, right?

I know it might seem harsh in isolation... but it's not like he's not been making similar digs directed at me for the past 2-3 years... I have a dig back and all hell breaks loose lol...
 
Thing is, now that they have started making ATI cards, will Nvidia be happy letting them continue making those cards as well when their own do surface to market?

If yields are anywhere near as bad as they are supposed to be, they could only supply a tiny fraction of what's needed to keep the companies in business.

They will be in no position to use the usual strong-arm tactics, and I think a rebrand-a-thon and some good old Nvidia marketing genius may be on the cards, or maybe just handing out some compensation. They are pretty good at that, TBH.
 
The point I was working my way to is that with severely overdue products coming to a market with fewer partners to push them at more varied prices, it's going to be a bit of an uphill struggle for Nvidia, no?

Instead of Fermi being the product to get them back in the game, I think it's going to be its successor that does it. I'm not suggesting for one second that a massive corporation like Nvidia would be crippled in the market by all of this, but it has to be hurting them a lot more than they had planned?
 
I think what's best for now is to stop waiting for Fermi; upgrade if need be, and if not, enjoy your games. Well, it's what I'm going to do :p

Oh wait, what good games? >_>

Who cares if COD4 runs at 120 frames per second and your old GPU only manages 100? It's still going to be smooth, still does what you want. More of a brag factor if you ask me :p

Good point. And Crysis 2 is going to be an XB360 game? WTF? :mad:

I've seen a lot of people mention that moving from 40 to 28nm would make the G100 suddenly viable. What is it about the 40nm process which is a problem, and why will this issue evaporate with 28nm?

Is it simply a case of lower power consumption and smaller die sizes leading to better yields, or is 40nm just uniquely awful?

Wow, a post about actual videocard related topics.
 
The point I was working my way to is that with severely overdue products coming to a market with fewer partners to push them at more varied prices, it's going to be a bit of an uphill struggle for Nvidia, no?

Instead of Fermi being the product to get them back in the game, I think it's going to be its successor that does it. I'm not suggesting for one second that a massive corporation like Nvidia would be crippled in the market by all of this, but it has to be hurting them a lot more than they had planned?

They could probably make up quite a large loss in their graphics card division by supplementing it from elsewhere, but that will depend on how much they are set to lose. Would this bankrupt them? Not too sure, but I'll bet Intel will be rubbing their hands together.
 
I've seen a lot of people mention that moving from 40 to 28nm would make the G100 suddenly viable. What is it about the 40nm process which is a problem, and why will this issue evaporate with 28nm?

Is it simply a case of lower power consumption and smaller die sizes leading to better yields, or is 40nm just uniquely awful?

Nothing at all. People stupidly, very stupidly, think that the CURRENT GF100 on 28nm would be great. The problem is, AMD will go wherever they go on 28nm; let's say they double shaders again. Well, Nvidia will need a doubling of shaders as well, plus the other bits, which will mean a VERY similar die size on 28nm, with ALL the problems the 40nm process has.

Yes, if Nvidia made an exact GF100 on 28nm it would be fine, but it would basically be a midrange core, not a high-end one; it would be half the size it is now and still very difficult to make.

So the main issue is that people completely ignore that whatever is in the works for 28nm, let's call it a GF200, will have double the transistors of the GF100, and therefore see no real improvement on 28nm, just all the same issues. Even with a better design, it's likely to sacrifice shaders and power, or become EVEN bigger. The 5870 is bigger than planned due to design sacrifices made to accommodate TSMC's crappy process. Nvidia didn't make those sacrifices and can barely make their cores now; they will need to incorporate them in their 28nm design, as all the biggest problems in TSMC's 40nm process are still there in its 28nm process. So Nvidia literally have no choice but to sacrifice some die space to bus/shaders/rendering cells, or simply add die size with the modifications. The design, large and inefficient, is simply not suitable for large-scale production these days; it's why everyone else on earth has gone for small sizes and more efficiency rather than brute force.

One day Nvidia will learn that lesson.
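The area argument above is easy to sanity-check with rough numbers. A sketch, assuming an ideal full shrink (area scales with the square of the feature size) and a GF100-class die of roughly 530 mm², which is an illustrative figure, not an official one:

```python
# Sketch of the die-size argument: an ideal shrink from 40nm to 28nm
# scales linear features by 28/40, so area scales by (28/40)^2 ~ 0.49.
# Doubling the transistor count then lands you back near the original size.
# The 530 mm^2 starting area is an assumption for illustration only.

def shrink_area(area_mm2, old_nm=40, new_nm=28):
    """Best-case die area after an ideal process shrink."""
    return area_mm2 * (new_nm / old_nm) ** 2

gf100_40nm = 530.0                       # assumed GF100-class area on 40nm
gf100_28nm = shrink_area(gf100_40nm)     # same chip, ideally shrunk
doubled_28nm = 2 * gf100_28nm            # hypothetical "GF200": 2x transistors

print(f"GF100 shrunk to 28nm:  ~{gf100_28nm:.0f} mm^2")   # roughly half the size
print(f"Doubled-up 28nm part: ~{doubled_28nm:.0f} mm^2")  # back near the original
```

So under these assumptions a straight shrink gives a midrange-sized die, while a doubled-up successor ends up almost exactly where GF100 started.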

The physical die size of the G100 is pretty much borderline to what is viable to produce at substantial yields on a 40nm process.

Yes, it is. The problem is, their next core would obviously, like every one before it, be designed with roughly double the shader power added, meaning double the transistors, give or take, as I said. The plan will be a 6-billion-transistor core on 28nm, not a high-end 3-billion-transistor core. GF100 will never be made at 28nm, ever.
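The yield point can be illustrated with a simple Poisson defect model, where the fraction of defect-free dies falls off exponentially with die area. The defect density and die areas below are made-up illustrative numbers, not TSMC data:

```python
import math

def poisson_yield(area_mm2, defects_per_mm2):
    """Fraction of dies with zero fatal defects under a Poisson model."""
    return math.exp(-defects_per_mm2 * area_mm2)

d0 = 0.004          # assumed fatal defect density (defects/mm^2), illustrative
small_die = 334.0   # roughly 5870-class die area (approximate)
big_die = 530.0     # roughly GF100-class die area (assumed)

print(f"small die yield: {poisson_yield(small_die, d0):.0%}")
print(f"big die yield:   {poisson_yield(big_die, d0):.0%}")
```

Whatever the real numbers are, the exponential means a die that is ~60% larger yields far less than 60% worse, which is why big cores on a flaky process hurt so much.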
 
lol, I made a harmless joke about how you were all big keyboard warriors and you're still holding it against me?

Holding it against you? I think not.

It's forum banter.

You may dislike me and others; I wouldn't go so far as to do the same myself.

As I have said before, I actually don't dislike you, I don't even know you enough to dislike you.

We have petty disagreements and banter, that's all it is.

You should know that I give as good as I get, but it's just good fun really.

As I've said already, I only said something because whilst you were dishing out the banter, you weren't taking retorts very well.
 
:mad: it better not! :mad:


That's why us PC gamers can laugh in the face of console fanboys and say their platform is inferior (it is :p)

They've aimed CryEngine 3 at all "platforms".

That's PC, 360 and PS3.

However, if you've seen Crysis 2 videos for PC, you'll see that they're definitely not skimping on it.
 
Nothing at all. People stupidly, very stupidly, think that the CURRENT GF100 on 28nm would be great. The problem is, AMD will go wherever they go on 28nm; let's say they double shaders again. Well, Nvidia will need a doubling of shaders as well, plus the other bits, which will mean a VERY similar die size on 28nm, with ALL the problems the 40nm process has.

Yes, if Nvidia made an exact GF100 on 28nm it would be fine, but it would basically be a midrange core, not a high-end one; it would be half the size it is now and still very difficult to make.

So the main issue is that people completely ignore that whatever is in the works for 28nm, let's call it a GF200, will have double the transistors of the GF100, and therefore see no real improvement on 28nm, just all the same issues. Even with a better design, it's likely to sacrifice shaders and power, or become EVEN bigger. The 5870 is bigger than planned due to design sacrifices made to accommodate TSMC's crappy process. Nvidia didn't make those sacrifices and can barely make their cores now; they will need to incorporate them in their 28nm design, as all the biggest problems in TSMC's 40nm process are still there in its 28nm process. So Nvidia literally have no choice but to sacrifice some die space to bus/shaders/rendering cells, or simply add die size with the modifications. The design, large and inefficient, is simply not suitable for large-scale production these days; it's why everyone else on earth has gone for small sizes and more efficiency rather than brute force.

One day Nvidia will learn that lesson.



Yes, it is. The problem is, their next core would obviously, like every one before it, be designed with roughly double the shader power added, meaning double the transistors, give or take, as I said. The plan will be a 6-billion-transistor core on 28nm, not a high-end 3-billion-transistor core. GF100 will never be made at 28nm, ever.

What I'm finding worrying is how many issues they're having so close to release.

Considering ATi and nVidia claim to be working on multiple generations of hardware at once, don't the issues nVidia are having with Fermi indicate that they haven't been doing much at all?

Surely all these manufacturing issues should have happened around early to mid 2009?

I'm pretty sure ATi had fully working engineering samples of their Evergreen GPUs floating about around January 2009?

It's bordering on 6 months ago now that Jen Hsun showed off that Fermi dummy.

What happened to the "up to 3 generations at a time"?

Sounds like they've had some issues that have prevented them from doing much at all, or they've just been plain lazy, resting on their reputation for sales?
 
With separate dev teams as well, so the PC version should truly live up to its platform :D

I do admit that I was very impressed by the Crysis 2 footage, even more impressed that it was running at 7680x1600 (Eyefinity 30") on a 5870.
 
Sounds like they've had some issues that have prevented them from doing much at all, or they've just been plain lazy, resting on their reputation for sales?

The problem for nVidia is... they had issues with the design that was supposed to be on 40nm (212) and had to pull forward their next-gen design (GF100), which was never supposed to be on 40nm back when it was pencilled out. Despite what drunkenmaster says, the design will work a lot better on sub-40nm, and ATI's next generation (Northern Islands) will be far more like Fermi than anything they've designed before.
 