Anyone explain ATI & NVidia fanboys?

You're making a factual claim about the costs involved in implementing DirectX 10.1. When you call that claim an 'opinion', all you're really saying is "this is something I believe".

It doesn't absolve you from the need to present evidence to justify your belief. And that's all people are asking in this thread: where is the cost of implementing DirectX 10.1 documented, i.e. where is the source from which you obtained the cost estimate? You did call it an "approximation", after all, rather than a "total stab in the dark".


If you don't know, then it's not an approximation; it's a total guess. Why isn't the cost 15% or 20% or 35% or 75% or 500%? Can you give a reason for preferring any one of these figures over the others? If you can't, then you don't know any more than anyone else here.

This is how Rroff rolls. He makes claims he can't back up and then changes his opinion to suit the responses he receives.
 
I wasn't talking about implementing DX10.1 - I was talking about the original proposed DX10 specification.



Ok, a guess then, whatever. I have a reasonable idea of the costs involved: as you increase the complexity of your core, your yield per wafer drops and costs go up, and they go up again because you have to spend a LOT more time developing your drivers and do far more cross-testing... your software development and testing workload increases exponentially with added features.
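
(To put rough numbers on the kind of scaling being argued about here: a minimal sketch using a textbook Poisson defect-density yield model. Every figure below is made up for illustration; none of it comes from ATi or nVidia.)

# Hypothetical illustration only: how die area feeds into cost per good die.
import math

def cost_per_good_die(die_area_mm2, wafer_cost=5000.0,
                      wafer_area_mm2=70686.0,        # 300 mm wafer, ignoring edge loss
                      defect_density_per_mm2=0.002):
    # Poisson yield model: chance a die has zero defects = exp(-expected defects per die)
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    yield_fraction = math.exp(-defect_density_per_mm2 * die_area_mm2)
    return wafer_cost / (dies_per_wafer * yield_fraction)

for area in (200, 300, 450):   # a bigger, more complex core
    print(f"{area} mm^2 die -> roughly {cost_per_good_die(area):.0f} cost units per good die")

(With these invented numbers the cost per good die more than triples between a 200 mm^2 and a 450 mm^2 core, which is the shape of the argument, even if nobody in this thread actually knows the real figures.)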

You haven't got a clue.
 
So you really believe they could have implemented the original DX10 specification on an 80/90nm process without a significant increase in cost?
 
The point is, you can tell by looking at the specification of the 2900 that what DX11 is now is what DX10 should have been in the first place.

There are too many similarities to disregard this.

Making claims that it would have cost the end user 3x as much is complete BS. Are you honestly saying that an 8800GTX, had it kept to the original DX10 spec, would have been priced at £1200?

If you do, then you're talking even more nonsense than I thought.

There's thinking that you're clued up about graphics hardware and then there's making things up.

If ATi were able to incorporate the majority of the original features, then how can you claim it would have been too expensive for nVidia to do so? What makes it worse still is that ATi then halved the price of the 2900s when they moved R600 over to a 55nm process and rebranded it as the 3800s.
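
(Rough arithmetic on why a shrink like 80nm to 55nm cuts cost, assuming an idealised optical shrink, which real chips never achieve exactly. This is illustrative maths, not ATi's actual numbers.)

# Idealised shrink maths; real dies don't scale perfectly with the process node.
old_node, new_node = 80.0, 55.0              # nm
linear_shrink = new_node / old_node          # ~0.69
area_factor = linear_shrink ** 2             # ~0.47: under half the silicon per chip
print(f"Ideal die area after shrink: {area_factor:.2f}x the original")
print(f"Candidate dies per wafer: roughly {1 / area_factor:.1f}x as many")
# A smaller die also catches fewer defects, so yield per die improves on top of that.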

Whatever nVidia is doing, I don't think cost is a factor. I wouldn't be surprised if it's as simple as them not being able to get it working.

DX10.1 has demonstrated performance improvements that nVidia surely shouldn't ignore but should implement in their hardware. nVidia being nVidia though, rather than innovate, they'll just pay off whoever it takes to cripple the competition.

They spread propaganda about DX10.1 being useless, despite the performance gains we were all aware of in Assassin's Creed. nVidia couldn't have that, so they got it removed.

DX10.1 was a good thing; nVidia would have you think otherwise to make sure they don't look bad. They wanted to create the impression of 'ATi are implementing pointless features that will never be used, whereas we're giving users what they actually want.'

The reality is that it's down to nVidia; they've stifled innovation. They're basically abusing their position as market 'leader' in terms of market share. They know that if they rubbish something, the majority of devs will follow them, because the majority of cards out there lack support for DX10.1.

Ubisoft's implementation of it made a huge amount of sense, though: it was less work for them and less work for the GPU, yet nVidia said it was broken? A feature that didn't even run on their cards, and therefore wouldn't have had any direct impact on their cards' performance.
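
(The widely reported explanation at the time was that the DX10.1 path let the game read the multisampled depth buffer directly instead of re-rendering it for the anti-aliasing pass. A toy sketch of what dropping one full pass does to frame time; the millisecond figures are invented, not measurements.)

# Invented per-frame pass costs (ms), purely to illustrate "one fewer pass".
passes_dx10 = {"geometry": 9.0, "depth_reread_for_aa": 2.5, "lighting_and_post": 6.5}
passes_dx10_1 = {"geometry": 9.0, "lighting_and_post": 6.5}   # depth buffer read directly

for name, passes in (("DX10", passes_dx10), ("DX10.1", passes_dx10_1)):
    frame_ms = sum(passes.values())
    print(f"{name}: {frame_ms:.1f} ms/frame, about {1000 / frame_ms:.0f} fps")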

The difference was, it gave ATi a much-needed performance boost ahead of nVidia's cards, making nVidia's cards look even worse value for money.

The same happened with 3DMark Vantage too.

There are so many wrong things that nVidia does that I don't get how people even begin to try to defend their actions as if there's no issue.
 