Mass appeal around these parts I expect.

Apparently around October, but the rumor so far is that the 5090 could be starting out at £1999, which doesn't really surprise me.
Mass appeal around these parts I expect.
I remember saying that to myself over a hundred times about the 2080ti, 3090 & 4090 GPUs after seeing the stupid prices.

For that price Nvidia can keep it, the 4090 is my last stupidly priced GPU.
I remember saying that to myself over a hundred times about the 2080ti, 3090 & 4090 GPUs after seeing the stupid prices.
And I still ended up buying a 2080ti & now a 4090.
Especially when you can have just as good of a time playing at 2K rather than 4K for one quarter of the cost, and I'm saying that as someone who has both types of displays.

I'm not in the same financial situation I was in when the 4090 came out though, as now I'm planning for the next few years (home, kids, etc.), so £2000 for something to draw pictures on a screen is a no-go.
Fixed:

Especially when you can have just as good of a time playing 'hide the banana'.
At GDC, Jensen said the two dies talk to each other as if there were only one. That probably makes an MCM design doable.
I was thinking that, too. But who knows, maybe they've done some more magic to get around the limitations. We'll see.

As far as we know from public information (unless a microchip architect or engineer can interject and correct us), using MCM to make a gaming GPU with two full dies on the same silicon substrate requires around 100TB/s of connection bandwidth between the two dies (50TB/s each way), and right now Nvidia's GB Blackwell GPU only has a 10TB/s connection (5TB/s each way). 10TB/s is enough for generative AI, inference and computational workloads because those workloads aren't latency-sensitive and don't have to output full image frames within small frametimes, but gaming is sensitive to this, so it needs much more bandwidth to work without issues.
And while I don't understand the exact technical reasons why it's hard to scale up the connection bandwidth between the dies, the dumbed-down, consumer-friendly version is that there are lots of little wires that have to run between the two dies to transport the data; if you want to increase bandwidth, you have to lay more wires between the two dies, and it's very, very hard to fit all those wires into that tiny space.
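Purely as a back-of-envelope illustration of the scaling argument above, here is a small Python sketch. The 10TB/s and 100TB/s totals are the figures quoted in the post; the per-lane signalling rate and frame rate are made-up assumptions for illustration, not real Blackwell specs.

```python
# Back-of-envelope sketch of the die-to-die bandwidth argument above.
# The 10 TB/s and 100 TB/s totals come from the post; the per-lane rate
# and target frame rate below are illustrative assumptions only.

TB = 1e12  # bytes

def lanes_needed(total_bw_bytes_s: float, per_lane_bits_s: float) -> float:
    """How many physical lanes (wires) a given total bandwidth implies."""
    return total_bw_bytes_s * 8 / per_lane_bits_s

def bytes_per_frame(total_bw_bytes_s: float, fps: float) -> float:
    """Upper bound on data that can cross the link within one frame time."""
    return total_bw_bytes_s / fps

PER_LANE = 100e9  # assume ~100 Gbit/s per lane (hypothetical)
FPS = 120         # assume a 120 fps target, ~8.3 ms per frame

for label, bw in [("~10 TB/s (current figure quoted in the post)", 10 * TB),
                  ("~100 TB/s (what the post says gaming would need)", 100 * TB)]:
    print(f"{label}:")
    print(f"  ~{lanes_needed(bw, PER_LANE):,.0f} lanes at {PER_LANE / 1e9:.0f} Gbit/s each")
    print(f"  ~{bytes_per_frame(bw, FPS) / 1e9:.0f} GB can cross the link per {1000 / FPS:.1f} ms frame")
```

The point is just the scaling: at a fixed per-lane rate, ten times the bandwidth means roughly ten times the wires crossing the die-to-die boundary, which is exactly the packaging problem described above.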
GN's situational take on NV:
"ah yes my favorite programming languages: Java, Python, and 4chan"
Get a load of this guy
He has a Radeon 7 in the background, a bottle of Pepsi on the right, and a family photo of 3 monkeys on the left wall.
No idea who that guy is, or why he is
Probably not even custard cream biscuits