NVIDIA 4000 Series

We just need to get this launch done, imho; things aren't going to move until that happens, and for all we know this GTC event is just a teaser for a launch in a few months' time... It's starting to drag now.
 
We just need to get this launch done, imho; things aren't going to move until that happens, and for all we know this GTC event is just a teaser for a launch in a few months' time... It's starting to drag now.

Where did you get "few months" from? I expect availability in October at the latest, for the 4090 at least.
 
Tbh the 4090 was always going to be super expensive, so this isn't a surprise. The real question was always what the low/mid-range looks like.
 
If these are $2,000 it works out at £1,700-odd, but of course OcUK will have to make something on that too.

Ha! Look at the iPhone if you want to see where prices are going at the moment. The pound is at a pretty bad low against the US dollar and other currencies.
$1000 = £1100 in apple-land

(which, to be fair, once you take VAT into account, is only about 50 quid off the mark)

So if there's a $2k card, including 20% VAT that's £2080 at today's rates, with no markup.
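As a quick sketch of that conversion: the maths below back-calculates the implied exchange rate from the £2,080 figure (roughly $1 = £0.87) and applies 20% VAT. The rate is an assumption for illustration, not an official number, and retailer markup is ignored.

```python
# Rough sanity check of the USD -> GBP pricing maths above.
# ASSUMPTION: exchange rate of ~0.8667 GBP per USD, back-calculated from the
# "£2,080 including VAT" figure in the post; not an official or live rate.
USD_TO_GBP = 0.8667
UK_VAT = 0.20

def uk_price_inc_vat(usd_msrp: float) -> float:
    """Convert a US MSRP (quoted ex sales tax) to a UK price including 20% VAT."""
    return usd_msrp * USD_TO_GBP * (1 + UK_VAT)

for msrp in (1000, 1500, 2000):
    print(f"${msrp} -> £{uk_price_inc_vat(msrp):,.0f} inc. VAT")
# $1000 -> £1,040; $2000 -> £2,080 (before any retailer markup).
```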
 
Still think this new font looks weird. I don’t like it.

Quad-slot GPUs: I imagine we will see more cases with integrated GPU support brackets.
 
I swear when the 3000 series came out we all thought that they were pushing the power envelope because they got stuck on a crap node and needed to keep AMD at bay.

So what's the excuse this time? Do they know something about AMD that we don't? Does the architecture just not scale well? Or are the RT and AI cores really that power hungry?
 

GeForce RTX 4090

Starting with the RTX 4090: this model features the AD102-300 GPU, 16384 CUDA cores and a boost clock of up to 2520 MHz. The card comes with 24GB of GDDR6X memory, supposedly clocked at 21 Gbps. This means it will reach 1 TB/s of bandwidth, just as the RTX 3090 Ti did. Thus far we have only heard about a default TGP of 450W, but according to our information the maximum configurable TGP is 660W. Note that this is the maximum TGP that can be set through the BIOS, and it may not be available on all custom models.

GeForce RTX 4080 16GB

The RTX 4080 16GB has the AD103-300 GPU and 9728 CUDA cores. The boost clock is 2505 MHz, so just about the same as the RTX 4090. This model comes with 16GB of GDDR6X memory clocked at 23 Gbps, and as far as we know this is the only model with such a memory clock. The TGP is set to 340W, and it can be modified up to 516W (again, that's the maximum power limit).

GeForce RTX 4080 12GB

The GeForce RTX 4080 12GB is what we previously knew as the RTX 4070 Ti or RTX 4070; NVIDIA made a last-minute name change for this model. It is equipped with the AD104-400 GPU with 7680 CUDA cores and a boost clock of up to 2610 MHz. Memory capacity is 12GB, using GDDR6X 21 Gbps modules. The RTX 4080 12GB's TGP is 285W, and it can go up to 366W.

As you can see, there is no RTX 4070 listed for now, and AIBs do not expect this to change (after all, it's just 6 days until the announcement). As far as the launch timeline is concerned, the RTX 4090 is expected in the first half of October, while the RTX 4080 series should launch in the first two weeks of November. We are waiting for detailed embargo information, so we should have more accurate data soon. Still to be confirmed are PCIe generation compatibility and, obviously, pricing.
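To make the ~1 TB/s claim in the leak above concrete, here is a minimal bandwidth sketch: bandwidth = per-pin data rate × bus width. The bus widths (384/256/192-bit) are my assumption, inferred from the memory capacities; they are not stated in the leak itself.

```python
# Memory bandwidth sanity check for the leaked specs above.
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
# ASSUMPTION: bus widths are inferred from the memory capacities
# (e.g. 12 x 2GB GDDR6X modules -> 384-bit); they are not in the leak.
cards = {
    "RTX 4090 24GB": {"gbps": 21, "bus_bits": 384},
    "RTX 4080 16GB": {"gbps": 23, "bus_bits": 256},
    "RTX 4080 12GB": {"gbps": 21, "bus_bits": 192},
}

for name, c in cards.items():
    bandwidth_gb_s = c["gbps"] * c["bus_bits"] / 8
    print(f"{name}: {bandwidth_gb_s:.0f} GB/s")
# RTX 4090 24GB: 1008 GB/s (~1 TB/s, matching the RTX 3090 Ti as the leak says)
# RTX 4080 16GB: 736 GB/s
# RTX 4080 12GB: 504 GB/s
```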

Coming soon...

Xpc6Iwv.png

:cry:
 
I swear when the 3000 series came out we all thought that they were pushing the power envelope because they got stuck on a crap node and needed to keep AMD at bay.

So what's the excuse this time? Do they know something about AMD that we don't? Does the architecture just not scale well? Or are the RT and AI cores really that power hungry?
It's the much higher clock speed, Nvidia's first real increase since 2016 with the GTX 1080 et al. Much of the performance of the 4000 series will come from higher clocks, since they will be spending a lot of die space on L2 cache to provide enough bandwidth.
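As a rough illustration of how much of the uplift could come from clocks versus extra cores, here is a back-of-the-envelope FP32 throughput comparison. The RTX 3090 Ti reference figures (10752 CUDA cores, ~1.86 GHz boost) are my assumption for comparison; only the 4090 numbers come from the leak, and theoretical TFLOPS don't translate directly into game performance.

```python
# Back-of-the-envelope FP32 throughput: cores * 2 ops/clock (FMA) * clock (GHz).
# ASSUMPTION: RTX 3090 Ti reference figures (10752 cores, ~1.86 GHz boost) are
# added here for comparison; only the RTX 4090 numbers come from the leak.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000

rtx_3090_ti = fp32_tflops(10752, 1.86)  # ~40 TFLOPS
rtx_4090 = fp32_tflops(16384, 2.52)     # ~83 TFLOPS (leaked boost clock)

print(f"3090 Ti: {rtx_3090_ti:.1f} TFLOPS, 4090: {rtx_4090:.1f} TFLOPS")
print(f"uplift from extra cores alone:   x{16384 / 10752:.2f}")
print(f"uplift from higher clocks alone: x{2.52 / 1.86:.2f}")
```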
 
I swear when the 3000 series came out we all thought that they were pushing the power envelope because they got stuck on a crap node and needed to keep AMD at bay.

So what's the excuse this time? Do they know something about AMD that we don't? Does the architecture just not scale well? Or are the RT and AI cores really that power hungry?
Well, with expensive nodes, running a narrow-but-fast design makes sense, but TPU has the rumoured AD102 at 600mm² anyhow. So it looks like, at least for the top consumer dies, they've gone as big as they can and as fast as they can, way outside the node's perf/watt sweet spot. Undervolting should be fun though - if only the prices weren't based on the clocked-to-within-an-inch-of-its-life performance.
 
No stars or mention of ‘Official’ in thread title. Disappointing.

I wouldn't be surprised in the slightest to see the card one rung from the top at £1,500+ and the top one at £2k+. Enthusiasts have clearly shown they will pay exorbitant prices. The only potential issue is that the rising cost of living could put a squeeze on this discretionary spending.

Was it all "enthusiasts" though?

Once mining profitability tanked, so too did Nvidia's sales.

3090 Tis have been sitting on shelves at almost half price for a while now. I'm sure Nvidia has noticed.

*edit* quoted from page one of the thread. Point still stands though.
 
It's the much higher clock speed, Nvidia's first real increase since 2016 with the GTX 1080 et al. Much of the performance of the 4000 series will come from higher clocks, since they will be spending a lot of die space on L2 cache to provide enough bandwidth.

Well, with expensive nodes, running a narrow-but-fast design makes sense, but TPU has the rumoured AD102 at 600mm² anyhow. So it looks like, at least for the top consumer dies, they've gone as big as they can and as fast as they can, way outside the node's perf/watt sweet spot. Undervolting should be fun though - if only the prices weren't based on the clocked-to-within-an-inch-of-its-life performance.
Good spot on the higher clocks. It does seem to indicate that they are worried about what AMD could be bringing to the table; otherwise they would constrain it a bit.





Coming soon...

Xpc6Iwv.png

:cry:

jonah-hill-annoyed.gif


Just name it the 4070
 
I swear when the 3000 series came out we all thought that they were pushing the power envelope because they got stuck on a crap node and needed to keep AMD at bay.

So what's the excuse this time? Do they know something about AMD that we don't? Does the architecture just not scale well? Or are the RT and AI cores really that power hungry?

There was nothing wrong with Samsung's 8nm node; people just tried to justify that it was rubbish because AMD was on TSMC 7nm, and bashed Samsung's node as not a true 8nm but 10nm.

Anyway, clearly there was nothing wrong with it when we are getting 660W GPUs from TSMC. Now let's worry about how long these chips will last at these silly power levels and node size.
 