The 4080 16GB is the real 4080 on AD103; the 4080 12GB is on AD104, which has previously been a 70-class chip, so it should be a 4070. The 4090 on AD102 is correct, and when they release the 4090 Ti it will be the full die with an extra ~2,000 CUDA cores and should provide a nice bump. There's also plenty of room between a 4080 and a 4090 for a 4080 Ti with 20GB and probably around 13,000 CUDA cores. I imagine the Ti versions will come next year as a yearly refresh.

Not quite... the 4080 12GB is a 4060, the 4080 16GB is a 4070 and the 4090 is a 4080. The real big chip, the 4090 Ti, will be what the 4090/Titan would have been.
Every SKU has been rebadged one level up.
At least the 4090ti will be a proper performance jump from the 4090 haha.
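The rebadging argument above comes down to which die each SKU sits on and how much silicon Nvidia held back. A minimal sketch, using the announced CUDA-core counts and the historical die-to-class mapping (the "rebadge" reading is this thread's opinion, not an official designation):

```python
# Announced Ada SKUs: (die, CUDA cores). Counts are from the launch specs.
ADA_LINEUP = {
    "RTX 4090":      ("AD102", 16384),
    "RTX 4080 16GB": ("AD103", 9728),
    "RTX 4080 12GB": ("AD104", 7680),
}

# Prior generations put the x04 die in the 70-class card (e.g. GA104 -> 3070),
# hence the claim that the 4080 12GB is "really" a 4070.
HISTORICAL_CLASS = {"AD102": "90/80 Ti", "AD103": "80", "AD104": "70"}

FULL_AD102_CORES = 18432  # full die; a 4090 Ti could enable the remaining SMs

def rebadge_report() -> list[str]:
    """Summarise each SKU against the class its die historically served."""
    lines = []
    for name, (die, cores) in ADA_LINEUP.items():
        lines.append(f"{name}: {die}, {cores} CUDA cores "
                     f"(historically a {HISTORICAL_CLASS[die]}-class die)")
    lines.append(f"Headroom left on AD102: {FULL_AD102_CORES - 16384} cores")
    return lines
```

The ~2,000 extra cores mentioned for a 4090 Ti is just the gap between the 4090's 16,384 enabled cores and the full 18,432-core AD102 die.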
The hilarious part of RTX 4000 is that the only card that is good value is the top-of-the-line 4090; the other models are just terrible until they get price cuts.
Thanks Nvidia for increasing the 2nd-hand value of my 3090.
Yay! Palit unicorn turd edition is back!
Nvidia left the Tensor and RT core counts off the specs on their site; IGN got them. The 4080s are cut way down.
if you want a proper 4080 then that will be from £1280

So did these hold comments screw a load of people?
I guess if you were previously looking for a 3080... now you have to pay £949+ for a 4080.
Yeah... the proper one haha. If you want a proper 4080, that will be from £1280.
The 4080 12GB is a 4070 in everything apart from name; it's even on a different die, AD104. The only reason it's called a 4080 is so Nvidia can get more money from people who know no better.
Is there no way for them to increase the number of RT cores per SM? It seems like RT is really held back by how few RT cores they have.

It's not cut down; it is and has always been 1 RT core per SM since Turing. I don't see why people keep asking how many RT and Tensor cores a card has: Nvidia told you how many SMs it has, therefore they also told you the RT and Tensor counts.
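The point above is just per-SM arithmetic. A minimal sketch, assuming Ada's published ratios of 1 RT core and 4 Tensor cores per SM (unchanged since Turing for RT cores):

```python
# On Ada, RT and Tensor core counts follow directly from the SM count:
# 1 RT core and 4 Tensor cores per SM, so the published SM figures
# already imply the "missing" spec-sheet numbers.
RT_PER_SM = 1
TENSOR_PER_SM = 4

def cores_from_sms(sm_count: int) -> tuple[int, int]:
    """Return (rt_cores, tensor_cores) implied by an SM count."""
    return sm_count * RT_PER_SM, sm_count * TENSOR_PER_SM

# Announced SM counts: 4090 = 128, 4080 16GB = 76, 4080 12GB = 60
assert cores_from_sms(128) == (128, 512)  # RTX 4090
assert cores_from_sms(76) == (76, 304)    # RTX 4080 16GB
```

So the 4080 12GB's 60 SMs imply 60 RT cores and 240 Tensor cores without Nvidia listing them.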