AMD announce Radeon Pro SSG

That is certainly a possibility, and with 128GB NVMe drives coming in at about £60, 32GB of flash shouldn't add too much to the cost of a graphics card.

Definitely interesting times ahead.

Problem there is that internally SSD controllers treat the NAND chips as a RAID array, which is why a 256GB stick is much faster than a 128GB stick (more so in writes than reads at those capacities). A 32GB stick would have so much less read and write performance, and so much less utility in terms of wear levelling/lifespan, that it's not really feasible. Remember that they managed 850MB/s over the normal PCI-E bus with M.2 storage, so a 32GB NVMe stick that only managed 300MB/s of reads wouldn't improve things. Then in price terms, if you're using the same £20 of PCB/controller/VRM regardless of whether you use one NAND chip or four, the price doesn't come down as much as you'd think for such a low capacity anyway.
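
To put rough numbers on that internal parallelism, here's a quick sketch; every figure in it (per-die throughput, die capacity, controller ceiling) is a made-up placeholder rather than a spec for any real drive:

```python
# Toy model of why SSD speed scales with capacity: the controller stripes
# across NAND dies, so fewer dies means fewer parallel channels to read/write.
# All numbers are illustrative assumptions, not real drive specifications.

PER_DIE_READ_MBS = 320     # assumed sequential read per NAND die
PER_DIE_WRITE_MBS = 90     # assumed sequential write per NAND die
CONTROLLER_CAP_MBS = 2000  # assumed controller/interface ceiling
DIE_CAPACITY_GB = 32       # assumed capacity per die

def estimate_throughput(capacity_gb):
    """Estimate sequential read/write from the number of dies the capacity implies."""
    dies = max(1, capacity_gb // DIE_CAPACITY_GB)
    read = min(dies * PER_DIE_READ_MBS, CONTROLLER_CAP_MBS)
    write = min(dies * PER_DIE_WRITE_MBS, CONTROLLER_CAP_MBS)
    return dies, read, write

for cap in (32, 128, 256):
    dies, read, write = estimate_throughput(cap)
    print(f"{cap:>3}GB -> {dies} die(s): ~{read}MB/s read, ~{write}MB/s write")
```

With one die you get whatever a single die can do; with four or eight the controller is striping across them and the interface or controller becomes the limit instead.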


Maybe AMD is already cooking something up? Might explain why they're not jumping into the high-end GPU market this year. Would love to see this used on desktop GPUs; I can see it helping out in highly demanding situations.

Cooking something up? Sure, it's called Navi with 'nextgen memory'. That nextgen memory is 3D XPoint, which is pretty much a game changer when it comes to density and speed, sitting somewhere in the middle between NAND and RAM.

My guess is the Radeon Pro SSG is aimed more at getting the industry prepared for Navi, a half-way step: show them what can be achieved with two SSDs on the PCB, tell them (under NDA) that XPoint is coming and will offer higher capacity and more speed in the same footprint, and they buy the Pro SSG both to use and to start optimising their software so it's ready for Navi.

The question would be, with XPoint being a good deal faster per chip than NAND and much lower latency (further increasing the benefit of sitting on the PCB rather than going out through the PCI-E bus, driver and CPU and back through all of them again), will there be a benefit for gamers at lower capacities?
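
A crude way to picture that on-card advantage is below; every latency and bandwidth number in it is an assumption purely for illustration, not a measurement of any real hardware:

```python
# Toy comparison of fetching data from on-card storage vs. going out over the
# PCI-E bus, through the driver/CPU and back again. Every latency and bandwidth
# figure here is an assumption for illustration, not a measurement.

def fetch_time_ms(size_mb, path_latency_us, bandwidth_mbs):
    """Fixed round-trip path latency plus the raw transfer time for one request."""
    return path_latency_us / 1000.0 + (size_mb / bandwidth_mbs) * 1000.0

# Hypothetical paths: (label, round-trip latency in microseconds, bandwidth in MB/s)
paths = [
    ("on-card NAND",              100,  850),
    ("on-card XPoint",             10, 2000),
    ("PCI-E + driver + CPU trip", 500,  850),
]

for size_mb in (0.5, 64):
    print(f"--- {size_mb}MB request ---")
    for label, latency_us, bandwidth in paths:
        print(f"{label:<26} ~{fetch_time_ms(size_mb, latency_us, bandwidth):.2f} ms")
```

The smaller and more frequent the requests, the more the fixed path latency dominates, which is where a lower-latency on-card option would pay off.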

Release a high-end Navi with a $15k 1TB XPoint version for professionals and a $600 version with a single 32-64GB chip for gaming.

It doesn't have to be an SSD; a better solution would be DDR3. Even in single-channel mode DDR3 manages 8GB/s or more depending on how high it's clocked; very standard DDR3-1866 has a theoretical peak of roughly 15GB/s on a 64-bit bus. Nor does it need to be that much of it; 8GB is excessive for game texture caching.
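
As a rough sanity check on that bandwidth, assuming a standard 64-bit DDR3 channel:

```python
# Theoretical peak bandwidth of a single DDR3 channel (64-bit bus assumed).
def ddr3_peak_gbs(transfer_rate_mts, bus_width_bits=64):
    """Peak GB/s = transfers per second * bytes per transfer."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

for speed in (1066, 1333, 1600, 1866):
    print(f"DDR3-{speed}: ~{ddr3_peak_gbs(speed):.1f} GB/s per channel")
# DDR3-1866 works out to roughly 14.9 GB/s, so one channel comfortably clears 8 GB/s.
```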

Cost to AMD? 8GB of high density RAM IC's, <$30?

Such a GPU may look something like a 390X/Fury X hybrid:
8-16GB of HBM2 round the core + 8GB of memory ICs round the SKU guard.
 
If they could easily just throw lots of extra chips onto a PCB, AMD and Nvidia would already do so. Also, a texture cache is one thing, but dramatically increasing texture quality by using massively larger textures needs more than just a cache.

RAM is low density, NAND is high density, XPoint is epic density. Putting one or two small chips on a PCB that only need a dozen traces is far more feasible than putting on eight new chips needing 30 traces apiece, and that is the very reason they don't do it already.

Thinking a bit of extra RAM for texture caching is thinking small and short term; thinking ten times the memory capacity on the card is thinking big and longer term, for a game-changing difference in GPUs.

A Titan X already has 12GB of more-than-fast-enough GDDR5, of which less than 4GB is usually required in most games; they just keep stuff in memory rather than more aggressively collecting bits currently unused, and it doesn't make an awful lot of difference.

Yes, good point about a lot of the speed coming from the controllers effectively using the flash in an internal RAID-like fashion.

XPoint is going to open up lots of new possibilities, but I bet it will start out frightfully expensive.
 
Meh, people always say new memory will start off crazy expensive, yet GDDR1-5, DDR1-4, HBM1... where do these memory types appear first, more often than not? Consumer-level products.

They come to market when they do because costs have come down enough to make them viable to put into products; before that, they remain at the prototype/sampling/theory stage.

Despite the interposer and HBM being completely new, and production line capacity being limited, the Fury X was still pretty well priced, way cheaper than a Titan X, and they didn't take a loss on each one sold.

More expensive than previous tech? Usually (not always; new memory often appears on an older process first to keep costs down, with some early power/capacity shortfalls compared to existing tech). But frightfully expensive, or too expensive to be used? Almost never.

The difference will be that a product designed for real-time 8K video editing will have 2TB of the stuff and cost a bomb, while a consumer product will have a modest amount for a modest cost.

If this does lead to higher quality textures, I'm all for it. I've been saying for a long time that I don't (within reason) care about higher resolutions or frame rates; I want the emphasis to be on more realistic, better graphics. My 1080p monitor is fine for the distance I sit from it, I can see 1440p would be nice to have, and I'm sure 4K is great, but I can see no reason to ever go beyond it. I've also never used a monitor above 75Hz, and whilst I believe those saying that 144Hz is great, again I can see no value in pushing beyond that. Maybe we're finally going to see the big emphasis being on more realistic graphics.

I want to see the next Dragon Age have graphics like this:
https://youtu.be/rpDdOIZy-4k?t=180
 