Good Grief..
?
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Good Grief..
NVLink: High-Speed GPU Interconnect
Yea, I'm not going to play this pointless game with you. The article and related sidebar links literally counter the rubbish you are trying to paint as fact. Just because you dress your rants up in as many words as possible does not make them any less factually wrong.
that means GPU kernels will be able to access data in host system memory at the same bandwidth the CPU has to that memory—much faster than PCIe. Host and device portions of applications will be able to share data much more efficiently and cooperatively operate on shared data structures, and supporting larger problem sizes will be easier than ever.
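For what it's worth, the "share data structures" part of that quote already has a concrete form in CUDA's existing Unified Memory API. A minimal sketch, assuming today's cudaMallocManaged mechanism (the NVLink bandwidth claims are separate from the programming model):

```cpp
// Minimal sketch: host and device operating on the same heap-allocated
// structure via CUDA Unified Memory (cudaMallocManaged). This illustrates
// the "shared data structures" idea from the quoted article; it does not
// demonstrate NVLink bandwidth itself.
#include <cstdio>
#include <cuda_runtime.h>

struct Counter {
    int value;
};

__global__ void bump(Counter *c, int n) {
    // Each thread adds 1 atomically; the struct lives in managed memory.
    if (threadIdx.x + blockIdx.x * blockDim.x < n)
        atomicAdd(&c->value, 1);
}

int main() {
    Counter *c = nullptr;
    cudaMallocManaged(&c, sizeof(Counter));   // single pointer visible to CPU and GPU
    c->value = 0;                             // CPU writes directly

    bump<<<4, 256>>>(c, 1000);
    cudaDeviceSynchronize();                  // required before the CPU touches it again

    printf("value = %d\n", c->value);         // CPU reads the GPU's updates: 1000
    cudaFree(c);
    return 0;
}
```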
That article 100% agrees with me; you just don't understand it.
Epic !!!!
Well, go on then, one last little treat. NVLink removes the vast majority of the latency, as the CPU and GPU can directly access memory located in each other's pools without having to copy it out. We don't know anything about cache coherency as of yet, though. But you knew that already, of course, as surely you wouldn't have gone off on such a rant without knowing the basics...
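For anyone wondering what "direct access without copying" looks like in code today: the closest existing analogue is mapped pinned host memory, where a kernel reads host RAM in place over the bus rather than having it copied into device memory first. A rough sketch, assuming the standard cudaHostAlloc / cudaHostGetDevicePointer calls (NVLink would change the bandwidth of such accesses, not necessarily the code):

```cpp
// Rough analogue of "direct access without copying" on current hardware:
// the kernel reads host (pinned, mapped) memory in place over the bus.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sum(const float *host_data, float *result, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i)   // single thread, just to keep the sketch short
        s += host_data[i];
    *result = s;
}

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);     // needed on older devices before mapping

    const int n = 1 << 10;
    float *h = nullptr, *d_result = nullptr, result = 0.0f;

    cudaHostAlloc(&h, n * sizeof(float), cudaHostAllocMapped);  // pinned + mapped
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d_view = nullptr;
    cudaHostGetDevicePointer(&d_view, h, 0);   // device-visible alias of the host buffer
    cudaMalloc(&d_result, sizeof(float));

    sum<<<1, 1>>>(d_view, d_result, n);        // kernel reads host memory directly
    cudaMemcpy(&result, d_result, sizeof(float), cudaMemcpyDeviceToHost);

    printf("sum = %.0f\n", result);            // expect 1024
    cudaFreeHost(h);
    cudaFree(d_result);
    return 0;
}
```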
In Nvidia's world it does, in everyone else's reality it doesn't.
Here we get to the crux of the matter. Because this is Nvidia we are talking about, DM dismisses everything and anything that doesn't fit his agenda out of hand and then goes on to post big long spiels of text in the hope people will be confused enough by the end to just agree.
Sometimes I wonder why I bother.
There are some limitations of Unified Memory on current-generation GPU architectures:
Only heap allocations can be used with Unified Memory, no stack memory, static or global variables.
The total heap size for Unified Memory is limited to the GPU memory capacity.
When Unified Memory is accessed on the CPU, the runtime migrates all the touched pages back to the GPU before a kernel launch, whether or not the kernel uses them.
Concurrent CPU and GPU accesses to Unified Memory regions are not supported and result in segmentation faults.
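To make the limitations listed above concrete, here is a small sketch of how current Unified Memory code has to respect them, assuming the standard cudaMallocManaged API; the commented-out lines mark the patterns the list says are not supported:

```cpp
// Sketch of two of the limitations listed above: only heap allocations are
// managed, and the CPU must not touch managed memory while a kernel may
// still be running.
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;

    // OK: heap allocation through cudaMallocManaged is what Unified Memory covers.
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    // Not covered: stack arrays, statics and globals are NOT automatically managed.
    // float stack_buf[1024];            // would need explicit copies to use on the GPU

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);

    // data[0] = 5.0f;                   // touching managed memory here, while the
                                         // kernel may still be running, is the
                                         // "concurrent access" case that faults
    cudaDeviceSynchronize();             // after this, CPU access is safe again
    float first = data[0];               // 2.0f
    (void)first;

    cudaFree(data);
    return 0;
}
```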
"The only people making any real money in mobile are Apple, who sell cheap ARM hardware in shiny boxes. Qualcomm are pulling in most of their money from modems and licensing. Intel alone are making more than AMD and the commodity ARM players combined."
That's incorrect, both in claiming that only Apple are making money and in claiming that Apple are selling cheap ARM hardware. Apple are using in-house designed CPUs and custom GPUs from Imagination.
"Dude, seriously, read the whole article, not just the bits you want to."
Drunkenmaster is correct: nothing Nvidia are doing is comparable. NVLink, if anything, is the opposite of the goal the HSA Foundation has; it is the very type of thing HSA wants to move away from. NVLink clearly doesn't solve 99% of the problems that HSA is trying to solve.
Would it not be more logical for AMD, Intel and Nvidia to design entirely new processors that are good at running both CPU- and GPU-type tasks, which could be run together in parallel the way GPUs can?
This would give the user the choice to add as many processors as are needed, like you can with multiple GPUs, to get the job done.
Sadly this would mean the end of Windows; too bad, Microsoft.
Having said that, I don't know much about the subject, so I may have just written a load of rubbish.
"The problem is we already have issues with general purpose software making use of multiple cores as it is; trying to spread them over devices with potentially hundreds of cores is going to be even more complex."
HSA solves that, which is why many of the big players are all backing it. I suggest you re-read the articles, as you seem to have misunderstood what HSA is, and you seem to have misunderstood drunkenmaster, as what he said is correct.
Unfortunately it doesn't magically solve the problem of how you spread your code over many execution units. Code still has to be written to take advantage of 'moar corez'.
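To illustrate that point, the sketch below contrasts a serial loop with the explicitly parallel version a programmer has to write before extra execution units help at all (a generic SAXPY example in CUDA, purely for illustration; no claim that this is how HSA itself would express it):

```cpp
// Illustration of the point above: hardware (or HSA) exposing more execution
// units doesn't parallelise anything by itself; the loop has to be rewritten
// as explicitly parallel code.
#include <cuda_runtime.h>

// Serial version: runs on one CPU core no matter how many cores exist.
void saxpy_serial(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Parallel version: the programmer restructures the work so each element
// maps to its own thread.
__global__ void saxpy_parallel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy_serial(n, 2.0f, x, y);                              // one core, n iterations
    saxpy_parallel<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // n threads, one each
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```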
Again you show a complete lack of understanding of the problem. Because this is precisely what HSA is doing? Oh no, wrong again.
Give it a rest, layte. You waded in without knowing whereof you speak; just bow out gracefully.