The One Vision That Intel, AMD And Nvidia Are All Chasing – Why Heterogeneous Computing Is The Future

“No, you still have to sufficiently 'thread' your code to take advantage of any increase in execution units.” “Unfortunately it doesn't magically solve the problem of how do you spread your code over many execution units. Code still has to be written to take advantage of 'moar corez'.”
No, you don't, as I understand it. The point of HSA is that the hardware doesn't matter: you don't code for a single type of hardware, you don't code for cores, you don't even code specifically for the CPU or the GPU. You use the entire system's compute power.




“HSA will likely gain some traction in the mobile ARM space, but with only AMD backing it on the desktop/laptop/server (ok the ARM guys have some hardware here nobody is currently interested in) space it's not going anywhere quickly there.”
You seem to be missing the point of HSA. Once the standard exists it can be used on the desktop too, and you can swap away from Intel. HSA means you can use MIPS, ARM and AMD interchangeably. You are not locked into one hardware type.




“Why can't we have nice things, why can't points be raised without someone going crazy? Nobody wanted to discuss my points, only beat them down because they didn't fit the agenda.”
No one is going crazy or beating down points because they didn't fit our agenda. You completely misunderstood the situation and then started blaming us. I directly asked you questions in a polite manner and all you did was go off at us, the very thing you are accusing us of. I am trying to discuss the points but you don't seem to want to.


“The main premise of HSA is to allow CPU and GPU to work as one,”
That's not the main premise of HSA, though it is in the list of requirements for HSA. The main premise of HSA is that a high-level language is used so you are not locked into a single hardware solution. The premise is that you don't have to recode the application when swapping between CPU or GPU brands. You don't code for a single CPU or GPU or a set number of cores. You code in the high-level language and HSA makes use of the entire system's compute power.
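
To make that concrete, here is a rough sketch of the sort of thing I mean, using AMD's Aparapi library for Java purely as an illustration (Aparapi is HSA-adjacent rather than HSA itself, and I'm assuming the old com.amd.aparapi package name). The kernel is written once in plain Java; the runtime decides at execution time whether it runs on the GPU via OpenCL or falls back to a CPU thread pool, with no change to the source:

```java
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class VectorAdd {
    public static void main(String[] args) {
        final int n = 1 << 20;
        final float[] a = new float[n];
        final float[] b = new float[n];
        final float[] c = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

        // Written once, in a high-level language; never against a specific
        // CPU, GPU or core count.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int i = getGlobalId();
                c[i] = a[i] + b[i];
            }
        };

        // The runtime picks the device: GPU if one is available and usable,
        // otherwise a Java thread pool spread across the CPU cores.
        kernel.execute(Range.create(n));
        System.out.println("c[42] = " + c[42]);
        kernel.dispose();
    }
}
```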

The goal of HSA stems from the fact that it's not feasible to keep coding for dozens of hardware platforms, so you run one codebase across all of them, making it possible to swap GPU and CPU without new code. I don't see how Nvidia are doing anything comparable to that, and I would go as far as to say NVLink is the opposite of the goal of HSA.

Shared memory is part of the list of requirements for HSA. It is not the main premise of HSA.


“You say HSA is a standard. But it's a standard for markets where the current incumbents have no interest in taking any part of.”
That's not true; at least three of the CPU makers want HSA and want to work in servers.
 
Dear lord, and this is the problem. NVLink is an interconnect; it isn't intended to make the CPU and GPU work as one, it's meant to improve scaling of multi-GPU systems. Something you missed with all the articles from Nvidia comparing the interconnect to PCI-E 3.0, all the articles with multi-GPU in the title, and every benchmark being about multi-GPU.

The main premise of HSA is not to allow the CPU and GPU to work as one; it is ONE of the major premises. Having independent IP blocks available from different companies that will work together is frankly as large a premise, and one of the most difficult to achieve.

One fundamental goal behind a CPU and GPU working as one is to have them on the same die to remove the communication latency. Currently, ignoring unified memory, sending a small amount of data to a GPU across the PCI-E bus to be processed before the CPU can continue has extreme latency and gives very little performance improvement, if not an outright performance reduction.
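
To put rough numbers on that latency argument (all figures here are assumptions for illustration, not measurements): a discrete GPU only wins when the compute it saves outweighs the cost of pushing the data across PCI-E and launching the kernel.

```java
public class OffloadBreakEven {
    public static void main(String[] args) {
        // All figures below are assumed, illustrative numbers only.
        double bytes = 1 << 20;            // 1 MiB of input data
        double pcieBytesPerSec = 16e9;     // ~16 GB/s, roughly PCI-E 3.0 x16
        double launchOverheadUs = 10;      // assumed kernel-launch latency
        double gpuComputeUs = 5;           // assumed GPU compute time
        double cpuComputeUs = 50;          // assumed time for the CPU to just do the work itself

        double transferUs = bytes / pcieBytesPerSec * 1e6;        // ~65 us, one way
        double offloadUs = transferUs + launchOverheadUs + gpuComputeUs;

        // With these numbers the offload (~80 us) loses to the CPU (50 us),
        // even before copying the result back. That is the gap that shared
        // memory and on-die integration are meant to close.
        System.out.printf("CPU: %.0f us, offload: %.0f us%n", cpuComputeUs, offloadUs);
    }
}
```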

99% of Nvidia's work is around having a single CPU in a system with up to eight GPUs and 99% of the work happening on the GPUs. When you do this, bandwidth over the interconnect becomes saturated because you have eight or even more GPUs using the same bus. This is entirely what NVLink is out to do: alleviate that bandwidth problem. It has zero goal, zero intention, of enabling an on-die GPU/CPU to act as one unit quickly and efficiently.

You mentioned white papers; I did not. Funny that I talk about Java: do you think mentioning Java as well somehow links NVLink and Java together to improve your argument? It doesn't. Java is a language; what you do with it doesn't have to be connected to what someone else does with it at all. You're inventing connections to support your argument where there are none.

You say it's a standard for markets where the incumbents have no interest in taking part... then say it will gain some traction in mobile. So mobile is a market where literally all the major players are involved... but the incumbents have no interest, okay.

Thankfully there is zero overlap between desktop and mobile software. I mean, people manipulate photos on mobile phones all the time, but you'd never want to play with photo software or GPU acceleration on the desktop.

I'm not saying anything bad anywhere about Nvidia. YOU are making it AMD vs Nvidia.

YOU decided to poo-poo HSA as unnecessary and pointless and attempted to imply that Nvidia have their own work to accomplish the same things. I am arguing against this utter nonsense you are coming out with. It's nothing to do with Nvidia. There is nothing wrong with NVLink; it's not bad technology. It's just not hugely interesting, not least because it's only aimed at IBM compute machines, which have never had and never will have any relevance to 99.99999% of users on this forum.

This is entirely about you attempting to deflect another AMD technology with absurd arguments on things you know very little about.


EDIT: I would add that, as a programmer who has studied computer science and still continues to learn more about programming, threading is pretty easily achieved; the incentive to do it is what's missing. It is expensive to have 30 extra people continually coding, bug-checking and updating programs for every platform separately to achieve the required threading and GPU acceleration. The financial incentive (man hours of coding are a cost) is simply not there to push forward with widespread GPU acceleration. When you remove all these barriers, give less specialised coders easier access to GPU acceleration and make that code platform independent, you dramatically reduce the cost of coding to take advantage of the available performance.
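
For what it's worth, here is the kind of thing I mean by "easily achieved": a toy sketch in plain Java where spreading an embarrassingly parallel loop across all the cores is a one-line change. The expensive part is doing the equivalent, plus the GPU-accelerated versions, separately for every platform you have to support.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class EasyThreading {
    public static void main(String[] args) {
        double[] data = new double[10_000_000];
        Arrays.setAll(data, i -> i * 0.5);

        // Adding .parallel() is all it takes to spread this across every core.
        double sum = IntStream.range(0, data.length)
                              .parallel()
                              .mapToDouble(i -> Math.sqrt(data[i]))
                              .sum();

        System.out.println("sum = " + sum);
    }
}
```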

If you can gain 5% performance but it requires 1,000 man hours of work on each of 5 platforms, you might not bother. When the performance gain becomes 45% (because of things like unified memory) and it takes 200 hours of work in total for all platforms, because the code has been made higher level, easier, and works on all HSA hardware, you change the cost/benefit ratio so significantly that people will start to code for it. That is the main reason behind HSA: to make GPU acceleration and general compute of any kind cheaper to enable, while also providing a larger performance benefit, so the industry has no reason not to move forwards.

Lastly, Pottsey and I agree on precious little; that we both agree here should suggest just how painfully wrong you are on this.
 
Jesus Christ. I make an offhand reference to what Nvidia are up to, post some white papers that show how it could be used to accomplish very similar functionality and where they are heading, and here is where we end up. No dissenting voices allowed.

Well done everyone, let's pat ourselves on the back.
 
Would it not be more logical for AMD, Intel and Nvidia to design entirely new processors that are good at running both CPU and GPU type tasks, and which can be run together in parallel like GPUs can?

This would give the user the choice to add as many processors as are needed, like you can with multi-GPU setups, to get the job done.

Sadly this would mean the end of Windows, too bad Microsoft.:D

Having said that I don't know much about the subject so I may have just written a load of rubbish.:)

Qualcomm, ARM, AMD and Intel, yes. As DM explained, for this to work the parallel and serial compute cores need to work seamlessly in one memory pool, so the parallel cores, serial cores and memory pool all need to be on the same die (Heterogeneous System Architecture), which is by definition an Accelerated Processing Unit; an example of that would be the Kaveri A10 7850/70K.

Intel's CPUs with iGPUs are also by definition APUs, though somewhat less advanced: the cores and memory architecture are less interconnected, more like AMD's first-gen Llano APUs.

The problem is Nvidia have no APUs and no HSA. NVLink is a fast interconnect between separate serial and parallel processors, and between parallel and parallel; it's not a singular architecture, therefore serial cannot access parallel and parallel cannot access serial simultaneously.
It's more akin to a low-latency GPGPU link; it cannot accelerate a serial workload.

The problem for Nvidia is that they have no heterogeneous architecture, and NVLink is not the answer to this; it has other benefits.
 
Jesus Christ. I make an offhand reference to what Nvidia are up to, post some white papers that show how it could be used to accomplish very similar functionality and where they are heading, and here is where we end up. No dissenting voices allowed.

Well done everyone, let's pat ourselves on the back.

It can't do what you are making it out to do; in fact it's completely different. Not sure why you keep on about this, lol.
 
Jesus Christ. I make an offhand reference to what Nvidia are up to, post some white papers that show how it could be used to accomplish very similar functionality and where they are heading, and here is where we end up. No dissenting voices allowed.

Well done everyone, let's pat ourselves on the back.
That's what I am trying to discuss with you. The Nvidia whitepapers that you have shown do not have very similar functionality, and head in the opposite direction from where HSA is heading. I politely asked you more than once to explain how the functionality is similar, and all you do is swear in response and complain that no dissenting voices are allowed.

HSA is an interest of mine and I am trying to have a discussion.
 
Yes it can. If people want to be wilfully ignorant because it fits their agenda then so be it.

Hmm.. That should be the tagline of this forum, it fits it perfectly.
 
That's what I am trying to discuss with you. The Nvidia whitepapers that you have shown do not have very similar functionality, and head in the opposite direction from where HSA is heading. I politely asked you more than once to explain how the functionality is similar, and all you do is swear in response and complain that no dissenting voices are allowed.

HSA is an interest of mine and I am trying to have a discussion.

I'm cool with that. What I've been saying is that it, combined with the other technologies described, brings very similar shared computing and resources to the table as HSA does. It probably doesn't bring this magical "look, all our disparate hardware just plugs together" malarkey that AMD is now touting as HSA; desktop and mobile chips have as yet failed to win any traction outside embedded designs. These vendors need to work together, as outside of low-power SoC designs none of them are making any significant revenue.
 
Yes it can. If people want to be wilfully ignorant because it fits their agenda then so be it.

Hmm.. That should be the tagline of this forum, it fits it perfectly.

OK then, let's start over. No name calling, no agenda, no making it personal. Just facts. Let's see if we can have an interesting, friendly discussion.

Do you agree HSA is very different from NVLink? If not, why not? Do you agree the main goal of HSA is far larger than, and different from, that of NVLink? If not, why not?

Can you see how NVLink can be taken as the opposite of what HSA aims for? Both aim to link/share memory, but NVLink is for Nvidia GPUs only, while HSA's goal is to link/share memory between lots of brands of CPU and GPU. Nvidia's solution means you have to recode apps; HSA means no recoding of apps between different hardware.
 
You haven't rebutted any of their points; instead you get somewhat personal.

Because their 'points' are nothing more than ignoring anything they don't want to hear and posting off-tangent walls of text. All that'll happen is I'll dig out a white paper or similar that refutes what's been posted and the cycle will continue, with the occasional sniping from the sides. It's an all too familiar and tiresome dance that I really don't want to bother with again.
 
OK then, let's start over. No name calling, no agenda, no making it personal. Just facts. Let's see if we can have an interesting, friendly discussion.

Do you agree HSA is very different from NVLink? If not, why not? Do you agree the main goal of HSA is far larger than, and different from, that of NVLink? If not, why not?

Can you see how NVLink can be taken as the opposite of what HSA aims for?

NVLink on its own, yes, I'll agree with that. But as I've said, it enables other technology that Nvidia already have, and are working on, to bring similar shared computing to the table. I even alluded to that in my original post, where all I said was that Nvidia and IBM were cooking something up built upon NVLink; then things went crazy.

What the HSA alliance are setting out to do is certainly different, but that's something else entirely. I'm talking about the technology itself.
 
I was curious about his point. What's the point of derailing the thread over NVLink when it's not the same thing as HSA, nor in competition with it?

I made a passing reference to what the other major players were up to. Others took that ball and kicked it through an angry old man's window.
 
Would it not be more logical for AMD, Intel and Nvidia to design entirely new processors that are good at running both CPU and GPU type tasks, and which can be run together in parallel like GPUs can?

This would give the user the choice to add as many processors as are needed, like you can with multi-GPU setups, to get the job done.

Sadly this would mean the end of Windows, too bad Microsoft.:D

Having said that I don't know much about the subject so I may have just written a load of rubbish.:)

I guess the tricky thing is that having modular systems like ours means things like PCIe being laggy, which prevents proper coding to suit what's there, as it won't offer much benefit.

It'll end up that PCs will be more like mobiles: you buy one and everything's integrated. That might mean speed, but then we lose the ability to pick and choose what combos we want and instead end up limited to whatever the companies supply us (for example, I doubt Intel would be up for having AMD GPU cores in their systems).

On the flip side it means moar power (or at least escaping that horrible feeling when the FPS counter is too damn low and your expensive computer is doing barely more than idling).
 
I guess the tricky thing is that having modular systems like ours means things like PCIe being laggy, which prevents proper coding to suit what's there, as it won't offer much benefit.

It'll end up that PCs will be more like mobiles: you buy one and everything's integrated. That might mean speed, but then we lose the ability to pick and choose what combos we want and instead end up limited to whatever the companies supply us (for example, I doubt Intel would be up for having AMD GPU cores in their systems).

On the flip side it means moar power (or at least escaping that horrible feeling when the FPS counter is too damn low and your expensive computer is doing barely more than idling).

Intel have perfectly capable GPU cores of their own; they don't need AMD's. All Intel need to do is update their architecture to be fully HSA-compliant.
 