Hmmmm but it has to sit in a DC for the connection. Doesn't water cooling leak sometimes? If there was a leak, and the water dripped out of my rack onto the rack below and destroyed all their servers, would they blame me? Things like this bother me.
I guess I could get a 1 Gbps connection to my house and do the water cooling thing, but what does that involve - Openreach digging up my road, probably.
I've noticed. And Thunderbolt is 40 Gbps I think, which is far too slow... PCIe (4.0) x16 is about 256 Gbps I think.
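To sanity-check those numbers, here's a rough back-of-envelope sketch, assuming PCIe 4.0's nominal 16 GT/s per lane with 128b/130b encoding and Thunderbolt's nominal 40 Gbps link rate (real-world throughput will be lower in both cases):

```python
# Back-of-envelope PCIe 4.0 x16 vs Thunderbolt bandwidth (nominal figures only).

PCIE4_GT_PER_LANE = 16.0          # PCIe 4.0: 16 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding
LANES = 16

pcie4_x16_gbps = PCIE4_GT_PER_LANE * ENCODING_EFFICIENCY * LANES
thunderbolt_gbps = 40.0           # Thunderbolt 3/4 nominal link rate

print(f"PCIe 4.0 x16 : ~{pcie4_x16_gbps:.0f} Gbps (~{pcie4_x16_gbps / 8:.1f} GB/s)")
print(f"Thunderbolt  : ~{thunderbolt_gbps:.0f} Gbps (~{thunderbolt_gbps / 8:.1f} GB/s)")
print(f"Ratio        : ~{pcie4_x16_gbps / thunderbolt_gbps:.0f}x")
```

That works out to roughly 252 Gbps (~31.5 GB/s) for PCIe 4.0 x16, so Thunderbolt is on the order of 6x slower even before any tunnelling overhead.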
Wow, you can build these things, that's a skill - maybe you can share pics of some stuff you've made - I know I can't.
Well yea, I've seen exactly what I want with "mining rigs", but they seem to connect the GPUs over USB cables carrying PCIe x1 - I'm guessing it's some peculiarity of "mining" (whatever that is exactly) that you don't need much bandwidth between the CPU and the GPU. That isn't the case for me; I need a full PCIe 4.0 x16 for each GPU.
If you can't already get 1 Gbps, don't bother. Digging up the road is expensive, etc.
What I build isn't really data centre oriented; it's one-off custom things. I aim at things like silently integrating hardware into furniture, or art piece builds - hence it not being worth it if you basically just need a box.
Yea, mining only needs x1 because it just loads a file into VRAM and then the workload is all local to the GPU. But it's only limited to x1 because of the motherboard PCIe layout and the risers used. You could get a frame, use x16 risers, and get the bandwidth you need. Technically any rack case with 4x GPU support could do this, with the correct risers.
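As a rough illustration of why x1 is tolerable for a one-off load but not for anything that keeps streaming data to the GPU, here's a small sketch; the 8 GB payload size and the nominal link rates are assumptions, not measurements:

```python
# Rough upload times for an 8 GB payload (e.g. a model's weights or a mining DAG)
# over different PCIe links. Nominal rates with 128b/130b encoding; real throughput
# is lower once protocol and driver overhead are included.

def link_gbps(gt_per_lane: float, lanes: int) -> float:
    """Effective link rate in Gbps for a PCIe 3.0+ link (128b/130b encoding)."""
    return gt_per_lane * (128 / 130) * lanes

payload_gbit = 8 * 8  # 8 GB expressed in gigabits

links = {"PCIe 3.0 x1": (8.0, 1), "PCIe 4.0 x16": (16.0, 16)}
for name, (gt, lanes) in links.items():
    gbps = link_gbps(gt, lanes)
    print(f"{name}: ~{gbps:.1f} Gbps -> ~{payload_gbit / gbps:.1f} s per 8 GB transfer")
```

On those nominal numbers an x1 link takes around 8 seconds for a one-off 8 GB upload, which a miner happily eats once at startup, while the x16 link does it in a fraction of a second - the difference that matters if your workload moves data between CPU and GPU constantly.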
Yes, watercooling can leak - it should only happen during initial setup or after a big move, but in a rack it's advisable to put it at the bottom etc., just in case. I was more imagining it in a case at a desk rather than in a rack.
So are you looking to have a box of your GPUs, hosted in a DC?
You already have a server located with them?
What is the server currently?
Do you work for this company and is it for personal or business use?
If you have nothing currently bought, aren't the new NVIDIA NVLink-based servers that were recently announced exactly this, or are they way beyond the spec required?
I'm still leaning towards doing it in one case being simpler than having a separate one for the GPUs, since you'd need a half-depth 4U case for the GPUs, plus risers. But if you went full depth you could fit them all in the same box.
The interconnects for a separate GPU box just aren't that fast AFAIK, without going into individual/specifically designed solutions at least, and if your server isn't designed for external GPUs, it's not going to have any "nice" way to connect up.
So it really depends on what you currently own, what box it's in, and what GPUs you need.