running a pipe splitter into cpu/gpu block

if that bucket is constantly being filled up as in a closed loop
You didn't say that, though...

Yes, the total flow rate of the water emptying from the bucket will be higher, but the flow rate through each individual hole (and into whatever it connects to) will be the same as the others... and yet in a closed loop it will all be slowed down to whatever the single inlet back into the pump can permit anyway, so unless you're using multiple inlets too it's a bit of a moot point.
 
yes, which is why i was asking what the flow rate of the blocks would be (i know they all differ), but im asking: can a cpu block flow more or less what the pump has to offer, in my case an EK DCP 4.0? and if so, what kind of flow rate are we looking at for cpu blocks in general?

i was going to use EK-Supreme LTX blocks on gpu and cpu.
 
fill the same bucket.
Attach your CPU and GPU etc. to it in series and see how long it takes to empty.
Attach your GPU and CPU to different holes in parallel, then the rest of your stuff in series.
Now see how long it takes to empty.

If it's the same, then it doesn't really matter; if one is quicker than the other, then you have your answer.
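The bucket experiment above can be roughed out on paper too. Here's a minimal sketch assuming each block behaves like a quadratic restriction (dP = k * Q^2) and the pump acts as a fixed pressure source; the k values are made up for illustration, not real block specs.

```python
# Series vs parallel plumbing, modelled as quadratic restrictions dP = k * Q^2.
# k values are arbitrary illustration numbers, not measured block data.

def flow(dp, k):
    """Flow rate through a restriction k at pressure drop dp."""
    return (dp / k) ** 0.5

def k_series(*ks):
    """Restrictions in series: pressure drops add at the same flow."""
    return sum(ks)

def k_parallel(*ks):
    """Equivalent k for parallel branches under the dP = k*Q^2 model."""
    return 1.0 / sum(1.0 / k ** 0.5 for k in ks) ** 2

dp = 1.0                  # arbitrary pump pressure, same for both tests
k_cpu, k_gpu = 4.0, 1.0   # assumed: CPU block more restrictive than GPU

q_series = flow(dp, k_series(k_cpu, k_gpu))
q_split = flow(dp, k_parallel(k_cpu, k_gpu))

print(f"series total flow:   {q_series:.3f}")
print(f"parallel total flow: {q_split:.3f}")  # higher total, bucket empties faster
```

Total flow through the parallel arrangement comes out higher, which matches the "bucket empties faster" outcome, but says nothing yet about how that flow divides between the two branches.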

There are a few review sites which post flow rates of various blocks.
 
yep, plan to do something like that when my stuff arrives; just awaiting connectors and tubing, then i can have a go.

got a 50 gallon drum with a tap on one end and a screw lid on top, so i will do some testing on tubing with and without blocks.

are the flow meters expensive?
 
If you want the flow to diverge but keep the same flow velocity in each branch, then you have to halve the cross-sectional area of the 2 pipes supplying/exiting the blocks.
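The relation behind that is just continuity (Q = velocity x area); a quick sketch with arbitrary numbers checks it:

```python
# Continuity check: volumetric flow Q = velocity * cross-sectional area.
# If the flow splits evenly into two branches and each branch has half the
# area of the main pipe, the velocity in each branch matches the main pipe.
# All numbers are arbitrary illustration values.

A_main = 2.0e-4           # main pipe cross-section, m^2
v_main = 1.0              # flow velocity in main pipe, m/s
Q_main = v_main * A_main  # total volumetric flow, m^3/s

A_branch = A_main / 2     # each of the 2 branches gets half the area
Q_branch = Q_main / 2     # ...and carries half the flow
v_branch = Q_branch / A_branch

print(v_branch)  # 1.0 -> same velocity as the main pipe
```

Note this only holds if the flow actually splits evenly, which it won't if the two branches have different restrictions.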

I dunno if there are connectors to support that. I don't even know what the average cross-sectional area of a block's internal route is. Are they standard?
 
I see a lot of people running their GPUs in parallel and then in series with their CPU. I think at the end of the day, due to the whole "not much temperature difference in a closed loop" thing, it mostly boils down to aesthetics, ease of installation, and reducing junctions to reduce the chance of leaking and the impact on flow rate.
 

And sacrificing thermal performance for aesthetics!
 
Right... The water temp within the loop will be very similar throughout the loop, because the water is travelling so fast that it doesn't dwell within a block or rad long enough to change temperature IN COMPARISON to the water immediately after the block or rad... However, that doesn't mean that the water within the loop isn't being heated or cooled by the rads or blocks.

When your PC is off, the water will equalise temp with the room it's in. As soon as you turn on the PC, the water starts taking on heat from the blocks; it also transfers that heat to the rads, which in turn shed that heat to the air in the room. There has to be a difference in temp between the water and the air for the heat to transfer.

The reason we don't all run a single 120mm rad is because a single 120mm can only shed about 100-200W of heat before the water temp starts shooting up past where you would want it (its ability to shift heat actually gets better the hotter the water is, i.e. the bigger the difference between air and water)... The water in the loop would still all be the same temp though, just hotter than you would like.

If it helps to visualise it, the water just after the gpu will be, say, 0.5C warmer than the water just after the rad... But the water in the loop will be 5-10C warmer than the air in the room... That is where the heat from the PC is going: into the water. That heat goes round and round the loop, and some of it leaks back out of the radiator to the air. The more rad space you have, the more heat is shed to air and the cooler the water is in comparison with the air/room temp... If you have enough rad space then the water stops heating up past a certain point, hence equilibrium... If you have a cpu and a gpu and just a 120mm, then the water keeps getting warmer and warmer until either the computer crashes or a tube/fitting/pump fails.
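Both of those deltas can be ballparked from the basic heat balance (temperature rise = power / (mass flow x specific heat), and equilibrium water-over-air delta = power / radiator dissipation per degree). The wattage, flow rate, and radiator rating below are all assumed illustration values:

```python
# Ballpark the two temperature deltas described above, assuming:
#  - 300 W of heat dumped into the coolant by CPU + GPU
#  - ~1 GPM coolant flow (~0.063 kg/s of water)
#  - radiator sheds ~40 W per degree C of water-to-air difference (made up)

heat_w = 300.0
mass_flow = 0.063   # kg/s, roughly 1 US gallon per minute of water
cp_water = 4186.0   # specific heat of water, J/(kg*K)

# Temperature rise of the water in one pass through the blocks:
dt_per_pass = heat_w / (mass_flow * cp_water)

# Equilibrium water-over-air delta: rad dissipation matches the heat load
rad_w_per_c = 40.0
dt_water_air = heat_w / rad_w_per_c

print(f"rise per pass through blocks: {dt_per_pass:.2f} C")  # ~1.1 C
print(f"loop runs above ambient by:   {dt_water_air:.1f} C")  # 7.5 C
```

The per-pass rise is tiny next to the loop-over-ambient delta, which is exactly the "water temp is nearly uniform around the loop" point being made.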

Basically you wouldn't want 60C water in your loop

GPU blocks are less flow-dependent than CPU blocks, as they have a much larger surface area. Parallel lowers restriction, so putting gpu blocks in parallel helps flow rate in the rest of the loop, though again, with gpu blocks being low restriction anyway, the gain is pretty minimal... You absolutely don't want to put a gpu and cpu in parallel, as the cpu block, being much higher restriction, will really suffer for flow rate if you try.

As mentioned above, you would have to increase the restriction of the GPU branch to balance the flow, which would also increase restriction in the whole loop, so again you'll be worse off than if you just ran them in series. Parallel also roughly halves the flow rate through each block; not a problem for a gpu block, but not ideal for a cpu block that relies on fast flow for efficiency.

Watercooled GPUs usually sit well below 50C, so your CPU is what you want to concentrate on... Putting your CPU in parallel with anything else will roughly halve the flow rate it gets, which is never going to be good.
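The starvation effect is easy to see with the same made-up restriction model: both parallel branches see the same pressure drop, so flow favours the less restrictive branch.

```python
# Why a CPU block in parallel with a GPU block gets starved: both branches
# see the same pressure drop, so the less restrictive branch takes more
# flow. dP = k * Q^2 model; k values are made-up illustration numbers.

k_cpu = 4.0   # assumed: restrictive microfin CPU block
k_gpu = 1.0   # assumed: open full-cover GPU block
dp = 1.0      # same pressure drop across both parallel branches

q_cpu = (dp / k_cpu) ** 0.5   # 0.5
q_gpu = (dp / k_gpu) ** 0.5   # 1.0

share = q_cpu / (q_cpu + q_gpu)
print(f"CPU branch gets only {share:.0%} of the total flow")  # 33%
```

With these (arbitrary) numbers the restrictive CPU block sees only a third of the total flow, which is worse than the even 50/50 split you'd get with identical branches.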
 
We used to do loops this way 10-15 years ago, but it was due to low pump specs, not thermals; there's zero point nowadays.

Here's a split system that a user on this forum used to run:

[image: Dcp_0175.jpg]


It splits the tubing after the CPU, running one line through the GPU and one through the motherboard before recombining ahead of the reservoir. This type of setup isn't needed anymore due to improvements in block and pump designs, and will give no benefit, just extra effort/cost and a higher chance of failure.
 