Creating a 3D render farm

Hi all,
hope someone can help with this.
I'm looking to create a render farm with our existing hardware. Here's the dealio:
we have a 6-core workstation (on a gigabit network) that runs Archicad. Our technician wants to render, but using a farm to speed things up.
Now, we also have 23 identical Dell desktops running Windows 7 Pro x64 with i3 CPUs and 4GB of RAM.
We were thinking that when most users bugger off for the night, these Dells could form the render farm and eat their way through any work the 6-core workstation hands out to them.
Reading up, it seems you can render using either CPU or GPU cycles. Seeing as these Dells only have onboard GPUs, the cheapest option is to use CPU power.

I have no idea about 3D/CAD design software; I'm just the IT monkey charged with sorting this out.
The network is put together with Cisco managed switches, so I could always look at trunking up the farm PCs to give maximum throughput, if indeed it needs it.

Currently we have spent £27k in 6 months on sending our renders to a 3rd party (averaging approx £320 per render). That money could pay for our own in-house renderer to do all the work, or at least pay to train our existing guy up to a top standard so we can get on with it ourselves.

Can anyone advise please?
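For a rough sense of the sums: £27,000 at roughly £320 a render is about 84 renders in six months, or around 14 a month. A quick back-of-an-envelope script, if anyone wants to check; the only inputs are the averages quoted above:

```python
# Back-of-an-envelope sums from the figures above; both rates are the quoted averages.
spent = 27_000        # £ paid to the 3rd party over 6 months
per_render = 320      # £ average cost per outsourced render
renders = spent / per_render
print(f"~{renders:.0f} renders in 6 months, ~{renders / 6:.0f} per month")
```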
 
What specific processors?

What you can do is install ESXi to a memory stick and then boot each machine from it, or even PXE boot them (easier to deploy; use WOL to wake them).

Then you can cluster them all together and install an OS that uses all of their spec. You will need some fast SAN to keep up, or a lot of caching space.
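If you go the WOL route, waking the Dells for the night shift is easy to script. A minimal sketch (the MAC addresses are placeholders for the real ones off your asset list):

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MACs; swap in the Dells' real ones.
for mac in ["00:11:22:33:44:55", "00:11:22:33:44:56"]:
    wake(mac)
```

Schedule that from the workstation (or any always-on box) just before the render jobs kick off.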
 
The Dells are i3-2100s with 4GB of RAM (to be more specific than my OP).
All data, although not on a SAN, is on an ML370 with 10k SAS disks in a RAID 5 array.
I can create a 4Gbit trunk on the switches and dedicate it to the render nodes (I think).
These Dells are in use daily by our regular office staff, so the farm would need to utilise what is already on there. Only at night would they really be used in the render farm, or possibly if some users are out during the day, but then that would likely create network overhead for everyone else.
 
I'm not sure of any proprietary applications that can accomplish this, but to utilise full resources, ESX is definitely the way to go. It won't harm any PC individually and will be the easiest to scale.
 
What rendering software are you using? And what OS are the desktops running? I'm assuming Windows on the desktops, but Linux gives you even more options.

If you're using RenderMan then there is a version of RenderMan Pro Server that runs on Windows. That would remove the need for another OS, so it would simply be a case of installing the Pro Server software on the workstations.

If you do need a different OS for the render farm software, then avoid VMware and any other kind of virtualisation. HPC and virtualisation aren't a great combination, as the hypervisor will always suck up some of the horsepower that's better used for compute.
 
If you do need a different OS for the render farm software, then avoid VMware and any other kind of virtualisation. HPC and virtualisation aren't a great combination, as the hypervisor will always suck up some of the horsepower that's better used for compute.

I'm sorry but what? ESXi uses more resources than Windows 7 full desktop?
 
I'm sorry but what? ESXi uses more resources than Windows 7 full desktop?

It does once you put an OS on there to do some work. Not sure I know of any rendering software that runs natively on ESX.

I've worked on some pretty intense High Performance Compute clusters including render farms and none of them have used virtualisation on the compute nodes.
 
It does once you put an OS on there to do some work. Not sure I know of any rendering software that runs natively on ESX.

I've worked on some pretty intense High Performance Compute clusters including render farms and none of them have used virtualisation on the compute nodes.

Why would you license Windows for 24 desktops in a high performance compute cluster?

You wouldn't put an individual operating system on each; you would put one OS on the datastore (a SAN, for example) and it would leverage the power of each ESXi node completely and entirely.

One OS, not 24 OSes and layers and layers of rubbish.

Edit: Every single high performance compute cluster uses virtualisation; otherwise you're just delegating tasks, and that isn't a cluster.
 
Why would you license Windows for 24 desktops in a high performance compute cluster?

You wouldn't put an individual operating system on each; you would put one OS on the datastore (a SAN, for example) and it would leverage the power of each ESXi node completely and entirely.

One OS, not 24 OSes and layers and layers of rubbish.

Edit: Every single high performance compute cluster uses virtualisation; otherwise you're just delegating tasks, and that isn't a cluster.

This is not a dedicated HPC cluster, is it? It's using spare capacity on PCs that are used during the day.

One OS across multiple ESXi nodes? When did VMware launch that? The maximum size of a VM is limited to the resources of the host it's running on.

If you want a single OS instance across multiple machines, then you need to look at software SMP solutions like ScaleMP.
 
Edit: Every single high performance compute cluster uses virtualisation; otherwise you're just delegating tasks, and that isn't a cluster.

I meant virtualisation using a hypervisor like VMware, Xen, Hyper-V, VirtualBox, etc.

And yes, most HPC clusters do split up a job into smaller tasks and send them to individual nodes. That's what the scheduler is for.
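To illustrate the splitting part, the simplest scheduler just carves a frame range into one contiguous chunk per node. A toy sketch (the frame counts are made up for the example):

```python
def split_frames(first: int, last: int, nodes: int) -> list[range]:
    """Carve an inclusive frame range into one contiguous chunk per node."""
    total = last - first + 1
    size, extra = divmod(total, nodes)
    chunks, start = [], first
    for i in range(nodes):
        # The first `extra` nodes take one spare frame each.
        end = start + size + (1 if i < extra else 0)
        chunks.append(range(start, end))
        start = end
    return chunks

# e.g. 240 frames spread across the 23 Dells
for i, chunk in enumerate(split_frames(1, 240, 23)):
    print(f"node {i}: frames {chunk.start}-{chunk.stop - 1}")
```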
 
This is not a dedicated HPC cluster, is it? It's using spare capacity on PCs that are used during the day.

One OS across multiple ESXi nodes? When did VMware launch that? The maximum size of a VM is limited to the resources of the host it's running on.

If you want a single OS instance across multiple machines, then you need to look at software SMP solutions like ScaleMP.

VMware HA; configure a resource pool of physicals.
 
OK, post #5 onwards shot straight over my head like an arrow. :rolleyes:

As my OP says, the potential nodes run Win7 Pro x64 and are used daily by office staff, but only bluebox seems to have borne that in mind ;).

Our renderer uses Archicad 13, and I would assume (or hope) that it carries a plug-in to allow PCs to be set up as render farm nodes.

I have to look at the cheapest possible option first, then maybe turn towards a dedicated farm setup if need be.
 
OK, post #5 onwards shot straight over my head like an arrow. :rolleyes:

As my OP says, the potential nodes run Win7 Pro x64 and are used daily by office staff, but only bluebox seems to have borne that in mind ;).

Our renderer uses Archicad 13, and I would assume (or hope) that it carries a plug-in to allow PCs to be set up as render farm nodes.

From a quick Google, Archicad actually uses LightWorks to do the rendering. There should be a way to set the PCs up as a LightWorks-compatible render farm, but it won't be simple and possibly won't be cheap.

My suggestion is to talk to whoever supplied your Archicad; they will probably be able to help you get it set up. The biggest problem will probably be cost rather than anything technical.
 
You would need a renderer that can be distributed: V-Ray, Cinema 4D, etc. You install a client on each host, and as long as the server (your Archicad/render host, for example) can see the clients on the network, it will dish out each frame/render to an available host/client.
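To show that dish-out pattern in miniature, here is a toy sketch with threads standing in for the render clients; the `time.sleep` is a placeholder for the actual render call, and none of this is any vendor's real API:

```python
import queue
import threading
import time

# The server's job queue: one entry per frame to render.
frames = queue.Queue()
for f in range(1, 25):
    frames.put(f)

def render_client(name: str) -> None:
    # Each client pulls the next available frame until the queue is empty.
    while True:
        try:
            frame = frames.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01)  # stand-in for the real render work
        print(f"{name} rendered frame {frame}")

clients = [threading.Thread(target=render_client, args=(f"dell-{i:02d}",))
           for i in range(4)]
for c in clients:
    c.start()
for c in clients:
    c.join()
```

The nice property of a pull queue like this is that faster (or less busy) nodes naturally take on more frames, which suits a farm whose nodes have mixed daytime availability.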
 
Yes, but no single VM can use the resources of more than one physical host, so to actually use all the resources of 24 PCs you need to have 24 VMs, and therefore 24 OSes, plus the additional overhead of the hypervisor.

http://communities.vmware.com/thread/301685

Ah, worth knowing! I assumed it was configured for performance, with availability secondary to that; it seems it's simply just availability.

Thanks.
 