Render farm setup (VMware vs dedicated standalone)

Hello all,

Been a while. Right, I have a bit of a situation at work.
I am the sole CGI Artist for a largish furniture manufacturing company. I use 3ds Max, and render outputs are frequent, which of course mandates render farm usage. The question is not whether to have a farm; that's a given.

The issue is how to make the case to my IT manager for standalone render node machines (with GPUs). I have a reasonable understanding of networking and computing at large, and my argument is for the provision of dedicated render node hardware.

Currently the nodes (there are two, and I can render on my workstation too) are hosted as VMware guests on the company mainframe. Problems with the setup are frequent, and although I am assured the hardware is 'up to the task', I am struggling to believe this is an ideal setup (cost effective or not), as the nodes are fundamentally unreliable and unstable.

Is there a standard practice illustrated anywhere which clearly lists the pros and cons of each setup (VMware vs dedicated hardware) and how the setups compare?

Alternatively, could anyone possibly offer up a reasonably concise synopsis?
 
> Currently the nodes (there are two, and I can render on my workstation too) are hosted as VMware guests on the company mainframe. Problems with the setup are frequent...
Do you really mean an actual mainframe? What mainframe is this? Or do you just mean a server? What problems are you experiencing?

I have no experience of 3D Studio. But I'd be considering a dedicated Threadripper or Ryzen 9 3950X system, or looking at 3ds Max render farms in another company's cloud, if stability on the current infrastructure is an issue.

But as to the question of dedicated hardware vs VM: my company (a large bank) has moved almost everything to VMs. They offer several advantages but a few disadvantages:

Advantages
* Shared infrastructure, so many applications can share the same physical machines to reduce costs.
* Ability to dynamically adjust resources as the needs of each workload change through the day or the week.
* More resilience. If the physical server goes down, you can bring that guest up on another physical machine easily.
* Easier backups. If there is a critical failure, you can just restore the guest from the last backup.

Disadvantages
* Harder to stop another guest's workload affecting yours. Guests are isolated, but 'noisy neighbour' issues can still occur.
* You don't have all of the server's resources to yourself, so there can be some loss of performance and increased latency.

If the workload is critical to the firm, there can be an argument that a physical machine is needed. But increasingly a VM offers almost everything a physical machine does, with the added ability to share resources, change them dynamically, and improve resilience.
 
Hi Hades, thanks for taking the time to respond.

The VMs are hosted on one of a handful of drives; I think there are four. These four form the company mainframe system for data storage and active data entry and retrieval, with approximately 25 access points. The rendering I set up is fairly high end, with large texture files (GPU processed) and complex models (more CPU processed), so in my experience the render nodes rely on both GPU and CPU power. The CPU power is plentiful; however, there are no graphics cards or GPU-based architecture as far as I am aware (the quick check sketched below should confirm that either way). A dedicated GPU rendering machine would have that capability via actual cards physically installed. If I don't sound concrete about all this, it's because I am not that proficient at determining the nature of the host architecture.
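For what it's worth, here's a rough Python check, assuming an NVIDIA card and nvidia-smi on the PATH; if nothing comes back on the VM nodes, they have no usable GPU for rendering:

```
import subprocess

def list_gpus():
    """Return one line per visible NVIDIA GPU, or an empty list."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no driver/tool present, or the query failed
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

gpus = list_gpus()
print(gpus if gpus else "No NVIDIA GPU visible to this node")
```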

As far as general processing goes, I can see the logic in VMs, but for high-end graphics I'm really not sold that this works. Reliability issues are hard to diagnose, but historically the reliability of the VM setups has not been good; interference from maintenance routines and mainframe traffic may be affecting rendering performance. Fundamentally, I consider the VM setup both prone to interference and unproven as a valid, professional GPU render node setup, and it is this that I really seek guidance on. To be honest, my IT man is good, but he's not a 3D modelling/rendering expert by any stretch. He insists the setup is good enough and reliable, but I am struggling to agree. Something like the logging sketch below is what I have in mind for turning 'unreliable' into evidence.
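A minimal sketch of that logging, assuming the nodes answer on some TCP port (the hostnames and port below are placeholders, not real values):

```
import datetime
import socket
import time

# Hypothetical hostnames and port; substitute whatever the real nodes answer on.
NODES = {"rendernode01": 3333, "rendernode02": 3333}

def is_up(host, port, timeout=3):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    for host, port in NODES.items():
        print(f"{stamp},{host},{'up' if is_up(host, port) else 'DOWN'}", flush=True)
    time.sleep(60)  # one probe per minute is plenty for a trend
```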
 
You would need to show throughput of non-GPU vs GPU rendering. Your software suppliers should be able to provide that kind of benchmark data, or you can time it yourself with a crude harness like the one below.
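A minimal sketch in Python, assuming a command-line renderer that can be switched between CPU and GPU modes (the 3dsmaxcmd invocation and its flags are placeholders, not switches I'm vouching for):

```
import subprocess
import time

# Placeholder commands; substitute your real renderer invocation.
# These flags are illustrative, not documented 3dsmaxcmd switches.
JOBS = {
    "cpu": ["3dsmaxcmd.exe", "scene.max", "-outputName:out_cpu.png"],
    "gpu": ["3dsmaxcmd.exe", "scene.max", "-outputName:out_gpu.png"],
}

for mode, cmd in JOBS.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # render one frame, wait for it to finish
    elapsed = time.perf_counter() - start
    print(f"{mode}: {elapsed:.1f}s/frame -> {3600 / elapsed:.1f} frames/hour")
```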

Then you would need to get GPUs compatible with the current mainframe/host architecture. The GPUs would need to be passed through to the VM nodes and the appropriate drivers loaded at the VM level.

All of which is a lot of headache for your IT team for what is currently a single use case. How about renting a cloud-based GPU farm and using that to build a business case?
 
So you basically don't have a GPU and you're a modeller? That seems weird and, as you implied, counterproductive. You should be looking into something like an NVIDIA Quadro in a dedicated PC. But you have to justify the cost, is what you're saying. How slow are render times? Can you record your render times on your desktop, then argue that you would be xyz times faster on xyz hardware, making you xyz more productive? Something like the sketch below would do for the record-keeping.
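A minimal sketch of that record-keeping and the speed-up arithmetic in Python (the file name, scene name and the 4x figure are illustrative assumptions, not measurements):

```
import csv
import datetime

LOG = "render_times.csv"  # hypothetical log file, one row per render

def log_render(scene, seconds):
    """Append one render's wall time to the log."""
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), scene, seconds])

def projected_saving(assumed_speedup=4.0):
    """Total logged hours, and hours saved if renders were N times faster."""
    with open(LOG) as f:
        total = sum(float(row[2]) for row in csv.reader(f))
    saved = total * (1 - 1 / assumed_speedup)
    print(f"Logged render time: {total / 3600:.1f}h; "
          f"at {assumed_speedup:g}x you'd get back {saved / 3600:.1f}h")

log_render("lounge_suite_hero.max", 5400)  # example: a 90-minute render
projected_saving()  # the 4x speedup is an illustrative assumption
```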

Office politics seem so weird and bureaucratic at times that it's amazing these places even survive and people have jobs at all.

As somebody said, cloud options are all the rage just now; it's essentially what you're running internally already.
 