Hello all,
Been a while. Right, I have a bit of a situation at work.
I am the sole CGI artist for a largish furniture manufacturing company. I use 3ds Max, and render outputs are frequent, which of course mandates render farm usage. The question is not whether to have a farm; that's a given.
The issue I seem to be having is how to demonstrate to my IT manager the value of standalone render node machines (GPUs). I have a reasonable understanding of networking and computing at large, I think, and my argument is for the provision of dedicated render node hardware.
Currently the nodes (there are two, and I can render on my workstation too) are hosted as VMware virtual machines on the company's central servers. Problems with the setup are frequent, and although I am assured the hardware is 'up to the task', I am struggling to believe this is an ideal arrangement (cost effective or not), as the nodes are fundamentally unreliable and unstable.
Is there a standard practice illustrated anywhere which clearly lists the pros and cons of each setup (VMware vs dedicated hardware) and how the setups compare?
Alternatively, could anyone possibly offer a reasonably concise synopsis?