Advice on speccing an AI Workstation to build around a 5090, when I get to the front of the queue lol.

Hey - I'm looking to finally get a local system. Up to now I've used cloud servers by the minute, so I'll need a whole AI workstation building, and will need help speccing it up. I currently do all my imaging and video workflows on an L40S (48 GB VRAM, 64 GB RAM), so I'm thinking 5090 + 128 GB RAM; no gaming needed. It will be running local Ollama / other LLMs and ComfyUI workflows, probably Unix-based for ease of LoRA training for Hunyuan Video etc. Ideally I want to keep the total cost under £5k. I already have a screen I use with my MacBook, and was thinking about 4 TB of fast storage. I have 900 Mb/s home fibre broadband, so something that can take advantage of that over Ethernet. As power efficient as possible, too, as I think it's going to cost a fortune to run in the UK at 30p/kWh - but any experience on that gratefully received. Thank you. I hope I haven't opened a can of worms...
 
Thank you - I didn't realise there was a choice of 5090s. I want 32 GB VRAM - probably the fastest version available, as I'd like to get five years' use out of it, though the way AI progresses that's probably not realistic. Is the high power use only likely when really pushing it, or also when it's just left running? I will probably use it remotely in the house most of the time, via my iPad Pro.
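For the running-cost worry, a rough estimate is easy to do yourself. A minimal sketch, assuming ~600 W whole-system draw under full AI load (the 5090's reported board power is 575 W, so this rounds up for CPU and the rest) and ~80 W at idle - both figures are assumptions, not measurements of any particular build:

```python
PRICE_PER_KWH = 0.30  # UK electricity price in £/kWh, from the post above

def monthly_cost(load_watts, hours_per_day, days=30, price=PRICE_PER_KWH):
    """Electricity cost in £ for running at load_watts for hours_per_day."""
    kwh = load_watts / 1000 * hours_per_day * days
    return kwh * price

# e.g. 4 h/day of heavy training plus 20 h/day idle (assumed usage pattern)
heavy = monthly_cost(600, 4)   # ~£21.60/month
idle = monthly_cost(80, 20)    # ~£14.40/month
print(f"heavy: £{heavy:.2f}/month, idle: £{idle:.2f}/month")
```

So under those assumptions even fairly heavy hobby use is tens of pounds a month, not hundreds - and the idle draw matters if the box is left on 24/7, which argues for suspending it when you're not using it remotely.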
 
I saw the MSRP of the 5090 seems to be $1,999 / £1,999 (already a con lol), but I see 4090s are already way more than that - is there some kind of demand-based price hiking from all suppliers of these?
 
Seems like there is some new PC ecosystem being recommended now for the 5090 in AI:
NVIDIA NIM microservices and AI Blueprints will be available starting in February with initial hardware support for GeForce RTX 50 Series, GeForce RTX 4090 and 4080, and NVIDIA RTX 6000 and 5000 professional GPUs. Additional GPUs will be supported in the future.

NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcon, Origin PC, PCS and Scan.

Learn more about how NIM microservices, AI Blueprints and NIM-ready RTX AI PCs are accelerating generative AI https://nvidianews.nvidia.com/news/nvidia-launches-ai-foundation-models-for-rtx-ai-pcs
 
Thanks for the advice all - I had a chat with NVIDIA, and I think I'll leave everything to settle for a while and see what options emerge for two-card systems once everyone figures out the power issues etc.
 
I was thinking more along the lines of getting a single 5090 now and adding a second card later - the systems I use have multi-GPU addressing, or those that don't will by the time the cards are out. I think the cold hard reality is that, as an obsessed hobby AI'er, I'm still more cost-effective on cloud L40S servers at $1 an hour for a while longer.
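That "still cheaper in the cloud" claim can be sanity-checked with a break-even calculation. A sketch, where the exchange rate and the per-hour electricity figure are assumptions for illustration (electricity taken as ~600 W at the 30p/kWh mentioned earlier):

```python
BUILD_COST_GBP = 5000        # target budget from the first post
CLOUD_RATE_USD = 1.00        # L40S cloud rate, $/hour
USD_PER_GBP = 1.25           # assumed exchange rate
ELEC_GBP_PER_HOUR = 0.18     # assumed: ~600 W under load at £0.30/kWh

cloud_gbp_per_hour = CLOUD_RATE_USD / USD_PER_GBP            # ~£0.80/h
net_saving_per_hour = cloud_gbp_per_hour - ELEC_GBP_PER_HOUR  # ~£0.62/h

breakeven_hours = BUILD_COST_GBP / net_saving_per_hour
print(f"local build breaks even after ~{breakeven_hours:,.0f} GPU-hours")
```

Under those assumptions the build pays for itself after roughly 8,000 GPU-hours of work that would otherwise have run in the cloud - several years of hobby use - which is consistent with the conclusion above, ignoring resale value and the convenience of having the hardware on tap.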
 