Anybody bought any HP EPYC Servers recently?

Wondering about current lead times as I'm trying to plan out a bit of a project and just realised I need a bunch more servers. Or a bunch more chips to go in the current servers. Whatever happens, I don't have enough servers and more will be required, but I'll need them quickly as it looks like this project will need to move.

Also, has anybody had a play with the 7H12 yet? I'm seriously looking at it as an option for VDI. It's either some sort of VDI setup or 100 or so HP 705 G5 minis (Ryzen 3400GE) racked up in a colo, which although an option doesn't sound that smart. I never thought I would see the day where I'm looking to colo an entire estate, but hey, looks like the day is here!
 
Hmm cheers lads, some bits to consider. I am actually considering just buying CPUs. I have 3 spare EPYC sockets and 1.5 spare chassis, so that's an option. The problem I have is a two-week timeline to migrate everything, so it's tight. Right now I am going to lift and shift 100 ProDesk 400 G5/G6 SFF machines that were people's desktops and put them in racks, as I have managed to negotiate space with my DC and have no time to work it out. That sorts me in the short term, but I need a more permanent solution moving forward; although my DC have hooked me up, I have made a commitment to do something about the racks and racks full of desktops, and fairly quickly.

I have been looking closely at VMware Horizon, so I am thinking that might be the direction to go in.
 
HP had $1.5 billion worth of back orders just a few weeks back, so I don't see anything moving quickly short term. Adding some CPUs may be a quicker solution for you.

HPE will bend over backwards for us (they are a client and we are their client), so I would expect them to be able to turn around kit. I think for now I am just getting it all in and deciding after. The current thinking is to take a 7452 out of EPYC 3 and shove it in EPYC 1, then buy a couple of 64-core chips and VDI the world on EPYC 3! Or something like that.
 
Good position to be in then. Happy days :)

Yeah, generally HPE treat us very well, well enough that they gave me 3 x pre-production EPYC servers around a month before you could buy them. So I had lots of time to play, and yes, I'm one of those guys that needed to upgrade the Naples BIOS to run Rome. To be honest the plan has now somewhat accelerated and I am making fairly large infrastructure changes every day atm. I have been working like a madman provisioning new phone systems, working out how we serve our userbase better than RDS to their machine in the office, etc. I imagine the landscape will be pretty different for me in 3 to 6 months. I will keep you all updated with where I end up anyway.
 
But they do rackmount shelves for them and everything! :)



(I may or may not have 5x 800 mini's in production for remote access) :D

They do do shelves for them, which is why they are on the cards in the first place :) FWIW we are going to be putting 100 minis in a rack while we don't have office space, and they will eventually go back to an office. We are also looking at the VDI solutions and it is looking like we will implement Horizon.
 
I know, although I fabricated my own out of the desk stands they come with, an old rack shelf and some cable ties :D

Proper ghetto. I like it! The DC won't put them in without some sort of shelves though, as at first I was like, can't you just stack 100 ProDesk 400 SFF G5/G6s in a couple of racks? They wouldn't, because of reasons, so pulling the disks and throwing them into the minis is the next best option while we build something more future-proof and robust. To be honest we have had a very good WFH experience so far running RDS to the desktops in the office, so for now replicating that as closely as I can while thinking forward to the future and what might be the new working standards seems sensible. If I went full VDI and we have an office in a few months, then I would be in some hot water with no desktops or machines to deploy back to the office. I need to remain flexible while offering a robust solution in a small time frame. It's happening on the 27th whether I am ready or not.
 
It's not even that ghetto. I am going to need a bit more density than that though; 100 across one or two racks is hopefully doable. :D After that I can then properly think about what comes next. I have been so close to pulling the trigger on new EPYC CPUs a couple of times this week, but I know the right thing to do is move, consolidate and then progress. One thing at a time!
 
I shuddered :D When you take it as it is, it's quite interesting. I just know that I would hate to manage it as a 'VDI'/RDS solution, because I am so used to having something like Horizon/Citrix to manage my estate rather than a subset of potentially hundreds of mini PCs with no single pane of glass. But to each their own; we don't all have the budget/capabilities/buy-in for other solutions. 5 of them isn't too bad, I wouldn't take up alcoholism as a career with 5.

I guess that all boils down to how you were set up and where you are headed. The ideal solution is of course VDI for everything, but with all the unknowns in what I am doing at the moment I need to keep flexibility and have an option to put it all back fairly quickly, which means I need to keep the workstations in the estate, at least in the short term. As things progress that may change relatively quickly.

Effectively I am going with the multi-pronged approach: try loads of stuff and find something middling that works :)
 
Well, it's been intense, interesting and slightly mental these last few weeks, but it is done. Sadly I had to ride solo, which meant hiring a van and spending 48 hours solid at the DC with about 2 hours' sleep, but it got done. In the process, because of BT's epic failures, I am now running HA pairs of Netgate XG-7100s rather than the 200Es I had before, and have also had to remove the inept BT engineering team from all infrastructure; the firewalls were a managed BT service, but because they were so shocking I had to take some drastic last-minute action. I also reworked the network design and a load of other stuff, which I can go into in full if anybody wants to hear. Just two weeks is all I had in planning (I mean I knew it was a possibility earlier, but I never thought it would actually happen) and it was delivered in a single weekend. I'm pretty sure it cost me some hair; some of you may be horrified by what you are about to see:

This row, bar a couple of racks, is mine; the APC rack in the middle I literally wheeled into the DC :D I also have some rack space elsewhere in this DC a couple of rows back. Anyway, I was sending updates to the continuity team at work, so some of the pics have me in them, and I'll include a pic I took at the end so you can see the toll the weekend took on a man:

[Photos of the rack row and the end-of-weekend pic]

Next mission is what to do about the racks and racks of desktops. I also need to clean up the mess of cables at some point.
 
Nice work Vince, you'll have to let us know how they go in terms of performance. As for the racks and racks of desktops, that's a hell of a homelab for you now haha.

It's interesting, but there are some things I have done to make life easy: every desktop will try to turn itself on if it is off between 3am and 6am, and it boots with a random delay so the whole estate doesn't try to draw all the power at once. If a user's desktop is an issue I have roaming profiles, a number of VMs and also spare machines we can quickly migrate them onto. Wake-on-LAN is enabled and so is wake on power loss, so no issues there. We have 24/7 remote hands at the DC and I know the guys there, so I'm happy for them to dig around my kit if need be.

Performance-wise so far we seem to be cracking along nicely; the internet connection sits on a 15Gb backbone so is blisteringly quick. I have RDS and also multi-factor-auth SSL VPN for remote users, but I also have Outlook Anywhere configured on my Exchange 2016 servers and have set up inbound routes for all other services such as CRM, CMS, intranet etc. directly. In essence we are better positioned than ever to take the inbound traffic. So far I have to say that I have been more than happy with how the estate is performing. Replication from one estate to the other, for example, has seen fairly big increases, and compression on the inbound VPN seems to be coping better. It's early days, but I think this move is super smart for the business. When you consider everything it makes absolute sense.
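The staggered wake-up side of that can be scripted with nothing more than Wake-on-LAN magic packets and a random delay in front of each one. A rough sketch of the idea in Python (not necessarily how it's actually wired up here; the MAC addresses and delay window are placeholders):

```python
import random
import socket
import time

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a standard Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

# Hypothetical MAC list; in practice this would come from DHCP leases or an asset register.
DESKTOP_MACS = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]

for mac in DESKTOP_MACS:
    time.sleep(random.uniform(0, 300))  # random stagger so 100 boxes don't all draw power at once
    send_magic_packet(mac)
```

Run from a box on the same VLAN as the desktops (or point the broadcast at the right subnet), scheduled in the 3am-6am window.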
 
Doesn't surprise me with BT. I'm involved with a pilot installation for a project which, if moved into production, is worth a few million quid. The network is outsourced to BT and they are clueless at times.

If only it were just at times. I am so done with them; they left me half operational for a week, as I refused to let them do any more and made them stop before they broke anything else. Genuinely they are an absolute joke. One day I will write a massive post about my dealings with them over the last 10 years or more, and it would read like some sort of horror story.
 
Had that random 'oh no, have I messed up here?' moment today. Then realised... nope, I just didn't do the work on Zscaler. Total rookie mistake! :D
 
That's my entire career to be fair :D

Early reports are fantastic from everybody in terms of speed into the DC. Everybody has a pair of monitors connected to a laptop; we use the /span flag on inbound RDP sessions and chop the screens up on the desktops using DisplayFusion. For anybody with anything else (we have a few users with their own setups, for example), RDS inbound with the use-all-monitors option ticked in the RDP client tends to do just fine. So far so good.
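The span vs. multi-monitor behaviour can also be baked into the .rdp file itself rather than chosen per launch, using the "span monitors" and "use multimon" settings. A rough Python sketch that writes such a file (the hostname is just a placeholder, not the actual estate naming):

```python
from pathlib import Path

def write_rdp(host: str, path: Path, span: bool = True) -> None:
    """Write a minimal .rdp file. 'span monitors' stretches one session across all
    monitors (the /span behaviour); 'use multimon' gives true multi-monitor instead."""
    settings = [
        f"full address:s:{host}",
        "screen mode id:i:2",                    # 2 = full screen
        f"span monitors:i:{1 if span else 0}",
        f"use multimon:i:{0 if span else 1}",
    ]
    path.write_text("\n".join(settings) + "\n")

# Placeholder hostname; point it at the user's desktop in the DC.
write_rdp("desktop-042.example.internal", Path("desktop-042.rdp"), span=True)
```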
 
Good to see our kit still sitting proudly in the upper part of your rack.

Darktrace? Everything else is HPE and the firewalls are around the back :D Got to get rid of that Gen4 (I think) at the top. No real need for it anymore, but it's the disk IO on that thing I am using. Also the Gen7, I think, at the bottom needs to go and is unnecessary.
 
Nope not Darktrace.

I'll give you a clue... Old logo/security bezel - hint: upgrade time :p

HP/StoreOnce? If so, I've never managed to get the fibre working properly on that damn thing :p And if so, I have another one :D 2 is the magic number!

Also, upgrade time? Jeez, can't you see I'm still running EVA? One step at a time, buddy lol.
 
Dare I ask what the issue is... :D Copying between the two?

No, sync has always been spot on, but I do have to run them in NAS mode using 4 Ethernet ports. Every time I have tried to link them to VMware/Veeam via FC I just never managed to get it to work. I've read the documentation 100 times and tried as many. I'm sure there is something missing in it, as it goes from where I get to to magically working in one step :p Also, restore times are pretty silly pulling from them in NAS mode with no FC, and it's always been an issue. I even sometimes run single one-offs to the QNAP as it's much faster on the same config.

Other than that they are awesome :D Compression is also awesome. I probably should have taken HP up on the free install and config, but I am stubborn and wanted to play myself!
 
Ah, I thought you'd be running Catalyst! Not too sure on the Veeam side. Is it a physical server that Veeam runs on? I've got mine running in a VM; I think I've had it set up for FC connectivity in the past though, will have to double-check.

We have the license, and I wanted Catalyst over FC but couldn't really get it to do what I wanted :) Again, I should have taken up that engineer time. For what we use them for, syncing backups etc., they are still perfectly good enough in the current config. Just could be quicker. :)

Veeam is in a VM on the hosts that connect to the storage over FC; the problem comes in exposing the FC into the VM. After that it all goes pear-shaped.
 
I ended up doing loads of StoreOnce training when I worked at CDW. Good product, but I am primarily a NetApp/Veeam guy these days; would like to get back into it.

Decent bits of kit tbh. We are all HP/HPE pretty much and I like their products.
 
Yes would definitely recommend Catalyst over NAS, *should* get better speeds with it as well.

I'll make a note to check if my Veeam server has a FC passthrough card assigned to it.

Would be interested to know :) I'm running ESXi 6.5 on those 3 Gen10 servers running 2nd Gen EPYC 7452s.
 