Thanks Andre.
I was looking at SSDs or 10k enterprise disks for the RAID, but at the capacity we require either option is prohibitively expensive. Would you consider running two datastores on separate RAID arrays: one with slower disks for capacity, one SSD-backed for core services? Neither controller would have a BBU, though.
Disaster recovery consists basically of regular snapshots and datastore backups. The only real SLA to consider is that one of the VMs will be a LAMP staging box whose external access needs to be locked down to specific, whitelisted IPs. That's done at the firewall level, though.
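For what it's worth, the whitelist itself stays simple wherever it ends up living. Here's a minimal sketch as an nftables ruleset, assuming the edge firewall is nftables-based (it may well be a dedicated appliance instead); the interface-facing port choices and the addresses (drawn from the reserved documentation ranges) are made up for illustration:

```
# Illustrative whitelist for the staging box -- all addresses are
# hypothetical examples from the 203.0.113.0/24 / 198.51.100.0/24
# documentation ranges.
table inet staging_filter {
    set allowed {
        type ipv4_addr
        flags interval
        elements = { 203.0.113.10, 198.51.100.0/24 }
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        # Only whitelisted sources may reach the staging box's web ports
        tcp dport { 80, 443 } ip saddr @allowed accept
    }
}
```

The nice part of keeping it as a named set is that the whitelist can be updated atomically without reloading the whole ruleset.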
The host will be used primarily for DevOps. Currently there are a number of different physical servers running different services, and I'm looking to centralise as much as possible.
I'll initially be installing two or three essentially-LAMP boxes (extended configurations including many code packages; these get hit quite heavily hosting various in-house services), a GitHub Enterprise or GitLab server (depending on budget concerns), a Windows Server instance for Terminal Services and, the most resource-hungry of the lot, a CI server that will let me spool up slave VMs as required.
So I know that I require multiple network interfaces (one 2x LAG-bonded interface, one management interface, etc.), big storage for project dumps and caching for in-house services, multiple CPUs, and so on. The company will have around 150 users in the next 6-8 months, maybe 75+ of those being active developers.
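As an aside on the LAG: if the hypervisor (or a Linux management host) handles the bond itself, an 802.3ad bond can be sketched in netplan as below. The interface names and address are hypothetical, and on ESXi the equivalent would instead be a vSwitch backed by an LACP LAG configured on the physical switch:

```yaml
# Illustrative netplan bond -- assumes a Linux host; enp3s0f0/enp3s0f1
# and 192.0.2.10/24 are hypothetical example values.
network:
  version: 2
  ethernets:
    enp3s0f0: {}
    enp3s0f1: {}
  bonds:
    bond0:
      interfaces: [enp3s0f0, enp3s0f1]
      parameters:
        mode: 802.3ad            # LACP; switch ports must be in a matching LAG
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [192.0.2.10/24]
```

Whichever side terminates the bond, the switch ports need a matching LACP configuration or the link will fall back to a single active member.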
So, as you can see, what I want to see in the hardware doesn't quite match their budgetary expectations. Is my assumption incorrect that the de facto standard for this kind of environment is still preconfigured SMB / enterprise-grade hardware from the big manufacturers? Would you consider running a production environment like this on consumer-grade, custom-built hardware?
Thanks.