I’d usually work from the top down, but for ease of explanation, I’ll start at the bottom (physical + vSphere) and work my way up from there.
Please note that a lot of this configuration isn’t supported by VMware. Don’t take anything in this post to be a supported configuration. It’s just how I’m doing it for one reason or another.
This is a review from the last post, but here are the physical components of my lab.
esxi-01 – Dell T610, Intel Xeon X5675 (6 core, 3.06GHz), 64GB RAM
esxi-02 – Dell R610, Intel Xeon L5640 (6 core, 2.26GHz), 64GB RAM
esxi-03 – Dell R610, Intel Xeon L5640 (6 core, 2.26GHz), 64GB RAM
Synology DS1815+ – 4x Western Digital Red Pro 2TB 7200 RPM, 2x Intel 730 240GB SSD (read/write cache), 6GB RAM
2x TP-Link TL-SG2424 24-port 1GbE switches
1x ZyXel USG50 dual WAN firewall
F5 BIG-IP VE Lab Edition load balancer
Microsoft Threat Management Gateway 2010 (reverse proxy)
These components have been cobbled together over the past couple of years. I started with the T610, a desktop form factor server from Dell, and expanded it until I decided that physical redundancy was important. At that point, I moved to rackmount 11G Dell servers for their lower power draw, heat, and noise. Since this lab lives in my office, noise containment is a real factor.
A second R610 later, I needed more switch ports. My local Microcenter had these TP-Link switches for cheap (and they support 10k MTU, something my cheapo Dell switches did not), and they have more than enough ports, so they were added. The ZyXel firewall lets me use two public IPv4 addresses (and, a recurring theme, was cheap), so it was next up. The F5 virtual load balancer is a no-brainer for me at a very high functionality-to-cost ratio (~$100 for the lab edition), plus it means I don’t have to learn how to use pfSense. New to this configuration is the TMG 2010 VM, which does reverse proxying and URL filtering for Horizon Workspace Portal and Exchange OWA. I’m forced to do this because I only have two public IPs and three services that want HTTPS on 443/TCP (View, Horizon Workspace Portal, OWA). Yes, I know I could do this with Squid or nginx or whatever, but TMG is easy. Those are not.
Lastly, the Synology NAS gives me tons of expandability, capacity, and I/O. The SSD read/write cache works superbly. The only drawback is no 10GbE, but that’s not in the cards for the rest of the environment for a while anyway.
All three nodes are in a single vSphere HA/DRS cluster. I’d love to be able to run a management cluster, but that’s another story for another day.
The only notable config here is that I’ve set VM:VM separation rules in DRS.
I do have HA and DRS configured, but Admission Control is currently disabled until I get more RAM in the cluster. As you can see above, I’m at 122GB of 192GB total, which is awfully close to not being able to power on any more VMs.
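The back-of-the-envelope math behind disabling Admission Control looks like this (numbers from above; this assumes the common policy of tolerating one host failure, which reserves one host’s worth of RAM):

```shell
# RAM headroom check for the cluster (figures from the post).
total_gb=192      # 3 hosts x 64GB
used_gb=122       # currently allocated
host_gb=64        # capacity of one host, reserved under N+1 admission control

free_gb=$((total_gb - used_gb))
usable_with_ac_gb=$((total_gb - host_gb - used_gb))

echo "Free now: ${free_gb}GB; free with N+1 admission control: ${usable_with_ac_gb}GB"
```

With Admission Control on, that ~6GB of effective headroom would block most new VMs from powering on, which is why it stays off until more RAM arrives.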
Syslog and ESXi dump logs are stored on my Windows vCenter Server (4 vCPUs, 12GB RAM, 100GB disk). The vCenter Server database is SQL Server 2014 configured as a Failover Cluster Instance (each SQL node has 2 vCPUs and 6GB RAM; the cluster disks are in-guest-mounted iSCSI LUNs from the Synology: a 10GB quorum disk and 2x 100GB data disks). I also have SQL Server 2012 SP2 running on that same cluster, since App Volumes won’t run on 2014 yet. Update Manager is co-located with vCenter Server.
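For reference, pointing a host’s syslog and network coredumps at that vCenter box is a couple of esxcli commands per host. A sketch, assuming the vCenter VM answers at 172.16.1.10 (a hypothetical address) on the default ports:

```shell
# Point this host's syslog at the vCenter VM (address is hypothetical).
esxcli system syslog config set --loghost='udp://172.16.1.10:514'
esxcli system syslog reload
# Open the outbound syslog firewall ruleset so logs actually leave the host.
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true

# Send network coredumps to the ESXi Dump Collector on the same box.
esxcli system coredump network set --interface-name=vmk0 \
    --server-ipv4=172.16.1.10 --server-port=6500
esxcli system coredump network set --enable=true
```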
I have 4 NICs active per host (9000 MTU for vMotion and storage, 1500 for everything else), split 2/2 between two separate vSwitches. They are currently vSphere Standard Switches, and I don’t have much need to migrate them to DVS.
From here, each vSwitch has an uplink to both physical switches. These switch ports are configured in trunk mode with the VLANs I use assigned to them.
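The host-side half of that config can be sketched with esxcli. The vSwitch name, vmkernel interface, and VLAN ID below are placeholders for my actual values:

```shell
# Jumbo frames on the storage/vMotion vSwitch and its vmkernel port
# (vSwitch1, vmk1, and the VLAN ID are placeholders).
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Tag a port group with its VLAN, matching the trunk config on the switch side.
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

# Verify end-to-end jumbo frames: 8972 bytes payload + headers = 9000,
# with -d forbidding fragmentation.
# vmkping -d -s 8972 <storage-or-vmotion-target-ip>
```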
Note: Misconfiguration here on DMZ11. Should be trunked to Gi1/0/24 instead of 23. Whoops.
You might be able to tell here that Gi1/0/9–14 are my ESXi-facing ports, while Gi1/0/24 is the uplink to the ZyXel firewall, which also acts as the only L3 device in the lab.
Here, I have a multitude of VLANs configured…
And firewall rules (yes, I know some are lazy):
More on firewall, TMG 2010, and F5 configuration after we’ve gone through the application configuration.
Storage is all done over NFS from the vSphere environment to the Synology NAS. There, a single volume is configured with all four 7200 RPM disks in a Synology Hybrid RAID configuration and both SSDs acting as read/write cache. Simple enough.
This is presented to ESXi over two NFS mounts: VMs and ISOs. Both live in the same volume on the Synology, but the logical separation keeps things straight. I have the VAAI-NAS plugin installed on each of my ESXi hosts to enable hardware offload (e.g., full file clone and reserve space) on those NFS datastores.
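Mounting those exports on each host is straightforward with esxcli. The NAS hostname and export paths below are placeholders for my actual values:

```shell
# Mount the two NFS exports from the Synology
# (hostname and share paths are placeholders).
esxcli storage nfs add --host=synology.lab.local --share=/volume1/VMs  --volume-name=VMs
esxcli storage nfs add --host=synology.lab.local --share=/volume1/ISOs --volume-name=ISOs
esxcli storage nfs list

# Confirm the VAAI-NAS plugin VIB made it onto the host (the exact VIB
# name varies by vendor; Synology ships one for NFS acceleration).
esxcli software vib list | grep -i nfs
```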
And that’s it for now. Next time, View configuration.