Nota bene: This is unsupported by VMware and me, so use at your own risk, and if you do this to a production NSX-T Manager cluster you deserve whatever comeuppance is delivered.
Running enterprise infrastructure on fewer resources than your day job typically allocates to its dev instance of Lotus 1-2-3 can be challenging. In the case of VCF, the compute and storage footprint can be overwhelming for a small home lab. To help with this, I’m going to show you how to shrink the default NSX-T deployment in a VCF Workload Domain.
By default, SDDC Manager deploys three NSX-T Manager nodes in a Workload Domain, each with the following configuration:
- 12 vCPU
- 48GB RAM
- 200GB disk
Now, just powering down and deleting two of them seems like a great idea, but reader, it is not. Stuff will break.
Removing NSX-T Manager nodes from the cluster
The NSX-T REST API is your friend on this. First, let’s get a list of everything important in our NSX-T Manager cluster. In this instance, my NSX-T Manager nodes and cluster are as follows:
- Cluster IP: 172.16.11.65
- Node 1 IP: 172.16.11.66
- Node 2 IP: 172.16.11.67
- Node 3 IP: 172.16.11.68
curl -k -u admin:'VMw@re1!1234' -X GET https://172.16.11.66/api/v1/cluster
No, I don’t care if you have the admin password for my lab NSX-T instance.
The output should look something like this:
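(Heavily trimmed here: the real response is a wall of JSON, the exact fields vary a bit by NSX-T version, and the fqdn values below are placeholders for whatever your nodes are actually named. The two UUIDs are the real ones from my cluster.)

{
  "cluster_id": "...",
  "nodes": [
    {
      "fqdn": "nsxt-mgr-a.lab.local",
      "node_uuid": "...",
      ...
    },
    {
      "fqdn": "nsxt-mgr-b.lab.local",
      "node_uuid": "9b861242-7918-a09b-032e-8801088c1f88",
      ...
    },
    {
      "fqdn": "nsxt-mgr-c.lab.local",
      "node_uuid": "2b241242-0e62-9c13-8706-29e0098b47c6",
      ...
    }
  ]
}

If you don’t feel like scrolling through the whole thing, piping it through jq cuts the output down to just the node entries:

curl -k -s -u admin:'VMw@re1!1234' https://172.16.11.66/api/v1/cluster | jq '.nodes'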
What we’re looking for are the three entries under “nodes” in the JSON output. I don’t want to remove the first one, because something has to be left after all is said and done, so I need to dig through here and grab the node_uuid values of the two nodes marked ‘b’ and ‘c’.
I can get rid of these behemoths with a simple API call each:
curl -k -u admin:'VMw@re1!1234' -X POST 'https://172.16.11.66/api/v1/cluster/9b861242-7918-a09b-032e-8801088c1f88?action=remove_node'
curl -k -u admin:'VMw@re1!1234' -X POST 'https://172.16.11.66/api/v1/cluster/2b241242-0e62-9c13-8706-29e0098b47c6?action=remove_node'
After that’s done, when I get the NSX-T Manager cluster state, I only see my ‘a’ node listed.
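That’s just the same read we started with, and the “nodes” array should be down to a single entry before you move on:

curl -k -u admin:'VMw@re1!1234' https://172.16.11.66/api/v1/cluster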
In the vSphere Client, go power those suckers (the ‘b’ and ‘c’ nodes) off and delete them from disk.
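If you’d rather not click through the vSphere Client, govc (the govmomi CLI) can do the same thing from a shell, assuming it’s already pointed at your vCenter via GOVC_URL and friends. A quick sketch, with placeholder VM names (substitute whatever SDDC Manager named your ‘b’ and ‘c’ nodes):

# Power off both surplus managers, then delete them from disk
govc vm.power -off nsxt-mgr-b nsxt-mgr-c
govc vm.destroy nsxt-mgr-b nsxt-mgr-c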
Resizing the remaining NSX-T Manager node (and having it still work)
Keeping in mind that this is utterly unsupported by VMware, I’ve confirmed that the following settings work like a charm on a small lab NSX-T cluster (I typically run in this state in my lab for proofs-of-concept and such; if you’d rather script the change than click through it, see the govc sketch after this list):
- 4 vCPU
- 10GB RAM
- Don’t change anything with the disk, for the love of Pete
- Remove the CPU and memory reservations
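Here’s the govc version of that resize, for the script-inclined. Again, the VM name is a placeholder, and double-check the flag spellings against govc vm.change -h for your govc release:

# The node has to be powered off to change CPU and memory
govc vm.power -off nsxt-mgr-a
# 4 vCPU and 10GB RAM (-m takes megabytes)
govc vm.change -vm nsxt-mgr-a -c 4 -m 10240
# Drop the CPU and memory reservations
govc vm.change -vm nsxt-mgr-a -cpu.reservation 0 -mem.reservation 0
govc vm.power -on nsxt-mgr-a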
In this state, as I mentioned, everything works fine. The one caveat I’ll throw in here is that startup takes a few minutes longer: NSX-T Manager burns through CPU on startup like a hot knife through butter, so removing resources the node was sized for will, understandably, have an impact. That being said, this will free up precious CPU and memory resources for you to run Minecraft servers or whatever the kids are doing these days.