I was doing some testing of the new vSphere 5.1 features and came across something odd with the new Distributed Switch Health Check feature with Cisco UCS.
In my lab I have 4 ESXi 5.1 hosts running on older B200 M1 blades with VIC M81KR adapters.
On my ESXi service profiles I have 8 vNICs: 2 for management, 2 for vMotion, 2 for IP storage, and 2 for VM networking. Each pair of vNICs has 1 vNIC in Fabric A and 1 in Fabric B.
On the VMware side there are 3 standard vSwitches and 1 Distributed Switch for the VM networking VLANs.
On the Distributed switch I enabled the new Health Check feature via the new vSphere Web Client.
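Health Check can also be enabled programmatically. This is a rough PowerCLI sketch using the vSphere 5.1 API health check types; the switch name "dvSwitch-VM" is a placeholder for whatever your Distributed Switch is called, and the 1-minute interval is just an example:

```powershell
# Sketch only - assumes PowerCLI 5.1+ already connected to vCenter.
# "dvSwitch-VM" is a placeholder switch name.
$dvs = Get-VDSwitch -Name "dvSwitch-VM" | Get-View

# VLAN/MTU health check (the one that fired the alerts below)
$vlanMtu = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtu.Enable   = $true
$vlanMtu.Interval = 1   # probe interval in minutes

# Teaming/failover health check
$teaming = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teaming.Enable   = $true
$teaming.Interval = 1

$dvs.UpdateDVSHealthCheckConfig(@($vlanMtu, $teaming))
```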
About a minute later the following alerts were triggered:
“vSphere Distributed Switch MTU supported status”
“vSphere Distributed Switch vlan trunked status”
The first thing I did was double-check my vNICs to confirm they all had the same VLANs trunked and that my MTU was still the default of 1500.
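The ESXi side of that check can be done quickly from an SSH session on each host; these esxcli commands show the MTU and uplink details for the standard and distributed switches:

```
# Standard vSwitches: MTU, uplinks and portgroups per switch
esxcli network vswitch standard list

# Distributed Switch: MTU and uplink assignments
esxcli network vswitch dvs vmware list

# VMkernel interfaces with their configured MTU values
esxcli network ip interface list
```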
This got me thinking that it must be something to do with how the UCS Fabric Interconnects handle vNICs, or with how End Host mode works. I then remembered the new option on the Network Control policy that controls which VLANs a vNIC's MAC address gets registered on.
I was already using a custom Network Control policy to enable CDP on ESXi vNICs.
By default the Network Control policy registers vNIC MACs only on the native VLAN. It is set this way to keep the MAC address tables on the Fabric Interconnects small.
I changed this policy to register on all host VLANs and about a minute later the Health Check alerts cleared.
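I made the change in the UCSM GUI, but it can also be done from the UCS Manager CLI. A sketch, with assumptions: the policy lives in the root org and is named "ESXi-CDP" (use your own org path and policy name, and verify the keywords against your UCSM version):

```
# UCS Manager CLI - policy name "ESXi-CDP" is an example
scope org /
scope nwctrl-policy ESXi-CDP
set mac-registration-mode all-host-vlans
commit-buffer
```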
For reference, the maximum MAC addresses per Fabric Interconnect in UCS firmware 2.0/2.1 – http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/configuration_limits/2.0/b_UCS_Configuration_Limits_2_0.html
6100 series = 13,800
6200 series = 20,000
Those are fairly high limits, so I don't think most folks will run into them even with the Network Control policy changed to register on all host VLANs.