I recently did a VMware project using an EMC Clariion CX3-10c and VMware ESX 3.5 Update 2. The plan was to use the iSCSI front-end ports on the CX3-10c for VMware iSCSI storage connectivity. The design included two dedicated Cisco 3650g switches for the iSCSI network and two dedicated gigabit NICs on each ESX host for iSCSI traffic.
The ESX hosts each have six gigabit NICs split across three physical cards: two onboard, one quad-port, and one dual-port. Below is a screen shot of the original vSwitch design.
- Two NICs from two different physical cards for Service Console and VMotion.
- Two NICs from two different physical cards for virtual machine traffic.
- Two NICs from two different physical cards for iSCSI storage traffic. The two iSCSI NICs were each plugged into a different physical Cisco switch.
The iSCSI front-end ports on the CX3-10c were also split between the two dedicated Cisco switches. See diagram below.
The IP addresses of all four front-end iSCSI ports were originally in the same subnet, as in the example below.
I then tested connectivity from ESX to the iSCSI front-end ports using the vmkping tool. I was able to successfully ping SPA0 and SPB0 but not SPA1 or SPB1.
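The test can be reproduced from the ESX service console with vmkping, which sends pings from the VMkernel interface rather than the Service Console. The addresses below are illustrative placeholders for the four SP ports, not the actual ones from this project:

```shell
# Ping each Clariion iSCSI front-end port from the VMkernel stack.
# Placeholder addresses, all in one subnet as originally configured.
vmkping 10.0.1.10   # SPA0 - replied
vmkping 10.0.1.11   # SPB0 - replied
vmkping 10.0.1.12   # SPA1 - no reply
vmkping 10.0.1.13   # SPB1 - no reply
```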
I initially thought I had an incorrect configuration somewhere, so I verified my ESX configuration and switch port configuration. After about 15 minutes of checking configurations I remembered that the VMkernel networking stack does not load-balance across uplinks the way the virtual machine networking stack does. A VMkernel port will only use the other NIC on a vSwitch if the first one fails.
I then tested this by unplugging the cable for the NIC in switch 1, and was then able to ping SPA1 and SPB1.
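The same vmkping check (placeholder addresses again) confirms the failover behavior once the switch-1 uplink is pulled:

```shell
# With the switch-1 NIC unplugged, the VMkernel port fails over to the
# surviving uplink, so the SP ports on switch 2 become reachable.
vmkping 10.0.1.12   # SPA1 - now replies
vmkping 10.0.1.13   # SPB1 - now replies
```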
I then went back to the drawing board to come up with a way to see all four paths and still provide fault tolerance.
I did some searches on Powerlink and found an article (http://csgateway.emc.com/primus.asp?id=emc156408) that states having all of the iSCSI front-end ports on the same subnet is not supported. After reading this I changed the IP addresses on the iSCSI front-end ports on the Clariion so that each SP's two ports are in different subnets.
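As an illustration of the split (the actual addresses used aren't shown here), an addressing scheme along these lines puts each switch's SP ports in their own subnet:

```shell
# Hypothetical post-change Clariion front-end addressing:
# SPA0: 10.0.1.10/24   (cabled to switch 1, subnet 1)
# SPB0: 10.0.1.11/24   (cabled to switch 1, subnet 1)
# SPA1: 10.0.2.10/24   (cabled to switch 2, subnet 2)
# SPB1: 10.0.2.11/24   (cabled to switch 2, subnet 2)
```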
I then changed the ESX configuration to have two iSCSI vSwitches with one NIC in each vSwitch. See screen shot below.
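In ESX 3.5 the same layout can also be built from the service console with esxcfg-vswitch and esxcfg-vmknic. The vSwitch names, port group names, vmnic numbers, and addresses below are assumptions for illustration, not the values from the screen shot:

```shell
# iSCSI path 1: one vSwitch, one uplink cabled to switch 1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vmknic -a -i 10.0.1.20 -n 255.255.255.0 iSCSI1

# iSCSI path 2: a second vSwitch, one uplink cabled to switch 2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A iSCSI2 vSwitch3
esxcfg-vmknic -a -i 10.0.2.20 -n 255.255.255.0 iSCSI2
```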
With this configuration I was then able to ping all four iSCSI front-end ports on the Clariion from ESX using vmkping.
I then configured the iSCSI software initiator on ESX and added all four targets.
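From the command line this amounts to enabling the software initiator and adding each SP port as a send target. The adapter name (the software iSCSI HBA is typically vmhba32 on ESX 3.5, but check your host) and the target addresses are assumptions:

```shell
# Enable the software iSCSI initiator and open its firewall ports
esxcfg-swiscsi -e
esxcfg-firewall -e swISCSIClient

# Add each Clariion front-end port as a discovery (send) target
vmkiscsi-tool -D -a 10.0.1.10 vmhba32   # SPA0
vmkiscsi-tool -D -a 10.0.1.11 vmhba32   # SPB0
vmkiscsi-tool -D -a 10.0.2.10 vmhba32   # SPA1
vmkiscsi-tool -D -a 10.0.2.11 vmhba32   # SPB1
```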
I did a rescan on the host, then checked the connectivity status on the Clariion, and all four paths were registered.
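The rescan and a quick path check can be done from the service console as well (again assuming the software iSCSI adapter is vmhba32):

```shell
# Rescan the software iSCSI adapter for new targets and LUNs,
# then list the paths ESX now sees to the storage.
esxcfg-rescan vmhba32
esxcfg-mpath -l
```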
With this configuration I am able to use both NICs, both switches, and all four SP ports for optimal load balancing.
The failover time is very quick as well.