iSCSI Multipathing with Clariion CX3-10c and VMware ESX 3.5

I recently did a VMware project using an EMC Clariion CX3-10c and VMware ESX 3.5 Update 2. The plan was to use the iSCSI front end ports on the CX3-10c for VMware iSCSI storage connectivity. The design included two dedicated Cisco 3650G switches for the iSCSI network and two dedicated gigabit NICs on each ESX host for iSCSI traffic.

The ESX hosts have a total of six gigabit NICs split between three physical cards: two onboard, one quad port, and one dual port. Below is a screen shot of the original vSwitch design.

[Screenshot: original vSwitch design]

  • Two NICs from two different physical cards for Service Console and VMotion.
  • Two NICs from two different physical cards for Virtual Machine traffic.
  • Two NICs from two different physical cards for iSCSI storage traffic; the two iSCSI NICs were each plugged into a different physical Cisco switch (see the sketch below).
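
For reference, here is a rough service console sketch of that original single iSCSI vSwitch; the vmnic numbers, vSwitch name, and VMkernel IP are assumptions for illustration only:

```
# Single iSCSI vSwitch with two uplinks from different physical cards
esxcfg-vswitch -a vSwitch3                 # create the dedicated iSCSI vSwitch
esxcfg-vswitch -L vmnic2 vSwitch3          # uplink cabled to Cisco switch 1
esxcfg-vswitch -L vmnic5 vSwitch3          # uplink cabled to Cisco switch 2
esxcfg-vswitch -A iSCSI vSwitch3           # VMkernel port group for iSCSI
esxcfg-vmknic -a -i 10.1.210.40 -n 255.255.255.0 iSCSI   # VMkernel port
```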

The iSCSI front end ports on the CX3-10c were also split between the two dedicated Cisco switches. See diagram below.

[Diagram: CX3-10c iSCSI front end ports cabled across the two Cisco switches]

The IP addresses of all four front end iSCSI ports were originally in the same subnet. For example:

SPA0: 10.1.210.30

SPA1: 10.1.210.31

SPB0: 10.1.210.32

SPB1: 10.1.210.33

I then tested connectivity from ESX to the iSCSI front end ports using the vmkping tool. I was able to successfully ping SPA0 and SPB0 but not SPA1 or SPB1.
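
From the service console the tests looked like this (addresses from the list above; results as described):

```
vmkping 10.1.210.30   # SPA0 - replies
vmkping 10.1.210.32   # SPB0 - replies
vmkping 10.1.210.31   # SPA1 - no response
vmkping 10.1.210.33   # SPB1 - no response
```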

I initially thought I had an incorrect configuration somewhere, so I verified my ESX configuration and switch port configuration. After about 15 minutes of checking configurations, I remembered that the VMkernel networking stack does not load balance the way the virtual machine networking stack does: a VMkernel port will only use the other NIC on a vSwitch if the first one fails.

I then tested this by unplugging the cable for the NIC plugged into switch 1, and I was then able to ping SPA1 and SPB1.

I then went back to the drawing board to come up with a way to see all four paths while still providing fault tolerance.

I did some searches on Powerlink and found an article (http://csgateway.emc.com/primus.asp?id=emc156408) that states that having both iSCSI NICs on the same subnet is not supported. After reading this I changed the IP addresses on the iSCSI front end ports on the Clariion so that each SP has one port in each of two subnets:

SPA0: 10.1.210.30

SPA1: 10.1.215.30

SPB0: 10.1.210.32

SPB1: 10.1.215.32
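
If you prefer the command line to Navisphere Manager for the readdressing, Navisphere Secure CLI can set the front end port addresses. This is only a sketch: the management IP placeholders, gateway, and exact flag spellings are assumptions to verify against your naviseccli version.

```
# Move SPA1 and SPB1 into the second subnet (one command per port)
naviseccli -h <SPA_mgmt_IP> connection -setport -sp a -portid 1 \
    -address 10.1.215.30 -subnetmask 255.255.255.0 -gateway 10.1.215.1
naviseccli -h <SPB_mgmt_IP> connection -setport -sp b -portid 1 \
    -address 10.1.215.32 -subnetmask 255.255.255.0 -gateway 10.1.215.1
```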

[Diagram: revised CX3-10c iSCSI front end port addressing]

I then changed the ESX configuration to have two iSCSI vSwitches with one NIC in each vSwitch. See screen shot below.

[Screenshot: revised design with two iSCSI vSwitches]
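
A service console sketch of the two-vSwitch layout; again, the vmnic numbers, vSwitch names, and VMkernel IPs are assumptions for illustration:

```
# vSwitch for the 10.1.210.0/24 iSCSI subnet (uplink in Cisco switch 1)
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic2 vSwitch3
esxcfg-vswitch -A iSCSI1 vSwitch3
esxcfg-vmknic -a -i 10.1.210.40 -n 255.255.255.0 iSCSI1

# vSwitch for the 10.1.215.0/24 iSCSI subnet (uplink in Cisco switch 2)
esxcfg-vswitch -a vSwitch4
esxcfg-vswitch -L vmnic5 vSwitch4
esxcfg-vswitch -A iSCSI2 vSwitch4
esxcfg-vmknic -a -i 10.1.215.40 -n 255.255.255.0 iSCSI2
```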

With this configuration I was then able to ping all four iSCSI front end ports on the Clariion from ESX using vmkping.

I then configured the iSCSI software initiator on ESX and added all four targets.

[Screenshot: software iSCSI initiator configured with all four targets]
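
The equivalent service console steps look roughly like this; vmhba32 is the usual software iSCSI adapter name on ESX 3.5, but that is an assumption to confirm on your own host:

```
esxcfg-firewall -e swISCSIClient           # open the firewall for the software initiator
esxcfg-swiscsi -e                          # enable the software iSCSI initiator
# Add all four Clariion front end ports as send-target discovery addresses
vmkiscsi-tool -D -a 10.1.210.30 vmhba32    # SPA0
vmkiscsi-tool -D -a 10.1.215.30 vmhba32    # SPA1
vmkiscsi-tool -D -a 10.1.210.32 vmhba32    # SPB0
vmkiscsi-tool -D -a 10.1.215.32 vmhba32    # SPB1
```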

I did a rescan on the host and then checked the connectivity status on the Clariion; all four paths were registered.

[Screenshot: Clariion connectivity status showing all four paths registered]
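
The rescan and a quick path check can also be done from the service console (again assuming vmhba32 for the software initiator):

```
esxcfg-rescan vmhba32     # rescan the software iSCSI adapter
esxcfg-mpath -l           # each Clariion LUN should now show four paths
```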

With this configuration I am able to use both NICs, both switches, and all four SP front end ports for optimal load balancing.

The failover time is very quick as well.


17 thoughts on “iSCSI Multipathing with Clariion CX3-10c and VMware ESX 3.5”

  1. Hi Jeremy,

    Have you found this setup to be stable? I tried the same thing and eventually was no longer able to connect to my virtual machines. I could still see the LUNs and browse the different datastores, but my machine could not be reached from the console in the Virtual Infrastructure Client. Once I went back to a single vSwitch and subnet, things started working correctly again.

    -Darren

  2. Darren, this setup has been very stable and performance has been great. This client has about 60 VMs running across 5 VMFS data stores that are on the CX3-10c. If you can browse your data stores then there may have been something else going on. When you say you could no longer connect to the VMs, what do you mean exactly? Were the VMs still powered on? Could you ping them?

  3. Hi Jeremy,

    Just wanted to thank you for this post. I’m setting up a VMware environment with 2x ESX 3.5 servers on HP hardware, with an HP 2012i iSCSI SAN for the storage, and this post helped me tremendously.

    Initially I tried the same exact thing you did by putting all of the iSCSI interfaces on the same flat network – but eventually I split up the ports on the HP SAN and put them on different subnets, which did the trick.

    Thanks again!

    -Will

  4. Does this solution allow for data to be driven down both of an SP’s paths? Specifically, if the LUN is owned by SP-A, with the setup that you describe, can you pass data to SPA0 and SPA1, as you can if you are utilizing PowerPath in a Windows environment?

    Thanks in advance for your reply.

  5. Yes, but you will have to go to each VMFS data store and manually specify which path (A0, B0, A1, B1) you want to use.
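
    For reference, the preferred path can also be set per LUN from the service console with esxcfg-mpath. This is only a rough sketch, and the adapter/LUN identifiers are examples to verify against your own path listing:

    ```
    # Set the fixed policy and pin a preferred path (identifiers are examples)
    esxcfg-mpath --policy=fixed --lun=vmhba32:0:1
    esxcfg-mpath --preferred --path=vmhba32:1:1 --lun=vmhba32:0:1
    esxcfg-mpath -l    # verify the active and preferred paths
    ```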

  6. Hi Jeremy, great blog and great post.

    The CX3-10c uses native iSCSI ports; I was wondering how you would go about setting up the same configuration with a Celerra? Any ideas?

  7. I don’t think this configuration would work for a Celerra, because instead of SPA/SPB it uses Data Movers, and one of them is always in standby mode.

  8. This is a great article Jeremy. When I think it through, I have to ask, could you achieve the same result by linking the two switches rather than putting in two VMkernel ports? This would allow connectivity to all iSCSI ports on the Clariion. Is it best done one way or the other? Also, I may be wrong, but iSCSI doesn’t require separation the same way FC requires zoning, so is there an actual benefit to having two subnets like this? Just trying to get some ideas about the best way to implement a similar setup.

  9. The two VMkernel ports are required because there are two subnets for the CX iSCSI ports. The reason for the two subnets is to help load balance across the two NICs and CX iSCSI ports.

  10. Jeremy,

    I now have our cluster set up this way, but I am not seeing any utilization (or very, very little) of the 2nd NIC in the 2nd subnet. Failover works just fine, and I can ping all paths.

    One of my hosts was using both paths, but I had to blow away the vSwitches to turn off jumbo frames (another issue). After I recreated the vSwitches it now just uses one path.

    Any help would appreciated.

    thanks

  11. Brian Norris,
    If you get the appropriate license and configure your Celerra to run in active/active mode, you should be able to load balance between them.
    Alternatively, you could use the iSCSI ports on the captive storage array (if it has them).
    See the following TechBook on Powerlink for detailed configuration instructions: H5536-vmware-esx-srvr-using-emc-celerra-stor-sys-wp.pdf.

    Chris

  12. Pingback: EMC Clariion iSCSI e ESX - Virtual to the Core
