iSCSI Multipathing with Clariion CX3-10c and VMware ESX 3.5

I recently did a VMware project using an EMC Clariion CX3-10c and VMware ESX 3.5 Update 2. The plan was to use the iSCSI front end ports on the CX3-10c for VMware iSCSI storage connectivity. The design included two dedicated Cisco 3650G switches for the iSCSI network and two dedicated gigabit NICs on each ESX host for iSCSI traffic.

The ESX hosts have a total of six gigabit NICs spread across three physical adapters: two onboard ports, a quad-port card, and a dual-port card. Below is a screenshot of the original vSwitch design.

[Screenshot: original vSwitch configuration]

  • Two NICs from two different physical cards for Service Console and VMotion.
  • Two NICs from two different physical cards for Virtual Machine traffic.
  • Two NICs from two different physical cards for iSCSI storage traffic. The two iSCSI NICs were each plugged into a different physical Cisco switch (see the command sketch after this list).
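For reference, the NIC and vSwitch layout can be checked from the ESX service console. This is just a sketch of the read-only commands involved; the vmnic numbering will differ from host to host.

# List the physical NICs with their driver, link state and PCI location
esxcfg-nics -l

# List the vSwitches with their uplinks (vmnics) and port groups
esxcfg-vswitch -l

# List the VMkernel ports and the Service Console ports
esxcfg-vmknic -l
esxcfg-vswif -l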

The iSCSI front end ports on the CX3-10c were also split between the two dedicated Cisco switches. See the diagram below.

[Diagram: CX3-10c iSCSI front end port cabling to the two Cisco switches]

The IP addresses of all four iSCSI front end ports were originally in the same subnet. For example:

SPA0: 10.1.210.30

SPA1: 10.1.210.31

SPB0: 10.1.210.32

SPB1: 10.1.210.33

I then tested connectivity from ESX to the iSCSI front end ports using the vmkping tool. I was able to successfully ping SPA0 and SPB0, but not SPA1 or SPB1.
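The tests looked roughly like this from the service console, using the original addresses above:

# VMkernel pings to the four Clariion front end ports
vmkping 10.1.210.30   # SPA0 - replies
vmkping 10.1.210.32   # SPB0 - replies
vmkping 10.1.210.31   # SPA1 - no reply
vmkping 10.1.210.33   # SPB1 - no reply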

I initially thought I had an incorrect configuration somewhere, so I verified my ESX configuration and switch port configuration. After about 15 minutes of checking configurations I remembered that the VMkernel networking stack does not load balance across uplinks the way virtual machine port groups can: a VMkernel port sends its traffic out one NIC on the vSwitch and only fails over to the other NIC if the first one fails.

I then tested this by unplugging the cable for the iSCSI NIC connected to switch 1, and I was then able to ping SPA1 and SPB1.

I then went back to the drawing board to come up with a way to see all four paths and still provide fault tolerance.

I did some searches on Powerlink and found an article (http://csgateway.emc.com/primus.asp?id=emc156408) stating that having both iSCSI NICs on the same subnet is not supported. After reading this I changed the IP addresses on the Clariion's iSCSI front end ports as follows, so that the two front end ports on each SP are in different subnets.

SPA0: 10.1.210.30

SPA1: 10.1.215.30

SPB0: 10.1.210.32

SPB1: 10.1.215.32

[Diagram: CX3-10c iSCSI front end ports after re-addressing across two subnets]

I then changed the ESX configuration to use two iSCSI vSwitches with one NIC in each vSwitch. See the screenshot below.

[Screenshot: revised vSwitch configuration with two iSCSI vSwitches]
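The same layout can also be built from the service console. Below is a minimal sketch, assuming vmnic2 and vmnic3 are the two iSCSI NICs and that vSwitch2 and vSwitch3 are unused; the port group names and VMkernel IP addresses are only examples.

# First iSCSI vSwitch: one uplink into switch 1, VMkernel port in 10.1.210.0/24
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vmknic -a -i 10.1.210.50 -n 255.255.255.0 iSCSI1

# Second iSCSI vSwitch: one uplink into switch 2, VMkernel port in 10.1.215.0/24
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A iSCSI2 vSwitch3
esxcfg-vmknic -a -i 10.1.215.50 -n 255.255.255.0 iSCSI2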

With this configuration I was then able to ping all four iSCSI front end ports on the Clariion from ESX using vmkping.

I then configured the iSCSI software initiator on ESX and added all four targets.

[Screenshot: software iSCSI initiator with all four targets added]
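This can also be done from the command line. A sketch, assuming the software initiator shows up as vmhba32 (the adapter name varies between hosts and ESX versions):

# Open the iSCSI port in the service console firewall and enable the software initiator
esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e

# Add the four Clariion front end ports as send-target discovery addresses
vmkiscsi-tool -D -a 10.1.210.30 vmhba32
vmkiscsi-tool -D -a 10.1.215.30 vmhba32
vmkiscsi-tool -D -a 10.1.210.32 vmhba32
vmkiscsi-tool -D -a 10.1.215.32 vmhba32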

I did a rescan on the host, then checked the connectivity status on the Clariion; all four paths were registered.

[Screenshot: Clariion connectivity status showing all four paths registered]
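The same paths can be checked from the ESX side as well, again assuming vmhba32 is the software iSCSI adapter:

# Rescan the software iSCSI adapter and list the paths ESX sees to each LUN
esxcfg-rescan vmhba32
esxcfg-mpath -l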

With this configuration I am able to use both NICs, both switches, and all four SP front end ports for optimal load balancing.

The failover time is very quick as well.


Exchange 2007 CCR

I built my first two-node Exchange 2007 CCR cluster last week, and I have to say that I am impressed. In the past I have frowned upon Microsoft clusters because there was only one copy of the data, which all nodes in the cluster had to access. A traditional MSCS cluster only provided HA when the physical server or the Windows OS failed; there was no mechanism for replicating the data, so if the data became corrupted, having a cluster didn't buy you anything.

With an Exchange 2007 CCR (Cluster Continuous Replication) cluster, in combination with the new MSCS quorum type (Majority Node Set, more on this later), shared storage is no longer required.

Majority Node Set quorum type – With an MNS quorum, each cluster node keeps its own locally stored copy of the quorum database, so shared storage for the quorum is not required. One downside to a typical MNS quorum is that at least three cluster nodes are required, because a majority of the nodes must be online before the cluster resources will come online. For example, in a three-node MNS cluster two nodes form a majority, so the cluster survives the loss of one node; with only two nodes, losing either node leaves a single vote out of two, which is not a majority, and the cluster will not come online.

With Windows 2003 SP1 plus this hotfix (http://support.microsoft.com/kb/921181), or with SP2, there is a new MNS configuration option called the “File Share Witness”.

The file share witness feature is an improvement to the current Majority Node Set (MNS) quorum model. This feature lets you use a file share that is external to the cluster as an additional “vote” to determine the status of the cluster in a two-node MNS quorum cluster deployment.

Microsoft's best practice is to use the Exchange 2007 Hub Transport server for the file share witness. To do this, simply create a new folder and share it, and make sure the Administrators group and the cluster service account have full control permissions on both the share and the folder.
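A minimal sketch of creating the witness share from a command prompt on the hub transport server; the folder name and the DOMAIN\clustersvc account are only examples, so substitute your own cluster service account.

rem Create the witness folder and share it, granting full control on the share
mkdir C:\MNS_FSW
net share MNS_FSW=C:\MNS_FSW /GRANT:Administrators,FULL /GRANT:DOMAIN\clustersvc,FULL

rem Grant full control on the underlying folder (NTFS permissions) as well
cacls C:\MNS_FSW /E /G Administrators:F
cacls C:\MNS_FSW /E /G DOMAIN\clustersvc:F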

Below is a diagram of an Exchange 2007 CCR two-node cluster using the hub transport server as the file share witness.

[Diagram: Exchange 2007 CCR two-node cluster with the hub transport server as file share witness]

 

Requirements

  • Windows 2003 SP2 or Windows 2008 x64 Enterprise
  • Exchange 2007 Enterprise
  • Two servers with the same amount of RAM and disk space
  • Two NICs per server, one for the LAN and one for the cluster heartbeat

Steps

  • Create the MSCS cluster, choosing the Majority Node Set quorum type.
  • Configure MNS to use the file share witness on the hub transport server with this command (see the sketch after these steps):
    cluster res "Majority Node Set" /priv MNSFileShare=\\servername\sharename
  • Apply the MNS configuration by moving the cluster group to the other node.
  • Install Exchange 2007 Enterprise on the first node, choosing Custom and then “Active Clustered Mailbox Role”.
  • When installing on the second node, choose “Passive Clustered Mailbox Role”.
    [Screenshot: Exchange 2007 setup, clustered mailbox role selection]
  • Choose Cluster Continuous Replication and fill in the clustered mailbox server name and IP address.
    [Screenshot: Exchange 2007 setup, Cluster Continuous Replication name and IP]
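For reference, the quorum-related commands from the steps above looked roughly like this when run from a command prompt on one of the cluster nodes (the server and share names are examples):

rem Point the MNS quorum at the file share witness on the hub transport server
cluster res "Majority Node Set" /priv MNSFileShare=\\HUBSERVER\MNS_FSW

rem Verify that the private property was set
cluster res "Majority Node Set" /priv

rem Move the cluster group to the other node so the new setting takes effect
cluster group "Cluster Group" /move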