Custom ESXi 5 ISO for UCS, Nexus 1000v and PowerPath VE

With ESXi 5 VMware made it very easy to create custom installation ISOs. I have been doing a lot of upgrades from ESXi 4.1 in our customer sites that have UCS, Nexus 1000v and PowerPath VE so I decided to create a custom ISO that includes the current UCS enic/fnic drivers and Nexus 1000v/PowerPath VE.

When I first started doing these upgrades I would remove the host from the Nexus 1000v VDS, uninstall the Nexus 1000v VEM and uninstall PowerPath VE. After upgrading/rebuilding the host I would then use VUM to install the Nexus 1000v VEM, add the host back to the VDS and then install PowerPath VE.

Creating NFS Exports on Virtual Data Movers on EMC Celerra

I learned something new about NFS exports on a recent EMC NS480 project.

The client I was working with wanted some CIFS shares that could be accessed by both Windows and NFS clients at the same time. I initially thought that this wouldn't be possible because the CIFS shares were going to be on a file system mounted on a Virtual Data Mover (VDM), and I thought that you could only create NFS exports on file systems mounted on a physical Data Mover (server_2).

After some research on EMC Powerlink I figured out how to do it:

  1. First I created the desired folder structure using the Windows file share.
  2. Next I ran this command to create the NFS export on the VDM:
    "server_export server_2 -Protocol nfs /root_vdm_1/production-FS/unix/nfs1 -option root=,rw="

To mount this export on an NFS client you would use this path
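As a sketch, the client-side mount path is just the VDM root path prefixed onto the filesystem path, reached through an interface on the physical Data Mover. The Data Mover hostname below is a hypothetical placeholder, not from the original setup:

```shell
# Compose the client-visible NFS mount path for a VDM-mounted filesystem.
# On a VDM the export path carries the /root_vdm_N prefix, and clients
# mount through the physical Data Mover's network interface.
VDM_ROOT="/root_vdm_1"
FS_PATH="/production-FS/unix/nfs1"
DM_HOST="datamover2"   # hypothetical Data Mover interface hostname
echo "mount -t nfs ${DM_HOST}:${VDM_ROOT}${FS_PATH} /mnt/nfs1"
```

The key detail is that the /root_vdm_1 prefix from the server_export command carries over into the path the NFS client mounts.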

Cisco UCS Palo and EMC PowerPath VE Incompatibility

******UPDATE****** There is a new VMware driver out that corrects this incompatibility. You can download it here

I came across a very unexpected incompatibility this week between the Cisco UCS VIC M81KR (Palo) and PowerPath VE.

I was implementing Cisco Nexus 1000v and EMC PowerPath VE on Cisco UCS blades with the new Cisco UCS VIC M81KR Virtual Interface Card (Palo). We did the Nexus 1000v implementation first and that went flawlessly. Being able to present 4 10G vNICs to the UCS blade with Palo makes for a very easy and trouble-free Nexus 1000v install because you don't have to put the ESX Service Console in the Nexus 1000v vNetwork Distributed Switch.

After the Nexus 1000v was complete we moved on to PowerPath VE. This environment was already using PowerPath VE on their other UCS blades, but those have the Menlo mezzanine cards with the QLogic HBA chipset. We were expecting this piece of the project to be the easiest because with PowerPath VE you install it on each ESX host, license it, and that is it. There is zero configuration with PowerPath VE on ESX.

So we downloaded the latest PowerPath VE build from Powerlink (5.4 SP1). We then configured an internal vCenter Update Manager patch repository so that we could deploy PowerPath VE with VUM. After we deployed PowerPath VE to the first host, we noticed in the vSphere client that the LUNs were still owned by NMP. At first I thought maybe it was because it wasn't licensed yet, but then I remembered from the other PowerPath VE installs I did that PowerPath should already own the SAN LUNs.

I SSHed into the host and looked at the vmkwarning log file and sure enough there were lots of these warnings and errors.

WARNING: ScsiClaimrule: 709: Path vmhba2:C0:T1:L20 is claimed by plugin NMP, but current claimrule number 250 indicates that it should be claimed by plugin PowerPath.

vmkernel: 0:00:00:50.369 cpu8:4242)ALERT: PowerPath: EmcpEsxLogEvent: Error:emcp:MpxEsxPathClaim: MpxRecognize failed

It took us a few minutes, but then we realized it was probably an incompatibility between Palo and PowerPath VE. We opened both a Cisco TAC and an EMC support case on the issue, and sure enough there is an incompatibility between the current ESX Palo driver and PowerPath VE. Cisco TAC provided us an updated beta fnic ESX driver to test but said that it wasn't production ready.

We tested the new driver and it fixed the issue; PowerPath VE was then able to claim the SAN LUNs. Since the driver is beta and not fully tested by VMware, we are going to hold off using it. Cisco didn't give us a date for when the driver would be released. I imagine that once VMware gives it their blessing they will post it to the vCenter Update Manager repository and it can be installed from there. Cisco may even have it out sooner as a single driver download from their UCS downloads page.

Since both the UCS Palo card and PowerPath VE are part of vBlock, I am very surprised this wasn't already tested by Cisco, VMware and EMC. Oh well, I know Cisco will have this fixed soon, so it isn't that big of a deal.

Windows 2008 Failover Clustering on vSphere with EMC PowerPath VE

Neither VMware nor Microsoft supports third-party multipathing or the Round Robin path policy for VMs set up in a Microsoft Failover Cluster. This is stated in the "Setup for Failover Clustering and Microsoft Cluster Service" PDF on pages 11, 25 and 36.

The reason this isn't supported is the way the SCSI-3 command set is altered when the NMP (Native Multipathing Plugin) Round Robin path policy is set or when third-party multipathing software (such as EMC PowerPath VE) is installed.

If you try to create a cluster on an ESX host with either of these in place, the Microsoft cluster validation check will fail on the SCSI-3 Persistent Reservation test.

If you need to run a Microsoft Cluster on a host with PowerPath VE installed you can modify the claim rules so that PowerPath VE doesn’t claim the LUNs that the cluster will be using.

Let's say, for example, the LUNs the Microsoft cluster will be using are LUNs 12-14. To modify the claim rules using the ESX COS CLI, run these commands.

To list the current claim rules run

esxcli corestorage claimrule list

To add claim rules that force the Microsoft cluster LUNs to be owned by the NMP (Native Multipathing Plugin) run these commands

esxcli corestorage claimrule add --rule=210 --plugin="NMP" --lun=12 --type="location"
esxcli corestorage claimrule add --rule=211 --plugin="NMP" --lun=13 --type="location"
esxcli corestorage claimrule add --rule=212 --plugin="NMP" --lun=14 --type="location"
esxcli corestorage claimrule load
esxcli corestorage claimrule run

NOTE – The rule number must be between 201 and 249

Run esxcli corestorage claimrule list to verify the new rules were added.
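If you have more than a few clustered LUNs, the repetitive claim rule commands above can be generated with a small loop rather than typed by hand. This is just a sketch that prints the commands for LUNs 12-14 starting at rule 210 (staying inside the 201-249 range noted above); review the output before running it on a host:

```shell
# Generate one "claimrule add" command per clustered LUN.
# Rule numbers must stay between 201 and 249 so they take effect
# before PowerPath's own claim rule.
RULE=210
CMDS=""
for LUN in 12 13 14; do
  CMDS="${CMDS}esxcli corestorage claimrule add --rule=${RULE} --plugin=\"NMP\" --lun=${LUN} --type=\"location\"
"
  RULE=$((RULE + 1))
done
printf '%s' "$CMDS"
```

Follow the generated commands with the usual "claimrule load" and "claimrule run" to activate them.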

You can also use the VMware vSphere CLI 4 from a remote machine or the VMware vSphere Management Assistant (vMA). When running esxcli from either of these, the commands will look like this.

esxcli --server=esxhostname --username=root corestorage claimrule add --rule=210 --plugin="NMP" --lun=12 --type="location"
esxcli --server=esxhostname --username=root corestorage claimrule add --rule=211 --plugin="NMP" --lun=13 --type="location"
esxcli --server=esxhostname --username=root corestorage claimrule add --rule=212 --plugin="NMP" --lun=14 --type="location"
esxcli --server=esxhostname --username=root corestorage claimrule load
esxcli --server=esxhostname --username=root corestorage claimrule run

esxcli --server=esxhostname --username=root corestorage claimrule list

How to install EMC Navisphere Host Agent on Citrix XenServer 5

If you are integrating Citrix XenServer with EMC Clariion storage the Navisphere Host Agent (naviagent) can be installed in the Linux management OS (domain 0) so that the host registers with Navisphere.

  1. Download the Navisphere Host Agent 6.28 from EMC Powerlink (requires a Powerlink login).
  2. Unzip and copy the RPM to the /tmp folder of the XenServer host. I use a tool from Bitvise called Tunnelier to do this.
  3. From the SSH shell, go to the /tmp folder and give yourself execute permissions on the RPM with this command: "chmod 755 *.rpm"
  4. Type this command to install: "rpm -i naviagent-"
  5. Open the firewall ports using these commands

iptables -D RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6389 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 6389 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6389 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 6389 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6390 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6390 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6391 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6391 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6392 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6392 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
iptables-save > /etc/sysconfig/iptables
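The repetitive per-SP ACCEPT rules follow a simple pattern (TCP on 6389-6392 for each storage processor, plus UDP on 6389), so they can be generated with a loop instead of typed line by line. This sketch only prints the rules; SPA-IP-ADDRESS and SPB-IP-ADDRESS remain placeholders as in the list above:

```shell
# Emit the ACCEPT rules for both storage processors.
# TCP ports 6389-6392 are opened for each SP; port 6389 also gets a
# matching UDP rule, mirroring the hand-written list above.
RULES=""
for SP in SPA-IP-ADDRESS SPB-IP-ADDRESS; do
  for PORT in 6389 6390 6391 6392; do
    RULES="${RULES}iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport ${PORT} -j ACCEPT --src ${SP}
"
  done
  RULES="${RULES}iptables -A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 6389 -j ACCEPT --src ${SP}
"
done
printf '%s' "$RULES"
```

You would still delete and re-add the final REJECT rule around these, and run iptables-save, exactly as shown above.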

Reboot the host and then check Navisphere to see if the host registered.

CIFS Shares and EMC Celerra Replicator Switchover

I recently set up Celerra Replicator v2 from an old NS to a new NS to migrate CIFS shares and the VDM configuration. The replication worked great and replicated 3 TB of data in about 30 hours. Once replication was complete I switched over the file system and VDM replication sessions in Celerra Manager. The switchover went great and I was able to access the CIFS server and shares on the new NS.

When I used Celerra Manager to view the CIFS shares and CIFS server on the new NS, however, there was nothing listed on the Shares tab or CIFS Servers tab. I knew the shares and server were there because I could use the Computer Management MMC to connect to the CIFS server name and view the shares.

The only way I found to get the CIFS shares and server to show up in Celerra Manager was to reboot the control station.

Does anyone know of a way to update the control station without having to reboot it?

Windows 2008 Partitioning Alignment

The partitioning alignment “bug” in Windows 2003 and earlier has been fixed in Windows 2008.

You no longer have to use diskpart.exe to align new partitions in Windows 2008.
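The arithmetic behind the alignment fix can be sketched quickly. Windows 2003 started partitions at sector 63 (32256 bytes), which a typical 64 KB array stripe does not divide evenly, so every stripe-sized I/O could straddle two stripes; Windows 2008 starts partitions at 1 MB, which divides cleanly (the 64 KB stripe size here is an example value, not from the original post):

```shell
# Check whether each default partition offset lands on a 64 KB
# stripe boundary: a non-zero remainder means misaligned I/O.
STRIPE=65536                  # example 64 KB array stripe size
W2K3_OFFSET=$((63 * 512))     # Windows 2003 default: sector 63 = 32256 bytes
W2K8_OFFSET=$((1024 * 1024))  # Windows 2008 default: 1 MB = 1048576 bytes
echo "Windows 2003 remainder: $((W2K3_OFFSET % STRIPE))"
echo "Windows 2008 remainder: $((W2K8_OFFSET % STRIPE))"
```

The 2003 offset leaves a remainder of 32256, while the 2008 offset leaves 0, which is why diskpart.exe alignment is no longer needed.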

Using navicli to Create Raid Groups and LUNs on EMC Clariion Arrays

I was recently involved in a SAN project implementing two EMC Clariion CX4 arrays. In the past I have used Navisphere to create the Raid Groups and LUNs, but this time I decided to use the command line tool navicli.exe.

Creating the RGs and LUNs via Navisphere is very painful because of Java and all of the mouse clicking required.

I downloaded the "Navisphere CLI (Windows) 6.2" package from EMC Powerlink and installed it on my laptop. I also used the "EMC Navisphere Command Line Interface Reference" PDF to figure out the syntax.

By using navicli I was able to create the RGs and LUNs in half the time it would have taken me using the Java interface.

To create a Raid Group use this command syntax.

navicli -h ip-of-array-sp createrg Raid-Group-ID-Number bus_enclosure_disk [bus_enclosure_disk ...]

Example: "navicli -h ip-of-array-sp createrg 1 0_0_6 0_0_7 0_0_8 0_0_9 0_0_10"

To bind a LUN using navicli use this syntax.

navicli -h ip-of-array-sp bind raid-type LUN-# -rg raid-group-number -sq size-qualifier (mb, gb) -cap capacity -sp A|B

Example: "navicli -h ip-of-array-sp bind r5 105 -rg 1 -sq mb -cap 512000 -sp B"

Syntax for renaming a LUN.

navicli -h ip-of-array-sp chglun -l LUN-# -name New-Name

Example: "navicli -h ip-of-array-sp chglun -l 105 -name VMWARE_VMFS01"
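Where navicli really pays off over the Java interface is bulk work. As a sketch, a loop like this prints the bind commands for a set of same-size LUNs, alternating the owning SP for rough load balancing (the SP address is the same placeholder used in the syntax lines above; the raid group, LUN numbers and capacity are example values):

```shell
# Generate bind commands for four 500 GB r5 LUNs in raid group 1,
# alternating SP ownership A/B/A/B. Review before running against an array.
SP_ADDR="ip-of-array-sp"   # placeholder for the array SP management IP
LUN=105
CMDS=""
for SP in A B A B; do
  CMDS="${CMDS}navicli -h ${SP_ADDR} bind r5 ${LUN} -rg 1 -sq mb -cap 512000 -sp ${SP}
"
  LUN=$((LUN + 1))
done
printf '%s' "$CMDS"
```

The same pattern works for createrg and chglun when you have many raid groups or LUNs to name.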

Additional references

ISCSI Multipathing with Clariion CX3-10c and VMware ESX 3.5

I recently did a VMware project using an EMC Clariion CX3-10c and VMware ESX 3.5 Update 2. The plan was to use the ISCSI front end ports on the CX3-10c for VMware ISCSI storage connectivity. The design included two dedicated Cisco 3650g switches for the ISCSI network and two dedicated gigabit NICs on each ESX host for ISCSI traffic.

The ESX hosts have a total of 6 gigabit NICs split between 3 physical cards; two onboard, one quad port and one dual port. Below is a screen shot of the original vSwitch design.


  • Two NICs from two different physical cards for Service Console and Vmotion.
  • Two NICs from two different physical cards for Virtual Machine traffic.
  • Two NICs from two different physical cards for ISCSI storage traffic. The two ISCSI NICs were each plugged into a different physical Cisco switch.

The ISCSI front end ports on the CX3-10c were also split between the two dedicated Cisco switches. See diagram below.


The IP addresses of all four front end ISCSI ports were originally in the same subnet. For example

I then tested connectivity from ESX to the ISCSI front end ports using the vmkping tool. I was able to successfully ping SPA0 and SPB0 but not SPA1 or SPB1.

I initially thought I had an incorrect configuration somewhere, so I verified my ESX configuration and switch port configuration. After about 15 minutes of checking configurations, I remembered that the VMkernel networking stack does not load balance like the VM networking stack does. A VMkernel networking stack will only use the other NIC on a vSwitch if the first one fails.

I then tested this by unplugging the cables for the NIC in switch 1 and was then able to ping SPA1 and SPB1.

I then went back to the drawing board to come up with a way to see all 4 paths and also provide fault tolerance.

I did some searches on Powerlink and found an article that states having both ISCSI NICs on the same subnet is not supported. After reading this I changed the IP addresses on the ISCSI front end ports on the Clariion so that the SPs are in different subnets.

CX3-10c ISCSI 2

I then changed the ESX configuration to have two ISCSI vSwitches with one NIC in each vSwitch. See screen shot below.
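The two-vSwitch layout can be sketched with the ESX 3.5 service console commands below. This only prints the commands; the vSwitch names, vmnic names and port group names are hypothetical, not taken from the original host:

```shell
# Print the commands to build two single-NIC iSCSI vSwitches, one per
# physical switch, so each VMkernel path stays on its own NIC.
CMDS=""
i=1
for NIC in vmnic2 vmnic5; do
  CMDS="${CMDS}esxcfg-vswitch -a vSwitch_iSCSI${i}
esxcfg-vswitch -L ${NIC} vSwitch_iSCSI${i}
esxcfg-vswitch -A iSCSI${i} vSwitch_iSCSI${i}
"
  i=$((i + 1))
done
printf '%s' "$CMDS"
```

Each port group would then get its own VMkernel interface (esxcfg-vmknic) with an address in the matching SP subnet.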


With this configuration I was then able to ping all four ISCSI front end ports on the Clariion from ESX using vmkping.

I then configured the ISCSI software initiator on ESX and added all four targets.


I did a rescan on the host and then checked the connectivity status on the Clariion and all four paths were registered.


With this configuration I am able to use both NICs, both switches and all four SPs for optimal load balancing.

The failover time is very quick as well.