Cisco UCS Palo and EMC PowerPath VE Incompatibility

UPDATE: There is a new VMware driver out that corrects this incompatibility. You can download it here

I came across a completely unexpected incompatibility this week between the Cisco UCS VIC M81KR (Palo) and PowerPath VE.

I was implementing the Cisco Nexus 1000v and EMC PowerPath VE on Cisco UCS blades with the new Cisco UCS VIC M81KR Virtual Interface Card (Palo). We did the Nexus 1000v implementation first, and it went flawlessly. Being able to present four 10G vNICs to the UCS blade with Palo makes for a very easy, trouble-free Nexus 1000v install because you don’t have to put the ESX Service Console in the Nexus 1000v vNetwork Distributed Switch.

After the Nexus 1000v was complete we moved on to PowerPath VE. This environment was already using PowerPath VE on its other UCS blades, but those have the Menlo mezzanine cards with the QLogic HBA chipset. We expected this piece of the project to be the easiest, because with PowerPath VE you install it on each ESX host, license it, and that’s it. There is zero configuration with PowerPath VE on ESX.

So we downloaded the latest PowerPath VE build from Powerlink (5.4 SP1). We then configured an internal vCenter Update Manager patch repository so that we could deploy PowerPath VE with VUM. After we deployed PowerPath VE to the first host, we noticed in the vSphere client that the LUNs were still owned by NMP. At first I thought it might be because the host wasn’t licensed yet, but then I remembered from my other PowerPath VE installs that PowerPath should already own the SAN LUNs.
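Rather than eyeballing ownership in the vSphere client, the same check can be scripted. The sketch below is purely illustrative and not part of any EMC or VMware tooling: it assumes you have already dumped each device's owning multipath plugin into a simple mapping (for example, by parsing `esxcli nmp device list` output on an ESX 4.x host), and it flags any LUN still owned by NMP instead of PowerPath. The device names are hypothetical.

```python
# Illustrative sketch only: flag SAN LUNs not yet claimed by PowerPath.
# The device-to-plugin mapping is assumed input (e.g. parsed from
# "esxcli nmp device list"); the naa identifiers below are made up.

def find_unclaimed_luns(device_owners):
    """Return devices whose owning plugin is not PowerPath."""
    return sorted(dev for dev, plugin in device_owners.items()
                  if plugin != "PowerPath")

owners = {
    "naa.60060160a0b1220001": "NMP",        # should be PowerPath
    "naa.60060160a0b1220002": "PowerPath",  # correctly claimed
}

for dev in find_unclaimed_luns(owners):
    print("LUN %s is still owned by %s" % (dev, owners[dev]))
```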

I SSHed into the host and looked at the vmkwarning log file, and sure enough there were lots of warnings and errors like these:

WARNING: ScsiClaimrule: 709: Path vmhba2:C0:T1:L20 is claimed by plugin NMP, but current claimrule number 250 indicates that it should be claimed by plugin PowerPath.

vmkernel: 0:00:00:50.369 cpu8:4242)ALERT: PowerPath: EmcpEsxLogEvent: Error:emcp:MpxEsxPathClaim: MpxRecognize failed
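Warnings like the ScsiClaimrule line above follow a regular enough format that you can pick out claim-rule conflicts programmatically when a host has hundreds of paths. This is a minimal, hypothetical sketch; the regex is my own assumption based on that single sample line, not part of any VMware tool.

```python
import re

# Hypothetical parser for the ScsiClaimrule warning shown above; the
# pattern is an assumption inferred from that one sample log line.
CLAIM_WARNING = re.compile(
    r"Path (?P<path>\S+) is claimed by plugin (?P<actual>\w+), "
    r"but current claimrule number (?P<rule>\d+) indicates that "
    r"it should be claimed by plugin (?P<expected>\w+)"
)

def parse_claim_conflict(line):
    """Return (path, actual, expected) if the line reports a mismatch."""
    m = CLAIM_WARNING.search(line)
    if m and m.group("actual") != m.group("expected"):
        return m.group("path"), m.group("actual"), m.group("expected")
    return None

sample = ("WARNING: ScsiClaimrule: 709: Path vmhba2:C0:T1:L20 is claimed "
          "by plugin NMP, but current claimrule number 250 indicates that "
          "it should be claimed by plugin PowerPath.")
print(parse_claim_conflict(sample))
# -> ('vmhba2:C0:T1:L20', 'NMP', 'PowerPath')
```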

It took us a few minutes, but then we realized it was probably an incompatibility between Palo and PowerPath VE. We opened both a Cisco TAC case and an EMC support case on the issue, and sure enough there is an incompatibility between the current ESX Palo driver and PowerPath VE. Cisco TAC provided us a beta updated fnic ESX driver to test but said that it wasn’t production ready.

We tested the new driver and it fixed the issue: PowerPath VE was then able to claim the SAN LUNs. Since the driver is beta and not fully tested by VMware, we are going to hold off on using it. Cisco didn’t give us a date for when the driver would be released. I imagine that once VMware gives it their blessing they will post it to the vCenter Update Manager repository, and it can be installed from there. Cisco may even have it out sooner as a standalone driver download from their UCS downloads page.

Since both the UCS Palo card and PowerPath VE are part of vBlock, I am very surprised this wasn’t already tested by Cisco, VMware and EMC. Oh well, I know Cisco will have this fixed soon, so it isn’t that big of a deal.


10 thoughts on “Cisco UCS Palo and EMC PowerPath VE Incompatibility”



  3. Keeping the management IP/SC IP on a standard vSwitch eases deployment, and in the event you have N1KV issues you will not lose access to your ESX host. There isn’t any value add in putting the management IP under the control of the N1KV anyway; the whole purpose of the N1KV is virtual machine traffic.
