Custom ESXi 5 ISO for UCS, Nexus 1000v and PowerPath VE

With ESXi 5, VMware made it very easy to create custom installation ISOs. I have been doing a lot of upgrades from ESXi 4.1 at customer sites that have UCS, Nexus 1000v and PowerPath VE, so I decided to create a custom ISO that includes the current UCS enic/fnic drivers along with the Nexus 1000v VEM and PowerPath VE.

When I first started doing these upgrades I would remove the host from the Nexus 1000v VDS, uninstall the Nexus 1000v VEM and uninstall PowerPath VE. After upgrading/rebuilding the host I would then use VUM to install the Nexus 1000v VEM, add the host back to the VDS and then install PowerPath VE.



Creating NFS Exports on Virtual Data Movers on EMC Celerra

I learned something new about NFS exports on a recent EMC NS480 project.

The client I was working with wanted some CIFS shares that could be accessed by both Windows and NFS clients at the same time. I initially thought this wouldn't be possible because the CIFS shares were going to be on a file system mounted on a Virtual Data Mover (VDM), and I thought you could only create NFS exports on file systems mounted on a physical Data Mover (server_2).

After some research on EMC Powerlink I figured out how to do it:

  1. First I created the desired folder structure using the Windows file share.
  2. Next I ran this command to create the NFS export for the file system mounted on the VDM:
    “server_export server_2 -Protocol nfs /root_vdm_1/production-FS/unix/nfs1 -option root=172.24.102.240,rw=172.24.102.0/255”

To mount this export on an NFS client you would use this path:

172.24.102.40:/root_vdm_1/production-FS/unix/nfs1
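
For example, mounting it from a Linux NFS client would look something like this (the /mnt/nfs1 mount point is just a placeholder):

mkdir -p /mnt/nfs1
mount -t nfs 172.24.102.40:/root_vdm_1/production-FS/unix/nfs1 /mnt/nfs1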

Cisco UCS Palo and EMC PowerPath VE Incompatibility

******UPDATE****** There is a new VMware driver out that corrects this incompatibility. You can download it here: http://downloads.vmware.com/d/details/esx40_cisco_cna_v110110a/ZHcqYmRwKmViZHdlZQ

I came across a very unexpected incompatibility this week between the Cisco UCS VIC M81KR (Palo) and PowerPath VE.

I was implementing Cisco Nexus 1000v and EMC PowerPath VE on Cisco UCS blades with the new Cisco UCS VIC M81KR Virtual Interface Card (Palo). We did the Nexus 1000v implementation first and that went flawlessly. Being able to present four 10G vNICs to the UCS blade with Palo makes for a very easy and trouble-free Nexus 1000v install because you don't have to put the ESX Service Console in the Nexus 1000v vNetwork Distributed Switch.

After the Nexus 1000v was complete we moved on to PowerPath VE. This environment was already using PowerPath VE on its other UCS blades, but those have the Menlo mezzanine cards with the QLogic HBA chipset. We were expecting this piece of the project to be the easiest because with PowerPath VE you just install it on each ESX host, license it, and that's it. There is zero configuration with PowerPath VE on ESX.
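
For anyone who hasn't done it before, the licensing step is normally handled remotely with the rpowermt utility from an RTOOLS management host; here is a minimal sketch (esx01 is a placeholder hostname):

# license PowerPath VE on the ESX host
rpowermt register host=esx01
# verify PowerPath now owns the SAN devices
rpowermt display dev=all host=esx01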

So we downloaded the latest PowerPath VE build from Powerlink (5.4 SP1). We then configured an internal vCenter Update Manager patch repository so that we could deploy PowerPath VE with VUM. After we deployed PowerPath VE to the first host we noticed in the vSphere Client that the LUNs were still owned by NMP. At first I thought maybe it was because it wasn't licensed yet, but then I remembered from the other PowerPath VE installs I have done that PowerPath should already own the SAN LUNs at that point.

I SSHed into the host and looked at the vmkwarning log file, and sure enough there were lots of warnings and errors like these:

WARNING: ScsiClaimrule: 709: Path vmhba2:C0:T1:L20 is claimed by plugin NMP, but current claimrule number 250 indicates that it should be claimed by plugin PowerPath.

vmkernel: 0:00:00:50.369 cpu8:4242)ALERT: PowerPath: EmcpEsxLogEvent: Error:emcp:MpxEsxPathClaim: MpxRecognize failed
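
For anyone troubleshooting the same symptoms, a few ESX 4.x service console commands are handy for confirming what is going on (a rough sketch; the exact output varies by build):

# list the claim rules - rule 250 referenced in the warning above is the PowerPath rule
esxcli corestorage claimrule list
# list the devices currently claimed by NMP
esxcli nmp device list
# show module information for the loaded fnic driver
vmkload_mod -s fnic
# pull the related warnings out of the log
grep -i powerpath /var/log/vmkwarning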

It took us a few minutes, but then we realized it was probably an incompatibility between Palo and PowerPath VE. We opened both a Cisco TAC case and an EMC support case on the issue, and sure enough there is an incompatibility between the current ESX Palo driver and PowerPath VE. Cisco TAC provided a beta fnic ESX driver for us to test but said that it wasn't production ready.

We tested the new driver and it fixed the issue; PowerPath VE was then able to claim the SAN LUNs. Since the driver is beta and not fully tested by VMware, we are going to hold off on using it. Cisco didn't give us a date for when the driver would be released. I imagine that once VMware gives it their blessing they will post it to the vCenter Update Manager repository and it can be installed from there. Cisco may even have it out sooner as a single driver download from their UCS downloads page.

Since both the UCS Palo card and PowerPath VE are part of vBlock, I am very surprised this wasn't already tested by Cisco, VMware and EMC. Oh well, I know Cisco will have this fixed soon, so it isn't that big of a deal.

How to install EMC Navisphere Host Agent on Citrix XenServer 5

If you are integrating Citrix XenServer with EMC Clariion storage, the Navisphere Host Agent (naviagent) can be installed in the Linux management OS (domain 0) so that the host registers with Navisphere.

  1. Download the Navisphere Host Agent 6.28 from here (requires Powerlink login) – http://tinyurl.com/n5cbu2
  2. Unzip and copy the RPM to the /tmp folder of the XenServer host. I use a tool from Bitvise called Tunnelier to do this – http://dl.bitvise.com/Tunnelier-Inst.exe
  3. From the SSH shell go to the /tmp folder and give yourself execute permissions on the RPM with this command “chmod 755 *.rpm”
  4. Type this command to install “rpm -i naviagent-6.28.20.1.40-1.noarch.rpm”
  5. Open the firewall ports using these commands:

# Remove the catch-all REJECT rule so the new ACCEPT rules are matched first
iptables -D RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
# Allow Navisphere agent traffic (TCP/UDP 6389, TCP 6390-6392) from both storage processors
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6389 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 6389 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6389 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 6389 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6390 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6390 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6391 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6391 -j ACCEPT --src SPB-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6392 -j ACCEPT --src SPA-IP-ADDRESS
iptables -A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 6392 -j ACCEPT --src SPB-IP-ADDRESS
# Re-add the catch-all REJECT rule at the end of the chain
iptables -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
# Save the rules so they persist across reboots
iptables-save > /etc/sysconfig/iptables

Reboot the host and then check Navisphere to see if the host registered.
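
After the reboot, a quick sanity check from the XenServer console confirms the agent is installed and running before you look in Navisphere (a sketch; the process and init script names can vary slightly between agent versions):

# confirm the package installed
rpm -qa | grep -i naviagent
# confirm the agent process is running
ps -ef | grep -i naviagent
# confirm the firewall rules for the Navisphere ports took effect
iptables -L RH-Firewall-1-INPUT -n | grep 639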

CIFS Shares and EMC Celerra Replicator Switchover

I recently set up Celerra Replicator V2 between an old NS and a new NS to migrate CIFS shares and the VDM configuration. The replication worked great and replicated 3 TB of data in about 30 hours. Once replication was complete I switched over the file system and VDM replication sessions in Celerra Manager. The switchover went great, and I was able to access the CIFS server and shares on the new NS.

When I used Celerra Manager to view the CIFS shares and CIFS server on the new NS, there was nothing listed on the Shares tab or the CIFS Servers tab. I knew the shares and server were there because I could use the Computer Management MMC to connect to the CIFS server name and view the shares.

The only way I found to get the CIFS shares and server to show up in Celerra Manager was to reboot the control stations.

Anyone else know of a way to update the control station without having to reboot it?

Windows 2008 Partitioning Alignment

The partitioning alignment “bug” in Windows 2003 and earlier has been fixed in Windows 2008.

You no longer have to use diskpart.exe to align new partitions in Windows 2008.

http://technet.microsoft.com/en-us/library/bb738145.aspx

http://theessentialexchange.com/blogs/michael/archive/2008/03/07/Exchange-2007-Disk-Performance-Partition-Alignment-.aspx

ISCSI Multipathing with Clariion CX3-10c and VMware ESX 3.5

I recently did a VMware project using an EMC Clariion CX3-10c and VMware ESX 3.5 Update 2. The plan was to use the ISCSI front end ports on the CX3-10c for VMware ISCSI storage connectivity. The design included two dedicated Cisco 3650g switches for the ISCSI network and two dedicated gigabit NICs on each ESX host for ISCSI traffic.

The ESX hosts have a total of 6 gigabit NICs split across 3 physical cards: two onboard NICs, one quad-port card and one dual-port card. Below is a screenshot of the original vSwitch design.

[Screenshot: original vSwitch configuration]

  • Two NICs from two different physical cards for Service Console and VMotion.
  • Two NICs from two different physical cards for Virtual Machine traffic.
  • Two NICs from two different physical cards for ISCSI storage traffic. The two ISCSI NICs were each plugged into a different physical Cisco switch.

The ISCSI front end ports on the CX3-10c were also split between the two dedicated Cisco switches. See diagram below.

[Diagram: CX3-10c ISCSI front-end port connections to the two dedicated switches]

The IP addresses of all four front end ISCSI ports were originally in the same subnet. For example:

SPA0: 10.1.210.30

SPA1: 10.1.210.31

SPB0: 10.1.210.32

SPB1: 10.1.210.33

I then tested connectivity from ESX to the ISCSI front end ports using the vmkping tool. I was able to successfully ping SPA0 and SPB0 but not SPA1 or SPB1.
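
In case it is useful, the test is as simple as this from the ESX service console (using the example addresses above):

vmkping 10.1.210.30   # SPA0 - replies
vmkping 10.1.210.31   # SPA1 - no reply
vmkping 10.1.210.32   # SPB0 - replies
vmkping 10.1.210.33   # SPB1 - no reply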

I initially thought I had an incorrect configuration somewhere, so I verified my ESX configuration and switch port configuration. After about 15 minutes of checking configurations I remembered that the VMkernel networking stack does not load balance like the VM networking stack does; a VMkernel port will only use the other NIC on a vSwitch if the first one fails.

I then tested this by unplugging the cables for the NIC in switch 1 and was then able to ping SPA1 and SPB1.

I then went back to the drawing board to come up with a way to see all 4 paths and also provide fault tolerance.

I did some searches on Powerlink and found an article (http://csgateway.emc.com/primus.asp?id=emc156408) that states that having both ISCSI NICs on the same subnet is not supported. After reading this I changed the IP addresses on the ISCSI front end ports on the Clariion to the following, so that the SP ports are split across two subnets.

SPA0: 10.1.210.30

SPA1: 10.1.215.30

SPB0: 10.1.210.32

SPB1: 10.1.215.32

[Diagram: CX3-10c ISCSI front-end ports split across the two subnets]

I then changed the ESX configuration to have two ISCSI vSwitches with one NIC in each vSwitch. See screenshot below.

[Screenshot: two ISCSI vSwitches, each with a single NIC]
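
For reference, the same layout can also be built from the ESX 3.5 service console. This is only a sketch; the vSwitch, port group and vmnic names and the VMkernel IP addresses are assumptions:

# first ISCSI vSwitch: one uplink and a VMkernel port in the 10.1.210.0/24 subnet
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vmknic -a -i 10.1.210.50 -n 255.255.255.0 iSCSI1
# second ISCSI vSwitch: one uplink and a VMkernel port in the 10.1.215.0/24 subnet
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A iSCSI2 vSwitch3
esxcfg-vmknic -a -i 10.1.215.50 -n 255.255.255.0 iSCSI2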

With this configuration I was then able to ping all four ISCSI front end ports on the Clariion from ESX using vmkping.

I then configured the ISCSI software initiator on ESX and added all four targets.

[Screenshot: ISCSI software initiator with all four targets added]

I did a rescan on the host and then checked the connectivity status on the Clariion and all four paths were registered.

[Screenshot: Clariion connectivity status showing all four paths registered]
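
For reference, the software initiator pieces can also be handled from the service console (a sketch; vmhba32 is an assumed adapter name, and the targets themselves were added through the VI Client):

# enable the software ISCSI initiator
esxcfg-swiscsi -e
# rescan the software ISCSI adapter after the targets are added
esxcfg-rescan vmhba32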

With this configuration I am able to use both NICs, both switches and all four SP front-end ports for optimal load balancing.

The failover time is very quick as well.