Cisco UCS PowerShell Health Check Report

I have been slack on my blog posts of late, mostly due to a lack of motivation, but I have also been very busy with very little free time to spare.

I like being busy and I have been working on some cool projects, mostly with UCS, Nexus, vSphere and EMC storage.

A few weeks ago I finally had a few days in the lab, so I decided to take a look at the Cisco UCS PowerTool. I didn't really have anything big planned; I was mostly just curious about it.


Cisco UCS 6248 Unified Port Configuration

One of the exciting new updates from Cisco recently is the 2nd generation UCS hardware.

The UCS gen 2 hardware includes these updated components:

  1. Fabric Interconnect – 6248s
  2. Fabric Extender – 2208
  3. Mezzanine Adapter – VIC 1280

For details on the new hardware check out Sean McGee’s post – http://www.mseanmcgee.com/2011/07/ucs-2-0-cisco-stacks-the-deck-in-las-vegas/

This post will focus on the configuration of the Unified Ports that are part of the 6248 hardware. As with most hardware/software capabilities of the Fabric Interconnects, Unified Ports came from the Cisco Nexus 5500 line, which has been around for about a year now.


VMware SRM Testing on Cisco UCS with Routing

I work for a Cisco/EMC/VMware VAR named Varrow and we do a fair amount of VMware SRM projects.

One of the challenges we face in doing SRM failover testing is being able to route between VMs that are brought up at the recovery site in a test bubble. Not all of our customers that use SRM need to test this, as a lot of them just need to verify that the VMs boot and can access storage.

For the few that need to do more extensive testing and route between VMs on different VLANs, we have come up with a simple solution.

This solution will work in any VMware environment, but when the customer has Cisco UCS at their recovery site there are additional benefits and functionality that can be realized.

The solution utilizes a free VM router appliance from Vyatta, which can be downloaded from the VMware Virtual Appliance Market – http://www.vmware.com/appliances/directory/va/383813

The advantage you get when you have Cisco UCS at the recovery site is that you can easily create a new vNIC, and the way layer 2 switching works within UCS allows you to route between VMs across multiple ESX hosts.

For non-UCS environments it will not be possible to route between VMs on different ESX hosts without some additional hardware: a pNIC and an L2 switch.

To test this out in our lab here are the steps I followed:

  1. Created 3 new test VLANs that only exist in UCS. It is important that these VLANs do not exist on your northbound layer 2 switch.
  2. Created a new vNIC template in UCS Manager named vmnic8-srm-b and added it to my ESXi Service Profile Template. This vNIC is configured to use Fabric B as primary but with failover enabled, so that if B is down it will fail over to A. I normally configure 2 vNICs per VMware vSwitch and let VMware handle the failover, but with this solution I needed a vSwitch with only 1 uplink so that routing between VMs across multiple ESX hosts could be achieved.
  3. After a reboot of my UCS-hosted ESXi host, the new vmnic8 was present.
  4. Created a new vSwitch and uplinked vmnic8 to it.
  5. Created 3 new VM port groups on the new vSwitch, one for each test VLAN.
  6. Imported the Vyatta OVF into vCenter and placed each of the 3 default vNICs into one of the new port groups.
  7. Powered on the Vyatta VM and logged into the console as root with the default password of vyatta.
  8. Configured the 3 Ethernet interfaces using these commands

configure
set interfaces ethernet eth0 address 10.120.10.254/24
set interfaces ethernet eth0 description "VLAN-120-SRM-TEST"
set interfaces ethernet eth1 address 10.130.17.253/24
set interfaces ethernet eth1 description "VLAN-117-SRM-TEST"
set interfaces ethernet eth2 address 10.13.7.245/24
set interfaces ethernet eth2 description "VLAN-107-SRM-TEST"
commit
save

After the interface configuration I verified the interface settings and the routing table from the Vyatta console.

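For reference, this verification boils down to a couple of standard Vyatta operational-mode commands (a minimal sketch):

show interfaces
show ip route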

From my 2 test VMs I was then able to ping between them across the ESXi hosts.


Cisco UCS 1.4 Post Upgrade Warnings

After upgrading our UCS lab to 1.4 all of my Service Profiles and Service Profile Templates were in a warning state with blue boxes around them. There wasn’t an outage and all of our blades were still functioning.

The warnings were due to the way Cisco changed the Serial over LAN Policy. My current Service Profile Template had the Serial over LAN Policy set to <Not Set>, but in 1.4 that isn't a valid option. I changed the Serial over LAN Policy to "No Serial over LAN Policy" and all of the warnings disappeared.


Cisco UCS Firmware 1.4

I have to say, now that I have had a chance to implement Cisco UCS firmware 1.4 and look at the new features, I am blown away. Cisco should have made this version 2.0.

Here is a list of my favorite new features included in 1.4:

SAN Port Channeling:

This allows you to bundle all of the FC connections on a 6120 so that there is 1 logical uplink to the northbound FC switch. Port channels provide faster convergence when there is a link failure, because a server vHBA doesn't have to get re-pinned to another uplink.

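For those who prefer the UCSM CLI, creating an FC port channel looks roughly like this (a sketch; the port channel ID and the member slot/port numbers are made-up placeholders):

UCS-A# scope fc-uplink
UCS-A /fc-uplink # scope fabric a
UCS-A /fc-uplink/fabric # create port-channel 1
UCS-A /fc-uplink/fabric/port-channel* # create member-port 1 31
UCS-A /fc-uplink/fabric/port-channel* # create member-port 1 32
UCS-A /fc-uplink/fabric/port-channel* # commit-buffer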

Maintenance Policies:

Remember when making a change on a Service Profile rebooted the server without asking? Or even worse, when a change on an updating Service Profile Template caused all of the Service Profiles bound to that template to reboot? Well, Maintenance Policies prevent these unplanned reboots by forcing the user to acknowledge the reboot, or you can even schedule it to happen after hours.

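The GUI makes this easy, but if you want to script it, the UCSM CLI equivalent should look something like the following; the object and property names here follow the usual UCSM scope/create/set pattern, so treat them as an approximation (the policy name user-ack-pol is a placeholder):

UCS-A# scope org /
UCS-A /org # create maint-policy user-ack-pol
UCS-A /org/maint-policy* # set reboot-policy user-ack
UCS-A /org/maint-policy* # commit-buffer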

Policy Usage Reporting:

Ever wondered what Service Profiles and Service Profile templates were using one of the many UCS policies? Well wonder no more: there is now a Show Policy Usage report on every policy in UCSM. I wish there were a similar feature for templates; I am guessing that will be included in a future update.


Enhanced Active Directory Support:

You no longer have to extend the Active Directory schema and you can map UCS roles to Active Directory groups.

Multiple Authentication Sources:

Pre-1.4 you could only have one authentication source at a time. Now you can create authentication domains and have the option to log in to any of them.


Local File System Download Option:

A remote SCP or SFTP server is no longer required for downloading new firmware or saving backups.


There are lots of other new features in 1.4 that I didn’t mention here, these were just some of my favorites.

Cisco UCS Service Profile Coolness

Last week I did a Cisco UCS project that included building out a new VMware vSphere ESXi 4.1 cluster using the new UCS blades. Now that ESXi boot from SAN is supported, we used 5 GB boot LUNs for the install.

During the course of the install the customer stated that in a few weeks they would be purchasing another chassis of blades and would be using these for additional ESXi 4.1 hosts. To demonstrate the power of the UCS stateless model I went ahead and provisioned all 8 of the new ESXi hosts even though the hardware did not exist.

Sounds like a magic trick, right? Well, sort of. Using the power of UCS Service Profiles and boot from SAN, here is how we accomplished it:

  1. Built out the UCS Service Profiles for the hosts from the Service Profile template.
  2. Put 4 of the new ESXi hosts into maintenance mode and powered them down.
  3. Disassociated the service profiles from those 4 blades.
  4. Using these 4 free blades we associated them with 4 of the Service Profiles for the hardware that did not yet exist.
  5. Created the boot LUNs.
  6. Installed ESXi 4.1.
  7. Added the hosts to vCenter.
  8. Applied our vSphere Host Profile to configure the new hosts.
  9. Powered down the 4 new servers.
  10. Pre-built the next 4 the same way.
  11. Powered those down and associated the original service profiles with the 4 blades.

Now when the customer gets the new chassis and blades, all they have to do is associate the service profiles and they are done.
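
The whole shuffle can also be driven from the UCSM CLI; here is a rough sketch of the disassociate/associate steps (the profile name esxi-host-5 and the chassis/slot numbers are placeholders):

UCS-A# scope org /
UCS-A /org # scope service-profile esxi-host-5
UCS-A /org/service-profile # disassociate
UCS-A /org/service-profile # commit-buffer
UCS-A /org/service-profile # associate server 2/1
UCS-A /org/service-profile # commit-buffer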

VMware vCenter 4.1 Upgrade/Migration Gotchas

I have recently been doing vCenter 2.5/4.0 to vCenter 4.1 upgrades and migrations for our clients. Upgrades to 4.1 have been more difficult than upgrades to 4.0 because vCenter 4.1 requires an x64 version of Windows.

Most of our clients that are on vCenter 2.5 are running it on 32-bit versions of Windows, with most of those being Windows 2003.

For these migrations we have mostly been building new Windows 2008 R2 virtual machines and using the new vCenter Data Migration tool – Migrating an existing vCenter Server database to 4.1 using the Data Migration Tool

The data migration tool is handy for backing up the SSL certificates and registry settings from the vCenter 2.5 server and then restoring those to the new vCenter 4.1 server. The tool will also back up and restore SQL Express databases, but most of our clients are running full SQL.
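
If you haven't used the tool, it is driven by a pair of batch scripts; the flow goes roughly like this (run from the folder you extracted the tool into):

:: on the old vCenter 2.5/4.0 server, from the extracted datamigration folder
backup.bat
:: copy the datamigration folder to the new 4.1 server, then run
install.bat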

On my most recent upgrade/migration I ran into an SQL issue that I hadn't seen before. In this case the client was running SQL 2005 as their database server for vCenter 2.5. Since they were already running SQL 2005 we didn't see the need to move the database or upgrade SQL, as 2005 is supported by vCenter 4.1.

During the install of vCenter 4.1 the database upgrade process started as normal, but about 5 minutes into it we received this error: "Exception Thrown while executing SQL script" and the install was halted. We immediately looked at the vCenter install log (vminst.log) to investigate what happened. The log had some additional SQL logging information but nothing specific, so I then Googled the exception message and the first link in the search results was just what I was looking for.

Upgrading vCenter Server 4.0 to 4.1 fails with the error: Exception Thrown while executing SQL script

The issue was due to the fact that the vCenter database was set to SQL 2000 compatibility mode. It was set like this because the server was originally SQL 2000 and was later upgraded to SQL 2005.

The fix wasn’t as easy as just changing the database to SQL 2005 compatibility because the failed upgrade left the database in an inconsistent state. To correct the issue we first had to restore the database from the backup we made before the upgrade and then change the compatibility mode to 2005.

The other issue we ran into was more of an annoyance than a real problem. After upgrading vCenter to 4.1 and installing vCenter Converter, we noticed this error in the vCenter Service Status tool: "Unable to retrieve health data from https://<VC servername or IP address>/converter/health.xml". A quick Google search led us to the fix for this. – vCenter Service Status displays an error for com.vmware.converter

Sample script for automating the installation of ESXi 4.1 on Cisco UCS with UDA

Thanks to Mike Laverick, the Ultimate Deployment Appliance now supports ESXi 4.1. – http://www.rtfm-ed.co.uk/vmware-content/ultimate-da/

I developed a script for automating the installation of ESXi 4.1 on Cisco UCS with boot from SAN. The UDA template I set up uses a subtemplate for the host names, management IP and vMotion IP.

Here is my configuration:

  1. 6 GB Boot LUN hosted on an EMC CX4
  2. Cisco UCS B200-M1 blades
  3. Cisco VIC (Palo) adapters
  4. These vNICs and vHBAs: four vNICs for the standard vSwitches, four more to serve as uplinks for the Nexus 1000v, and the vHBAs used for boot from SAN

Here is the subtemplate I am using

SUBTEMPLATE;IPADDR;HOSTNAME;VMOTIONIP
UCSESX1;10.150.15.11;ucsesx1;10.150.12.11
UCSESX2;10.150.15.12;ucsesx2;10.150.12.12

Here is the script I am using

accepteula
rootpw --iscrypted $1$NvqID7HA$mkw26HiBQgbso6jk1jX014
clearpart --alldrives --overwritevmfs
autopart --firstdisk=fnic --overwritevmfs
reboot
install url http://[UDA_IPADDR]/[OS]/[FLAVOR]
network --bootproto=static --ip=[IPADDR] --gateway=10.150.15.254 --nameserver=10.150.7.3 --netmask=255.255.255.0 --hostname=[HOSTNAME].domain.com --addvmportgroup=0

%firstboot --unsupported --interpreter=busybox

#Set DNS
vim-cmd hostsvc/net/dns_set --ip-addresses=10.150.7.3,10.150.7.2

#Add pNIC vmnic1 to vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

#Add new vSwitch for vMotion
esxcfg-vswitch -a vSwitch1

#Add vMotion Portgroup to vSwitch1
esxcfg-vswitch -A vMotion vSwitch1

#Add pNIC vmnic2 to vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

#Add pNIC vmnic3 to vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

#Assign ip address to vMotion vmk1
esxcfg-vmknic -a -i [VMOTIONIP] -n 255.255.255.0 -p vMotion

#Assign VLAN to vMotion PortGroup
esxcfg-vswitch -v 12 -p vMotion vSwitch1

#Enable CDP listen and advertise
esxcfg-vswitch -B both vSwitch0
esxcfg-vswitch -B both vSwitch1

sleep 5

#Enable vMotion to vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk1

#Set NIC order policy for vMotion port groups
vim-cmd hostsvc/net/vswitch_setpolicy --nicorderpolicy-active=vmnic3 --nicorderpolicy-standby=vmnic2 vSwitch1

#enable TechSupportModes
vim-cmd hostsvc/enable_remote_tsm
vim-cmd hostsvc/start_remote_tsm
vim-cmd hostsvc/enable_local_tsm
vim-cmd hostsvc/start_local_tsm
vim-cmd hostsvc/net/refresh

# NTP time config
echo restrict default kod nomodify notrap noquery nopeer > /etc/ntp.conf
echo restrict 127.0.0.1 >> /etc/ntp.conf
echo server 0.vmware.pool.ntp.org >> /etc/ntp.conf
echo server 2.vmware.pool.ntp.org >> /etc/ntp.conf
echo driftfile /var/lib/ntp/drift >> /etc/ntp.conf
/sbin/chkconfig --level 345 ntpd on
/etc/init.d/ntpd stop
/etc/init.d/ntpd start

#One final reboot
reboot

The four additional vNICs will be used as uplinks to the Cisco Nexus 1000v dvSwitch. 


Creating NFS Exports on Virtual Data Movers on EMC Celerra

I learned something new about NFS exports on a recent EMC NS480 project.

The client I was working with wanted some CIFS shares that could be accessed by both Windows and NFS clients at the same time. I initially thought that this wouldn't be possible because the CIFS shares were going to be on a file system that was mounted on a Virtual Data Mover (vdm), and I assumed that you could only create NFS exports on file systems mounted on a physical Data Mover (server_2).

After some research on EMC Powerlink I figured out how to do it:

  1. First I created the desired folder structure using the Windows file share.
  2. Next I ran this command to create the NFS export on the vdm –
    server_export server_2 -Protocol nfs /root_vdm_1/production-FS/unix/nfs1 -option root=172.24.102.240,rw=172.24.102.0/255

To mount this export on an NFS client you would use this path

172.24.102.40:/root_vdm_1/production-FS/unix/nfs1
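
From a Linux NFS client, mounting the export is then the usual one-liner (the mount point /mnt/nfs1 is arbitrary):

mkdir -p /mnt/nfs1
mount -t nfs 172.24.102.40:/root_vdm_1/production-FS/unix/nfs1 /mnt/nfs1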