VMware SRM Testing on Cisco UCS with Routing

I work for a Cisco/EMC/VMware VAR named Varrow, and we do a fair number of VMware SRM projects.

One of the challenges we face in SRM failover testing is routing between VMs that are brought up at the recovery site in a test bubble.  Not all of our SRM customers need to test this; many just need to verify that the VMs boot and can access storage.

For the few that need more extensive testing and have to route between VMs on different VLANs, we have come up with a simple solution.

This solution will work in any VMware environment, but when the customer has Cisco UCS at the recovery site there are additional benefits that can be realized.

The solution uses a free virtual router appliance from Vyatta, which can be downloaded from the VMware Virtual Appliance Market: http://www.vmware.com/appliances/directory/va/383813

The advantage of having Cisco UCS at the recovery site is that you can easily create a new vNIC, and the way layer 2 switching works within UCS lets you route between VMs across multiple ESX hosts.

In non-UCS environments it will not be possible to route between VMs on different ESX hosts without additional hardware: a spare pNIC on each host and a layer 2 switch to connect them.

To test this out in our lab, here are the steps I followed:

  1. Created 3 new test VLANs that exist only in UCS. It is important that these VLANs do not exist on your northbound layer 2 switch so that the test bubble traffic cannot leave the UCS fabric (a UCS Manager CLI sketch of this step follows the Vyatta configuration below).
  2. Created a new vNIC template in UCS Manager named vmnic8-srm-b and added it to my ESXi Service Profile Template. This vNIC uses Fabric B as primary with fabric failover enabled, so if B is down it fails over to A. I normally configure 2 vNICs per VMware vSwitch and let VMware handle the failover, but for this solution I needed a vSwitch with only 1 uplink so that routing between VMs across multiple ESX hosts could be achieved.
  3. After a reboot of my UCS-hosted ESXi host, the new vmnic8 was present.
  4. Created a new vSwitch and uplinked vmnic8 to it.
  5. Created 3 new VM port groups on the new vSwitch, one for each test VLAN (an esxcli sketch of steps 4 and 5 follows the Vyatta configuration below).
  6. Imported the Vyatta OVF into vCenter and placed each of its 3 default vNICs into one of the new port groups.
  7. Powered on the Vyatta VM and logged into the console as root with the default password of vyatta.
  8. Configured the 3 Ethernet interfaces using these commands:

configure
set interfaces ethernet eth0 address 10.120.10.254/24
set interfaces ethernet eth0 description "VLAN-120-SRM-TEST"
set interfaces ethernet eth1 address 10.130.17.253/24
set interfaces ethernet eth1 description "VLAN-117-SRM-TEST"
set interfaces ethernet eth2 address 10.13.7.245/24
set interfaces ethernet eth2 description "VLAN-107-SRM-TEST"
commit
save
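
The UCS Manager screenshots for step 1 aren't reproduced here, but as a rough sketch the same test VLANs could also be created from the UCS Manager CLI with something along these lines (the names and IDs match the Vyatta interface descriptions above; verify the exact syntax against your UCS version):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan VLAN-120-SRM-TEST 120
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # create vlan VLAN-117-SRM-TEST 117
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # create vlan VLAN-107-SRM-TEST 107
UCS-A /eth-uplink/vlan* # commit-buffer

These VLANs are defined only in UCS and intentionally left off the upstream switch, which is what keeps the test bubble isolated.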
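Steps 4 and 5 can also be done from the ESXi shell rather than the vSphere Client. This is only a sketch assuming ESXi 5.x-style esxcli (on 4.x the esxcfg-vswitch equivalents apply), and the vSwitch and port group names are examples rather than the ones in my lab:

# Sketch: create the test vSwitch with vmnic8 as its only uplink (names are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch-SRM-Test
esxcli network vswitch standard uplink add --uplink-name=vmnic8 --vswitch-name=vSwitch-SRM-Test
# One VM port group per test VLAN, tagged with the matching VLAN ID
esxcli network vswitch standard portgroup add --portgroup-name=SRM-TEST-120 --vswitch-name=vSwitch-SRM-Test
esxcli network vswitch standard portgroup set --portgroup-name=SRM-TEST-120 --vlan-id=120
esxcli network vswitch standard portgroup add --portgroup-name=SRM-TEST-117 --vswitch-name=vSwitch-SRM-Test
esxcli network vswitch standard portgroup set --portgroup-name=SRM-TEST-117 --vlan-id=117
esxcli network vswitch standard portgroup add --portgroup-name=SRM-TEST-107 --vswitch-name=vSwitch-SRM-Test
esxcli network vswitch standard portgroup set --portgroup-name=SRM-TEST-107 --vlan-id=107

Each port group gets the matching test VLAN ID so the Vyatta vNIC and the test VMs on that VLAN land in the same layer 2 segment.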

Back on the Vyatta console, after the interface configuration I ran a few operational-mode commands to verify the interfaces and routing table.

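The output isn't reproduced here, but the verification amounts to a couple of standard Vyatta operational-mode commands; show interfaces should list eth0 through eth2 as up with the addresses configured above, and show ip route should show the three connected /24 networks:

show interfaces
show ip route

Pinging each of the three interface addresses from the router itself is also a quick sanity check before moving on to the test VMs.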

I was then able to ping between my 2 test VMs, which were running on different ESXi hosts.
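
For completeness, the only networking the test VMs need inside the bubble is an address on their test subnet and a default gateway pointing at the Vyatta interface for their VLAN. A minimal sketch on a Linux test VM attached to the VLAN 120 port group (the VM address is a made-up example; the gateway is the Vyatta eth0 address configured earlier):

# Hypothetical address for a test VM on the VLAN 120 port group
ip addr add 10.120.10.10/24 dev eth0
# Default gateway is the Vyatta interface on this VLAN
ip route add default via 10.120.10.254

A VM on the VLAN 117 port group would point at 10.130.17.253 instead, and one on VLAN 107 at 10.13.7.245.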

