I was doing some reading this morning on the new MDS 9710 Director Class switch and came across a feature I hadn’t heard of before. The feature is called “Smart Zoning” and is a real godsend for SAN and data center administrators.
I happened across an interesting scenario this morning while configuring a jumbo MTU of 9216 on the Nexus 5500 switches in our lab.
I wanted to enable jumbo frames, and on the Nexus 5500s you have to do this with a QoS policy map. Here are the steps:
- Create a new policy map of type network-qos
- Add the default network-qos class type of class-default
- Configure the MTU to 9216
- Add the new policy map to system qos
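The steps above translate into NX-OS config like this (a sketch from my lab; the policy-map name `jumbo` is just what I chose):

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

You can confirm the new MTU took effect per class with `show queuing interface ethernet 1/1`.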
See my post on configuration and migration to multihop FCoE for details on my lab setup – http://jeremywaldrop.wordpress.com/2013/04/11/cisco-ucs-fcoe-multihop-configuration-and-migration/
When I first configured UCS multihop FCoE I experienced terrible SAN performance. It was so bad that it took 20 minutes to boot a single virtual machine.
This post will walk through the configuration of FCoE multihop in UCS and cover how to migrate from traditional FC to FCoE.
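To give a taste of what is involved, the core of the multihop setup on the upstream Nexus is mapping an FCoE VLAN to a VSAN and binding a vfc interface to the Ethernet uplink toward the Fabric Interconnect. This is only a hedged sketch — VLAN/VSAN 100 and port-channel 1 are lab example values, not a prescription:

```
feature fcoe

vsan database
  vsan 100

vlan 100
  fcoe vsan 100

interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 1,100

interface vfc1
  bind interface port-channel1
  switchport trunk allowed vsan 100
  no shutdown

vsan database
  vsan 100 interface vfc1
```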
I was doing some testing of the new vSphere 5.1 features and came across something odd with the new Distributed Switch Health Check feature with Cisco UCS.
Cisco released UCS Firmware 2.1 last weekend with a host of new features. Along with this update the UCS Central management software was also released.
This post will summarize most of the new features; I left out the less exciting/ground-breaking features that most people won't care about.
I am not sure why I waited so long to check out DCNM, but I wish I hadn't because DCNM rocks!!
For those of you that don’t know, DCNM is management software for monitoring, configuring, and troubleshooting Cisco Nexus and MDS switches.
It’s been a while since I have posted anything on my blog. I just haven’t had the motivation to do it.
The current unified fabric project I am working on and the newly released CCIE DC has sparked a renewed interest in digging deeper into NX-OS.
As I stated above, I am working on a unified fabric project where the customer is using a pair of Nexus 5596s for both 10G server access and FC switching for host HBA and storage connectivity.
With ESXi 5 VMware made it very easy to create custom installation ISOs. I have been doing a lot of upgrades from ESXi 4.1 in our customer sites that have UCS, Nexus 1000v and PowerPath VE so I decided to create a custom ISO that includes the current UCS enic/fnic drivers and Nexus 1000v/PowerPath VE.
When I first started doing these upgrades I would remove the host from the Nexus 1000v VDS, uninstall the Nexus 1000v VEM, and uninstall PowerPath VE. After upgrading/rebuilding the host I would then use VUM to install the Nexus 1000v VEM, add it back to the VDS, and then install PowerPath VE.
I have been doing a lot of ESXi 4.1 to 5 upgrades of late, and every time I tried using vCenter Update Manager I would get an incompatible driver warning for these two ESXi 4.1 driver packages:
As a follow-up to yesterday’s post (UCS PowerShell Health check HTML report), I figured I would post a video demonstrating a UCS configuration script that I just finished.
The video starts with me showing an unconfigured UCS cluster; I go through all of the tabs, pools, policies, and templates, showing that there is no configuration.
I have been slack on my blog posts of late, mostly due to a lack of motivation, but I have also been very busy with very little free time to spare.
I like being busy and I have been working on some cool projects, mostly with UCS, Nexus, vSphere and EMC storage.
A few weeks ago I finally had a few days in the lab, so I decided to take a look at the Cisco UCS PowerTool. I didn't really have anything big planned; I was mostly just curious about it.
I just upgraded our lab Nexus 1000v from 1.4 to 1.5. Before the upgrade I assumed it would be like the upgrade from 1.3 to 1.4, where you first had to upgrade the VEM modules on each ESX host and then upgrade the VSM.
One of the exciting new updates from Cisco recently is the 2nd generation UCS hardware.
UCS gen 2 hardware includes updated versions of:
- Fabric Interconnect – 6248s
- Fabric Extender – 2208
- Mezzanine Adapter – VIC 1280
For details on the new hardware check out Sean McGee’s post – http://www.mseanmcgee.com/2011/07/ucs-2-0-cisco-stacks-the-deck-in-las-vegas/
This post will focus on configuration of the Unified Ports that are part of the 6248 hardware. As with most hardware/software capabilities of the Fabric Interconnects, Unified Ports came from the Cisco Nexus 5500 line, which has been around for about a year now.
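For comparison, on the Nexus 5500 the unified-port allocation is done per slot from the CLI. This is a sketch — the port range is an example, FC ports have to be allocated contiguously from the highest-numbered ports of the slot, and a reload is required before the new port types take effect (on the 6248 you do the equivalent from the UCS Manager GUI instead):

```
slot 1
  port 29-32 type fc
! save and reload for the new port types to take effect
copy running-config startup-config
reload
```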
There seems to be some confusion on what the supported Nexus 7000/5000 FEX topologies are.
The first section deals with the supported Nexus 5000/5500 FEX topologies. The next section will show the Nexus 7000 FEX topologies.
The following topologies are valid for these NX-OS versions.