My Journey to CCIE Data Center

I am still on cloud nine after passing the CCIE DC lab last week at RTP on my second attempt. This is my first CCIE and this certification means a lot to me. I had started down the path of CCIE RS back in 2010, but I soon realized that without extensive on-the-job experience working with routing protocols I would never be able to pass the lab.

When Cisco announced the CCIE DC certification back in 2012 I knew this one was for me and couldn't wait for the written exam to be released. I didn't take the beta written exam, but as soon as it was released sometime in late 2012 I took it. I went into the exam overconfident and didn't think I really needed to prepare because I had worked with the technologies since 2009. Well, needless to say, I failed and learned a valuable lesson. I went back, prepared, and knocked it out of the park the second time in May 2013.

After passing the written I was excited to get the lab scheduled ASAP. Little did I know that lab seat availability was so sparse; I had to settle for a date in December. In hindsight this was a good thing for me, as I wasn't ready for the lab yet.

On my first attempt last December I was initially a bit overwhelmed at the amount of configuration and the number of devices I had to configure. It took me a while to get into a groove and focus. I was constantly looking at the clock and panicking that I wouldn't be able to finish. I got through all of the configuration by around 3:00, but that didn't leave much time for verification and testing. I left feeling fairly confident that I had done enough to pass. Since I took the lab on a Friday I had to wait until Sunday for the results. I was on pins and needles all weekend and could hardly sleep. When I received the email early Sunday morning that the results were posted, I was so nervous it took me a good 30 minutes to work up the courage to check. I was so disappointed and defeated when I didn't pass.

I wasn't able to get another seat until mid-March, and it was so frustrating having to wait four months. All I could think about was the lab; the wait was killing me. On my second attempt I had extreme focus and hardly came up for air long enough to get a sip of water.

For me the lab was one of the most mentally stressful days of my life. I walked out of there leaving everything on the table. I have had similar feelings after running ultramarathons, although those are a bit different because they are a mix of physical and mental stress.

During my second lab attempt I was able to finish all of the configuration by around 1 pm. This gave me plenty of time to re-read everything, verify and test. Just like after my first attempt, I was so nervous waiting for the results. I didn't really expect to know anything until the next day, so I was shocked to get an email only 3 hours after finishing the lab. My heart was pounding, and it took me a while to get the courage to check the web site. When I saw the word "Pass" I was so excited; I don't get visibly excited about much, but I was jumping up and down.

What makes the lab so challenging is that you are presented with an elaborate topology you have never seen before and are expected to build it out in less than 8 hours. If you were to design and implement such a solution as a real project, it would be a good 3-4 week effort.

Here are some tips for the lab:

  1. It is very important to carefully read and re-read all of the requirements and restrictions. This was difficult for me, as I tend to read too fast and skip over words.
  2. As with most things in technology, there is more than one way to do things. In the lab, make sure you do every task the way they want it done. It isn't always clear which way that is, but if you read carefully and take all of the tasks into account, you will see what they are looking for. In the end there is only one valid way to configure each task once you factor in all of the requirements, restrictions and other tasks.
  3. There are lots of devices to configure in this lab; I am not going to say how many, but a lot. This makes it very easy to get confused about which device and interface you are configuring. I was constantly going back to the diagram to verify I was on the correct device and interface.
  4. Do not rely on the provided configuration guides. I am not sure if this is on purpose, but I found navigating the configuration guide menus to be very, very slow. I only needed to verify a couple of things in the config guides, and it was very painful.
  5. If you get hung up on a task don’t waste a lot of time trying to troubleshoot it. Move on and come back to it later.

There are no shortcuts to becoming a CCIE. To master all of the technologies in the CCIE DC you must have lots of experience working with them. I have been working with most of the technologies on the lab since 2008, and a lot of them I work with on a daily basis.

I would estimate my preparation for the lab consisted of 50% on-the-job experience, 30% focused practice and 20% reviewing config guides, best practice guides and other sources. I am very fortunate that I work for such an awesome company (Varrow, Inc.) that has supported me in this journey. Being a Cisco partner, we were able to acquire most of the hardware tested in the lab. The only thing we really didn't have was a pair of Nexus 7000s.

For learning and practicing Nexus 7000 features like OTV, VDC and FabricPath I made good use of the Cisco Gold Labs.

I didn’t go to any boot camps or training classes. I learn much better on my own as long as I have access to the gear and documentation.

Since there is not yet a CCIE DC certification guide, here are several books that I used to help prepare for both the written and the lab:

    1. NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures
    2. IO Consolidation in the Data Center
    3. Cisco Press – Data Center Fundamentals
    4. Data Center Virtualization Fundamentals Understanding Techniques and Designs
    5. Cisco Unified Computing System (UCS)
    6. Implementing Cisco UCS Solutions
    7. Cisco Storage Networking Cookbook: For NX-OS release 5.2 MDS and Nexus Families of Switches
    8. The Cisco configuration guides, which are very dry but very good
    9. A lot of CiscoLive presentation PDFs are also very good. Here is a list of the ones I used:

BRKCOM-2002.pdf, BRKCOM-2005.pdf, BRKCRS-3145.pdf, BRKCRS-3146_1.pdf, BRKDCT-1044_1.pdf
BRKDCT-2048.pdf, BRKDCT-2049.pdf, BRKDCT-2081.pdf, BRKDCT-2121.pdf, BRKDCT-2202.pdf
BRKDCT-2204.pdf, BRKDCT-2237.pdf, BRKDCT-2951.pdf, BRKDCT-3103_1.pdf, BRKRST-2509.pdf
BRKRST-2930.pdf, BRKSAN-2047_1.pdf, BRKSAN-2282.pdf, BRKVIR-3013.pdf

Other resources that I used:

  1. Peter Revill has a great blog with lots of posts on various CCIE DC topics
  2. The CCIE DC Facebook groups



Nexus 5500 FCoE and Jumbo MTU

I happened across an interesting scenario this morning while configuring a jumbo MTU of 9216 on the Nexus 5500 switches in our lab.

I wanted to enable jumbo frames, and on the Nexus 5500 you have to do this with a QoS policy map. Here are the steps:

  1. Create a new policy map of type network-qos
  2. Add the default network-qos class type of class-default
  3. Configure the MTU to 9216
  4. Add the new policy map to system qos
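The steps above translate into NX-OS configuration along these lines. This is a sketch; the policy-map name `jumbo` is my own choice, not a required name:

```
! 1-3. Create a network-qos policy map, match class-default, set the MTU
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
! 4. Attach the policy map to system qos so it applies switch-wide
system qos
  service-policy type network-qos jumbo
```

Note that on the Nexus 5500 the MTU is set through this system-wide QoS policy rather than per interface.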


Cisco UCS Multihop FCoE QoS Gotcha

See my post on configuration and migration to multihop FCoE for details on my lab setup.

When I first configured UCS multihop FCoE I experienced terrible SAN performance. It was so bad that it took 20 minutes to boot a single virtual machine.


Nexus 5500 SAN Admin RBAC

It’s been a while since I have posted anything on my blog. I just haven’t had the motivation to do it.

The current unified fabric project I am working on and the newly released CCIE DC has sparked a renewed interest in digging deeper into NX-OS.

As I stated above, I am working on a unified fabric project where the customer is using a pair of Nexus 5596s for both 10G server access and FC SAN switching for host HBAs and storage connectivity.
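On a shared switch like this, RBAC lets the SAN team manage the storage side without touching the Ethernet configuration. A custom role on NX-OS can be sketched roughly as follows; this is an illustration, not the configuration from the project, and the role name, feature-group name, username and feature list are my own assumptions (available feature names vary by platform and release, so verify with `show role feature`):

```
! Group the SAN-related features (verify names with "show role feature")
role feature-group name san-features
  feature zone
  feature vsan
  feature fcdomain

! Create a role that has read-write access only to that feature group
role name san-admin-custom
  rule 1 permit read-write feature-group san-features

! Assign the role to a SAN administrator account
username sanops password S0mePassw0rd role san-admin-custom
```

A user logged in with only this role can modify zoning and VSANs but gets denied on Ethernet-side configuration commands.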
