UCS VIC 1340/1380 PCI Placement

I recently did a UCS implementation that included the B460-M4 blades. If you aren’t familiar with these beasts you should look them up. They are two B260 full-width blades connected together with a scalability connector on the front to create one giant server. Each of the B260s had a VIC 1340 MLOM to give the server two VIC 1340s.


I did the initial design and configuration consistent with our standard UCS design and logical build out.

We have a standard vNIC/vHBA design for ESXi hosts with a minimum of 6 vNICs and 2 vHBAs. The vNICs/vHBAs are split between Fabric A and Fabric B and then map to vSphere vSwitches (Standard and Distributed).

Here is a screen shot of our standard vNIC design for ESXi on blades with a single VIC:

[Screenshot: standard single-VIC vNIC/vHBA design for ESXi]

With two VICs we use a different configuration so that both VICs are put to work. In this configuration we place all of the Fabric A vNICs/vHBAs on vCon 1 and all of the Fabric B vNICs/vHBAs on vCon 2. With this configuration the vNIC-to-vmnic numbering changes, and so do the vSwitch-to-vmnic uplinks.

[Screenshot: dual-VIC vNIC/vHBA placement, Fabric A on vCon 1 and Fabric B on vCon 2]
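Expressed as data, the dual-VIC layout looks roughly like the sketch below. The vNIC/vHBA names are hypothetical placeholders; the point is the Fabric A/vCon 1 and Fabric B/vCon 2 split.

```python
# Sketch of the dual-VIC layout: Fabric A on vCon 1, Fabric B on vCon 2.
# vNIC/vHBA names are hypothetical placeholders.
placement = {
    "vCon1": {  # first VIC 1340, Fabric A
        "vnics": ["vNIC-A-MGMT", "vNIC-A-VMOTION", "vNIC-A-VM"],
        "vhbas": ["vHBA-A"],
    },
    "vCon2": {  # second VIC 1340, Fabric B
        "vnics": ["vNIC-B-MGMT", "vNIC-B-VMOTION", "vNIC-B-VM"],
        "vhbas": ["vHBA-B"],
    },
}

for vcon, devices in placement.items():
    print(vcon, devices["vnics"] + devices["vhbas"])
```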

For this design I implemented the two-adapter configuration above and built out my templates, pools, and policies as usual.

Everything was going well until we built our first ESXi 6 host and couldn’t get management connectivity working. Upon further investigation we realized the vNIC-to-vmnic mappings were not correct.
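One quick way to see exactly which vNIC landed on which vmnic is to compare MAC addresses, since the MAC assigned to each vNIC in the service profile shows up on the matching vmnic in ESXi. Here is a minimal sketch of that comparison; the vNIC names and MACs are hypothetical placeholders, and in practice you would pull them from UCSM and from `esxcli network nic list` on the host.

```python
# Sketch: map ESXi vmnics back to UCS vNICs by MAC address.
# All names and MACs below are hypothetical placeholders; in practice the
# first dict comes from the service profile in UCSM and the second from
# "esxcli network nic list" (or esxcfg-nics -l) on the ESXi host.

ucs_vnics = {
    "00:25:b5:0a:00:01": "vNIC-A-MGMT",
    "00:25:b5:0a:00:02": "vNIC-A-VM",
    "00:25:b5:0b:00:01": "vNIC-B-MGMT",
    "00:25:b5:0b:00:02": "vNIC-B-VM",
}

esxi_vmnics = {
    "vmnic0": "00:25:b5:0a:00:01",
    "vmnic1": "00:25:b5:0a:00:02",
    "vmnic2": "00:25:b5:0b:00:01",
    "vmnic3": "00:25:b5:0b:00:02",
}

# Print the actual vNIC behind each vmnic so mismatches stand out.
for vmnic, mac in sorted(esxi_vmnics.items()):
    vnic = ucs_vnics.get(mac.lower(), "<unknown>")
    print(f"{vmnic} ({mac}) -> {vnic}")
```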

After some research I came across this Cisco Bug ID that described our problem to a T – https://quickview.cloudapps.cisco.com/quickview/bug/CSCut78943

I personally wouldn’t call this a bug; it’s more of an explanation of the configuration options on the new VICs.

The new VIC 1340/1380s have two PCI channels (1 and 2), and you can control which channel each vNIC/vHBA is created on. This new configuration option is called “Admin Host Port” and by default is set to AUTO. With the AUTO setting, UCS round-robins each vNIC/vHBA across both Admin Host Ports on each vCon.

This round-robin placement puts every other vNIC on Admin Host Port 1 and the rest on Admin Host Port 2. That causes a problem because the installed operating system detects all of the vNICs on Admin Host Port 1 first and then the vNICs on Admin Host Port 2, so the vmnic numbering no longer matches the vNIC order defined in UCS.
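To make the effect concrete, here is a small sketch that models the two placements for one vCon. It assumes the OS enumerates a vCon’s Admin Host Port 1 devices before its Admin Host Port 2 devices, as described above; the vNIC names are hypothetical.

```python
# Sketch: how Admin Host Port placement changes the OS enumeration order.
# Assumption (per the bug note): within a vCon, the OS detects all vNICs on
# Admin Host Port 1 first, then the vNICs on Admin Host Port 2.
# vNIC names are hypothetical placeholders.

vcon1_vnics = ["vNIC-A-MGMT", "vNIC-A-VMOTION", "vNIC-A-VM"]  # order defined in UCS

def auto_placement(vnics):
    """AUTO round-robins the vNICs across host ports 1 and 2."""
    return [(name, 1 if i % 2 == 0 else 2) for i, name in enumerate(vnics)]

def manual_placement(vnics, host_port=1):
    """Manual placement pins every vNIC to the same host port."""
    return [(name, host_port) for name in vnics]

def os_enumeration(placed):
    """OS sees all host-port-1 devices first, then host-port-2 devices."""
    ordered = [n for n, p in placed if p == 1] + [n for n, p in placed if p == 2]
    return {f"vmnic{i}": name for i, name in enumerate(ordered)}

print("AUTO:  ", os_enumeration(auto_placement(vcon1_vnics)))    # order scrambled
print("Manual:", os_enumeration(manual_placement(vcon1_vnics)))  # order preserved
```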

With two VICs per blade, the configuration that behaved the way we wanted was to place all of the Fabric A vNICs on Admin Host Port 1 and the Fabric A vHBA on Admin Host Port 2 of vCon 1, and then do the same for Fabric B on vCon 2.

We placed the vHBAs on Admin Host Port 2 so that we could make full use of both PCI channels.

To configure this on the Service Profile template (a scripted sketch of the same steps follows the list):

  • Go to the Network tab
  • Click the Modify vNIC/vHBA Placement link
  • Set the placement to Specify Manually
  • Place all Fabric A vNICs/vHBAs on vCon 1
  • Set the Admin Host Port for all vNICs to 1
  • Set the Admin Host Port for the vHBA to 2
  • Place all Fabric B vNICs/vHBAs on vCon 2
  • Set the Admin Host Port for all vNICs to 1
  • Set the Admin Host Port for the vHBA to 2
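
If you have a lot of templates to touch, the same placement can be scripted with the Cisco UCS Python SDK (ucsmsdk). This is only a sketch under a few assumptions: the UCSM address, credentials, and template DN are placeholders, the template defines its vNICs/vHBAs directly (not via a connectivity policy), and your UCSM release exposes the admin_vcon and admin_host_port properties on the VnicEther/VnicFc objects (these back the Placement and Admin Host Port fields in the GUI).

```python
# Sketch (ucsmsdk): pin vNIC/vHBA placement on a Service Profile template.
# Assumptions: UCSM address, credentials, and template DN are hypothetical
# placeholders; admin_vcon/admin_host_port exist on VnicEther/VnicFc in your
# UCSM release (they back the GUI's Placement and Admin Host Port fields).
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()
try:
    template_dn = "org-root/ls-ESXi-Template"  # hypothetical template DN

    # Fabric A -> vCon 1, Fabric B -> vCon 2; all vNICs on Admin Host Port 1.
    for vnic in handle.query_children(in_dn=template_dn, class_id="VnicEther"):
        vnic.admin_vcon = "1" if vnic.switch_id.startswith("A") else "2"
        vnic.admin_host_port = "1"
        handle.set_mo(vnic)

    # vHBAs follow the same vCon split but land on Admin Host Port 2.
    for vhba in handle.query_children(in_dn=template_dn, class_id="VnicFc"):
        vhba.admin_vcon = "1" if vhba.switch_id.startswith("A") else "2"
        vhba.admin_host_port = "2"
        handle.set_mo(vhba)

    handle.commit()
finally:
    handle.logout()
```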

Here are two screen shots showing this configuration:

[Screenshot 1: manual vNIC/vHBA placement with Admin Host Port settings]

[Screenshot 2: manual vNIC/vHBA placement with Admin Host Port settings]

Here is a screen shot of the applied configuration. Notice that all of the Fabric A vNICs have a Desired Placement of 1 and the Fabric B vNICs a Desired Placement of 2, and that the Admin Host Port is 1 for every vNIC.

[Screenshot: applied configuration showing Desired Placement and Admin Host Port]
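
To spot-check this at scale, the same properties can be read back with ucsmsdk instead of clicking through every service profile. Again, this is a sketch with placeholder credentials, assuming the admin_vcon/admin_host_port properties mentioned above.

```python
# Sketch (ucsmsdk): read back vNIC placement across all service profiles.
# Placeholder credentials; assumes admin_vcon/admin_host_port on VnicEther
# (admin_vcon backs the GUI's "Desired Placement" column).
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()
try:
    for vnic in handle.query_classid("VnicEther"):
        print(f"{vnic.dn}: fabric={vnic.switch_id} "
              f"vcon={vnic.admin_vcon} host_port={vnic.admin_host_port} "
              f"order={vnic.order}")
finally:
    handle.logout()
```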
