It has been far too long since my last post. No real excuse for it; I simply lost motivation to blog.
Since my last post on UCS Director there have been two significant versions released. Review these release notes to see what’s new in each release.
One of the nice new features of 5.1 is a Guided Setup tool that provides wizards that walk you through the initial system configuration. The Guided Setup screen automatically opens on login unless you have chosen the option not to display it again. If you do not see the Guided Setup screen you can access it at any time from the Administration, Guided Setup menu.
I highly recommend using the Initial System Configuration wizard to configure licensing, SMTP, DNS and NTP settings.
Aside from the Guided Setup, not much has changed in the initial installation and configuration since 4.1.
In this post I will go over adding the infrastructure elements that will be managed by UCS Director. For a list of supported infrastructure elements that UCS Director can manage refer to this matrix – Compatibility Matrix for Cisco UCS Director, Release 5.1
In the lab these infrastructure elements will be as follows:
- Compute – Cisco UCS B-Series
- Storage – EMC VNX 5300
- Network Switches – A pair of Nexus 5500s and one Catalyst 3750
- Virtualization – VMware vSphere 5.5
Before adding your infrastructure elements there are a few design considerations. UCS Director organizes infrastructure elements into Pods, and Pods into Sites. This Site and Pod structure must be defined first so that when you add each infrastructure element, UCS Director knows which Pod that element is tied to. The Pod definition is also where you tell UCS Director what type of pod and license to expect. The pod types are as follows:
- FlexPod
- VSPEX
- Vblock
- ExpressPod Medium
- ExpressPod Small
- Generic
For me this was very simple: one Site named Greensboro and one Pod named GSO-LAB.
The next design consideration is which user accounts you are going to use to add each infrastructure element to UCS Director. The simplest thing to do is to use the default admin account for each element, but there are a couple of problems with that approach. One is that you, or that element's admin, will not be able to tell from the audit logs which changes UCS Director made. The other is that if the element's admin wants to stop UCS Director from managing the device, the default admin password has to be changed.
The best practice is to create a UCS Director-specific account on each infrastructure element, or, better yet, if each element is using LDAP, RADIUS, or TACACS, a UCS Director Active Directory account can be leveraged.
Another new feature of UCS Director 5.x is Credential Policies. I really like this feature because it makes it easier to add like devices and to manage the credentials UCS-D uses to communicate with them. For example, most environments will have multiple network and storage switches that share common credentials. For these I can create an SSH credential policy with the shared username and password; when I add each network element I simply choose the credential policy instead of keying in the same username and password over and over.
Before adding infrastructure elements I highly recommend creating a credential policy for the network/storage switches. To do this go to Policies, Physical Infrastructure Policies, Credential Policies menu.
In my case I created a new user named ucsd-admin on each element and, for simplicity's sake, set the same password on each.
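On the Nexus switches, for example, that account is a one-line config change. This is a sketch assuming the built-in network-admin role is acceptable in your environment (UCS Director needs rights to make configuration changes); substitute your own password:

```
username ucsd-admin password <strong-password> role network-admin
```

A more locked-down custom role could be used instead if you want to limit exactly what UCS Director can touch.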
After each infrastructure element had a UCS Director account I added each to UCS Director starting with UCS.
Adding UCS is very straightforward.
- Go to Administration, Physical Accounts
- On the Physical Accounts tab click Add
- Select the desired Pod, select Computing for the Category, and select UCSM for the Account Type
- Provide the IP address, login credentials, and any other optional info
The next account to add is the EMC VNX SAN. This one is a bit more involved because, for some reason, Cisco decided not to include the NaviSecCli command-line tool as part of the UCS Director appliance. NaviSecCli is the only way to manage VNX block from the command line, as there isn't an API or SSH capability for VNX block management. In previous versions of UCS Director the NaviSecCli tool was included with the appliance.
In UCS Director 5.x you must build a Linux VM, install the Linux NaviSecCli tool, and then point UCS Director at that VM. Building the VM wasn't difficult; it was just a big annoyance. Here is a summary of the steps to get this Linux VM built and configured:
- Download CentOS 6.4 minimal install ISO
- Build a new VM and install CentOS
- Download the Linux EMC NaviSecCli RPM and install it on the new VM
- Test NaviSecCli from the Linux VM to verify communication with the VNX
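The last verification step boils down to one getagent call. Here is a minimal sketch; the storage processor IP and password are hypothetical, and the RPM filename varies by NaviSecCli version, so substitute your own values:

```shell
# Hypothetical storage processor IP and the ucsd-admin account created earlier.
SP_A_IP="192.168.10.50"
NAVI_USER="ucsd-admin"
NAVI_PASS="changeme"

# Install the NaviSecCli RPM downloaded from EMC support (filename varies by version):
# sudo rpm -ivh NaviCLI-Linux-64-x86-en_US-7.33.*.rpm

# 'getagent' returns basic storage processor info, which proves both network
# reachability and valid credentials. -Scope 0 specifies a global account.
if command -v naviseccli >/dev/null 2>&1; then
  naviseccli -h "$SP_A_IP" -User "$NAVI_USER" -Password "$NAVI_PASS" -Scope 0 getagent
else
  echo "naviseccli is not installed"
fi
```

If getagent comes back with the SP details, UCS Director will be able to use this VM as its NaviSecCli host.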
Once you have the Linux VM built, go to the Administration, Physical Accounts menu, select the Physical Accounts tab, and click Add.
Select the Pod, select Storage for the Category, and select EMC VNX Block or Unified for the Account Type.
Fill in the required info for the Control Station IP, Storage Processor IPs, NaviSecCli Host IP, and credentials.
Next go over to the Managed Network Elements tab and add your network and SAN switches. Adding these is very straightforward, and this is where the Credential Policies can be leveraged.
Lastly, add your virtualization platform; this is located under the Administration, Virtual Accounts menu. When you click Add you are presented with a prompt to select your virtualization platform.
Once all of your infrastructure elements are added, go over to the Converged menu to see your Pod with its various infrastructure elements. If you had an environment with multiple Pods you would see all of them listed here, and if you had multiple Sites you could select one from the Site drop-down list.
If you select the Pod, it expands to show each element in more detail.
From here you can double-click each element to drill down into its details, view its inventory, and even make configuration changes.
Here is what the VMware details look like. The Summary tab has some high-level utilization reports and connection information. Each one of these individual reports can be added to a custom dashboard tied to your specific UCS Director account.
Over on the VM tab, if you select a VM and then click the purple button at the top right, you can see all of the configuration options available. Most of what a user can do to a VM in the vSphere Client is available through UCS Director.
Another really cool view on the VMware VM tab is the Stack View, which shows where a specific VM lives on each infrastructure element. This level of visibility across the entire data center stack is awesome.
Over on the Top 5 Reports tab there are several utilization and performance reports that you can run on demand.
The Topology tab is similar to the vSphere Client maps, but a bit more useful.
The Map Reports tab has some very useful heat maps for looking at various resource usage data.
All of the other infrastructure elements can likewise be managed and monitored, and reports can be run against them, just as with vSphere.
With UCS Director it is possible for a company to manage all of its data center infrastructure from a single tool instead of managing each infrastructure element through its own console.
UCS Director has some great built-in reports and a Report Builder to create custom reports on anything managed by UCS Director. Here is a list of the built-in reports that come with UCS Director.
Here is what the UCS Data Center Inventory Report looks like.
Here is a custom report, created with the Report Builder, showing the UCS firmware versions.
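As a side note, this kind of data can also be pulled outside the GUI: UCS Director exposes a REST API that authenticates with an access key (from your UCS Director user profile) passed in the X-Cloupia-Request-Key header. This sketch just assembles the request URL rather than sending it; the hostname is hypothetical, and you should check the UCS Director REST API guide for the exact opName you need:

```shell
UCSD_HOST="ucsd.example.com"       # hypothetical hostname -- use your appliance
API_KEY="<your-rest-access-key>"   # copied from your UCS Director user profile

# All UCS Director REST calls go through a single endpoint; the operation
# is selected with the opName query parameter.
OP_NAME="userAPIGetMyLoginProfile" # a simple op, useful as a connectivity test
URL="https://${UCSD_HOST}/app/api/rest?formatType=json&opName=${OP_NAME}"

# The actual call would look like:
#   curl -sk -H "X-Cloupia-Request-Key: ${API_KEY}" "$URL"
echo "$URL"
```

If the call returns your login profile as JSON, the key and connectivity are good and you can move on to report and inventory operations.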
Hopefully my next post will not be 6 months from now but in the very near future. I hope to cover UCS Director Workflows in Part 4.