I just upgraded our lab Nexus 1000v from 1.4 to 1.5. Before the upgrade I assumed it would be like the upgrade from 1.3 to 1.4, where you first had to upgrade the VEM modules on each ESX host and then upgrade the VSM.
My first attempt at a video blog demonstrating the UCSM Equipment tab. I hope to do more of these and get better as I go.
Click the following link to view the Screencast video
UCSM Equipment Tab Walkthrough
Good read on full VM backups.
I built my first two-node Exchange 2007 CCR last week and I have to say that I am impressed. In the past I have frowned upon Microsoft Clusters because there was only a single data set that all nodes in the cluster had to access. An MSCS cluster only provided HA when the physical server or the Windows OS failed. There was no mechanism for replicating the data, so if the data became corrupted, having a cluster didn’t buy you anything.
With an Exchange 2007 CCR (Cluster Continuous Replication) cluster, in combination with the new MSCS quorum type (Majority Node Set, more on this later), shared storage is no longer required.
Majority Node Set quorum type – With an MNS quorum, each cluster node has its own locally stored copy of the quorum database, so shared storage for the quorum is not a requirement. One downside to a typical MNS quorum is that at least 3 cluster nodes are required, because a majority of the nodes must be online before the cluster resources will come online. With only 2 nodes, a majority is both nodes, so losing either one means the cluster will not come online.
With Windows 2003 SP1 plus this hotfix http://support.microsoft.com/kb/921181, or with SP2, there is a new MNS configuration called the “File Share Witness”.
The file share witness feature is an improvement to the current Majority Node Set (MNS) quorum model. This feature lets you use a file share that is external to the cluster as an additional “vote” to determine the status of the cluster in a two-node MNS quorum cluster deployment.
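To see why the two-node MNS case needs that extra vote, here is a small Python sketch (my own illustration, not from the original post) of the majority math:

```python
# Majority Node Set: the cluster stays up only while a strict
# majority of the total votes is online.
def has_quorum(votes_online, votes_total):
    """A strict majority: more than half of the total votes."""
    return votes_online > votes_total // 2

# Two-node MNS with no witness: losing either node loses quorum.
print(has_quorum(2, 2))  # True  - both nodes up
print(has_quorum(1, 2))  # False - 1 vote is not a majority of 2

# Same two nodes plus the file share witness vote: one node can fail.
print(has_quorum(2, 3))  # True  - surviving node + witness = 2 of 3
print(has_quorum(1, 3))  # False - an isolated node still cannot run alone
```

The witness never runs Exchange; it only contributes a vote, which is why an ordinary file share on the hub transport server is enough.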
Microsoft’s best practice is to use the Exchange 2007 Hub Transport server for the “File Share Witness”. To do this, simply create a new folder and share it. Make sure the Administrators group and the cluster service account have Full Control permissions on both the share and the folder.
Below is a diagram of an Exchange 2007 CCR two-node cluster using the hub transport server as the file share witness.
- Windows 2003 Sp2/Windows 2008 x64 Enterprise
- Exchange 2007 Enterprise
- Two servers with the same amount of RAM and disk space
- Two NICs, one for the LAN and one for the cluster heartbeat
- Create MSCS choosing quorum type of Majority Node Set.
- Configure MNS to use the file share witness on the hub transport server with this command:
cluster res "Majority Node Set" /priv MNSFileShare=\\servername\sharename
- Apply the MNS configuration by moving the cluster to the other node.
- Install Exchange 2007 Enterprise on the first node choosing Custom and then “Active Clustered Mailbox Role”
- When installing on the second node choose “Passive Clustered Mailbox Role”
- Choose Cluster Continuous Replication and fill in the name and IP.
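Pulling the file-share-witness pieces of the steps above together, the commands might look like this (server names, share name, and service account are hypothetical; a sketch of the idea, not verbatim from my build):

```
:: On the hub transport server: create and share the witness folder
mkdir C:\FSW_CCR
net share FSW_CCR=C:\FSW_CCR /GRANT:Administrators,FULL /GRANT:DOMAIN\clustersvc,FULL

:: NTFS permissions for the cluster service account
cacls C:\FSW_CCR /E /G DOMAIN\clustersvc:F

:: On a cluster node: point the MNS quorum at the share, then move the
:: cluster group to the other node to apply the change
cluster res "Majority Node Set" /priv MNSFileShare=\\HUB01\FSW_CCR
cluster group "Cluster Group" /moveto:NODE2
```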
A contact of mine at Citrix pointed me to a site that I had never heard of.
It is an App Delivery Best Practices site that has lots of great technical detail on all of the products Citrix offers.
I did my first SAN Copy yesterday. Like anything else in IT once you know how to do something and you do it a couple times it becomes easy.
I am using SAN Copy to migrate LUNs from a CX300 to a CX3-20.
Here are the steps that I followed:
- Zoned SPA1 on CX300 to SPA1 on CX3-20
- Zoned SPB1 on CX300 to SPB1 on CX3-20
- In Navisphere, updated the SAN Copy connections on both arrays
- Created a storage group called SAN Copy
- Created a Reserved LUN Pool (RLP) – this is needed for incremental copies; if doing only full copies it is not required.
- Added the RLP LUNs to the RLP Configuration in Navisphere
- Added target LUNs to the SAN Copy storage group
- On the SAN Copy storage group configured the SAN Copy connections between the arrays
- Created a SAN Copy session by right-clicking the source LUN on the CX300, SAN Copy, Create Session from LUN.
- Set the session type to either Full or Incremental. Full is a one-time copy, and the host has to be offline during the entire copy. Incremental is for when you have a small downtime window; incremental copies can run while the host is online.
- Set the destination LUN
- Set the Session Throttle
- To start the SAN Copy session, go under SAN Copy Sessions in Navisphere, drill down under the SP that owns the LUN, right-click the session and click Start
- Monitored the status by right-clicking on the session and going to status
- Once it was complete I ran a few more copies to get any changes
- Powered off the host
- Ran one last copy
- Removed the SAN Copy session
- Removed the host from the storage group on the CX300
- Created a new storage group on the CX3-20
- Zoned the host with the CX3-20
- Added the host to the storage group
- Added the LUN to the storage group
- Powered on the host
- Verified drive access
Formula for calculating RLP LUN sizing:
Total up all the LUNs (e.g., 4 LUNs at 25GB apiece = 100GB)
Determine the change rate; 20% is a good number that covers most environments
Take 20% of the 100GB = 20GB
Divide that 20GB (whatever 20% of the total space works out to) by the number of LUNs: 20GB ÷ 4 = 5GB
Create 4 LUNs at 5GB apiece for RLP (for SPA)
Create 4 LUNs at 5GB apiece for RLP (for SPB)
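The sizing formula works out like this (a quick sketch of the arithmetic; the 20% change rate is just the rule of thumb from above):

```python
# Reserved LUN Pool sizing rule of thumb:
# (total source capacity * change rate) / number of LUNs = size of each RLP LUN
def rlp_lun_size_gb(lun_sizes_gb, change_rate=0.20):
    """Return the size of each RLP LUN in GB."""
    total = sum(lun_sizes_gb)           # e.g. 4 x 25GB = 100GB
    reserve = total * change_rate       # 20% of 100GB = 20GB
    return reserve / len(lun_sizes_gb)  # 20GB / 4 LUNs = 5GB apiece

size = rlp_lun_size_gb([25, 25, 25, 25])
print(size)  # 5.0 -> create 4 x 5GB RLP LUNs for SPA and 4 more for SPB
```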