Exchange 2007 CCR

I built my first two-node Exchange 2007 CCR cluster last week and I have to say that I am impressed. In the past I have frowned upon Microsoft clusters because there was only one data set, which every node in the cluster had to access. An MSCS cluster only provided HA when the physical server or the Windows OS failed. There was no mechanism for replicating the data, so if the data became corrupted, having a cluster didn't buy you anything.

With an Exchange 2007 CCR (Cluster Continuous Replication) cluster, in combination with the new MSCS quorum type (Majority Node Set, more on this later), shared storage is no longer required.

Majority Node Set quorum type – With an MNS quorum, each cluster node has its own locally stored copy of the quorum DB, so shared storage for the quorum is not a requirement. One downside to a typical MNS quorum is that at least 3 cluster nodes are required. The reason is that a majority of the nodes must be online before the cluster resources will come online. With only 2 nodes, losing either one leaves no majority and the cluster will not come online; 3 nodes is the minimum that can survive a single node failure.
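The majority requirement is simple arithmetic. A minimal sketch (the function names and vote counts are illustrative, not any Microsoft API):

```python
# Sketch of MNS quorum arithmetic: a cluster stays up only while a
# majority of all votes is online.

def votes_needed(total_votes: int) -> int:
    """A majority is more than half of all votes."""
    return total_votes // 2 + 1

def cluster_stays_up(total_votes: int, votes_online: int) -> bool:
    return votes_online >= votes_needed(total_votes)

# Two-node MNS: losing either node drops below a majority (needs 2 of 2).
print(cluster_stays_up(total_votes=2, votes_online=1))  # False

# Two nodes plus a file share witness vote: one node can fail (needs 2 of 3).
print(cluster_stays_up(total_votes=3, votes_online=2))  # True
```

This is exactly why the file share witness described below matters: it adds a third vote so a two-node cluster can lose a node and keep a majority.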

With Windows 2003 SP1 plus a hotfix, or SP2, there is a new MNS configuration called "File Share Witness".

The file share witness feature is an improvement to the current Majority Node Set (MNS) quorum model. This feature lets you use a file share that is external to the cluster as an additional “vote” to determine the status of the cluster in a two-node MNS quorum cluster deployment.

Microsoft best practice is to use the Exchange 2007 Hub Transport server for the file share witness. To do this, simply create a new folder and share it. Make sure the Administrators group and the cluster service account have Full Control permissions on the share and the folder.

Below is a diagram of an Exchange 2007 CCR two-node cluster using the hub transport server as the file share witness.




Requirements:

  • Windows 2003 SP2 or Windows 2008 x64 Enterprise
  • Exchange 2007 Enterprise
  • Two servers with the same amount of RAM and disk space
  • Two NICs: one for the LAN, one for the cluster heartbeat


Setup steps:

  • Create the MSCS cluster, choosing a quorum type of Majority Node Set.
  • Configure MNS to use the file share witness on the Hub Transport server with this command:
    cluster res "Majority Node Set" /priv MNSFileShare=\\servername\sharename
  • Apply the MNS configuration by moving the cluster group to the other node.
  • Install Exchange 2007 Enterprise on the first node, choosing Custom and then "Active Clustered Mailbox Role".
  • When installing on the second node, choose "Passive Clustered Mailbox Role".
  • Choose Cluster Continuous Replication and fill in the clustered mailbox server name and IP.

SAN Copy

I did my first SAN Copy yesterday. Like anything else in IT, once you know how to do something and have done it a couple of times, it becomes easy.

I am using SAN Copy to migrate LUNs from a CX300 to a CX3-20.

Here are the steps that I followed:

  1. Zoned SPA1 on CX300 to SPA1 on CX3-20
  2. Zoned SPB1 on CX300 to SPB1 on CX3-20
  3. In Navisphere update SAN Copy connections on both arrays
  4. Created a storage group called SAN Copy
  5. Created a Reserved LUN Pool (RLP) – this is needed for incremental copies; it is not required for full copies.
  6. Added the RLP LUNs to the RLP Configuration in Navisphere
  7. Added target LUNs to the SAN Copy storage group
  8. On the SAN Copy storage group configured the SAN Copy connections between the arrays
  9. Created a SAN Copy session by right-clicking the source LUN on the CX300, SAN Copy, Create Session from LUN.
  10. Set the session type to either Full or Incremental. Full is a one-time copy, and the host has to be offline during the entire copy. Incremental is for when you have a small downtime window; incremental copies can run while the host is online.
  11. Set the destination LUN
  12. Set the Session Throttle
  13. To start the SAN Copy session go under SAN Copy Session in Navisphere, drill down under the SP where the LUN is located, right-click the session and click start
  14. Monitored the status by right-clicking on the session and going to status
  15. Once it was complete I ran a few more copies to get any changes
  16. Powered off the host
  17. Ran one last copy
  18. Removed the SAN Copy session
  19. Removed the host from the storage group on the CX300
  20. Created a new storage group on the CX3-20
  21. Zoned the host to the CX3-20
  22. Added the host to the storage group
  23. Added the LUN to the storage group
  24. Powered on the host
  25. Verified drive access

Formula for calculating RLP LUN sizing

  • Total up all the source LUNs (e.g., 4 LUNs at 25 GB apiece = 100 GB)

  • Determine the change rate. 20% is a good number that covers most environments

  • Take 20% of the 100 GB = 20 GB

  • Divide that 20 GB by the number of LUNs (4): 20 GB / 4 = 5 GB

  • Create 4 LUNs at 5 GB apiece for the RLP (for SPA)

  • Create 4 LUNs at 5 GB apiece for the RLP (for SPB)
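The sizing arithmetic above can be sketched in a few lines (the 20% change rate and the LUN sizes are just this example's assumptions, and the function name is my own):

```python
# Sketch of the RLP sizing formula above; numbers match the worked example.

def rlp_lun_size_gb(source_lun_sizes_gb, change_rate=0.20):
    """Return the size of each reserved LUN pool LUN for one SP."""
    total = sum(source_lun_sizes_gb)            # e.g. 4 x 25 GB = 100 GB
    reserved = total * change_rate              # 20% of 100 GB = 20 GB
    return reserved / len(source_lun_sizes_gb)  # 20 GB / 4 LUNs = 5 GB each

size = rlp_lun_size_gb([25, 25, 25, 25])
print(size)  # 5.0 -> create 4 x 5 GB RLP LUNs per SP (SPA and SPB)
```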