High Volume Mailbox Moves


One of the challenges of planning your migration to Office 365 is determining how fast you can go.  Office 365 migration performance and best practices covers some great information, but I'll add to it here based on my experience with real-world projects.

Spoilers Ahead

One of my most recent engagements is wrapping up and I have done some analysis on the summary statistics. Note that this was a move from a legacy dedicated version of Office 365, so the throughput can be a bit higher than when coming from an on-premises Exchange deployment.  On average (throwing out the high and low values) we moved about 3,000 mailboxes per week. One of the most impressive things about this migration was that it included a deployment of Office Pro Plus.  There were only a couple of months for planning, and deploying to over 30,000 workstations with very little impact to the helpdesk was a great surprise.

On another project, we have just started pilot migrations from on-premises Exchange 2010 servers.  Initially, we saw fairly limited performance when routing traffic through typical network infrastructure (e.g. a hardware load balancer).  When we changed the configuration, we more than doubled our throughput, and with continued tuning our last test exceeded 50 GB/hr (our initial test was closer to 4 GB/hr).  Not too bad!
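To put that throughput difference in perspective, here is a rough back-of-the-envelope calculation. The 4 GB/hr and 50 GB/hr rates are the test results above; the 30 TB total data size is a hypothetical figure I've assumed for illustration.

```python
# Rough estimate of continuous copy time at a sustained MRS throughput.
# 30 TB total is a hypothetical data set size; the rates come from the tests above.

def migration_hours(total_gb: float, gb_per_hour: float) -> float:
    """Hours of continuous copying needed at a sustained rate."""
    return total_gb / gb_per_hour

total_gb = 30_000  # hypothetical: 30 TB of mailbox data

before = migration_hours(total_gb, 4)   # routed through the HLB path
after = migration_hours(total_gb, 50)   # via dedicated MRS endpoints

print(f"before tuning: {before:,.0f} h (~{before / 24:.0f} days)")
print(f"after tuning:  {after:,.0f} h (~{after / 24:.0f} days)")
```

At 4 GB/hr that hypothetical data set takes roughly 7,500 hours of copy time; at 50 GB/hr it drops to about 600, which is the difference between a project measured in months and one measured in weeks.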

Migration Architecture

How did we get this speed boost?  In a typical architecture, mail access (in this case the /EWS endpoint) flows over HTTPS (443) from the client to the hardware load balancer (HLB).  You may have a reverse proxy in front of the HLB, and you may have an additional interior firewall.  Some customers do not allow external access to the EWS virtual directory, but it is required as part of establishing hybrid connectivity with Office 365.


You may simply reuse the same endpoint for the MRS traffic.  In this case your mailbox migrations will follow the same data path as the rest of your traffic.  A few additional constraints must be met: you need a publicly signed certificate, and you cannot bridge the SSL traffic (break encryption and re-encrypt).  If you meet this bar, the design satisfies the minimal requirements for MRS; however, it may not perform very well because the traffic traverses so many layers of infrastructure, and it may also reduce the total available bandwidth of those devices.  Creating 1:1 MRS endpoints is a way to bypass all of this infrastructure and ramp up throughput.


In this example, three new DNS names are created, each resolving to a specific server. The firewall must allow traffic only from Exchange Online to the on-premises servers (see Office 365 URLs and IP address ranges).  The certificate with the additional MRS names will have to be redeployed to all the infrastructure (e.g. the HLB) and the Exchange servers (unless you use a wildcard certificate, e.g. *.rosenlabs.com).  Now when you create migration requests you can distribute them across the endpoints.  For most customers, the ACL on the firewall is enough security to allow this configuration, at least for the duration of the mailbox migrations.
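As a sketch of how you might spread move requests across the new endpoints, here is a simple round-robin assignment. The endpoint host names and mailbox names are hypothetical placeholders (borrowing the rosenlabs.com example domain above), not real infrastructure.

```python
from itertools import cycle

# Hypothetical 1:1 MRS endpoint names, one per on-premises server.
endpoints = ["mrs1.rosenlabs.com", "mrs2.rosenlabs.com", "mrs3.rosenlabs.com"]

def assign_endpoints(mailboxes, endpoints):
    """Pair each mailbox with an endpoint, round-robin, so load spreads evenly."""
    return {mbx: ep for mbx, ep in zip(mailboxes, cycle(endpoints))}

batch = assign_endpoints(["user1", "user2", "user3", "user4"], endpoints)
for mbx, ep in batch.items():
    print(f"{mbx} -> {ep}")
```

Each assignment could then feed whatever tooling you use to submit the move requests, with the chosen endpoint as the remote host name for that mailbox.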

Other Considerations

There is always a bottleneck in the system; the question is whether you hit it before you reach the migration velocity you want.  I work with customers to walk through every point in the data flow and identify where the bottleneck will be.  In the original architecture above, the first bottleneck is nearly always the HLB, either because of its network connection or the load it is already under.  After that, the source Exchange servers tend to be unable to keep up, causing migration stalls. Also be aware of things like running backups that can severely impact resources. Finally, other items such as day-after helpdesk support capacity or network downloads (the OAB, or perhaps changing your offline cache value) may also limit your velocity.  MRS migrations usually have a very low failure rate, but other ancillary items coupled with the migration, such as mobile clients, need to be considered as well.
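The walk-through of the data flow can be modeled very simply: effective throughput is capped by the slowest stage in the path. The stage names and GB/hr figures below are illustrative assumptions, not measurements from any of the projects above.

```python
# Back-of-the-envelope bottleneck model: the slowest stage caps the whole path.
# All figures are illustrative assumptions.
stages = {
    "source Exchange servers": 60,
    "hardware load balancer": 8,
    "internet egress": 40,
    "Exchange Online ingestion": 100,
}

bottleneck = min(stages, key=stages.get)
print(f"bottleneck: {bottleneck} at {stages[bottleneck]} GB/hr")
```

In this toy model, removing the HLB from the path (as the 1:1 MRS endpoint design does) simply moves the cap to the next-slowest stage, which is why each change you make tends to expose a new bottleneck to tune.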