Real World Solutions – The Case of DLP Event Tracking


In one of my projects, the customer is planning to use Office 365 DLP.  However, a third-party company manages the front-line investigation of violations, so the customer needs a way to give a non-employee enough access to do the initial discovery and track it.

The first attempt was to use the out-of-box alerts in the Security & Compliance center.


There were a few challenges with this feature – the main one being no apparent way to restrict access to the DLP events only.  The other was that there is no way to input comments or use it as a tracking system.

It got me thinking (or, as we say in consulting, doing ideation) about how to solve this. One good tool for tracking things is SharePoint.  So, we need a way to get the alert information (either in email or through the event API) into SharePoint.  Not wanting to create a whole application to make this work, I figured there must be a way for a power user to wire up applications.  And of course, there is – Microsoft Flow.  Microsoft Flow is a cloud-based service that makes it practical and simple for line-of-business users to build workflows that automate time-consuming business tasks and processes across applications and services. It’s comparable to a service like IFTTT (If This Then That), but tightly integrated with Office 365.

With Flow being the glue – the overall solution is:

  1. Configure the DLP policy to send notifications to a mailbox
  2. Create a custom SharePoint list to track DLP events
  3. Configure Flow to populate the list with the DLP event information from email
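
To make the logic of the three steps concrete, here is a minimal sketch in plain Python of what the Flow effectively does.  All of the names here (parse_dlp_notification, the field names) are hypothetical illustrations – the real solution is built entirely in Flow’s designer, with no code.

```python
# Illustrative sketch only: what the Flow does when a DLP notification
# email arrives. Function and field names are hypothetical.
from datetime import datetime, timezone

def parse_dlp_notification(subject: str, received: datetime) -> dict:
    """Turn a DLP notification email into a SharePoint list item payload."""
    return {
        "Title": f"{subject} - {received.isoformat()}",
        "Status": "Open",  # default status for new investigations
    }

item = parse_dlp_notification(
    "DLP policy match: Credit Card Number",
    datetime(2018, 1, 15, 9, 30, tzinfo=timezone.utc),
)
print(item["Status"])  # Open
```

The point of the sketch is just the mapping: one inbound notification becomes one tracked list item, pre-populated with a default status.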

Now I’ll walk through each step to understand the configuration.

Configuring the DLP Policy

The first step is to configure your DLP rule to send a notification email to a mailbox. In this example, in the Security & Compliance Center, I edited an existing DLP policy.

DLP notification

Note that you can control the information that is included if you do not want some content to appear in the alert.

Configure SharePoint

Next, we’ll configure the SharePoint list.  Again, I’m assuming you have basic knowledge of creating a SharePoint team site.  For our example, I only added a ‘Status’ field – a choice of Open, Investigating, Resolved, and Closed.  I could see adding fields for comments, or more date fields for tracking time to resolution.  The point is that we’ll be able to pre-populate some of the fields using Flow. Additionally, you can set up the security and permissions for your analysts.

sharepoint list settings

Configure Flow

On the newly created list, click the ‘Flow’ button to create a new flow. I find it easiest to choose ‘See your flows’.  From the Manage your flows page, you can ‘create from blank’.

flow button

From there click on ‘search hundreds of connectors and triggers’.

I’ll break down the flow into its parts.

  1. When new mail arrives (Outlook).  Ensure you change the Has Attachments and Include Attachments to ‘Yes’.

when new email arrives

  2. Export email.  You would think we could use the out-of-box attachments functionality in Flow.  However, the item attached to the system-generated notification is an embedded message (NOT an .eml file).  The attachment connector does not currently know how to parse this, so the workaround is to use the preview Export email action.

export email

  3. Create Item (SharePoint).  This step creates the list item in the custom list we defined.  It will recognize any custom properties you created – in this case ‘Status Value’.  I set new list items to ‘Open’ by default.  You can also see in the Title property that we can combine functions with text.  For example, the utcNow() function could be used to set a date property, or you could set an SLA and calculate the estimated time to closure.

create item
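
The SLA idea mentioned above can be sketched outside of Flow as well.  This is the same calculation the utcNow() expression would drive; the 72-hour window is an arbitrary example value, not anything the original solution defines.

```python
# Sketch of an SLA due-date calculation, mirroring Flow's utcNow() plus
# an agreed resolution window. SLA_HOURS is a made-up example value.
from datetime import datetime, timedelta, timezone
from typing import Optional

SLA_HOURS = 72  # hypothetical time-to-resolution agreement

def estimated_closure(opened: Optional[datetime] = None) -> datetime:
    """Return the estimated closure time for an item opened at `opened`."""
    opened = opened or datetime.now(timezone.utc)  # utcNow() equivalent
    return opened + timedelta(hours=SLA_HOURS)

opened = datetime(2018, 1, 15, 9, 0, tzinfo=timezone.utc)
print(estimated_closure(opened).isoformat())  # 2018-01-18T09:00:00+00:00
```

In Flow, the equivalent would be an expression on a date column of the list item; the Python version just shows the arithmetic.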

  4. Add Attachment (SharePoint).  The final step adds the email attachment to the list item’s attachments.  The key is the File Content field – make sure you choose the Body from the Export email step.

add attachment

We need to include the Body coming from the Export Email, not the body coming from the new email trigger.

export body

That’s it – the next time the notification mailbox receives an email, the Flow will trigger.

The Results

You can see in the screenshot someone sent an email with a DLP violation.  This results in a new item in my SharePoint list, with the status set to open, and the original attachment is included on the list item.


I’m excited that we were able to solve this for the customer – it’s a really elegant and relatively easy solution that didn’t require custom code.

Exchange Archives

archives

When it comes to Exchange, one of the confusing things for customers is the Exchange Archive feature – especially for customers coming from an existing 3rd-party archiving solution.  When I work with customers who are upgrading to a newer on-premises version or Exchange Online, and who have an archiving system in place, the first thing I ask is what the current solution is used for. Archives are used either for compliance reasons (e.g. retention, records, litigation, legal requirements) or to extend mailbox capacity (e.g. provide large mailboxes by using lower-cost storage). Occasionally, the archive serves both functions.

When planning the new end-state design, the question is what to do.  Most customers assume they should just deploy Exchange Online Archiving.  This post gives some reasons to reconsider that decision.  [Spoiler] Exchange’s online archive feature has nothing to do with compliance.

Exchange Archive: The Origin Story

in the beginning

The archive feature (the name has changed many times over the years) was first introduced in Exchange 2010.  One of the goals of Exchange 2010 was to support large mailboxes (which in retrospect were not all that large compared to Office 365 today!)  The main problem was that Outlook 2010’s cached mode would cache the whole mailbox, so rather than rely on a change to the Outlook client, Exchange added the archive feature – an extension of your mailbox that would not be cached.  If you deployed an archive, you could enjoy a very large mailbox without caching all your mail. For on-premises deployments, you could even put the archive on separate storage, or even separate servers. This was great, since really large mailboxes take a very long time on the initial download or when recreating your profile (which for many customers is a standard troubleshooting step).  Also, many laptops were very limited on drive space.

What about compliance features and the online archive?  The online archive actually did not bring any new compliance features with it.  All the compliance features apply to the mailbox – the whole mailbox – not just the primary mailbox.  Any retention or legal hold applied to a person applies to both the primary mailbox and the archive, or to just the primary mailbox if an archive is not used.  In other words, having an archive adds no additional compliance capabilities.  This was true in Exchange 2010, and is still true today.

Why Deploy an Online Archive?

If we don’t get additional features, then why deploy an online archive?

  1. You exceed the capacity of your primary mailbox storage (at the time of writing, Office 365 E3 includes a 100GB primary mailbox)
  2. You have Outlook 2010 (or older) clients and want to have large mailboxes. Given that Outlook 2010 is out of support, customers should be doing everything possible to upgrade.

If you have deployed an archive product for addressing mailbox capacity issues, then I strongly recommend that you do not deploy the online archive by default. Why not?

  • Not all mail clients can access the online archive
  • Mobile clients cannot search the online archive
  • It is more complex and can be confusing to people

In this scenario, just use a large primary mailbox, since Outlook 2013 and newer can limit the amount of cached content (based on time).  This cache setting effectively works just like having an archive (since content not in your cache is only available while online).


If you deployed an archive product to meet compliance or records-management needs, consider using the native Exchange features such as hold, retention, MRM, and labels.  Keeping all email within Exchange, versus using an external archive product, lets you easily perform content and eDiscovery searches.  Also, it’s much easier to manage your data lifecycle with the mail in one solution.  I’ll reiterate – these compliance and records features work in Exchange regardless of whether you deploy the Exchange online archive.  In other words, you could retire your external archive, use only a primary mailbox, and enable retention policies to continue providing an immutable copy of the person’s mailbox data.

A very common scenario as customers move to Office 365 is to ingest all their 3rd-party archive data and PSTs (local / personal archives) into Office 365.  Given that this could be a lot of data, exceeding the 100GB limit, customers migrate this data directly into the online archive.  Exchange Online does offer an unlimited, auto-expanding archive.  Note that for migrations, the archive expansion takes time – so you cannot just import everything at once.  Once the content is in Exchange, retention policies can be applied to all content to start controlling your enterprise data and limiting risk exposure.
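
A rough planning sketch for that scenario: given per-user data volumes from a 3rd-party archive export, flag which users exceed the 100GB primary mailbox quota and therefore need content routed to the auto-expanding archive.  The user names and sizes below are made-up sample data.

```python
# Planning sketch: decide per user whether ingested archive data fits the
# primary mailbox or needs the online archive. Sample data is fabricated.
PRIMARY_QUOTA_GB = 100  # E3 primary mailbox size at the time of writing

users = {"alice": 35, "bob": 140, "carol": 98}  # total GB to ingest per user

def needs_archive(total_gb: float, quota_gb: float = PRIMARY_QUOTA_GB) -> bool:
    """True when the user's total data will not fit the primary mailbox."""
    return total_gb > quota_gb

for user, gb in users.items():
    target = "online archive" if needs_archive(gb) else "primary mailbox"
    print(f"{user}: {gb} GB -> {target}")
```

In a real project you would feed this from the archive vendor’s export report rather than a hard-coded dictionary.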

As long as the archive on the source system corresponds to a mailbox, this type of migration is straightforward.  If your archive solution is for journaled mail, the archive is typically not associated with specific mailboxes.  This is much harder to ingest into Exchange, and a better strategy may be to sunset the journal solution (let it age out) and, going forward, implement retention and the other compliance features mentioned above.  A nice benefit of retention over journaling is that journaling only captured email that was sent and received.  There are scenarios where people shared folders to trade messages, which never actually go through transport!

Hopefully this sheds some light and helps you decide when to use Exchange online archives, understand how they work, and weigh the benefits and drawbacks if you do plan to use them.

High Volume Mailbox Moves


One of the challenges of planning your migration to Office 365 is determining how fast you can go.  Office 365 migration performance and best practices covers some great information, but I’ll add to it here based on my experience with real-world projects.

Spoilers Ahead

One of my most recent engagements is wrapping up, and I have done some analysis on the summary statistics. Note this was a move from a legacy dedicated version of Office 365 – so the throughput can be a bit higher than coming from an on-premises Exchange deployment.  On average (throwing out the high and low values) we moved about 3,000 mailboxes per week. One of the most impressive things about this migration was that it included a deployment of Office Pro Plus.  There were only a couple of months for planning – and deploying to over 30,000 workstations with very little impact to the helpdesk was a great surprise.

On another project, we have just started pilot migrations from on-premises Exchange 2010 servers.  Initially, we saw fairly limited performance when routing traffic through typical network infrastructure (e.g. a hardware load balancer).  When we changed the configuration, we more than doubled our throughput, and continued tuning until our last test came in at over 50 GB/hr (our initial test was closer to 4 GB/hr).  Not too bad!
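
To see why that tuning matters, here is a back-of-the-envelope duration estimate using the two throughput numbers above.  The 5 TB total is a hypothetical figure for illustration, not from the project.

```python
# Back-of-the-envelope migration duration from measured throughput.
# 4 GB/hr was the initial (load-balancer) rate; 50 GB/hr after tuning.
def migration_hours(total_gb: float, rate_gb_per_hr: float) -> float:
    """Hours required to move total_gb at a sustained rate."""
    return total_gb / rate_gb_per_hr

total = 5000  # hypothetical: 5 TB of mailbox data to move
print(f"untuned: {migration_hours(total, 4):.0f} hours")   # 1250 hours
print(f"tuned:   {migration_hours(total, 50):.0f} hours")  # 100 hours
```

Going from roughly 52 days of continuous transfer to about 4 is the difference between a migration measured in quarters versus weeks.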

Migration Architecture

How did we get this speed boost?  A typical architecture for accessing mail (in this case the /EWS endpoint) runs over https (443) from the client to the hardware load balancer (HLB).  You may have a reverse proxy in front of the HLB, and you may have an additional interior firewall.  Some customers do not allow external access to the EWS virtual directory, but it is required as part of establishing hybrid connectivity with Office 365.


You may just reuse the same endpoint for the MRS traffic.  In this case, your mailbox migrations will follow the same data path as the rest of your traffic.  A few additional constraints must be met: a publicly signed certificate, and you cannot bridge the SSL traffic (break encryption and re-encrypt).  If you meet this bar, then this design meets the minimal requirements for MRS – however, it may not perform very well given how many layers of infrastructure the traffic traverses, and it may impact the total available bandwidth of those devices.  Creating 1:1 MRS endpoints is a way to bypass all of this infrastructure and ramp up throughput.


In this example, three new DNS names are created, each resolving to a specific server. The firewall must allow traffic only from Exchange Online to the on-premises servers (see Office 365 URLs and IP address ranges).  The certificate with the additional MRS names will have to be redeployed to all the infrastructure (e.g. the HLB) and Exchange servers (unless using a wildcard certificate – e.g. *).  Now when you create migration requests, you can spread them across the endpoints.  For most customers, the ACL on the firewall is enough security to allow this configuration – at least for the duration of the mailbox migrations.
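
Before creating the migration endpoints, it is worth verifying that each new DNS name actually resolves.  Here is a small pre-flight check; the three endpoint names are hypothetical placeholders, so substitute your own.

```python
# Pre-flight check: confirm each 1:1 MRS endpoint name resolves in DNS.
# The contoso.com names are hypothetical placeholders.
import socket

MRS_ENDPOINTS = [
    "mrs1.contoso.com",
    "mrs2.contoso.com",
    "mrs3.contoso.com",
]

def check_endpoints(names):
    """Map each DNS name to its resolved IP, or None if not resolvable."""
    results = {}
    for name in names:
        try:
            results[name] = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = None  # not resolving yet
    return results

for name, ip in check_endpoints(MRS_ENDPOINTS).items():
    print(f"{name}: {ip or 'NOT RESOLVING'}")
```

Remember that the names must resolve in public DNS for Exchange Online to reach them, not just on your internal resolvers.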

Other Considerations

There is always a bottleneck in the system; the question is whether you hit it before you achieve the velocity you would like.  I work with customers to walk through every point in the data flow and find where the bottleneck will be.  In the original architecture above, the first bottleneck is nearly always the HLB – either because of its network connection, or the load it’s already under.  After that, the source Exchange servers tend to be unable to keep up and cause migration stalls. Also be aware of things like running backups that can severely impact resources. Finally, other items like day-after helpdesk support capacity or network downloads (the OAB, or maybe changing your offline cache value) may also limit your velocity.  MRS migrations usually have a very low failure rate, but other ancillary things coupled with the migration, like mobile clients, need to be considered.

Office 365 Consumption Analytics


One of the great things about Office 365 is the telemetry data you can get to understand the actual consumption of your tenant.  The Office 365 admin portal has built-in usage reports that give some quick high-level stats (e.g. emails sent/received, OneDrive for Business storage consumed, the number of Office activations, etc.).  But what if you want to slice the data differently, like by department or other properties?  The Office 365 Adoption pack, currently in preview, is a Power BI solution that is customizable to your organization’s specific needs.

Office 365 Adoption Pack Installation

The overall steps are detailed in this post; I’ll walk through a summary here.  First, you must enable the usage reports to work with Power BI. To do this, open the Admin portal and open the Admin center.  Under Reports > Usage, you will see a section for enabling the content pack.  This is shown below in the bottom-right pane.


Once you click the ‘Get started’ button, it will take a while (between 2 and 48 hours) before you can move on.  Eventually you will see that your data is ready, along with your tenant ID (needed for a later step).
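
If you would rather pull the raw data behind these reports programmatically, the Microsoft Graph reports endpoints expose the same usage detail as CSV.  This is a sketch, not part of the Adoption pack setup: acquiring ACCESS_TOKEN (an app registration with Reports.Read.All) is outside the scope here, so the actual request is left commented out.

```python
# Sketch: building a Microsoft Graph usage-report URL. The request itself
# needs a bearer token (placeholder), so it is shown but not executed.
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def usage_report_url(report: str, period: str = "D7") -> str:
    """Return the Graph reports URL for a report function and period."""
    return f"{GRAPH}/reports/{report}(period='{period}')"

url = usage_report_url("getOffice365ActiveUserDetail")
print(url)

# With a valid token (hypothetical ACCESS_TOKEN), you would fetch the CSV:
# req = urllib.request.Request(
#     url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
# csv_bytes = urllib.request.urlopen(req).read()
```

This is useful when you outgrow the canned dashboards and want the data in your own pipeline.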

usage 3
usage 4

At this point the reports will not have anonymized data.  If that is required, in the Admin Center, open Settings > Services & add-ins from the left nav menu.  Find Reports, and you will be able to toggle on anonymous identifiers instead of names in all reports.  This applies to all usage reports, not just the Adoption pack.

Now that you have the infrastructure configured, you need to set up the Power BI dashboard.

Configuring the Office 365 Adoption Content Pack

configure

There are several ways to install the content pack, but I’m going to highlight one deployment option here.  In this scenario, I will create an app workspace and configure the content pack there.  The benefit of this model is that it makes the deployment independent of a specific user account.  I could have deployed the content pack into my own workspace and shared it as needed.  But what would happen if I left the company or changed job roles?  That would break sharing for everyone.  Note that for each of these deployment scenarios you will need to check and ensure everyone is properly licensed. At the time of writing, all internal users need a Power BI Pro or greater license.

Open Power BI from the app selector or from the Usage report page.  Under Workspaces, there is a button to create a new workspace. Behind the scenes, this creates an Office 365 group. There are several options to configure, such as allowing members to have edit rights.  Open the workspace and, under Microsoft AppSource, click ‘Get’ under Services.

usage 5

Search for the Office 365 Adoption Preview and click ‘Get it now’.

usage 6

There can only be one deployment of the content pack per organization. You will then need to input the tenant ID (from the earlier step).  It will then have you authenticate and start importing the data.  Once loaded, you can interact with the usage data and drill down to see the nitty-gritty details.  Other members can view and interact with the data as well.

That should get you running with the out-of-box dashboards and reports.  In another post I may show some neat things you can do to extend the capabilities.  In the meantime, for more information on how to customize the solution, check out this web page.


Ever since I can remember, I’ve had challenges with organization, and I’ve tried a lot of ‘gimmicks’ to find something that would help.  I tried paper calendars, sticky notes, virtual sticky notes, OneNote, Evernote, Wunderlist, Planner… nothing really worked for me.  I needed something that was easy and accessible.  Wunderlist came very close, since it has a mobile client and a fairly simple model for tracking tasks. But I mainly work in Outlook, and it just felt very disconnected.

I decided to write a modern add-in for it, only to find out that Wunderlist was getting an official add-in.  This was better, but it still didn’t feel very integrated, and it was slow.

Enter Microsoft To-Do

todo

Microsoft To-Do is the eventual replacement for Wunderlist (brought to you by the same team).  What makes it special is that it fully integrates with Office 365 Exchange Online tasks. This means I can manage my tasks in Outlook and they will surface in To-Do, and vice versa.  I can use the ‘Quick Step’ feature in Outlook to create a task from an email, or simply drag an email to the task icon.  If I’m mobile, the Android To-Do app makes it easy to quickly add a new task as well. My routine still needs some work, but I set aside the first hour of my day to use To-Do to plan my day.  ‘My Day’ lets you prioritize tasks with intelligent suggestions based on a smart algorithm.  The best part is the rewarding ‘ding’ sound when you complete a task.

Note that this is still in preview and must be enabled on your Office 365 tenant.  The instructions can be found here. There is some fine print to be aware of before enabling the feature. Once enabled, you can control who can use it through licenses at the user object level.

I encourage you to check it out – let me know what you think in the comments.




One of the greatest benefits (in my humble opinion) of Office 365 is that having all your data in the cloud unlocks new capabilities. Easier sharing and data insights are a couple of examples.  One of the early features that took advantage of this centralized information storage is ‘Delve’ – an application that lets you see what others are working on (documents) and feeds.  Delve also provided some analytics, Delve Analytics, and that feature has evolved over time into ‘MyAnalytics’.


On a weekly basis I get an email with highlights of my week (real example shown below), and it also surfaces in my Outlook client.  At first, my reaction was like that of a lot of my customers – turn this off.  However, as the feature matured and I spent some time reviewing it and understanding its value, I completely changed my mind.


“If it can be measured, it can be fixed”

-Lots of people

While there is a lot of data, here are the top two things that jumped out at me. The first is rethinking how I do things. One challenge is understanding and drawing conclusions from the raw data.


Working during meetings probably means I’m not really paying attention to the meeting. I’m guilty, on more than one occasion, of not paying attention, having someone call on me, and having to ask them to repeat the question. Maybe I’m not really required to be in the meeting, and I can use this as an opportunity to rethink which meetings I actually accept.

A second example, which probably hits home for many of you, is email overload. I do a lot of email.  This graphic is one snip from my data.  There is a great new feature that breaks this down by person, but to protect the innocent I won’t show it here.


This may look like a familiar pattern to some of you.  Once the family goes to bed, work can begin.  The system also gives you some strategies for changing your behavior:




Protecting your data in Office 365

Having a hard time understanding data protection technologies in Office 365? This post addresses service keys, BYOK, and HYOK.

secure

The old adage “If you are confused about something, then someone else probably is too” applies to this post.  I am working with several customers who want to understand their options for securing their data in Office 365.

There are multiple technologies available, and it’s important to understand the scenarios and risks you are trying to address in order to apply the right controls.  To better understand how Office 365 protects customer data on a multi-tenant (shared) platform, take a look at Scott Schnoll’s post that makes it easy to view material in the risk assurance library.  This site has links to an interactive PowerPoint that, if you select Encryption, links to a whitepaper.  There is a section titled ‘Office 365 Service Encryption’ that explains how data at rest and in transit is protected for the various workloads (e.g. SharePoint Online, Skype for Business Online, Exchange Online).  The data-at-rest protection acts as a security boundary and is analogous to ‘drive encryption’ (like BitLocker).  Alternatively, Azure RMS addresses data at rest and in transit. In summary, the protection features I’m discussing are:

The FAQ ‘Customer Key for Office 365 FAQ‘ summarizes the difference between the two very well:

Both options enable you to provide and control your own encryption keys; however, service encryption with Customer Key encrypts your data at rest, residing in Office 365 servers at-rest, while BYOK with Azure Information Protection for Exchange Online encrypts your data in-transit and provides persistent online and offline protection for email messages and attachments for Office 365. Customer Key and BYOK with Azure Information Protection for Exchange Online are complementary, and whether you choose to use Microsoft’s service-managed keys or your own keys, encrypting your data at-rest and in-transit can provide added protection from malicious attacks.

Each of these features has options around key management.  Either Microsoft manages your keys (the default), or the customer can manage the keys.  It is best not to share a key across service encryption and Azure RMS, but to use distinct, separate keys.
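
To see why distinct root keys matter, here is a toy illustration (NOT real cryptography – the ‘wrap’ is a throwaway XOR stand-in for proper AES key wrapping) of the layered key model: each service wraps per-item data keys with its own root key, so revoking or losing one root does not affect content protected under the other.

```python
# Toy key-hierarchy illustration. toy_wrap is NOT real cryptography;
# it stands in for AES key wrapping purely to show the layering.
import os
import hashlib

def toy_wrap(key: bytes, data: bytes) -> bytes:
    """Illustrative symmetric 'wrap': XOR data against a key-derived stream."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

service_root_key = os.urandom(32)  # root key for service encryption
rms_root_key = os.urandom(32)      # a DISTINCT root key for Azure RMS

data_key = os.urandom(32)          # per-item key protecting actual content
wrapped = toy_wrap(service_root_key, data_key)

# Unwrapping with the correct root key recovers the data key...
assert toy_wrap(service_root_key, wrapped) == data_key
# ...while the RMS root (a separate key) does not.
assert toy_wrap(rms_root_key, wrapped) != data_key
print("distinct roots: only the owning service can unwrap its data keys")
```

The same separation is why revoking your customer-managed root key can render tenant data unreadable – which is exactly the point, and exactly the risk.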

99% of the time, I highly recommend customers leave the default and have Microsoft do the key management.  Managing your own keys is not trivial, and if done incorrectly you risk losing access to all your tenant data – and Microsoft will not be able to fix this.


Now that we are clear on the difference between service keys (Customer Key) and RMS, note that there are multiple options for RMS.  (I call these *YOK – star your own key.)

BYOK is simply using the Azure RMS (AIP) service with customer-managed keys.  HYOK is an architecture that integrates your on-premises AD RMS with Azure RMS.  The benefit of HYOK is that, for some data, you keep it out of the cloud and use your OWN AD, your OWN RMS server, and your OWN HSM.  Sounds great, but there are some important considerations.  These are well documented in Azure Information Protection with HYOK (Hold Your Own Key).

To summarize, 99% of the time customers should use the default – Microsoft managed RMS keys.  If you need your own keys, 99% of the time you should go with the BYOK architecture.


When I go down this path with a customer who wants their own keys, I ask what they are trying to achieve.  The main reason for managing your own keys is that, should you ever leave the service, you can ensure that no one has any access to your data (by revoking the key).  While you are in the service, Microsoft still needs and has access to the data – otherwise features like search and indexing would not work.  Given the added complexity and potential catastrophic loss of data, this is a decision that needs careful consideration and a clear understanding of why a customer should go down this path.


Note: edited to remove link to outdated whitepaper.