Crypto Miner


I recently became interested in learning more about cryptocurrencies and how they work.  There is no better way to learn than to build out a miner (aka a rig) and try it myself.  I happened to have an old motherboard, power supply, and hard drive lying around (who doesn’t?) – the only thing I was missing, and the most vital component, was the GPU (a high-end graphics card). Note that due to the mining craze, the cost of graphics cards has shot up.

I opted for an ‘open air’ build. Literally, I bought some cheap plastic shelving and used zip ties to secure the components.  It’s not pretty, but it’s functional and has great heat dissipation.


Once the hardware is built, you will need to load the mining software.  You need to pick a coin – certain coins are better mined with specific graphics cards.  Once you know what you want to mine, you’ll need a corresponding wallet to deposit currency into, as well as a pool to join.  Unless you have significant hardware, solo mining isn’t realistic, so pool mining allows small rigs to participate while earning a share of any blocks the pool collectively mines.  Once it was up and running, I tuned the hardware (overclocking) to get the most processing power from my GPUs.  Now I just sit back and wait…and burn electricity.
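
To get a feel for the pool economics, here’s a rough back-of-the-envelope sketch in Python – every number is made up for illustration, and real pools also deduct fees and use different payout schemes:

    # Expected share of pool-mined blocks, proportional to your hash rate.
    # All figures below are illustrative placeholders, not real network values.
    my_hashrate = 60e6           # 60 MH/s, roughly a couple of GPUs
    network_hashrate = 150e12    # total network hash rate
    block_reward = 3.0           # coins awarded per block
    block_time_sec = 15          # average seconds between blocks

    blocks_per_day = 86_400 / block_time_sec
    my_share = my_hashrate / network_hashrate
    coins_per_day = blocks_per_day * block_reward * my_share
    print(f"Expected earnings: {coins_per_day:.6f} coins/day")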

There are lots of good YouTube videos and websites that can get you going.  I learned a lot from Reddit, in the /r/EtherMining subreddit. Make sure you read the in-depth guide before posting, as it’s considered bad etiquette to ask a question that is already covered.

Keep in mind, now may not be the best time to invest a lot of money in mining, as there are changes coming that may make it much harder for small miners to make any money.  If you still believe the coins will rise in value, you can just buy some directly.


While the cryptocurrency aspect of mining is interesting, the really exciting game changer is what the cryptocurrency is based on – blockchain technology.  The blockchain is what it sounds like – a continuously growing list of blocks (chunks of data) that are linked to one another using cryptography.  In more general terms, it’s also known as a ledger.  Currency is an easy way to understand the model.  Like bank transactions, you can record the movement of value between individuals.  However, unlike a bank, there is no central authority.  The blockchain is completely distributed, and works with an unknown number of nodes dropping in and out of the network.  In fact, the network is untrusted, and it’s possible some nodes will behave maliciously.  Once a block is written to the chain, it becomes nearly impossible to go back and alter it.  You would need to alter every subsequent block (since each is linked by information from the previous block), and the cost and compute power to make this type of change grows exponentially.  The fact that a secure, completely transparent, peer-to-peer system was created is quite mind-blowing.
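
To see why, here’s a toy Python sketch of the linking – no mining, networking, or consensus, just hash-chained blocks:

    import hashlib
    import json

    def block_hash(block):
        # Hash the block's full contents, which include the previous hash.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    # Each block stores some data plus the hash of the block before it.
    chain = [{"data": "genesis", "prev_hash": "0" * 64}]
    for data in ["Alice pays Bob 5", "Bob pays Carol 2"]:
        chain.append({"data": data, "prev_hash": block_hash(chain[-1])})

    # Tamper with an early block: the next block's stored prev_hash no
    # longer matches, and every later block breaks the same way.
    chain[1]["data"] = "Alice pays Bob 500"
    print(block_hash(chain[1]) == chain[2]["prev_hash"])  # False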

Now, replace currency transactions with other things, like land ownership records or other contracts, and you can start to understand the power of the blockchain.  In fact, one platform, Ethereum, was created to allow people to put code into the blocks and have self-executing contracts.  The example I heard was around travel insurance: if weather disrupts a trip, the contract can self-execute and make a payment based on weather data.  This example just scratches the surface – there are many other uses, like micro-payments directly from person to person (with low fees and incredible speed), or potentially even voting.  A more whimsical use is CryptoKitties – virtual pets that are collectible and breedable.
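
To make the insurance example concrete, here’s the shape of that logic as a Python sketch – a real smart contract would be written in Solidity and pull weather data from an on-chain oracle, so the class and values here are purely hypothetical:

    class TravelInsurancePolicy:
        def __init__(self, traveler, payout):
            self.traveler = traveler
            self.payout = payout
            self.paid = False

        def settle(self, trip_disrupted):
            # On a real blockchain this check would execute automatically
            # against oracle-supplied weather data - no insurer involved.
            if trip_disrupted and not self.paid:
                self.paid = True
                print(f"Paying {self.payout} to {self.traveler}")

    policy = TravelInsurancePolicy(traveler="0xABC...", payout=250)
    policy.settle(trip_disrupted=True)  # pays out automatically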

My next venture is to learn more about the programming language (Solidity) and see how these contracts work. I predict this will be one of the most disruptive technologies to come along – and it goes well beyond just making money off of a cryptocurrency.

Real World Solutions – The Case of DLP Event Tracking


In one of my projects, the customer is planning on using Office 365 DLP.  However, they have a third-party company that manages the front-line investigation of violations.  The customer needs a way to give a non-employee enough access to do the initial discovery and track it.

The first attempt was to use the out-of-box alerts in the Security & Compliance Center.


There were a few challenges with this feature – the main one being no apparent way to restrict access to the DLP event only.  The other was no way to input comments or use this as a tracking system.

It got me thinking (or, as we say in consulting, ideating) about how to solve this. One good solution for tracking things is SharePoint.  So, we need a way to get the alert information (either from email or through the event API) into SharePoint.  Not wanting to create a whole application to make this work, I figured there must be a way for a power user to wire up applications.  And of course, there is – Microsoft Flow.  Microsoft Flow is a cloud-based service that makes it practical and simple for line-of-business users to build workflows that automate time-consuming business tasks and processes across applications and services. It’s comparable to a service like IFTTT (If This Then That), but tightly integrated with Office 365.

With Flow being the glue – the overall solution is:

  1. Configure the DLP policy to send notifications to a mailbox
  2. Create a custom SharePoint list to track DLP events
  3. Configure Flow to populate the list with the DLP event information from email

Now I’ll walk through each step to understand the configuration.

Configuring the DLP Policy

The first step is to configure your DLP rule to send a notification email to a mailbox. In this example, in the Security & Compliance Center, I edited an existing DLP policy.

DLP notification

Note that you can control what information is included if you do not want certain content to appear in the alert.

Configure SharePoint

Next, we’ll configure the SharePoint list.  Again, I’m assuming you have basic knowledge of creating a SharePoint team site.  For our example, I only added a ‘status’ field – a choice of open, investigating, resolved, and closed.  I could see adding fields for comments, or more date fields for tracking time to resolution.  The point here is that we’ll be able to pre-populate some of the fields using Flow. Additionally, you can set up the security and permissions for your analysts.

sharepoint list settings

Configure Flow

On the newly created list, click the ‘Flow’ button to create a new flow. I find it easiest to choose ‘See your flows’.  From the Manage your flows page, you can ‘create from blank’.

flow button

From there click on ‘search hundreds of connectors and triggers’.

I’ll break down the flow into its parts.

1. When new mail arrives (Outlook).  Ensure you set Has Attachments and Include Attachments to ‘Yes’.

when new email arrives

2. Export email.  You would think we could use the out-of-box attachments functionality in Flow.  However, the item attached to the system-generated notification is an embedded message (NOT an .eml file).  The attachment connector does not currently know how to parse this – so the workaround is to use the preview Export email feature.

export email

3. Create Item (SharePoint).  This step creates the list item in the custom list we defined.  It will recognize any custom properties you created – in this case ‘Status Value’.  I set the new list item to ‘Open’ by default.  You can also see in the Title property that we can combine functions with text.  For example, the utcNow() function could be used to set a date property…or you could set an SLA and calculate an estimated closure time – say, with an expression like addDays(utcNow(), 3) for a three-day target.

create item

4. Add Attachment (SharePoint)

The final step is adding the email attachment to the list item’s attachments.  The key is the File Content field – make sure you choose the Body from the Export email action.

add attachment

We need to include the Body coming from the Export Email, not the body coming from the new email trigger.

export body

That’s it – the next time the notification mailbox receives an email, the Flow will trigger.

The Results

You can see in the screenshot that someone sent an email with a DLP violation.  This results in a new item in my SharePoint list, with the status set to open and the original attachment included on the list item.


I’m excited that we were able to solve this for the customer – it’s a really elegant and relatively easy solution that didn’t require custom code.

Consulting 101


Lately, I’ve been working with a lot of new hires – many are college hires with zero real-world experience. I occasionally get an opportunity to mentor them, or sometimes it’s a consultant struggling on the job.  Mentoring is something I have done on and off during my almost 20-year career at Microsoft, and I enjoy it.  The good news is that I’ve made a lot of mistakes in my career so you don’t have to, and I share them without hesitation.

Here are the top 3 things I tell new consultants to master first.

1. Deliver on What You Promise


This is easily my #1 rule.  If you make a promise to do something, do it.  Don’t make the mistake of over-promising and just hoping to deliver.  If you fail to deliver, it destroys the trust and confidence people have in you.  You are far better off setting realistic expectations and meeting them.  It’s fine to set a ‘stretch goal’, as long as you are clear it’s not what you are committing to.  If you find that you are likely to miss a commitment, let everyone know as soon as possible to reset expectations.  You can probably only do that once, though.

2. Don’t Go Dark


This easily ranks among the top three delivery sins.  For some reason a consultant just disappears – they don’t tell the customer, their manager, or the project manager. Emergencies do happen, and that is understandable, but I’m referring to someone who does this repeatedly.  I’ve never had a customer complain about over-communication in this scenario.  These days we have to manage multiple active projects, so it’s important to set expectations up front: availability, working hours, response time for emails, etc.  I can only guess why this happens – for me it’s usually due to an uncomfortable situation.  Not responding, or not being clear, actually just makes matters worse.  People may not like your answer, but they will be much more upset if you don’t respond and they assume you are in agreement.

3. Documentation

The final tip is to document everything.  You never know what the future holds – project owners change, things fail well after the project ends, personality conflicts arise, and honest miscommunications happen. The only thing you will have to defend a decision or the work you did is a written record. As much as I hate to write status reports, they are critical for chronicling decisions, risks, work completed, and other project information.  If a decision is made over a phone call or in a meeting, follow up with an email summarizing it and ask for confirmation that this is what was said or agreed to.  Plan for the time it takes to deliver some form of documentation for any work you do.  On really short engagements it’s easy to walk away without ever handing the customer any documentation. Even if all you have is rough notes from meetings, clean them up and socialize them.

I learned this the hard way on a project that went sideways; when they brought in “the Wolf” to fix things, I had no documentation or status reports.  I could have had all of my hours stripped away – which ultimately would have made me miss my delivery targets and put my job in jeopardy.

Get Started

That’s it.  If you can do at least these three things, you will have established good habits that will serve you well.  Once you are consistently delivering, we’ll cover some other habits in a future post.

Bonus Homework

I just completed a course on Coursera: Presentation Skills: Speechwriting and Storytelling (a great class by Alexei Kapterev, who authored ‘Death by PowerPoint’).  One of the section resources linked to a video of Mike Monteiro’s keynote from a design conference (be forewarned, Mike uses colorful language).  In his presentation he talks about the top mistakes designers make – and many of these are really applicable to consulting as well.  Once you make it past my top three, check out his content for some more great tips.

Exchange Archives

When it comes to Exchange, one of the confusing things for customers is the Exchange Archive feature – especially for customers coming from an existing third-party archiving solution.  When I work with customers who are upgrading to a newer on-premises version, or to Exchange Online, and have an archiving system in place, the first thing I ask is: what is the current solution used for? Archives are used either for compliance reasons (e.g. retention, records, litigation, legal requirements) or to extend mailbox capacity (e.g. provide large mailboxes by using lower-cost storage). Occasionally, the archive serves both functions.

When planning the new end-state design, the question is: what to do?  Most customers assume they should just deploy Exchange Online Archiving.  This post will give some reasons to reconsider that decision.  [Spoiler] Exchange’s online archive feature has nothing to do with compliance.

Exchange Archive: The Origin Story


The archive feature (whose name has changed many times over the years) was first introduced in Exchange 2010.  One of the goals of Exchange 2010 was to support large mailboxes (which, in retrospect, were not all that large compared to Office 365 today!).  The main problem was that Outlook 2010’s cached mode would cache the whole mailbox, so rather than rely on a change to the Outlook client, Exchange added the archive feature – an extension of your mailbox that would not be cached.  If you deployed an archive, you could enjoy a very large mailbox without needing to cache all your mail. For on-premises deployments, you could even put the archive on separate storage, or even separate servers. This was great, since really large mailboxes take a very long time on the initial download, or if you had to recreate your profile (which for many customers is a standard troubleshooting step).  Also, many laptops were very limited on drive space.

What about compliance features and the online archive?  The online archive actually did not bring any new compliance features with it.  All the compliance features apply to the mailbox – the whole mailbox – not just the primary mailbox.  Any retention or legal hold applied to the person applies to both the primary and the archive, or to just the primary mailbox if an archive is not used.  In other words, having an archive added no additional compliance capabilities.  This was true in Exchange 2010, and is still true today.

Why Deploy an Online Archive?

If we don’t get additional features, then why deploy an online archive?

  1. You exceed the capacity of your primary mailbox storage (at the time of writing, Office 365 E3 includes a 100 GB primary mailbox)
  2. You have Outlook 2010 (or older) clients and want large mailboxes. Given Outlook 2010 is out of support, customers should be doing everything possible to upgrade.

If you have deployed an archive product for addressing mailbox capacity issues, then I strongly recommend that you do not deploy the online archive by default. Why not?

  • Not all mail clients can access the online archive
  • Mobile clients cannot search the online archive
  • It’s more complex and can be confusing to people

In this scenario, just use a large primary mailbox; Outlook 2013 and newer can limit the amount (by age) of cached content.  This cache setting effectively works just like having an archive, since content not in your cache is available only while online.


If you deployed an archive product to meet compliance or records-management needs, consider using the native Exchange features such as hold, retention, MRM, and labels.  Keeping all email within Exchange, versus an external archive product, lets you easily perform content and eDiscovery searches.  Also, it’s much easier to manage your data lifecycle with the mail in one solution.  I’ll reiterate – these compliance and records features work in Exchange regardless of whether you deploy the Exchange online archive.  In other words, you could retire your external archive, use only a primary mailbox, and enable retention policies to continue providing an immutable copy of the person’s mailbox data.

A very common scenario as customers move to Office 365 is to ingest all their third-party archive data and PSTs (local / personal archives) into Office 365.  Given this could be a lot of data, exceeding the 100 GB limit, customers migrate this data directly into the online archive.  Exchange Online does offer an unlimited, auto-expanding archive.  Note that for migrations, the archive expansion takes time – so you cannot just import everything at once.  Once the content is in Exchange, retention policies can be applied to all of it, letting you start to control your enterprise data and limit risk exposure.

As long as the archive on the source system corresponds to a mailbox, this type of migration is straightforward.  If your archive solution is for journaled mail, the archive typically is not associated with specific mailboxes.  This is much harder to ingest into Exchange, and a better strategy may be to sunset the journal solution (let it age out) and implement retention and the other compliance features mentioned above going forward.  A nice benefit of retention over journaling is that journaling only captures email that is sent and received.  There are scenarios where people share folders to trade messages, which never actually go through transport!

Hopefully this sheds some light on how Exchange online archives work, when to use them, and the benefits and drawbacks if you do plan to use them.

High Volume Mailbox Moves


One of the challenges of planning your migration to Office 365 is figuring out how fast you can go.  Office 365 migration performance and best practices covers some great information, but I’ll add to it here based on my experience with real-world projects.

Spoilers Ahead

One of my most recent engagements is wrapping up, and I have done some analysis on the summary statistics. Note this was a move from a legacy dedicated version of Office 365, so the throughput can be a bit higher than coming from an on-premises Exchange deployment.  On average (throwing out the high and low values) we moved about 3,000 mailboxes per week. One of the most impressive things from this migration was that it also included a deployment of Office Pro Plus.  There were only a couple of months for planning, and deploying to over 30,000 workstations with very little impact on the helpdesk was a great surprise.

On another project, we have just started pilot migrations from on-premises Exchange 2010 servers.  Initially, we saw fairly limited performance when routing traffic through the typical network infrastructure (e.g. a hardware load balancer).  When we changed the configuration, we more than doubled our throughput, and continued tuning until our last test came in at over 50 GB/hr (our initial test was closer to 4 GB/hr).  Not too bad!
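
To put rates like these in perspective, here’s a quick back-of-the-envelope estimate in Python – all figures are illustrative, not from either project:

    # Rough migration-duration estimate; every number is a placeholder.
    mailboxes = 30_000
    avg_mailbox_gb = 5              # average mailbox size
    endpoints = 3                   # parallel MRS endpoints
    throughput_gb_per_hr = 50       # measured rate per endpoint
    migration_hours_per_week = 60   # e.g. nights and weekends only

    total_gb = mailboxes * avg_mailbox_gb
    weekly_gb = endpoints * throughput_gb_per_hr * migration_hours_per_week
    print(f"{total_gb:,} GB is ~{total_gb / weekly_gb:.0f} weeks of migration")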

Migration Architecture

How did we get this speed boost?  A typical architecture for accessing mail (in this case the /EWS endpoint) runs HTTPS (443) from the client to the hardware load balancer (HLB).  You may have a reverse proxy in front of the HLB, and you may have an additional interior firewall.  Some customers do not allow external access to the EWS virtual directory, but it is required as part of establishing hybrid connectivity with Office 365.


You may just reuse the same endpoint for the MRS traffic.  In this case your mailbox migrations will follow the same data path as the rest of your traffic.  A few additional constraints must be met: a publicly signed certificate, and you cannot bridge the SSL traffic (break encryption and re-encrypt).  If you meet this bar, the design meets the minimal requirements for MRS – however, it may not perform very well given how many layers of infrastructure it traverses, and it may eat into the total available bandwidth of those devices.  Creating 1:1 MRS endpoints is a way to bypass all of this infrastructure and ramp up throughput.


In this example, three new DNS names are created, each resolving to a specific server. The firewall must allow traffic to the on-premises servers only from Exchange Online (see Office 365 URLs and IP address ranges).  The certificate with the additional MRS names will have to be redeployed to all the infrastructure (e.g. the HLB) and the Exchange servers (unless you use a wildcard certificate, e.g. *.contoso.com).  Now when you create migration requests, you can spread them across the endpoints.  For most customers, the ACL on the firewall is enough security to allow this configuration – at least for the duration of the mailbox migrations.

Other Considerations

There is always a bottleneck in the system; the question is whether you hit it before you achieve the velocity you would like.  I work with customers to walk through every point in the data flow and see where the bottleneck will be.  In the original architecture above, the first bottleneck is nearly always the HLB – either because of its network connection or the load it’s already under.  After that, the source Exchange servers tend to be unable to keep up, causing migration stalls. Also be aware of things like running backups that can severely impact resources. Finally, other items like day-after helpdesk support capacity or network downloads (the OAB, or maybe changing your offline cache value) may also limit your velocity.  MRS migrations usually have a very low failure rate, but ancillary things coupled with the migration, like mobile clients, need to be considered.
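
One way to reason about this during planning is as a simple minimum over every stage in the data path.  A hedged sketch, with hypothetical capacities standing in for real measurements:

    # End-to-end velocity is capped by the slowest stage in the path.
    # Capacities are hypothetical placeholders, not measurements.
    stage_capacity_gb_per_hr = {
        "internet egress": 80,
        "hardware load balancer": 10,   # often the first limit hit
        "source Exchange servers": 25,
        "MRS endpoint concurrency": 40,
    }
    bottleneck = min(stage_capacity_gb_per_hr, key=stage_capacity_gb_per_hr.get)
    print(f"Effective rate: ~{stage_capacity_gb_per_hr[bottleneck]} GB/hr, "
          f"limited by the {bottleneck}")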

Office 365 Consumption Analytics


One of the great things about Office 365 is that you get great telemetry data to understand the actual consumption in your tenant.  The Office 365 admin portal has built-in usage reports that give some quick high-level stats (e.g. emails sent/received, OneDrive for Business storage consumed, the number of Office activations, etc.).  But what if you want to slice the data differently, say by department or other properties?  The Office 365 Adoption pack, currently in preview, is a Power BI solution that can be customized to your organization’s specific needs.

Office 365 Adoption Pack Installation

The overall steps are detailed in this post.  I’ll walk through a summary of them here.  First, you must enable the usage reports to work with Power BI. To do this, open the Admin portal and go to the Admin center.  Under Reports > Usage, you will see a section for enabling the content pack.  This is shown below in the bottom-right pane.


Once you click the ‘Get started’ button, it will take a while (between 2 and 48 hours) before you can move on.  Eventually you will see that your data is ready, along with your tenant ID (needed for a later step).


At this point the reports will not have anonymized data.  If that is required, in the Admin Center, on the left nav menu, open Settings > Services & add-ins.  Find Reports and you will be able to toggle on anonymous identifiers instead of names on all reports.  This applies to all usage reports, not just the Adoption pack.

Now that you have the infrastructure configured, you need to set up the Power BI dashboard.

Configuring the Office 365 Adoption Content Pack

There are several ways to install the content pack, but I’m going to highlight one deployment option here.  In this scenario, I will create an app workspace and configure the content pack there.  The benefit of this model is that it makes the deployment independent of a specific user account.  I could have deployed the content pack into my own workspace and shared it as needed, but what would happen if I left the company or changed job roles?  That would break sharing for everyone.  Note that for each of these deployment scenarios you will need to check that everyone is properly licensed. At the time of writing, all internal users need a Power BI Pro or greater license.

Open Power BI from the app selector or from the Usage report page.  Under workspaces, there is a button to create a new workspace. Behind the scenes this creates an Office 365 group. There are several options to configure, such as allowing members to have edit rights.  Open the workspace and, under Microsoft AppSource, click ‘Get’ under Services.

usage 5

Search for the Office 365 Adoption Preview – click ‘get it now’

usage 6

There can only be one deployment of the content pack per organization. You will then need to input the tenant ID (from the earlier step).  It will then have you authenticate and start importing the data.  Once loaded, you can interact with the usage data and drill down to the nitty-gritty details.  Other members can view and interact with the data as well.

That should get you running with the out-of-box dashboards and reports.  In another post I may show some neat things you can do to extend the capabilities.  In the meantime, for more information on how to customize the solution, check out this web page.

MyAnalytics

One of the greatest benefits (in my humble opinion) of Office 365 is that having all your data in the cloud unlocks new capabilities – easier sharing and data insights are a couple of examples.  One of the early features that took advantage of this centralized information store was Delve, an application that lets you see what others are working on (documents) in a feed.  Delve also provided some analytics, Delve Analytics, and that feature has evolved over time into what is now ‘MyAnalytics’.


On a weekly basis I get an email with highlights of my week (a real example is shown below), and it also surfaces in my Outlook client.  At first, my reaction was like that of a lot of my customers – turn this off.  However, as the feature matured and I spent some time reviewing it and understanding its value, I completely changed my mind.


“If it can be measured, it can be fixed”

-Lots of people

While there is a lot of data, here are the top two things that jumped out at me. First is rethinking how I do things. One challenge is understanding and drawing conclusions from the raw data.


Working during meetings probably means I’m not really paying attention to the meeting. I’m guilty, on more than one occasion, of not paying attention, having someone call on me, and having to ask them to repeat the question. Maybe I’m not really required to be in the meeting, and I can use this as an opportunity to rethink which meetings I actually accept.

A second example, which probably hits home for many of you, is email overload. I do a lot of email.  This graphic is one snip from my data.  There is a great new feature that breaks this down by person, but to protect the innocent I won’t show it here.


This may look like a familiar pattern to some of you.  Once the family goes to bed, work can begin.  The system also gives you some strategies for changing your behavior: