Crypto Miner


I recently became interested in learning more about cryptocurrencies and how they work.  There is no better way to learn than to build out a miner (aka rig) and try it out myself.  I happened to have an old motherboard, power supply, and hard drive lying around (who doesn’t?) – the only thing I was missing, and the most vital component, was the GPU (a high-end graphics card). Note that due to the mining craze, the cost of graphics cards has shot up.

I opted for an ‘open air’ build. Literally, I bought some cheap plastic shelving and used zip ties to secure everything down.  It’s not pretty, but it’s functional and has great heat dissipation.


Once the hardware is built, you will need to load the mining software.  You need to pick a coin – certain coins are better mined with specific graphics cards.  Once you know what you want to mine, you’ll need a corresponding wallet to deposit currency into, as well as a pool to join.  Unless you have significant hardware, solo mining isn’t practical, so pool mining allows small rigs to participate in the mining while making a profit from a share of any blocks that are collectively mined.  Once it was up and running, I tuned the hardware (called overclocking) to get the most processing power from my GPUs.  Now I just sit back and wait…and burn electricity.

There are lots of good videos on YouTube, and websites that can get you going.  I learned a lot from Reddit, in the /r/EtherMining subreddit. Make sure you read the in-depth guide before posting, as it’s considered bad etiquette to ask a question that is already covered.

Keep in mind, now may not be the best time to invest a lot of money in mining, as there are changes coming that may make it much harder for small miners to make any money.  If you still believe the coins will rise in value, you can just buy some coins directly.


While the cryptocurrency aspect of mining is interesting, the really exciting game changer is what the cryptocurrency is based on – blockchain technology.  The blockchain is what it sounds like – a continuously growing list of blocks (chunks of data) that are linked to one another using cryptography.  In more general terms, it’s also known as a ledger.  Currency is an easy way to understand the model.  Like bank transactions, you can record the movement of value between individuals.  However, unlike a bank, there is no central authority.  The blockchain is completely distributed, and works with an unknown number of nodes dropping in and out of the network.  In fact, the network is untrusted, and it’s possible some nodes may behave maliciously.  Once a block is written to the chain, it becomes nearly impossible to go back and alter it.  You would need to alter every subsequent block (since each block is linked by information from the previous one), and the cost and compute power to make this type of change grows prohibitively.  The fact that a secure, completely transparent, peer-to-peer system was created is quite mind-blowing.
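To make that ‘linking’ concrete, here is a minimal Python sketch (an illustration, not real Bitcoin or Ethereum code) of a hash-linked ledger.  Each block stores the hash of the block before it, so editing an old transaction breaks every link that follows:

```python
import hashlib
import json

def block_hash(block):
    # Serialize the block deterministically, then hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain; each block records the hash of the previous one.
chain = []
prev = "0" * 64  # placeholder hash for the genesis block
for i, txns in enumerate([["A pays B 5"], ["B pays C 2"], ["C pays A 1"]]):
    block = {"index": i, "transactions": txns, "prev_hash": prev}
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    # Recompute every link; a change to any earlier block breaks all later links.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))                          # True
chain[0]["transactions"] = ["A pays B 500"]   # tamper with history...
print(verify(chain))                          # False – the chain no longer checks out
```

Real blockchains layer proof-of-work on top of this linking, which is what mining rigs like the one above are grinding away at.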

Now, replace currency transactions with other things, like land ownership records or other contracts, and you can start to understand the power of the blockchain.  In fact, one coin, Ethereum, was created to allow people to put code into the blocks and have self-executing contracts.  The example I heard was around travel insurance: if weather disrupts a trip, the contract can self-execute and make a payment based on weather data.  This example just scratches the surface – there are many other uses, like providing micro-payments directly from person to person (with low fees and incredible speed), or potentially voting.  A more whimsical use is CryptoKitties – a virtual pet that is collectible and breedable.
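As a conceptual sketch of that travel-insurance example (in Python rather than a real contract language, with a made-up trigger and payout), the ‘contract’ is just a rule that executes itself when trusted data says the trip was disrupted:

```python
from dataclasses import dataclass

@dataclass
class TravelInsurancePolicy:
    insured: str
    payout: float      # amount released automatically on disruption
    paid: bool = False

    def settle(self, flight_cancelled: bool, snowfall_cm: float) -> float:
        # Self-executing rule: no claims adjuster, just data.
        if self.paid:
            return 0.0  # a policy settles at most once
        if flight_cancelled or snowfall_cm >= 30:  # hypothetical trigger
            self.paid = True
            return self.payout
        return 0.0

policy = TravelInsurancePolicy(insured="alice", payout=500.0)
print(policy.settle(flight_cancelled=False, snowfall_cm=45.0))  # 500.0 – paid out
print(policy.settle(flight_cancelled=True, snowfall_cm=0.0))    # 0.0 – already settled
```

On Ethereum, this logic would live in the block itself, with the weather data supplied by an oracle feeding the blockchain.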

My next venture is to learn more about the programming language (Solidity) and see how these contracts work. I predict this will be one of the most disruptive technologies to come along – and it goes well beyond just making money off a cryptocurrency.

Who’s the real MVP? The MVC!


Everyone knows the ‘Most Valuable Player’ acronym, but you may not have heard the term ‘Minimally Viable Candidate’.  At first, it doesn’t sound too good, but let’s explore.

The Minimally Viable Candidate


I was first introduced to this term a few years ago when I was asked to participate in creating some certification exams.  There was a professional company helping us (the test creators) understand the parameters of how to write good exam questions.  For example, you cannot make up stuff in the answers just to trip someone up – but you could write answers that were correct, yet didn’t apply to the scenario (i.e. you can’t make up a PowerShell cmdlet, but you could use a real one that has nothing to do with the scenario).  The goal is to write questions that weed out unqualified candidates.

The MVC is the person who is qualified, but only just enough to pass the certification.  There has to be a line drawn somewhere, and that’s the target for the level of complexity in the exam.  In other words, they are qualified, but barely. This is true in all professional fields – you may have a doctor who just barely made it, but they are still qualified to be a doctor. Of course, we all like to think we’re working with the best and overlook the spectrum of quality that really exists.

Why do I bring this up?  Lately, I’ve been leading design discussions with customers and have a similar conversation around a ‘minimally viable product’.

Minimally Viable Product

OnePlus ‘Never Settle’ logo

Aim for the moon, shoot for the stars, never settle…the list of idioms goes on and on.  I argue that this is great for having a lofty goal…but in reality, when you need to get things done, this perfect state often becomes the paralysis of the perfect.  When I lead design sessions, I often start with a simple framework: understand the current state, envision a perfect end-state, then look at logical transition states.  Understand that we may never get to the perfect end-state – it’s more of a true north, a reference point to measure progress against.  It may be too costly, or more difficult than originally planned.  Or, as we get to a closer transition state, we re-evaluate the end-state and it may change altogether.

Like the MVC, the concept of a minimally viable product doesn’t always feel good when you first hear it.  We are conditioned to do our best, and planning for less feels like failure.  It is, in fact, a tool to get started and realize value sooner.  I’ve seen many projects fail to launch because the scope cannot be locked in, and every detail must be accounted for.

In software projects, this is often the difference between a waterfall model and an agile model.  With waterfall, the envisioning and requirements phases are completed before moving to the next step.  By the time you get to implementation, the benefits are often lost to long development times, or the requirements have changed (or were missed, as it’s hard to know everything you want at the beginning).  The agile model seeks to use quick sprints to deliver value quickly, implementing a less-than-complete solution and using iterations to deliver a fully fleshed-out solution over time.  This is not reserved just for software projects – most of my engagements are infrastructure or business transformation focused, and they suffer from the same challenges.


I help customers think and plan for what is good enough to go ‘live’ with whatever it is we are working on and overcome the fear of deploying with something that is less than perfect.   I’m suggesting rather than the OnePlus motto of ‘Never Settle’, maybe be more like Nike and ‘Just Do It’.  This approach is useful in many other aspects of life. Looking to hire someone? Consider the MVC rather than waiting for the MVP.

Real World Solutions – The Case of DLP Event Tracking


In one of my projects, the customer is planning on using Office 365 DLP.  However, they have a third-party company that manages the front-line investigation of violations.  The customer needs a way to give a non-employee enough access to do the initial discovery and track it.

The first attempt was to use the out-of-box alerts in the Security & Compliance center.


There were a few challenges with this feature – the main one being no apparent way to restrict access to the DLP events only.  The other was that there is no way to input comments or use the alerts as a tracking system.

It got me thinking (or, as we say in consulting, ‘ideating’) on how to solve this. One good solution for tracking things is SharePoint.  So, we need a way to get the alert information (either in email, or through the event API) to SharePoint.  Not wanting to create a whole application to make this work, there must be a way for a power user to wire up applications.  And of course, there is – Microsoft Flow.  Microsoft Flow is a cloud-based service that makes it practical and simple for line-of-business users to build workflows that automate time-consuming business tasks and processes across applications and services. It’s comparable to a service like IFTTT (If This Then That), but tightly integrated with Office 365.

With Flow being the glue – the overall solution is:

  1. Configure the DLP policy to send notifications to a mailbox
  2. Create a custom SharePoint list to track DLP events
  3. Configure Flow to populate the list with the DLP event information from email

Now I’ll walk through each step to understand the configuration.

Configuring the DLP Policy

The first step is to configure your DLP rule to send a notification email to a mailbox. In this example, in the Security & Compliance Center, I edited an existing DLP policy.

DLP notification

Note that you can control what information is included if you do not want certain content to appear in the alert.

Configure SharePoint

Next, we’ll configure the SharePoint list.  Again, I’m assuming you have basic knowledge of creating a SharePoint team site.  For our example, I only added a ‘status’ field – a choice of open, investigating, resolved, and closed.  I could see adding fields for comments, or more date fields for tracking time to resolution.  The point here is that we’ll be able to pre-populate some of the fields using Flow. Additionally, you can set up the security and permissions for your analysts.

sharepoint list settings

Configure Flow

On the newly created list, click the ‘Flow’ button to create a new flow. I find it easiest to choose ‘See your flows’.  From the Manage your flows page, you can ‘create from blank’.

flow button

From there click on ‘search hundreds of connectors and triggers’.

I’ll break down the flow into its parts.

  1. When a new email arrives (Outlook).  Ensure you change Has Attachments and Include Attachments to ‘Yes’.

when new email arrives

2. Export email.  You would think we would be able to use the attachments flow functionality out of the box.  However, the item attached to the system-generated notification is an embedded message (NOT an .eml file).  The attachment connector does not currently know how to parse this – so the workaround is to use the preview Export email feature.

export email

3. Create Item (SharePoint).  This step creates the list item in the custom list we defined.  It will recognize any custom properties you created – in this case ‘Status Value’.  I set the new list item to ‘Open’ by default.  You can also see in the Title property that we can combine functions with text as well.  For example, the utcNow() function could be used to set a date property…or you could set an SLA and calculate the estimated time for closure, as sketched after the screenshot below.

create item
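As an aside on that SLA idea: the calculation is just ‘event time plus the allowed window’.  In Flow’s expression language that would be something along the lines of addDays(utcNow(), 3); here is the same arithmetic sketched in Python for illustration (the 3-day window and ‘Due Date’ column are hypothetical):

```python
from datetime import datetime, timedelta, timezone

SLA_WINDOW = timedelta(days=3)  # hypothetical 3-day window to close a DLP event

received = datetime.now(timezone.utc)  # Flow equivalent: utcNow()
due = received + SLA_WINDOW            # Flow equivalent: addDays(utcNow(), 3)
print(due.isoformat())                 # value you could write to a 'Due Date' column
```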

4. Add Attachment (SharePoint)

The final step is adding the email attachment to the list item’s attachments.  The key is the File Content field – make sure you choose the Body from the Export email step.

add attachment

We need to include the Body coming from the Export Email, not the body coming from the new email trigger.

export body

That’s it – the next time the notification mailbox receives an email, the Flow will trigger.

The Results

You can see in the screenshot someone sent an email with a DLP violation.  This results in a new item in my SharePoint list, with the status set to open, and the original attachment is included on the list item.


I’m excited that we’re able to solve this for the customer – this is a really elegant and relatively easy solution that didn’t require custom code.

Consulting 101


Lately, I’ve been working with a lot of new hires – many are college hires with zero real-world experience. I occasionally get an opportunity to mentor them, or sometimes it’s a consultant struggling on the job.  Mentoring people is something I have done on and off during my almost 20-year career at Microsoft, and I enjoy it.  The good news is that I’ve made a lot of mistakes in my career so you don’t have to, and I share them without hesitation.

Here are the top 3 things I tell new consultants to master first.

1. Deliver on What You Promise


This is easily my #1 rule.  If you make a promise to do something, do it.  Don’t make the mistake of over-promising with just the hope of delivering.  If you fail to deliver, it destroys the trust and confidence people have in you.  You are far better off setting realistic expectations and meeting them.  It’s fine to set a ‘stretch goal’ and be clear it’s not what you are committing to.  If you find that you are likely unable to meet your commitment, let everyone know as soon as possible to reset expectations.  You can probably only do this once.

2. Don’t Go Dark


This easily goes in the top three delivery sins.  For some reason a consultant just disappears – they don’t tell the customer, their manager, or the project manager. Emergencies do happen, and that is understandable, but I’m referring to someone who does this repeatedly.  I’ve never had a customer complain of over-communication in this scenario.  These days we have to manage multiple active projects, so it’s important to set expectations up front: availability, working hours, response time for emails, etc.  I can only guess why this happens – for me it’s usually due to an uncomfortable situation.  Not responding, or not being clear, actually just makes matters worse.  People may not like your answer, but they will be much more upset if you don’t respond and they think you are in agreement.

3. Documentation

The final tip is to document everything.  You never know what the future holds – project owners change, things fail well after the project ends, personality conflicts arise, and honest miscommunications happen. The only thing you will have to defend a decision or work you did is a written record. As much as I hate to write status reports, these are critical to chronicle decisions, risks, work completed, and other project information.  If a decision is made over a phone call or in a meeting, follow up with an email summarizing it and ask for confirmation that this was what was said or agreed to.  Plan for the time it takes to deliver some form of documentation for any work you do.  Sometimes on really short engagements it’s easy to walk away without ever handing the customer any documentation. Even if it’s just rough notes from meetings, clean them up and socialize them.

I learned this the hard way on a project that went sideways, and when they brought in “the Wolf” to fix things – I had no documentation or status reports.  I could have had all of my hours stripped away – which ultimately would have made me miss my delivery targets and put my job in jeopardy.

Get Started

That’s it.  If you can at least do these three things you will have established good habits that will serve you well.  Once you are consistently delivering – we’ll cover some other habits in a future post.

Bonus Homework

I just completed a course on Coursera: Presentation Skills: Speechwriting and Storytelling (a great class by Alexei Kapterev, who authored ‘Death by PowerPoint’).  One of the section resources was a link to a video of Mike Monteiro’s keynote from a design conference (be forewarned, Mike uses colorful language).  In his presentation he talks about the top mistakes designers make – and many of these are really applicable to consulting as well.  Once you make it past my top three, I would check out his content for some more great tips.

Learning Power BI and Free Stuff


I decided to use some of my time off over the holiday break to catch up on training (check out MS Ignite on-demand).  In particular, I’ve been working on a Power BI dashboard for a customer, and while I get the basics, I wanted to start learning some more advanced skills.  I took some Power BI courses through Lynda (check out my playlist) but wanted some more depth.

I typically like to read books to learn new things, especially ones that have a lot of hands-on examples you can follow.  Then I stumbled on Reza Rad’s website.  He is an MVP who focuses on Power BI and related technologies.  What blew me away was that Reza is offering his book (Power BI: From Rookie to Rock Star) for free.  The book looks like a compilation of various blog entries and some additional content (over 1,000 pages, and kept up to date).  It’s broken into topics that you can follow along with fairly easily.  I did have to install SQL Express and figure out a few things with a newer data set, but was impressed how much actually worked as-is.  Occasionally, I do wish he would go deeper, or it seems like a few steps were skipped, but figuring out how to do things is what really makes you learn it.

The fact that he gives it away is equally interesting.  Why free? He writes:

I never write book for money, I write because I like to get a wider audience in front of me, and tell them about the great product, and best practices of doing things with that and so on. With famous publishers I would definitely get more audience. However when the content be available for free, and online then everyone would read it, search engines would direct audience to this content, and audience range will expand.

If you read his book, you will have no doubts on his experience and subject matter expertise. I do think this marketing strategy is a good way to get your audience hooked and builds credibility quickly – and may lead to spending more for his videos or live training. So, thanks Reza for sharing so much quality info to the Power BI community, this really helps those of us getting started!

The Journey

My daughter is currently applying to colleges, and the experience has been interesting for the both of us.  While doing recon on admissions processes, I came across the blog post “Position vs. Disposition“, by Rick Clark.  While the message has been told many times in many ways – it’s the journey, not the destination – I thought his version, told through his experience, was powerful.  His point is that being rejected by a college should be celebrated (certainly after the sting has gone), as the work you put into pushing yourself to reach a goal is ultimately a reward in itself.

“..while you may not have been given a position in said college, you have earned something no admission letter will ever give you—a disposition formed through growth, maturity, and commitment.”

-Rick Clark

This life message is true in all situations, in work and in personal matters. However sweet the reward is, such as a promotion to a role you’ve wanted for a lifetime, remember that it’s what you did to get there that is the true benefit and stays with you forever.

Exchange Archives

When it comes to Exchange, one of the confusing things for customers is the Exchange Archive feature – especially for customers coming from an existing 3rd-party archiving solution.  When I work with customers who are upgrading to a newer on-premises version, or to Exchange Online, and have a current archiving system in place, the first thing I ask is: what is the current solution used for? Archives are used either for compliance reasons (e.g. retention, records, litigation, legal requirements, etc.) or to extend mailbox capacity (e.g. provide large mailboxes by using lower-cost storage). Occasionally, the archive may serve both functions.

When planning the new end-state design, the question is what to do.  Most customers assume they should just deploy Exchange Online archiving.  This post will give some reasons to reconsider that decision.  [Spoiler] Exchange’s online archive feature has nothing to do with compliance.

Exchange Archive: The Origin Story

in the beginning

The archive feature (the name has changed many times over the years) was first introduced in Exchange 2010.  One of the goals in Exchange 2010 was to provide support for large mailboxes (which, in retrospect, were not all that large compared to Office 365 today!).  The main problem was that Outlook 2010’s cached mode would cache the (whole) mailbox, so rather than rely on a change to the Outlook client, Exchange added the archive feature – an extension to your mailbox that would not be cached.  If you deployed an archive, you could enjoy a very large mailbox and not need to cache all your mail. For on-premises deployments, you could even put the archive on separate storage, or even separate servers. This was great, since really large mailboxes take a very long time on the initial download or if you had to recreate your profile (which for many customers is a standard troubleshooting step).  Also, many laptops were very limited on drive space.

What about compliance features and the online archive?  The online archive actually did not bring any new compliance features with it.  All the compliance features apply to the mailbox – the whole mailbox – not just the primary mailbox.  Any retention or legal hold applied to the person applies to both the primary and the archive, or just the primary mailbox if an archive is not used.  In other words, having an archive added no additional compliance capabilities.  This was true in Exchange 2010, and is still true today.

Why Deploy an Online Archive?

If we don’t get additional features, then why deploy an online archive?

  1. You exceed the capacity of your primary mailbox storage (currently at the time of writing this, Office 365 E3 includes a 100GB primary mailbox)
  2. You have Outlook 2010 (or older) clients and want to have large mailboxes. Given Outlook 2010 is out of support, customers should be doing everything possible to upgrade.

If you have deployed an archive product for addressing mailbox capacity issues, then I strongly recommend that you do not deploy the online archive by default. Why not?

  • Not all mail clients can access the online archive
  • Mobile clients cannot search the online archive
  • It’s more complex and can be confusing to people

In this scenario, just use a large primary mailbox, as Outlook 2013 and newer have the option of setting the amount (based on time) of cached content.  This cache setting effectively works just like having an archive (since content not in your cache is only available while online).


If you deployed an archive product to meet compliance or records-management needs, consider using the native Exchange features such as hold, retention, MRM, and labels.  Keeping all email within Exchange versus an external archive product lets you easily perform content and eDiscovery searches.  Also, it’s much easier to manage your data lifecycle with the mail being in one solution.  I’ll reiterate – these compliance and records features work in Exchange regardless of whether you deploy the Exchange online archive or not.  In other words, you could retire your external archive, use only a primary mailbox, and enable retention policies to continue providing an immutable copy of the person’s mailbox data.

A very common scenario for customers as they move to Office 365 is to ingest all of their 3rd-party archive data and PSTs (local / personal archives) into Office 365.  Given this could be a lot of data, exceeding the 100GB limit, customers migrate this data directly into the online archive.  Exchange Online does offer an unlimited, auto-expanding archive.  Note that for migrations, the archive expansion takes time – so you cannot just import everything at once.  Once the content is in Exchange, retention policies can be applied to all content to start to control your enterprise data and limit risk exposure.

As long as the archive on the source system corresponds to a mailbox, this type of migration is straightforward.  If your archive solution is for journaled mail, typically the archive is not associated with specific mailboxes.  This is much harder to ingest into Exchange, and a better strategy could be to just sunset the journal solution (let it age out) and, moving forward, implement retention and the other compliance features mentioned above.  A nice benefit of using retention over journaling is that journaling only captures email sent and received.  There are scenarios where people share folders to trade messages, which never actually go through transport!

Hopefully this sheds some light and helps you decide when to use the Exchange online archive, how it works, and the benefits / drawbacks if you do plan to use it.