Microsoft Flow: First Impressions

Over the last several weeks I’ve had my first experiences using Microsoft Flow in a real-world application. The client has dozens of old 2010-style SharePoint Designer workflows touching a number of business functions: Sales, Procurement, Change Management, and Human Resources. They were looking for a way to modernize their development process and eliminate some of the quirks and irksome bugs that had been plaguing their users. Hearing that the client was looking down the road at moving from on-premises SharePoint Server 2013 to the cloud, I recommended re-writing a number of these processes in Flow instead of SPD.


Flow is part of Microsoft’s new cloud-based platform for process modeling, for lack of a better phrase. The idea is that non-developers can use Flow’s intuitive user interface to build robust integrations between their line-of-business applications with no code anywhere to be found.

At first glance, Flow seems to be a huge improvement over the experience of building workflows in SharePoint Designer. For starters, it’s web-based, so there’s nothing to install. Flow comes with an impressive array of standard integration points (“connectors”), a handful of entry points (“triggers”), and hundreds of pre-defined activities you can configure (“actions”). By dragging and dropping widgets onto the control surface and setting up some basic properties, power users can create powerful applications without having to rely on developers or IT to set things up for them.

Here are a few quick takeaways from my experiences so far.

Low expectations

SharePoint Designer workflows come with so much baggage that it’s difficult to imagine preferring them to any succeeding technology that comes along. So the bar here is low.

Wide range of capability

Flow’s range of out-of-the-box integration points is very impressive, and they keep adding new connectors and actions all the time. There’s even an extension model where you can create your own and submit them for inclusion in the platform.

More of a consumer focus

Many of the integration points, though, don’t seem to make a lot of sense in most enterprise scenarios; Twitter, Facebook, and Gmail are a few such connectors. And many of the starter templates are more in the personal productivity realm. For example:

  • Text me when I get an email from my boss
  • Email me when a new item shows up in a SharePoint list
  • Start a simple approval process on a document when it’s posted
  • Save Tweets to an Excel file
  • Send me an email reminder every day


Easy to extend

It’s really simple to create extension points in Flow. Suppose you need to do something that isn’t supported by a Flow action. If you can code, you can write an API to do what you need and call it via an HTTP action. Azure Functions work really well for this. In fact, the HTTP action is the most powerful thing in Flow; you can even use it to trigger other Flows from within a Flow.
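As a sketch of the kind of endpoint an HTTP action could call, here’s a minimal stand-in API in Python. This is illustrative only: the handler name, route, and payload shape are my inventions, and in practice you’d host this logic in an Azure Function rather than `http.server`.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlowHandler(BaseHTTPRequestHandler):
    """Stand-in for the kind of API a Flow HTTP action might POST to."""

    def do_POST(self):
        # Read the JSON body the Flow HTTP action sends
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        # Do whatever Flow can't do natively; here we just echo a result back
        body = json.dumps(
            {"status": "ok", "itemId": payload.get("itemId")}
        ).encode("utf-8")

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port=8080):
    """Run the stand-in API; Flow's HTTP action would call its public URL."""
    HTTPServer(("127.0.0.1", port), FlowHandler).serve_forever()
```

On the Flow side you’d point the HTTP action’s URI at the deployed endpoint and map list-item fields into the JSON body.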

Approvals are not fully baked

If you’re building approval workflows and are expecting something like the SharePoint Designer model, you’ll be disappointed. An Approval in Flow consists of an email and two buttons, nothing more. There is no concept of setting a status on an item, no functionality for logging (unless you roll it yourself), and no notion of tasks. It changes the way you think about approvals in general, because the old model just doesn’t apply here.

The Designer does not scale

For a simple two- or three-step flow, the designer works great. Add a couple of nested if/else blocks (“conditions”), or more than a half-dozen or so actions, and you’ll find that the design surface is totally unsuited to the task. Scroll bars are in difficult-to-find places, and it’s often next to impossible to maintain your context when trying to move around within a Flow.

Sometimes saving a Flow will trigger a phantom validation error, and you’ll have to expand every one of your actions until you find the offending statement, because the Flow team hasn’t seen fit to provide any feedback on where the failure occurred. And sometimes, especially when working with variables, validation will fail even though the variable is properly configured.

No Code view

As clunky as the designer gets, if you’re a developer you might be more comfortable just coding your Flow the old-fashioned way; after all, it’s just JSON under the hood. But alas, code view is not available in Flow. The design view is all you have.
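For the curious, that JSON is a Logic Apps-style workflow definition. The fragment below is only my approximation of the shape; the action name, URI, and schema details are illustrative, not something you can paste anywhere:

```json
{
  "actions": {
    "Call_my_API": {
      "type": "Http",
      "inputs": {
        "method": "POST",
        "uri": "https://example.azurewebsites.net/api/DoWork",
        "body": {
          "itemId": "@triggerBody()?['ID']"
        }
      },
      "runAfter": {}
    }
  }
}
```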

Another implication of this: If you have places in your Flows where there are large blocks of similar functionality, you have no option to copy blocks of code and modify to suit. You’re stuck having to re-create those similar blocks of functionality, manually, in the designer, every single time. Believe me, this gets old really fast.

No versioning

If you make a change to your Flow and somehow break it, well, that’s tough, you’d better figure it out because there’s no rolling back.

Clearly Flow is not the magic bullet of the enterprise process-modeling world. It has its quirks and its pitfalls. But remember, the bar is low due to the legacy tooling it replaces. SharePoint Designer workflows share many of the same deficiencies as Flow: a clumsy design experience (check), an inability to edit code directly (for all practical purposes), and no rollback model (technically possible in SPD via version history, but janky as hell).

Given that SPD has had its ten-plus years in the limelight, and Flow is a brand-new V1 product with an engaged product team, I’d say the future looks bright for Flow.


Using Azure VMs for long-running jobs

From time to time I have to perform long-running deployment jobs against remote environments (usually Office 365), and I’ve hit upon the idea of running these from virtual machines hosted in Azure.

For example, today I had to push new master pages to 800 site collections in SharePoint Online, and the client requested I begin the deployment after hours. Now, pushing 1,600 master pages (2 per site) via remote code takes somewhere around four hours; there’s no way I’m staying at the office until 9 PM just so my laptop can stay running while I babysit that deployment.
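A quick sanity check on those numbers (pure arithmetic, nothing Azure-specific):

```python
site_collections = 800
pages_per_site = 2
deployment_hours = 4  # rough duration of the remote-code push

total_pages = site_collections * pages_per_site
seconds_per_page = deployment_hours * 3600 / total_pages

print(total_pages)       # 1600
print(seconds_per_page)  # 9.0 -- seconds of remote-call overhead per master page
```

Nine seconds per page of round-trip overhead is why this kind of job drags on for hours no matter how fast your laptop is.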

Using an Azure Virtual Machine allows me to begin the deployment process on the remote machine, shut down my laptop, drive home, and have dinner with my family, all the while the deployment is running happily in Azure. I thought I would share the process and specifications I use for my machine and also share some tips for using Azure VMs in this fashion.

As an MSDN subscriber (thanks, Rightpoint!) I have access to an Azure subscription with $150 of credit per month. That amount lets you do a TON of stuff in the PaaS space, but when using virtual machines the dollars add up really quickly, so it pays to be smart and careful about how you spin up and use these machines.

Creating the machine

Azure comes with a wide variety of pre-configured virtual machines with operating systems and certain software packages (“workloads” in cloud lingo) available off the shelf. I needed a VM with Visual Studio 2015, and happily, a number of Visual Studio-powered VMs are available out of the gate. A search for “Visual Studio” on the Azure Portal yields the selections:


In my case, I wanted the mature 2015 version of Visual Studio, I needed the Azure SDK, and I had no use for a server OS, so I chose:


On the Create Virtual Machine blade I set up some basic settings:




A few points here:

  • You’ll set up your admin credentials on this screen. You can’t use a “typical” admin user name like “administrator” or “admin”. However, the username “derek” works just fine. Passwords need to be at least 12 characters as well.
  • You’ll be prompted to select either an SSD or a normal hard drive “HDD”. Keep in mind that SSDs are going to cost more, but if your work requires a speedy disk, that option is available.


Click “View All” on the Size blade and you’ll see a bewildering array of choices, many of which seem startlingly similar, along with an estimated monthly cost for the compute resources this baby will consume, assuming it runs constantly.

There’s a lot of nuance between the different VM series, and frankly I’m not the best person to explain it all.  But I ended up choosing the D2_V2 Standard for my machine:


For more information on Azure VM size choices and what it all actually means, check out the documentation.

Now, 108 bucks per month is nearly 3/4 of my entire Azure allotment, and when combined with all the other stuff I have running, it would easily put me over the spending limit on my account. But we have ways to mitigate this and bring the total spend down to a small fraction of that number, which we’ll get to momentarily.


I won’t go into screen-shotting the rest of the wizard, but on the following screens you get to configure a few other settings on your machine. There’s a bunch of stuff you probably don’t need to worry about for dev/deployment machines (like high availability and diagnostics), but you’ll want to understand the networking part at least. Your machine will be part of a (virtual) subnet on a (virtual) network, and you’ll have a NIC with a public IP address. You can configure the public IP address as dynamic or static, although keep in mind there is a cost associated with a public IP address.

Accessing your machine

Once your machine is finished provisioning, you can access it in the Azure portal, and you’ll see a “connect” button at the top of the Overview page.


Clicking the Connect button will download an RDP file to your computer, which you can use to remote into your machine. If you’ve set up the IP address as dynamic, you’ll probably need to do this every time you access the machine, as those IP addresses are not “sticky” in my experience, at least across reboots.

Managing your machine (and your money)

The machine I set up costs over a hundred dollars a month to run, and naturally I don’t want to use up all of my cloud bucks on a single resource. Fortunately, Azure VMs are only billable while “allocated”, and we can de-allocate them when not in use. Note that a machine can be stopped but still “allocated”; whether it stays allocated depends on how the machine was shut down. To be specific: DO NOT shut down the machine from within the machine itself (Start -> Shut Down). Instead, shut down the machine using the Azure Portal (or the Cloud Explorer, or Azure PowerShell).

In other words –


When your machine is fully de-allocated, you want to see this in your VMs listing:


This way you know it’s not accruing charges.
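If you’d rather script the shutdown than click through the portal, Azure PowerShell handles the de-allocation too. A sketch, assuming the AzureRM cmdlets and with placeholder resource-group and VM names:

```powershell
# Stops AND de-allocates the VM (Stop-AzureRmVM de-allocates by default);
# -Force skips the confirmation prompt
Stop-AzureRmVM -ResourceGroupName "my-resource-group" -Name "my-dev-vm" -Force

# Double-check: the status output should show the power state as "VM deallocated"
Get-AzureRmVM -ResourceGroupName "my-resource-group" -Name "my-dev-vm" -Status
```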

If your machine is only in an allocated state while it’s being used, you’re only going to get charged for those hours. That D2_V2 machine works out to about 20 cents per hour, so if I ran it ten hours a month, it would only cost me a couple of bucks. If I were to use it as my everyday dev machine, eight hours per day, 20 days a month, it’s still only around 32 dollars a month, easily manageable within my Azure subscription limit alongside the other PaaS stuff I have going on. Or, I could spend a little more and get access to a beefier machine, for example, maybe this one:


Sure, I’d run out of hours after four days of continuous running (or 12 eight-hour days), but think of how productive I’d be!
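The pay-per-allocated-hour math above is simple enough to sketch out; the $0.20/hour figure is the approximate D2_V2 rate quoted earlier, not an official price:

```python
hourly_rate = 0.20       # approximate D2_V2 rate, USD per allocated hour
monthly_credit = 150.00  # MSDN Azure credit

def monthly_cost(hours_per_day, days_per_month, rate=hourly_rate):
    """Cost of a VM that is only allocated while actually in use."""
    return hours_per_day * days_per_month * rate

light_use = monthly_cost(hours_per_day=10, days_per_month=1)  # "ten hours a month"
daily_dev = monthly_cost(hours_per_day=8, days_per_month=20)  # everyday dev machine

print(light_use)  # 2.0  -> a couple bucks a month
print(daily_dev)  # 32.0 -> well under the monthly credit
```

The whole trick to making VMs affordable on an MSDN subscription is keeping `hours_per_day` honest by de-allocating religiously.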