Microsoft Flow: First Impressions

Over the last several weeks I’ve had my first experiences using Microsoft Flow in a real-world application. The client has dozens of old 2010-style SharePoint Designer workflows touching a number of business functions (Sales, Procurement, Change Management, and Human Resources), and they were looking for a way to modernize their development process and eliminate some of the quirks and irksome bugs that have been plaguing their users. Since the client was also looking down the road at moving from on-premises SharePoint Server 2013 to the cloud, I recommended re-writing a number of these processes in Flow instead of SPD.


Flow is part of Microsoft’s new cloud-based platform for process modeling, for lack of a better phrase. The idea is that non-developers can use Flow’s intuitive user interface to build robust integrations between their line-of-business applications with no code anywhere to be found.

At first glance, Flow seems to be a huge improvement over the experience of building workflows in SharePoint Designer. For starters, it’s web-based, so there’s nothing to install. Flow comes with an impressive array of standard integration points (“connectors”), a handful of entry points (“triggers”), and hundreds of pre-defined activities you can configure (“actions”). By dragging and dropping widgets onto the control surface and setting up some basic properties, power users can create powerful applications without having to rely on developers or IT to set things up for them.

Here are a few quick takeaways from my experiences so far.

Low expectations

SharePoint Designer workflows come with so much baggage that it’s difficult to imagine preferring them to any successor technology that comes along. So the bar here is low.

Wide range of capability

Flow’s range of out-of-the-box integration points is very impressive, and they keep adding new connectors and actions all the time. There’s even an extension model where you can create your own and submit them for inclusion in the platform.

More of a consumer focus

Many of the integration points, though, don’t seem to make a lot of sense in most enterprise scenarios; Twitter, Facebook, and Gmail are a few such connectors. And many of the starter templates are more in the personal productivity realm. For example:

  • Text me when I get an email from my boss
  • Email me when a new item shows up in a SharePoint list
  • Start a simple approval process on a document when it’s posted
  • Save Tweets to an Excel file
  • Send me an email reminder every day


Easy to extend

It’s really simple to create extension points in Flow. Suppose you need to do something that isn’t supported by a Flow action: if you can code, you can write an API to do what you need and call it via an HTTP action. Azure Functions work really well for this. In fact, the HTTP action is the most powerful thing in Flow. You can even use it to trigger other Flows from within a Flow.
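To make that concrete, here’s a rough sketch of what a C# script (run.csx) Azure Function sitting behind an HTTP action might look like. The function body, the listTitle field, and the response shape are invented for illustration; they’re not from the client project.

    using System.Net;

    // Flow's HTTP action POSTs a JSON body to this endpoint and waits for the response.
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // Read the JSON body the Flow sent us
        dynamic data = await req.Content.ReadAsAsync<object>();
        string listTitle = data?.listTitle;   // hypothetical field, just for illustration

        log.Info($"Flow asked us to process list: {listTitle}");

        // ...do the thing Flow can't do natively...

        // Return JSON the Flow can parse in subsequent actions
        return req.CreateResponse(HttpStatusCode.OK, new { status = "done", listTitle });
    }

From the Flow side, you add an HTTP action, point it at the function’s URL, and pass whatever JSON body the function expects; later actions can then parse the response.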

Approvals are not fully baked

If you’re building approval workflows and are expecting them to work the way SharePoint Designer does, you’ll be disappointed. An Approval in Flow consists of an email and two buttons, nothing more. There is no concept of setting a status on an item, no functionality for logging (unless you roll it yourself), and no notion of tasks. It changes the way you think about approvals in general, because the old model just doesn’t apply here.

The Designer does not scale

For a simple two- or three-step flow, the designer works great. Add a couple of nested if/else blocks (“conditions”), or more than a half-dozen actions, and you’ll find that the design surface is totally unsuited to the task. Scroll bars are in difficult-to-find places, and it’s often next to impossible to maintain your context when trying to move around within a Flow.

Sometimes saving a Flow will trigger a phantom validation error, and you’ll have to expand every one of your actions until you find the offending statement, because the Flow team has not seen fit to provide any sort of feedback on where the failure occurred. In addition, validation will sometimes fail, especially when working with variables, even though the variable is properly configured.

No Code view

As clunky as the designer gets, if you’re a developer you might be more comfortable just coding your Flow the old-fashioned way; after all, it’s just JSON under the hood. But alas, there is no code view in Flow. The design view is all you have.

Another implication of this: if your Flows contain large blocks of similar functionality, you have no option to copy a block and modify it to suit. You’re stuck re-creating those similar blocks of functionality, manually, in the designer, every single time. Believe me, this gets old really fast.

No versioning

If you make a change to your Flow and somehow break it, well, that’s tough: you’d better figure it out, because there’s no rolling back.

Clearly Flow is not a magic bullet in the enterprise process modeling world. It has its quirks and its pitfalls. But remember, the bar is low due to the legacy application it replaces. SharePoint Designer workflows share many of the same deficiencies as Flow: a clumsy design experience (check), an inability to edit code directly (for all practical purposes), and no rollback model (technically possible in SPD via version history, but janky as hell).

Given that SPD has had its ten-plus years in the limelight, and Flow is a brand-new V1 product with an engaged product team, I’d say the future looks bright for Flow.


Setting VisibilityTimeout using the Azure WebJobs SDK

The new 2.0 version of the NuGet package for the Azure WebJobs SDK, released on February 28, contains a neat little item that I wish had existed six months ago: the ability to set a visibility timeout on a queue message without having to implement a custom QueueProcessor.

What’s a Visibility Timeout?

A visibility timeout determines what happens when an Azure WebJob throws an unhandled exception while processing a queue message. When that happens, the message is thrown back onto the queue for a retry, but it is left in an “invisible” state for a period of time; the WebJob will not process the message again until the timeout has elapsed. When a WebJob fails to process a queue message and throws an unhandled exception five times (configurable), the message is thrown onto the “poison queue”.

By setting a value for the VisibilityTimeout, you are banking that whatever condition caused the failure will be rectified by the time the job runs again.

In my case, I am creating Office 365 SharePoint subwebs beneath a root web, meaning the root web must be completely provisioned before the subweb can be created. Since it takes about ten minutes to create a site collection, a ten-minute value seemed about right. And the documentation seems to imply that the default timeout value is indeed ten minutes. But in practice, my job would just fail spectacularly five times in a row, and the message would shuttle off to the poison queue before I even knew what hit it.

Configuring the Timeout

Prior to v2.0 of the Azure WebJobs SDK, you needed to create a custom QueueProcessorFactory, create a class derived from QueueProcessor, and hook it all up in your WebJob’s configuration object.  I’d show you an example, but it’s pointless, because there is now an easier way.

To implement the timeout, first make sure your project’s NuGet package for Microsoft.Azure.WebJobs is updated to version 2.0.0. Then, in your Main() method, just set a value for the VisibilityTimeout on the config.Queues object:

[Screenshot: setting VisibilityTimeout in the WebJob’s Main() method]

(Yes, I know I’ve committed the cardinal sin of displaying code in a screen shot, but hey, it’s one line.)
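For the copy-and-paste crowd, that one line (with the standard WebJobs host boilerplate around it) amounts to something like the following; the ten-minute value is simply what made sense for my scenario:

    using System;
    using Microsoft.Azure.WebJobs;

    class Program
    {
        static void Main()
        {
            var config = new JobHostConfiguration();

            // Failed messages stay invisible for ten minutes before being retried
            config.Queues.VisibilityTimeout = TimeSpan.FromMinutes(10);

            // MaxDequeueCount (default 5) is the related knob that controls how many
            // failures it takes before a message lands in the poison queue
            // config.Queues.MaxDequeueCount = 5;

            var host = new JobHost(config);
            host.RunAndBlock();
        }
    }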

Now, create an Azure WebJob, process a message, and throw an unhandled exception. In your Queue Explorer in Visual Studio, you’ll see that your messages are there, but not visible. After the timeout elapses they’ll be picked up and processed again.

[Screenshot: the Queue Explorer in Visual Studio, with messages present but not visible]
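If you want to reproduce the behavior, a minimal queue-triggered function that always fails might look something like this (the queue name and exception are just placeholders):

    using System;
    using System.IO;
    using Microsoft.Azure.WebJobs;

    public class Functions
    {
        // Any unhandled exception puts the message back on the queue,
        // invisible until the VisibilityTimeout elapses.
        public static void ProcessQueueMessage([QueueTrigger("test-queue")] string message, TextWriter log)
        {
            log.WriteLine($"Processing: {message}");
            throw new InvalidOperationException("Simulated failure");
        }
    }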


Using Azure VMs for long-running jobs

From time to time I have to perform long-running deployment jobs against remote environments (usually Office 365), and I’ve hit upon the idea of running these from virtual machines hosted in Azure.

For example, today I had to push new master pages to 800 site collections in SharePoint Online, and the client requested that I begin the deployment after hours. Now, pushing 1600 master pages (2 per site) via remote code takes somewhere around four hours; there’s no way I’m staying at the office until 9 PM just so my laptop can stay running while I babysit that deployment.

Using an Azure virtual machine allows me to begin the deployment process on the remote machine, shut down my laptop, drive home, and have dinner with my family, all while the deployment runs happily in Azure. I thought I would share the process and specifications I use for my machine, along with some tips for using Azure VMs in this fashion.

As an MSDN subscriber (thanks Rightpoint!) I have access to an Azure subscription with $150 of credit per month. That amount lets you do a TON of stuff in the PaaS space, but when using virtual machines the dollars add up really quickly, so it pays to be smart and careful about how you spin up and use these machines.

Creating the machine

Azure comes with a wide variety of pre-configured virtual machines with operating systems and certain software packages (“workloads” in cloud lingo) available off the shelf. I needed a VM with Visual Studio 2015, and happily, a number of Visual Studio-powered VMs are available out of the gate. A search for “Visual Studio” on the Azure Portal yields these selections:

[Screenshot: Azure Portal search results for “Visual Studio” VM images]

In my case, I wanted the mature 2015 version of Visual Studio, I needed the Azure SDK, and I had no use for a server OS, so I chose:

[Screenshot: the selected Visual Studio 2015 VM image]

On the Create Virtual Machine blade I set up some basic settings:

Basics

[Screenshot: the Basics blade]


A few points here:

  • You’ll set up your admin credentials on this screen. You can’t use a “typical” admin user name like “administrator” or “admin”. However, the username “derek” works just fine. Passwords need to be at least 12 characters as well.
  • You’ll be prompted to select either an SSD or a normal hard drive (“HDD”). Keep in mind that SSDs are going to cost more, but if your work requires a speedy disk, that option is available.

Size

Click “View All” on the Size blade and you’ll see a bewildering array of choices, many of which seem startlingly similar, along with an estimated monthly cost of the compute resources this baby will consume, assuming it runs constantly.

There’s a lot of nuance between the different VM series, and frankly I’m not the best person to explain it all.  But I ended up choosing the D2_V2 Standard for my machine:

[Screenshot: the D2_V2 Standard size selection]

For more information on Azure VM size choices and what it all actually means, check out the documentation.

Now, 108 bucks per month is nearly 3/4 of my entire Azure allotment, and when combined with all the other stuff I have running, it would easily put me over the spending limit on my account. But we have ways to mitigate this and bring the total spend down to a small fraction of that figure, which we’ll get to momentarily.

Settings

I won’t go into screen-shotting the rest of the wizard, but on the following screens you get to configure a few other settings on your machine. There’s a bunch of stuff you probably don’t need to worry about for dev/deployment machines (like high availability and diagnostics), but you’ll want to understand the networking part at least. Your machine will be part of a (virtual) subnet on a (virtual) network, and you’ll have a NIC with a public IP address. You can configure a dynamic or static public IP address, although keep in mind there is a cost associated with a public IP address.

Accessing your machine

Once your machine is finished provisioning, you can access it in the Azure portal, and you’ll see a “connect” button at the top of the Overview page.

[Screenshot: the Connect button at the top of the VM’s Overview page]

Clicking the Connect button will download an RDP file to your computer, which you can use to remote into your machine. If you’ve set up the IP address as dynamic, you’ll probably need to do this every time you access the machine, as those IP addresses are not “sticky” in my experience, at least across reboots.

Managing your machine (and your money)

The machine I set up costs over a hundred dollars a month to run, and naturally I don’t want to use up all of my cloud bucks on a single resource. Fortunately, Azure VMs are only billable when “allocated”, and we can de-allocate them when not in use. Note that a machine can be stopped but still “allocated”; which state it lands in depends on how the machine was shut down. To be specific, DO NOT shut down the machine from within the machine itself (Start -> Shut Down). Instead, shut down the machine using the Azure Portal (or the Cloud Explorer, or Azure PowerShell).

In other words –

[Image: stop the machine from the Azure Portal, not from inside Windows]

When your machine is fully de-allocated, you want to see this in your VMs listing:

[Screenshot: the VMs listing showing the machine in a “Stopped (deallocated)” state]

This way you know it’s not accruing charges.

If your machine is only in an allocated state when it’s being used, you’re only going to get charged for those hours. That D2_V2 machine works out to about 20 cents per hour, so if I ran it ten hours a month, it would only cost me a couple of bucks. If I were to use it as my everyday dev machine, eight hours per day, 20 days a month, it’s still around 32 dollars a month, easily manageable within my Azure subscription limit alongside the other PaaS stuff I have going on. Or, I could spend a little more and get access to a beefier machine, for example, maybe this one:

[Screenshot: a larger, more expensive VM size]

Sure, I’d run out of hours after four days of round-the-clock use (or 12 eight-hour days), but think of how productive I’d be!

I’m speaking at the Troy .NET User Group tonight

Tonight I will be speaking at the Troy (MI) .NET User Group on the topic of Azure Resource Manager. I’ll be discussing ways to automate deployment of various types of resources into Azure with speed, repeatability, and consistency as primary factors.

Here are the technical prerequisites you’ll need to install in order to follow along with the demos:

Registration and directions here.

Looking forward to seeing you there!

Wireless networking adventures in Windows 8 and Hyper-V

After rebuilding several of my Hyper-V machines on a new computer, I remembered what a pain it was to configure networking on them. For some reason, Hyper-V and wireless network adapters just don’t get along.

Steve Sinofsky has written the definitive post on this, but he leaves out a key detail, and so does everyone else who parrots posts like that. Not even the Virtual PC Guy had any guidance that actually solved my problem.

Fortunately a guy in Australia named Simon Waight had the solution, and I’m posting it here for the next time I forget this important little detail.

There are a couple of standard approaches to making wireless networking work with Hyper-V. Ben Armstrong, the Virtual PC Guy, advocates using Internet Connection Sharing, while Sinofsky advised using an External Switch bound to a wireless adapter, which creates a network bridge. I liked the External Switch approach because it seemed simpler and more flexible, but Sinofsky’s advice alone just didn’t work.

The solution, as told by Waight: right-click the Network Bridge in the adapter settings, and in the Properties window, under ‘The connection uses the following items’, check every checkbox and click OK. After some churning, both the host and client networking come back online.

[Screenshot: the Network Bridge properties dialog with every item checked]

SharePoint Saturday Michigan – October 5

I am excited to announce that I will be presenting again at this year’s SharePoint Saturday Michigan, to be held on October 5, 2013 at Washtenaw Community College in Ann Arbor.

I will be presenting: “Creating Dynamic Charts on the Client Side with HTML5 and JavaScript”. We’ll discuss the technologies behind embedding charts in web pages, check out a few third party libraries, talk about techniques to pull data from SharePoint, and of course, show some demos.

We’ll also look at browser compatibility considerations and discuss differences between SharePoint versions.

Check back here between now and then; I’ll be posting some supplemental material as the date gets closer.

For more information go here.

Hope to see you there!