We’ve moved!

I’ve decided to hang up the WordPress blog and start hosting my blog on GitHub. GitHub seems like an easier platform for developer-minded folks like myself, and I’d been getting annoyed with all the ads over here.

Check out my first post over there on patterns for responding to events in SharePoint Online: https://dgusoff.github.io/event-handling-sharepoint/

Email a SharePoint group from a Flow, Part 1

One of the greatest features of Flow is the ability to send emails. But there isn’t a native way to send emails to SharePoint groups. Anyone who’s done substantial work with workflows knows that emailing individual users is fraught with issues. People leave companies or change roles, and if a workflow explicitly names an individual as an email recipient, any personnel change will break existing processes, necessitating rework. The best practice in this situation is to email a group, and manage group membership as needed.

Flow doesn’t offer a native action for this, and based on this Tech Community conversation lots of people are asking for it. Recently Microsoft provided the ability to issue raw REST requests against SharePoint from a Flow, and indeed we can use this pattern to fetch users from a group. Once we have that list of users we can then email them using Flow.

Creating a reusable Flow

I’m going to do something a little different with this Flow. I have a number of situations where I need to email SharePoint groups, and I don’t want to have to do this work every time the requirement comes up. What I’m going to do instead is create a standalone Flow whose only job is to email a group, and which I can call from other Flows.

One way to do this is to author the Flow using an HTTP trigger. That is, the Flow will listen on an HTTP endpoint, and be invoked whenever a request is made to it. The advantage of this trigger is that it can be invoked from pretty much anywhere: another Flow, an Azure Function, a console app, a mobile app, Postman. Anything that can issue web requests can take advantage of this service.

Because we want this Flow to be flexible and configurable, able to email any group on any site in our tenant, we’re going to pass in the name of the group and the site URL to the Flow via JSON. Here’s how the Flow trigger looks so far:

[Screenshot: the HTTP request trigger with its JSON schema]

Use this sample payload to generate the JSON schema:

{
  "siteUrl": "foo",
  "groupName": "foo"
}
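Pasting that payload into the “use sample payload” dialog should generate a schema along these lines (the exact output Flow produces may differ slightly):

```json
{
  "type": "object",
  "properties": {
    "siteUrl": { "type": "string" },
    "groupName": { "type": "string" }
  }
}
```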

Setting up the REST Request

Like I mentioned earlier, Flow gives us the ability to issue REST requests against SharePoint. If you’ve never worked with REST or with web services in general, it might seem a little daunting. But in Flow, the most difficult part of the process, authentication, is already handled for you, so all you have to do is craft the requests and parse the responses. Flows run under the security context of the user who authored the Flow, and the authentication headers will be automatically provided by Flow. (Note – there are some security implications to consider when authoring Flows – I’ll discuss those at the end of this article.)

If you’re not already familiar with the SharePoint REST interface, take a few minutes to read up on the user, group, and role REST documentation from Microsoft: https://msdn.microsoft.com/en-us/library/office/dn531432.aspx.

We have some options around how to specify the group we’re using – either by its name or by its numeric ID. We’ll be using the group name in this example, because it seems like it would be a little more user friendly. Our REST request is going to query for all the users inside the group specified in the request, inside the site specified by the request. It’s going to look like this:

GET /_api/web/sitegroups/getbyname('{groupName}')/users
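As a concrete illustration, assuming a hypothetical site and a group named Approvers, the full request would look something like this (in the SharePoint HTTP action the site address is configured separately, so only the relative portion goes in the Uri field):

```
GET https://mytenant.sharepoint.com/sites/hr/_api/web/sitegroups/getbyname('Approvers')/users
Accept: application/json;odata=verbose
```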

Now, drop a “Send an HTTP request to SharePoint” action onto the design view after the trigger. Set it up like this:

[Screenshot: the “Send an HTTP request to SharePoint” action configuration]

OK, so now we’ve set up our trigger and used the data sent to it to invoke a call to SharePoint’s REST interface, which will return the serialized user data as a string. Next we need to transform that into structured data the Flow can use.

Parse the JSON

Now we’ve retrieved the data from SharePoint representing the group users. But the Flow only sees this as a string, even though it’s JSON structured data. We need to tell the Flow to treat this as structured JSON, and to do this, we need the Parse JSON Action.

So, after the SharePoint HTTP call, drop a Parse JSON action onto your design surface. Set it up to use the Body from the SharePoint HTTP call as its content. For the schema, click the “use sample payload” link and paste this into it:

{
  "d": {
    "results": [
      {
        "Email": "AdeleV@mytenant.OnMicrosoft.com"
      }
    ]
  }
}

So now your Parse JSON action looks like this:

[Screenshot: the Parse JSON action]

Build the recipients string

A collection of recipients in a Flow email action is represented by a semicolon-delimited list of email addresses. Since we now have a JSON array of objects containing these addresses, we need to loop through the results and append each email address, plus a delimiter, to a string variable.

First, let’s create the variable and initialize it to an empty string:

[Screenshot: the Initialize Variable action]

Next we need to loop through the results array. To do this we add an “Apply to Each” action. This action is a little tough to find – you’ll find it in the “more” section when you add an action to the end of your Flow:

[Screenshot: adding the “Apply to Each” action]

As the input to this action, add the output from the Parse JSON action you set up earlier; it should be called “value”. Inside the loop we’ll put an “Append to String Variable” action, appending the “Email” property of the current item, followed by a semicolon.

[Screenshot: the “Append to String Variable” action inside the loop]
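The logic these two actions implement can be sketched in a few lines of Python; the response body and addresses below are made up for illustration:

```python
import json

# Hypothetical response body from the SharePoint REST call, in the same
# shape as the Parse JSON schema (d/results/Email); the addresses are made up.
response_body = json.loads("""
{
  "d": {
    "results": [
      { "Email": "AdeleV@mytenant.OnMicrosoft.com" },
      { "Email": "AlexW@mytenant.OnMicrosoft.com" }
    ]
  }
}
""")

# Mirror of the Flow: initialize an empty string variable, then loop over
# the results array, appending each Email plus a semicolon.
recipients = ""
for user in response_body["d"]["results"]:
    recipients += user["Email"] + ";"

print(recipients)  # AdeleV@mytenant.OnMicrosoft.com;AlexW@mytenant.OnMicrosoft.com;
```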

Set up the email action

Now we’re getting close. We have our delimited recipient string pulled from a live SharePoint group, and we’re ready to wire up the Email action.

Add an “Office 365 Outlook – Send Email” action to the end of your Flow. Add your string variable on the “To” line. Fill out the values for Subject and Body (you can parameterize these as well if you need to. I’ll leave that implementation up to you).

[Screenshot: the “Send an email” action]

Test the Flow

Now everything is wired up, and we can test this out. Since we have an HTTP-triggered Flow, we can use Fiddler or Postman to execute requests directly to the Flow. I like to use the VS Code “REST Client” extension (https://marketplace.visualstudio.com/items?itemName=humao.rest-client) since it’s easy to use and I almost always have VS Code up and running anyway. We can grab the URL from the HTTP Trigger definition:

[Screenshot: the HTTP POST URL on the trigger definition]

And here’s how we wire up the request in VS Code (Clicking “send request” will do what it says):

[Screenshot: the request in VS Code]
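For reference, a REST Client request file would look something like this; the trigger URL below is a made-up example, so paste in the real one you copied from your Flow:

```
POST https://prod-00.westus.logic.azure.com/workflows/{workflow-id}/triggers/manual/paths/invoke?api-version=2016-06-01
Content-Type: application/json

{
  "siteUrl": "https://mytenant.sharepoint.com/sites/hr",
  "groupName": "Approvers"
}
```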

If your Flow and request were wired up successfully, you should get a 202 response back, and you should be able to see the executed Flow in the history section. There we can see the inputs and outputs of each action and whether it succeeded or failed, and usually if we did something wrong it’ll be obvious here.

[Screenshot: the Flow run history]

If your Flow succeeded your recipients ought to have the email in their inboxes.

Call from another Flow

OK, now we have a working Flow that emails a SharePoint group, and we want to reuse it by calling it from other Flows. To do this, add an HTTP request action to the calling Flow. Set up the URL and JSON in that action, run it, and if everything is done right, you’ve got a solution for emailing a SharePoint group that you can use in any of your Flows.

[Screenshot: the HTTP action in the calling Flow]

About those security implications

You should be logged in using a “service account” when authoring Flows. If you create the Flow using your normal user account, three things will happen. First, the emails will appear to come from your user account. Second, the Flow will assume the security context of your account, which means it’ll break if your account’s permissions can’t perform the actions against the specified site, and if you leave the company, all your Flows will break. And third, it will be difficult for your colleagues to maintain, or even find, the Flows you’ve written.

So create a “Flow Author” licensed account for this purpose. You can name it whatever makes sense to your organization.

Thinking ahead

It would be great – and a lot easier to use – if we could wrap this Flow into a Custom Connector rather than manually wiring up the HTTP request. And that’s exactly what we’re going to do in Part 2. Stay tuned!

I’ll be speaking at the SE Michigan PowerApps/Flow User Group Sept 10

I’ll be speaking at the Southeast Michigan PowerApps/Flow user group, September 10 at 5:00, at the Rightpoint offices in Royal Oak.

I’ll be giving a brief introduction to connectors, triggers, and actions in Flow, and talk about how to create your own integrations using raw HTTP actions and converting those into custom actions and connectors. I’ll also demo some real-world implementations of things I’ve done in Flow using this pattern.

Hope to see you there!


SE Michigan PowerApps/Flow User Group meet up

Monday, Sep 10, 2018, 5:00 PM

909 South Main St, Royal Oak, MI


Hello Everyone, Hope you are having a great summer! It’s time for our next meet up on Monday, September 10. We have some exciting topics to talk about. Agenda • Check in/Snacks • Welcome • PowerApps Customer Success story from Rightpoint – Sreeni A We will talk about an app with demo that helps field workers complete a report for the job they worke…



Copy Link in Modern SharePoint – non-obvious security implications you should know about

Recently I encountered a strange issue in a client’s Intranet during the content buildout phase. They’d given read-only access to a group of pilot users, and loaded up their site with pages and links to documents. Then they began to notice that these pilot users appeared to have the ability to delete the documents, and logged a bug with us.

We discovered that the document library had hundreds of files with broken permission inheritance, and that the Everyone principal had been granted Contribute permission on each one of these documents, meaning they could edit and even delete the documents.

Thinking that some rogue user had inadvertently (or “advertently”) shared those documents in error, we ran a script that looped through all the documents in each library and restored the permission inheritance on each one. Then we discovered that the several hundred or so hyperlinks to the documents throughout the system began returning 404s.

Eventually we tracked the issue down to a “feature” of the Copy Link action bar item in Modern SharePoint document libraries. We discovered that Copy Link does a bit more than merely return a link to the document to the user’s clipboard.

The Document Action Bar

My client had been using SharePoint’s Copy Link functionality to create those links, just as we had taught them to. But what we didn’t realize was that clicking Copy Link was actually breaking the security inheritance on the document, and sharing it to the entire company. This was because the tenant settings that drove this functionality were left in their default settings, which inexplicably default to the most permissive – the most insecure – setting.

Check out what happens when you click the button:

Copy Link Dialog

Once you see this dialog, permission inheritance has already been broken and the permission “Anyone with the link can edit” has already been applied. If you select another option, the permission will update – even to the point of reinstating permission inheritance if “People With Existing Access” is selected. Also, the link regenerates, and previously generated links become stale and return 404s.

Copy Link Options

The link structure will tell the sharing story

If a Copy Link operation results in broken inheritance, the link it produces will look different from one that does not.

A Sharing Link looks like this:

…while a non-shared link will look like this:

Note that a sharing link shows the tenant followed by a long string of crap, while the non-sharing link, though it contains its share of trailing junk, seems to incorporate a physical path as part of its structure. Using this pattern you should be able to tell whether a Copy Link resulted in broken inheritance.

My thoughts on this

You have some options for setting the default behavior of this function, but like I said the default default is the most permissive. The decision to have it behave this way vexes me somewhat. In previous versions of SharePoint it’s been difficult and tedious to break permission inheritance through the UI, and I think it ought to be that way. Breaking inheritance should only be done with serious consideration as it’s difficult to support and also has performance implications – a Microsoft employee once told me that breaking inheritance “makes SQL cry”. Maybe in the cloud we care less about performance implications because all that stuff is abstracted away. But it’s still there and I’d have to believe Microsoft cares about its servers. Anyway…

Know your tenant settings

We can manage the tenant-wide default behavior for Copy Link by navigating directly to https://{tenant}-admin.sharepoint.com/_layouts/15/online/ExternalSharing.aspx (substituting your own tenant name).

There are a number of settings related to Sharing on this page but the ones we care about are under the headings “Default Link Type” and “Default Link Permission”. The defaults look like this.

Tenant Settings

Note that in the Copy Link dialog we had four options for how to share the link, while the tenant setting only allows for three, excluding, maddeningly, the “People with Existing Access” option – the one I think should be the default. If we select the “Direct – specific people” option, though, and simply don’t specify any people, the result will be the same.

The “Use shorter links” option only substitutes the “guestaccess.aspx” URL with the cryptic sharing URL we saw earlier; nothing really to see there. The Default Link Permission setting, if set to Read, will at least limit the damage done if files are inadvertently shared to the general population.

Manipulating the settings using PowerShell

Of course these settings can also be set using PowerShell, at both the Tenant and Site Collection level; the Site Collection settings override the Tenant settings for the site in question. Check out the documentation for Set-SPOTenant and Set-SPOSite. The options you want to look into are, on both commands, DefaultSharingLinkType and DefaultLinkPermission. Make sure to check out the other settings related to sharing just to get a feel for how they work.
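As a sketch (the tenant and site URLs below are hypothetical, and this assumes the SharePoint Online Management Shell is installed and connected), the commands would look something like:

```powershell
# Connect to the tenant admin site first
Connect-SPOService -Url https://mytenant-admin.sharepoint.com

# Tenant-wide defaults: "Direct" (specific people) links, read-only permission
Set-SPOTenant -DefaultSharingLinkType Direct -DefaultLinkPermission View

# Override for a single site collection
Set-SPOSite -Identity https://mytenant.sharepoint.com/sites/intranet `
    -DefaultSharingLinkType Direct -DefaultLinkPermission View
```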


The SharePoint Modernization Scanner

While spelunking through GitHub this week I came across a useful tool in the PnP Tools repo that can generate some pretty interesting data about your Office 365 tenant.

It’s called the SharePoint Modernization Scanner, and it claims to grease the skids for your movement to Modern and Group-ification of your existing sites.

The complete source code is there but they’ve also included a direct link to the executable if you’re not interested in building it and just want to run the darn thing, which is what I did against a few of my tenants.

Running the darn thing

The default configuration for the tool uses a Client ID and Secret for a tenant-scoped App to authenticate into the tenant, which is pretty smart because it’s not guaranteed that admin user accounts will have access to all sites, even with policies in place to enforce it (it’s the real world; things happen). So, before you can run this, you’ll want to make sure you have such an app and have its client ID and secret. You can also use normal credentials; just be aware of the access issue.

In order to make it work you’ll need to grab a file called webpartmapping.xml from the source code and drop it into the same directory where you’ve downloaded the executable. Then open a PowerShell session, cd into that directory, and run something like this:

./SharePoint.Modernization.Scanner.exe -t tenantname -i {client_id} -s {client_secret}


The process will run for a while, depending on how much stuff is in your tenant. On one of my tenants with 400 site collections it took about 15 minutes, and when it was done I got a nice collection of CSV files:


With this data we can see every site, its template, the deployed custom actions, and detailed information about every page and web part in the tenant.

Setting VisibilityTimeout using the Azure WebJobs SDK

The new 2.0 version of the NuGet package for the Azure WebJobs SDK, released on February 28, contained a neat little item I wish had existed six months ago: the ability to set a visibility timeout on a queue message without having to implement a custom QueueProcessor.

What’s a Visibility Timeout?

A visibility timeout determines what happens when an Azure WebJob throws an unhandled exception while processing a queue message. When such an event occurs, the message is thrown back onto the queue for a retry, but is left in an “invisible” state for a period of time; the message will not be processed again by the WebJob until the timeout has elapsed. When a WebJob fails to process a queue message and throws an unhandled exception five times (configurable), the message is thrown onto the “poison queue”.

By setting a value for the VisibilityTimeout, you are banking that whatever condition that caused the failure will be rectified by the time the job runs again.

In my case, I am creating Office 365 SharePoint subwebs beneath a root web, meaning the root web must be completely provisioned before the subweb can be created. Since it takes about ten minutes (more or less) to create a site collection, the ten minute value seemed about right. And the documentation seems to imply that the default timeout value is indeed ten minutes. But in practice, my job would just fail spectacularly five times in a row and shuttle off to the poison queue before I even knew what hit it.

Configuring the Timeout

Prior to v2.0 of the Azure WebJobs SDK, you needed to create a custom QueueProcessorFactory, create a class derived from QueueProcessor, and hook it all up in your WebJob’s configuration object.  I’d show you an example, but it’s pointless, because there is now an easier way.

To implement the timeout, first make sure your project’s Nuget package for Microsoft.Azure.WebJobs is updated to version 2.0.0. Then in your Main() method, just set a value for the VisibilityTimeout on the config.Queues object:
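A minimal sketch of what that Main() might look like (the ten-minute value matches the site-provisioning scenario described above; adjust it to whatever makes sense for your failure condition):

```csharp
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Failed messages stay invisible for 10 minutes before being retried.
        config.Queues.VisibilityTimeout = TimeSpan.FromMinutes(10);

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
```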



Now, create an Azure WebJob, process a message, and throw an unhandled exception. In your Queue Explorer in Visual Studio, you’ll see that your messages are there, but not visible. After the timeout elapses they’ll be picked up and processed again.



I’m speaking at the Troy .NET User group tonight

Tonight I will be speaking at the Troy (MI) .NET User Group on the topic of Azure Resource Manager. I’ll be discussing ways to automate deployment of various types of resources into Azure with speed, repeatability, and consistency as primary factors.

Here are the technical prerequisites you’ll need to install in order to follow along with the demos:

Registration and directions here.

Looking forward to seeing you there!

Wireless networking adventures in Windows 8 and Hyper-V

After rebuilding several of my Hyper-V machines on a new machine I remembered what a pain it was to configure networking on them.  For some reason, Hyper-V and wireless network adapters just don’t get along together.

Steve Sinofsky has written the definitive post on this, but he leaves out a key detail, and so does everyone else who parrots posts like that. Not even the Virtual PC Guy had any guidance that solved my problem.

Fortunately a guy in Australia named Simon Waight had the solution, and I’m posting it here for the next time I forget this important little detail.

There are a couple of standard approaches to making wireless networking work with Hyper-V. Ben Armstrong, the Virtual PC Guy, advocates using Internet Connection Sharing, while Sinofsky advised using an External Switch bound to a wireless adapter, which creates a network bridge. I liked the External Switch approach because it seemed simpler and more flexible, but Sinofsky’s advice just didn’t work.

The solution, as told by Waight: right-click the Network Bridge in the adapter settings, and in the Properties window, under ‘The connection uses the following items’, check every checkbox and click OK. After some churning, both host and guest networking came back online.


SharePoint Saturday Michigan – October 5

I am excited to announce that I will be presenting again at this year’s SharePoint Saturday Michigan, to be held on October 5, 2013 at Washtenaw Community College in Ann Arbor.

I will be presenting: “Creating Dynamic Charts on the Client Side with HTML5 and JavaScript”. We’ll discuss the technologies behind embedding charts in web pages, check out a few third party libraries, talk about techniques to pull data from SharePoint, and of course, show some demos.

We’ll also look at browser compatibility considerations and discuss differences between SharePoint versions.

Check out this blog between now and then as I will be posting some supplemental material as the date gets closer.

For more information go here.

Hope to see you there!