Send and Receive Text Messages from Microsoft Flow

As the software development landscape matures, levels of abstraction keep getting higher. Every generation of technology raises the starting point a notch, enabling developers to create more and more interesting things while “boilerplate” or setup tasks become increasingly automated.

Nowadays, with tools like Flow and Azure Logic Apps, we can create some pretty cool integrated solutions without even having to write code.

Microsoft Flow is becoming a go-to no-code tool for authoring business process automations in the Microsoft ecosystem, replacing the old SharePoint Designer tool we all hated so much. In this blog post we’ll explore sending and receiving text messages via Microsoft Flow and integrating them into other systems to create fast, mobile-ready business applications.

What you’ll need

You’ll need a few things set up in order to follow along:

  • You’ll need an Office 365 license that includes Flow and Email
  • You’ll need a Twilio account. Twilio is a paid service, so you’ll need to pay to use it in real applications, but the free trial comes with (I believe) $15 worth of credit, enough to send and receive a couple thousand text messages, so you don’t have to commit to anything until you’re sure about the solution.

Building the solution

We’ll follow four steps to build out our sample Flow:
1. Create a shell HTTP-triggered Flow
2. Set up a webhook for incoming texts against our Twilio number
3. Send ourselves an email with the text content
4. Send ourselves a text message repeating the incoming message back to us

Create an HTTP-Triggered Flow

Go to Flow and create a new Flow with an HTTP trigger. Use the Create From Blank template and search using “HTTP” until you find the “When an HTTP request is received” trigger, and select it.

[Screenshot: selecting the “When an HTTP request is received” trigger]

Our trigger will populate on the design surface. We won’t be able to see the trigger’s URL until after we save the Flow, and we don’t know what the POST body schema is going to be yet, so we’ll leave that blank for now. Also, we need at least one action before we can save the Flow, so let’s create a variable to hold the POST body of the trigger request. We’ll use it to generate our schema after we invoke our webhook for the first time. I used the Initialize Variable action, named the variable “Body”, set its data type to Object, and initialized it to the “Body” element from the trigger request.

[Screenshot: the Initialize Variable action holding the trigger Body]

Now give your Flow a name and save it. I named mine “Twilio Trigger”. Once the Flow saves, the URL is populated in the trigger; copy it to your clipboard, as we’ll use it in the next step.

[Screenshot: the saved Flow with the trigger URL populated]

Set up our Webhook

Once you’ve set up your Twilio trial account, you’ll need to create an SMS project and a phone number. I don’t find Twilio’s user interface easy to use, and I usually have to stumble around a bit before I can get anything done, but the gist is: create a new project of type “Programmable SMS” and create a new phone number inside that project. To get a number, go to https://www.twilio.com/console/phone-numbers/incoming and click the plus sign to obtain one. Once you have the number, go ahead and click it.

Note: for trial accounts, you will only be able to send and receive texts between your Twilio number and the phone you use to set up the trial.

Once you’ve clicked the number, find the “Messaging” section and look for the “A message comes in” line. Paste your Flow URL into the text box, leave the defaults on the two dropdowns (“Webhook” and “Post”), and click Save.

[Screenshot: the “A message comes in” webhook configuration in Twilio]

Your webhook is now pointing to your new Flow. Send a text message to the Twilio number from the phone you used to set up your trial, and navigate to the Flow you built earlier. If you’ve done everything correctly, you should see one successful run of your Flow.

[Screenshot: a successful run in the Flow run history]

If you do not see any runs, then your webhook is misconfigured. If you see a run and it’s failed, then something has gone wrong with the Flow.

Complete the Flow

Assuming all went well, you can drill into the Flow run and see the POST data sent to the Flow, captured in the Body variable we set up.

[Screenshot: the POST data captured in the Body variable]

Next we’ll copy that text and use it to generate our expected body schema for the HTTP Trigger. Put the Flow in edit mode, open up the HTTP trigger, click “Use sample payload to generate schema”, and paste in the text from the Body variable:

[Screenshot: generating the schema from a sample payload]

That body JSON looks something like this:

{
  "$content-type": "application/x-www-form-urlencoded",
  "$content": "VG9Db3VudHJ5PVVTJlRvU3RhdGU9REMmUU2MDlhN2EyM2IwOWQ4MjQxNmU4NTg3OCZBY2NvdW50U2lkPUFDOGNlOTQwOTM1ZDU5MDBjY2YxNzdmMDQzZjc4NDgyYWEmRnJvbT0lMkIxNTE3Mjk1OTgwNiZBcGlWZXJzaW9uPTIwMTAtMDQtMDE=",
  "$formdata": [
    { "key": "ToCountry", "value": "US" },
    { "key": "ToState", "value": "DC" },
    { "key": "SmsMessageSid", "value": "SM11d18181e323232323d82416e85234" },
    { "key": "NumMedia", "value": "0" },
    { "key": "ToCity", "value": "" },
    { "key": "FromZip", "value": "" },
    { "key": "SmsSid", "value": "SM11d18181e609a74244242416e85987" },
    { "key": "FromState", "value": "MI" },
    { "key": "SmsStatus", "value": "received" },
    { "key": "FromCity", "value": "" },
    { "key": "Body", "value": "Hello there" },
    { "key": "FromCountry", "value": "US" },
    { "key": "To", "value": "+12028738201" },
    { "key": "ToZip", "value": "" },
    { "key": "NumSegments", "value": "1" },
    { "key": "MessageSid", "value": "SM11d18181e21212121212416e85456" },
    { "key": "AccountSid", "value": "ACxxxxxxxxxxxxxxxxxxa" },
    { "key": "From", "value": "+15172953322" },
    { "key": "ApiVersion", "value": "2010-04-01" }
  ]
}

Notice that all the data we care about, namely the sender and the message itself, is buried in an array of key-value pairs. To get at this data from our Flow, we need to loop through the pairs until we find what we’re looking for. It’s not ideal, but it’ll work: loop through the keys, look for the one named “Body”, and store its value in a variable.

We’ll add a condition step, and inside the “choose a value” box we’ll select “key” from the Dynamic Content selector. When we do this, Flow will automatically wrap the condition inside an Apply To Each iterator. When “key” equals “Body” we’ll store the value in a string variable, and then outside the loop we’ll email the text message to ourselves, completing the Flow.
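
If it helps to see that logic as code, here’s a rough C# equivalent of what the Apply To Each is doing, using Json.NET and a trimmed-down stand-in for the trigger body:

using System;
using Newtonsoft.Json.Linq;

class FormDataScan
{
    static void Main()
    {
        // Trimmed-down stand-in for the trigger body shown above.
        var body = JObject.Parse(@"{
            '$formdata': [
                { 'key': 'From', 'value': '+15172953322' },
                { 'key': 'Body', 'value': 'Hello there' }
            ]
        }");

        // The same linear scan the Apply To Each performs:
        // visit each pair and keep the value whose key is 'Body'.
        string smsText = null;
        foreach (var pair in body["$formdata"])
        {
            if ((string)pair["key"] == "Body")
            {
                smsText = (string)pair["value"];
            }
        }

        Console.WriteLine(smsText); // "Hello there"
    }
}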

Here are the actions to complete the Flow:

[Screenshot: the condition inside the Apply To Each, testing whether key equals Body]

[Screenshot: setting the string variable to the matching value]

[Screenshot: the Send an email action]

Testing

Now let’s send another text to the Twilio number. If all goes well, the Flow will trigger again and light up green, and you’ll have the text content in your email inbox.

Sending Text Messages

Sending a text message is much more straightforward. There is a standard Twilio connector in Flow; you just need to set up the connection parameters before fleshing out the action. To do this, grab your Account SID and auth token from the Twilio dashboard at https://www.twilio.com/console.

[Screenshot: the Account SID and Auth Token on the Twilio dashboard]

Setting up the Twilio connector

The first time you set up any Twilio integration in Flow, you’ll be prompted to flesh out your connection with the Account SID and auth token. Add a new action, search for “Twilio”, and select the “Send Text Message” action.

[Screenshot: the Twilio “Send Text Message” action]

If this is the first Twilio integration you’ve done on your tenant, Flow will prompt you to set up your connection. Give it a name and enter your Account SID and Auth Token from the Twilio portal.

[Screenshot: configuring the Twilio connection]

After configuring the connection you’ll be able to fill out the action details. The ‘From’ phone number is a dropdown list of all the phone numbers associated with the account. The ‘To’ number and the message text are plain text fields, though on a trial account you’ll only be able to successfully message the number linked to your account. Of course, you can use Flow dynamic data and functions inside these text fields.

[Screenshot: the completed “Send Text Message” action]
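
For the curious, the Flow connector is wrapping Twilio’s REST messaging API under the hood. If you ever need to make the same call yourself, a sketch along these lines should do it (the SID, token, and phone numbers are placeholders):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TwilioSend
{
    static async Task Main()
    {
        var accountSid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // your Account SID
        var authToken = "your_auth_token";

        using (var client = new HttpClient())
        {
            // Twilio uses Basic auth: Account SID as the username, auth token as the password.
            var creds = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{accountSid}:{authToken}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", creds);

            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["From"] = "+12028738201", // your Twilio number
                ["To"] = "+15172953322",   // on a trial, only the verified number works
                ["Body"] = "Hello from code"
            });

            var response = await client.PostAsync(
                $"https://api.twilio.com/2010-04-01/Accounts/{accountSid}/Messages.json", form);
            Console.WriteLine((int)response.StatusCode); // 201 on success
        }
    }
}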

Possibilities

Having the ability to interact with text messaging in Flow opens up a wide range of possibilities. Flow is a connection machine, enabling us to integrate with a wide array of services. Throw in some Azure Functions and the power of Azure’s service offerings, and we can implement just about anything we can imagine: AI-driven chat bots, reservation systems, bulk reminders and notifications, social media monitoring, and critical IT systems monitoring, just to name a few off the top of my head.

Conclusion

We can easily create mobile-ready integrations with Microsoft Flow that can send and receive text messages. Receiving text messages involves a webhook and a bit of setup, and sending text messages is a simple action out of the box.

Remote Event Receivers – you’re all doing it wrong

Remote Event Receivers are a powerful way to integrate custom code into your SharePoint Online environment. Essentially, a Remote Event Receiver is a hook that allows you to execute your code in response to an event that occurs in SharePoint. There are several techniques for responding to events in SharePoint Online, but the Remote Event Receiver is the most powerful: it offers dozens of different events to attach to and lets you run synchronously or asynchronously. It is also very easy to attach, develop, deploy, test, and maintain. It is also very misunderstood.

Remote Event Receivers were introduced along with SharePoint 2013 and the arrival of the App model. Microsoft provided tooling with Visual Studio to create Remote Event Receivers, but unfortunately the only way to expose this tooling was in the context of a Provider-Hosted App.

This is unfortunate because Remote Event Receivers have nothing to do with Provider-Hosted Apps. In order to develop a Remote Event Receiver using the Microsoft-provided tooling, a developer had to create a Provider-Hosted App project and deploy their RER along with it. This added a great deal of complexity to the development effort, and made packaging and deployment a painful and tedious experience.

To further complicate things, Microsoft decided to wire up a WCF service as the endpoint in its RER tooling. This was sheer lunacy, even back in 2013. A web API project would have been simpler and more in line with Microsoft’s other development tooling efforts; the opacity and complexity of WCF made RER development even more cumbersome. I suspect they chose a WCF service so the development experience would feel like that of old-school Event Receivers, complete with a deserialized event properties object.

Building a Remote Event Receiver is Easy

In truth, it is remarkably simple to configure, develop, deploy, and maintain a Remote Event Receiver, but in order to do so you must completely abandon the Microsoft tooling and just set up the pieces yourself. Luckily there are really only two components to a Remote Event Receiver:

  • The endpoint
  • The registration

The registration is where you tell SharePoint, “call this endpoint every time this event occurs”. The CSOM provides mechanisms for adding Remote Event Receivers, but the details depend somewhat on the type of event receiver being deployed. The PnP PowerShell library can register a Remote Event Receiver in a one-liner. For example, to set up an RER that is invoked every time an item is updated on a list, execute:

Add-PnPEventReceiver -List "Tasks" -Name "TasksRER" -Url https://my-rer.azurewebsites.net/Service1.svc -EventReceiverType ItemUpdated -Synchronization Asynchronous
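
If you’d rather register from code, the CSOM equivalent looks roughly like this (a sketch; supply authentication appropriate to your tenant):

using Microsoft.SharePoint.Client;

class RerRegistration
{
    static void Main()
    {
        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/dev"))
        {
            // Wire up ctx.Credentials here for your environment.
            var list = ctx.Web.Lists.GetByTitle("Tasks");
            list.EventReceivers.Add(new EventReceiverDefinitionCreationInformation
            {
                ReceiverName = "TasksRER",
                ReceiverUrl = "https://my-rer.azurewebsites.net/Service1.svc",
                EventType = EventReceiverType.ItemUpdated,
                Synchronization = EventReceiverSynchronization.Asynchronous
            });
            ctx.ExecuteQuery();
        }
    }
}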

More about registering Remote Event Receivers

The endpoint is just a web service listening at a certain URL, and you have lots of options here: a Web API project would work great, Azure Functions are a very compelling option, and you’re free to write services in Java, Node, or whatever other technology you can think of. In the example below, we’ll use the canonical WCF service you’d get with the Visual Studio item template, but we’re going to sidestep the template and wire things up ourselves. It’s actually easier this way.


Creating the Remote Event Receiver shell

In Visual Studio,

  1. Create an empty ASP.NET web application.
  2. Add the NuGet package “AppForSharePointOnlineWebToolkit”.
  3. Add a new item of type WCF Service to the web app.
  4. Get rid of the IService reference and set your service to implement IRemoteEventService, which lives in the Microsoft.SharePoint.Client.EventReceivers namespace (it came into the project with the NuGet package we added earlier). Resolve the squiggly to implement the interface stubs.

Your service class should look something like this:

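A minimal sketch, assuming the default “Service1” name from the item template:

using Microsoft.SharePoint.Client.EventReceivers;

public class Service1 : IRemoteEventService
{
    // Synchronous (-ing) events: SharePoint waits on the result you return.
    public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
    {
        return new SPRemoteEventResult();
    }

    // Asynchronous (-ed) events: fire-and-forget, nothing to return.
    public void ProcessOneWayEvent(SPRemoteEventProperties properties)
    {
        // Event data arrives on the properties object.
    }
}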

F5 your project and navigate to the service to make sure it’s accepting requests. Take note of the port number; we’ll need it to set up our proxy for local debugging.

Locally debugging your remote event receiver

To test this event receiver locally, we’ll use ngrok. According to its documentation, “ngrok is a reverse proxy that creates a secure tunnel from a public endpoint to a locally running web service.”  We will use it to map an Internet endpoint to our local machine so we can intercept and debug requests coming from SharePoint Online.

Assuming you have installed Node.js and ngrok, create your proxy by pointing ngrok at the port hosting your local WCF service; in my case that was 56754.

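With ngrok’s standard syntax, that command looks something like this:

ngrok http 56754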

Once it connects it’ll output some data, including the public URL of our proxy connection:

[Screenshot: ngrok console output showing the public forwarding URL]

Next, open a browser and navigate to the ngrok URL with the service endpoint appended, and you should see that the ngrok URL is returning your service.

[Screenshot: the WCF service page served through the ngrok URL]


Next we’ll attach our event receiver to a SharePoint list. Using the PnP PowerShell cmdlet shown above, we’ll add a Remote Event Receiver to our list, pointing the -Url parameter at the ngrok address. Make sure to open a new PowerShell session for this and leave ngrok running in its own window; the proxy is released when that window is closed.

Now, set a breakpoint in your ProcessOneWayEvent method, add a task to the list, and make an edit to it. If all goes well, your local web service will be called and the breakpoint will be hit:

[Screenshot: the breakpoint hit in ProcessOneWayEvent]

Make sure you closely inspect the properties object in the debugger and get a feel for all the data that’s in there. For this particular event, check out ItemEventProperties and ItemEventProperties.AfterProperties for useful metadata that gets passed into the service.
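
Fleshed out, the handler might look something like this (a sketch; which properties are populated varies by event type):

public void ProcessOneWayEvent(SPRemoteEventProperties properties)
{
    var itemProps = properties.ItemEventProperties;
    string listTitle = itemProps.ListTitle; // which list fired the event
    string webUrl = itemProps.WebUrl;       // the hosting web

    // AfterProperties carries the changed field values for item events.
    object newTitle;
    if (itemProps.AfterProperties.TryGetValue("Title", out newTitle))
    {
        // React to the new Title value here.
    }
}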

Deploying your Remote Event Receiver to Azure

When you are ready to deploy to the Internet you can deploy to Azure just like a normal web application. You’ll want to run Add-PnPEventReceiver using the Internet URL to register your event receiver for real, of course.


Create a Yammer Group with Microsoft Flow

Microsoft Flow is a fantastic enterprise tool that comes with hundreds of default actions, allowing you to easily integrate with different services, including Yammer.

Flow gives us several actions out of the box that we can use to perform integration activities against Yammer:

[Screenshot: the out-of-the-box Yammer actions in Flow]

These actions pretty much revolve around fetching and creating messages. True, this is the core thing we do in Yammer, but sometimes our requirements force us to step outside the default capabilities of our platforms and think of creative ways to solve complex problems.

We do a lot of Yammer integrations in our solutions at Rightpoint, and creating a Yammer group is something we do all the time. As we’ve seen, Flow doesn’t give us an action to create Yammer groups, but there is a Yammer API that will do this, and it can be very easily executed via Flow. (Technically speaking, the API to create Yammer groups is an undocumented API. We’ll discuss what that means a little later.)

We’re going to achieve this by POST-ing the necessary data to a Yammer API endpoint, using Flow’s HTTP action. The HTTP action is, in my opinion, the most powerful and flexible Flow component, because with it we can do just about anything we want. At a low enough level of abstraction, every action is an HTTP action anyway.

Authenticating to Yammer

The Yammer API uses OAuth tokens to authenticate, so before we start our Flow we’ll need to create an app in Yammer. Navigate to https://www.yammer.com/YOUR_ORG_NAME/client_applications and click the green “Register New App” button.

On the “Register New App” screen, fill out the required fields. Absolutely none of the fields in this form have any bearing whatsoever on what we’re doing. If you’re planning to build a web app and publish it in the global Yammer app marketplace, these fields will be needed, but 99.9% of the time they’re pointless. But they’re required fields, so put in whatever will allow the validation to pass.

[Screenshot: the “Register New App” form]

What we’re really after is the OAuth token. When we save our app, we’ll be redirected to a configuration page where we’ll see the Client ID and Client Secret. Clicking the “Generate a developer token” link will expose the app’s token. Don’t worry about the text claiming that this is for testing purposes; Yammer is still assuming you’re building web apps for the global store, where auth tokens are generated on the client side for each user.

[Screenshot: the app configuration page with the “Generate a developer token” link]

Won’t this Auth token expire?

Yammer app tokens last a long time, although Yammer won’t disclose exactly what the expiration terms are. I can say that I’ve had Yammer integrations authenticating in this fashion since about 2014, and I’ve never seen one expire. They can be revoked, however, and the account that created the app has the ability to do this.

Yammer Apps – other considerations

There are a few things you’ll want to know as you develop Yammer apps in this fashion:

  • Anyone can create a Yammer App. You don’t need to be an administrator to do this.
  • Yammer Apps execute under the security context of the creating user account. Content created by the app will appear to have been created by the user that created the app. Think of it this way: through the auth token the app author is delegating their access to anyone who holds it.
  • If your token is compromised, you can invalidate it by navigating to yammer.com/YOUR_ORG_NAME/account/applications and clicking the “Revoke Access” link next to your app.
  • Consider using a dedicated “Service Account” for creation of Yammer Apps. Not only does this protect your user account should the token get compromised, it ensures the token continues to work should your account get disabled – for example, if you leave the organization.

Understanding the Yammer group creation API

If you do any work with the Yammer API, you’ll want to check out the Yammer API documentation.

There you’ll find more detailed information about using the API, and you’ll see a listing of all the supported endpoints exposed via the API.

Notice that there is no listed endpoint for creating groups. This is because the group creation endpoint is undocumented. Something to consider when working with undocumented endpoints is that Yammer is not bound to provide support for them, nor will they feel obliged to maintain backward compatibility should they ever decide to update their APIs. So there’s an element of risk in working with these endpoints, and you should be prepared to accept that one morning you might wake up to find that all your stuff is broken. I would counter that by saying Office 365 often imposes breaking changes even on supported features, and these APIs have been stable for a number of years running.

The endpoint we use to create groups in Yammer looks like this:

https://www.yammer.com/api/v1/groups.json?name=GROUP_NAME&private=false&show_in_directory=true

We will POST to this URL with a content-type of application/json, adding the Yammer auth token as a bearer token and leaving the body empty. The parameters should be self-evident; the last two are optional and default to a public, listed group.

Let’s test this out. We can use Fiddler or Postman for this, but I’ve recently discovered the Visual Studio Code REST Client extension, and that’s what I’ll be using here. You can read more about it here.

Set your request to look like this:

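In the REST Client’s syntax, with a placeholder token, the request looks something like this:

POST https://www.yammer.com/api/v1/groups.json?name=My%20Test%20Group&private=false&show_in_directory=true HTTP/1.1
Authorization: Bearer YOUR_DEVELOPER_TOKEN
Content-Type: application/json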

It’s as simple as that. If you did everything right, you should get a 202 response back, and your new group will show up in Yammer, along with a notification sent to All Company:

[Screenshot: the new group in Yammer, with the notification in All Company]

Remember, you’ll want to use a service identity, which I didn’t do in this case.

Creating our Flow

Now that we’ve figured out how to post to Yammer using the raw API, let’s incorporate it into a Flow. In my Flow I’m going to use an HTTP trigger so I can call this as a service from other applications, or even from other Flows. We’re going to pass three parameters into our Flow trigger: groupName, isPrivate, and showInDirectory. We’ll use the sample JSON option to generate the request body our trigger will be expecting.
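
A sample payload along these lines works; the property names are simply what we’ll reference later in the Flow:

{
  "groupName": "My New Group",
  "isPrivate": true,
  "showInDirectory": false
}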

[Screenshot: the HTTP trigger with the generated request schema]

Next we’ll create a variable to construct our REST API URL, using the same URL structure we discussed before, with the trigger body JSON filling in the parameters.

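Behind that variable is an expression built from the trigger body, something like this (note the booleans need string() inside concat):

concat('https://www.yammer.com/api/v1/groups.json?name=',
  encodeUriComponent(triggerBody()?['groupName']),
  '&private=', string(triggerBody()?['isPrivate']),
  '&show_in_directory=', string(triggerBody()?['showInDirectory']))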

Now we can create and configure our HTTP action:

[Screenshot: the configured HTTP action]

Now, assuming we’ve hooked everything up properly, we can call our Flow. I’ll be using the VS Code extension again just to keep things simple:

[Screenshot: calling the Flow from the REST Client extension]

If all goes well we’ll get a 201 response and our group will be present in Yammer:

[Screenshot: the 201 response and the new private group in Yammer]

Note that I passed in parameters to create a private, unlisted group: you can see the group is private and won’t be listed in the Groups list on Yammer, and it doesn’t create the notification message in All Company. Also note that creating a public unlisted group is unsupported and will return an error response.

Wrapping it up

In this post I showed a technique for integrating the Yammer API with Microsoft Flow, and used it to create groups in Yammer.  Using Flow’s HTTP actions we can do just about anything that can be done over HTTP.  For more info on the Yammer REST APIs, check out their official documentation.

Microsoft Flow: First Impressions

Over the last several weeks I’ve had my first experiences using Microsoft Flow in a real-world application. The client has dozens of old 2010-style SharePoint Designer workflows touching a number of business functions (Sales, Procurement, Change Management, and Human Resources), and they were looking for a way to modernize their development process and eliminate some of the quirks and irksome bugs that had been plaguing their users. Hearing that the client was looking down the road at moving from on-premises SharePoint Server 2013 to the cloud, I recommended re-writing a number of these processes in Flow instead of SPD.

[Image: Microsoft Flow logo]

Flow is part of Microsoft’s new cloud-based platform for process modeling, for lack of a better phrase. The idea is that non-developers can use Flow’s intuitive user interface to build robust integrations between their line-of-business applications with no code anywhere to be found.

At first glance, Flow seems to be a huge improvement over the experience of building workflows in SharePoint Designer. For starters, it’s web-based, so there’s nothing to install. Flow comes with an impressive array of standard integration points (“connectors”), a handful of entry points (“triggers”) and hundreds of pre-defined activities you can configure (“actions”).  By dragging and dropping widgets onto the control surface and setting up some basic properties, power users can create powerful applications without having to rely on developers or IT to set it up for them.

Here are a few quick takeaways from my experiences so far.

Low expectations

SharePoint Designer workflows come with so much baggage that it’s difficult to imagine preferring them to any succeeding technology. So the bar here is low.

Wide range of capability

Flow’s range of out-of-the-box integration points is very impressive, and they keep adding new connectors and actions all the time. There’s even an extension model where you can create your own and submit them for inclusion in the platform.

More of a consumer focus

Many of the integration points, though, don’t seem to make a lot of sense in most enterprise scenarios; Twitter, Facebook, and Gmail are some such connectors. And many of the starter templates lean more toward personal productivity. For example:

  • Text me when I get an email from my boss
  • Email me when a new item shows up in a SharePoint list
  • Start a simple approval process on a document when it’s posted
  • Save Tweets to an Excel file
  • Send me an email reminder every day


Easy to extend

It’s really simple to create extension points in Flow. Suppose you need to do something that isn’t supported by a Flow action: if you can code, you can write an API to do what you need and call it via an HTTP action. Azure Functions work really well for this. In fact, the HTTP action is the most powerful thing in Flow; you can even use it to trigger other Flows from within a Flow.
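
As a taste, here’s a minimal HTTP-triggered function in the classic C# script (run.csx) style; it just echoes a greeting back to whatever the Flow posts, and the “name” property is purely illustrative:

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Read whatever JSON the Flow's HTTP action posted to us.
    dynamic body = await req.Content.ReadAsAsync<object>();
    string name = body?.name ?? "world";

    log.Info($"Called with name={name}");
    return req.CreateResponse(HttpStatusCode.OK, new { message = $"Hello, {name}" });
}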

Approvals are not fully baked

If you’re building approval workflows and are expecting the way SharePoint Designer works, you’ll be disappointed. An Approval in Flow consists of an email and two buttons, nothing more. There is no concept of setting a status on an item, no functionality for logging (unless you roll it yourself), and no notion of tasks. It changes the way you think about approvals in general, because the old model just doesn’t apply here.

The Designer does not scale

For a simple two- or three-step flow, the designer works great. Add a couple of nested if/else blocks (“conditions”), or more than a half-dozen actions, and you’ll find the design surface is totally unsuited to the task. Scroll bars are in difficult-to-find places, and it’s often next to impossible to maintain your context when trying to move around within a Flow.

Sometimes saving a Flow will trigger a phantom validation error, and you’ll have to expand every one of your actions until you find the offending statement, because the Flow team has not seen fit to provide any feedback on where the failure occurred. In addition, sometimes, especially when working with variables, validation will fail even though the variable is properly configured.

No Code view

As clunky as the designer gets, if you’re a developer you might be more comfortable just coding your Flow the old fashioned way – after all, it’s just JSON under the hood. But alas, code view is not available in Flow. The design view is all you have.

Another implication of this: If you have places in your Flows where there are large blocks of similar functionality, you have no option to copy blocks of code and modify to suit. You’re stuck having to re-create those similar blocks of functionality, manually, in the designer, every single time. Believe me, this gets old really fast.

No versioning

If you make a change to your Flow and somehow break it, well, that’s tough, you’d better figure it out because there’s no rolling back.

Clearly Flow is not the magic bullet in the enterprise process modeling world. It has its quirks and its pitfalls. But remember, the bar is low due to the legacy application it replaces. SharePoint Designer workflows share many of the same deficiencies as Flow: a clumsy design experience (check), an inability to edit code directly (for all practical purposes), and no rollback model (technically possible in SPD via version history, but janky as hell).

Given that SPD has had its ten-plus years in the limelight, and Flow is a brand-new V1 product with an engaged product team, I’d say the future looks bright for Flow.

Using Azure VMs for long-running jobs

From time to time I have to perform long-running deployment jobs against remote environments (usually Office 365), and I’ve hit upon the idea of running these from virtual machines hosted in Azure.

For example, today I had to push new master pages to 800 site collections in SharePoint Online, and the client requested that I begin the deployment after hours. Pushing 1,600 master pages (2 per site) via remote code takes somewhere around four hours; there’s no way I’m staying at the office until 9 PM just so my laptop can keep running while I babysit that deployment.

Using an Azure virtual machine allows me to begin the deployment process on the remote machine, shut down my laptop, drive home, and have dinner with my family, all while the deployment runs happily in Azure. I thought I would share the process and specifications I use for my machine, along with some tips for using Azure VMs in this fashion.

As an MSDN subscriber (thanks, Rightpoint!) I have access to an Azure subscription with $150 of credit per month. That amount lets you do a TON of stuff in the PaaS space, but with virtual machines the dollars add up really quickly, so it pays to be smart and careful about how you spin up and use these machines.

Creating the machine

Azure comes with a wide variety of pre-configured virtual machine images, with operating systems and certain software packages (“workloads” in cloud lingo) off the shelf. I needed a VM with Visual Studio 2015, and happily, a number of Visual Studio-powered images are available out of the gate. A search for “Visual Studio” in the Azure Portal yields the selections:

[Screenshot: Visual Studio VM images in the Azure Portal]

In my case, I wanted the mature 2015 version of Visual Studio, I needed the Azure SDK, and I had no use for a server OS, so I chose:

[Screenshot: the Visual Studio 2015 image selected]

On the Create Virtual Machine blade I set up some basic settings:

Basics

[Screenshot: the Basics blade]


A few points here:

  • You’ll set up your admin credentials on this screen. You can’t use a “typical” admin user name like “administrator” or “admin”. However, the username “derek” works just fine. Passwords need to be at least 12 characters as well.
  • You’ll be prompted to select either an SSD or a normal hard drive (HDD). Keep in mind that SSDs cost more, but if your work requires a speedy disk, the option is available.

Size

Click “View All” on the Size blade and you’ll see a bewildering array of choices, many of which seem startlingly similar, along with an estimated monthly cost for the compute resources each machine will consume, assuming it runs constantly.

There’s a lot of nuance between the different VM series, and frankly I’m not the best person to explain it all.  But I ended up choosing the D2_V2 Standard for my machine:

[Screenshot: the D2_V2 Standard size selected]

For more information on Azure VM size choices and what it all actually means, check out the documentation.

Now, 108 bucks per month is nearly three-quarters of my entire Azure allotment, and combined with all the other stuff I have running, it would easily put me over the spending limit on my account. But we have ways to mitigate this and bring the total spend down to a small fraction of that, which we’ll get to momentarily.

Settings

I won’t screen-shot the rest of the wizard, but the following screens let you configure a few other settings on your machine. There’s a bunch of stuff you probably don’t need to worry about for dev/deployment machines (like high availability and diagnostics), but you’ll want to understand the networking part at least. Your machine will be part of a (virtual) subnet on a (virtual) network, and you’ll have a NIC with a public IP address. You can configure the public IP address as dynamic or static, although keep in mind there is a cost associated with a public IP address.

Accessing your machine

Once your machine is finished provisioning, you can access it in the Azure portal, and you’ll see a “connect” button at the top of the Overview page.

[Screenshot: the Connect button on the VM Overview page]

Clicking the Connect button will download an RDP file to your computer, which you can use to remote into your machine. If you’ve set up the IP address as dynamic, you’ll probably need to do this every time you access the machine, as those IP addresses are not “sticky” in my experience, at least across reboots.

Managing your machine (and your money)

The machine I set up costs over a hundred dollars a month to run, and naturally I don’t want to use up all of my cloud bucks on a single resource. Fortunately, Azure VMs are only billable while “allocated”, and we can de-allocate them when they’re not in use. Note that a machine can be stopped but still “allocated”; which state you end up in depends on how the machine was shut down. To be specific, DO NOT shut down the machine from within the machine itself (Start -> Shut Down). Instead, shut it down using the Azure Portal (or the Cloud Explorer, or Azure PowerShell).
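
With the AzureRM PowerShell module, de-allocating is a one-liner (the resource group and VM names here are examples):

Stop-AzureRmVM -ResourceGroupName "dev-rg" -Name "dev-vm01" -Force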

In other words –

[Screenshot: shut down via the portal, not from inside the VM]

When your machine is fully de-allocated, you want to see this in your VMs listing:

[Screenshot: the VM showing a “Stopped (deallocated)” status]

This way you know it’s not accruing charges.

If your machine is only in an allocated state while it’s being used, you’re only charged for those hours. That D2_V2 machine works out to about 20 cents per hour, so if I ran it ten hours a month it would only cost me a couple of bucks. If I used it as my everyday dev machine, eight hours a day, 20 days a month, it’s still around 32 dollars a month, easily manageable within my Azure subscription limit alongside the other PaaS stuff I have going on. Or I could spend a little more and get access to a beefier machine, for example, maybe this one:

[Screenshot: a beefier VM size and its monthly price]

Sure, I’d run out of hours after four days (or 12 eight-hour days) but think of how productive I’d be!