Sunday, 19 June 2016

Running Rebus Service Bus in an Azure Web App

What’s all this?

Rebus is a lean service bus for .NET that is also free and open source. It was created and is maintained by Mogens Heller Grabe, and it even has stickers! It’s a whole pile of awesome and I warn you that it may change the way you build software forever. You can find more delights in the Rebus wiki. Mogens also wrote an article about using Rebus in a web application, which you can find here:

Rebus is similar to other .NET service buses, like NServiceBus, MassTransit and EasyNetQ. You can also host these buses in Azure Web Apps (note that NServiceBus is commercial, and that EasyNetQ only supports RabbitMQ for message transport and does not support long-running processes (sagas)).

Azure Web Apps are a fully managed service in Azure that lets you easily deploy, run and scale your apps, APIs and websites. They are beautifully simple things to work with and to manage. They really do just work!

Rebus is typically run as a Windows service on a server or VM. Can I really run a server-side service in an Azure Web App, I hear you ask? Yes you can, and that’s just what we’re going to do!

Is there code?

Yes there is! You can get it from here:

Choose your toppings

A quick read through the Rebus docs shows that it supports a whole bunch of message transports, types and storage mechanisms and it has support for most of the current IoC containers. I’m going to use Autofac as our DI container and Azure Service Bus as our message transport medium. I’ll also be using Visual Studio 2015 with Web API 2 (because Rebus doesn’t support ASP.Net Core at time of writing).

Why Autofac?

You don’t need a DI container when working with Rebus as it has one baked right in. I always use Autofac (because it rocks) and I’m going to use it to show how little configuration you need once you’re set up.

Why Azure Service Bus?

Like other message queues, Azure Service Bus has built-in pub/sub mechanisms and can do lots of wonderful things for you at huge scale. We’re just going to be using it as a message queue and letting Rebus deal with all the message handling, subscriptions, orchestration and all that malarkey.

Why Azure Web Apps?

These beasties are super easy to develop, debug, deploy, manage and scale. You can scale your service to huge capacity using Azure, or keep it pared down to a minimum (free!). The Azure infrastructure takes care of all the maintenance patching for you and will make sure your service is restarted if it stops. In my view, this is probably one of the best developer experiences around!

Guess this has to be SBaaS: Service Bus as a Service!

Create the Web API project

Start by creating a new ASP.NET Web API project:


Select the Empty project template and add references for Web API:


You should end up with something like this:


Add Rebus and Autofac packages

Install the packages by opening the Nuget Package Manager Console and running this:

install-package Rebus.Autofac

install-package Rebus.AzureServiceBus

This will bring in both Rebus and Autofac, along with the Rebus packages for interop with Autofac and with Azure Service Bus. Then run the following command to add the package that allows Autofac to work with ASP.Net Web API 2:

install-package Autofac.WebApi2

Configure the Autofac container

I’m going to use Autofac to start up the Rebus bus when the app starts. Start by creating the following configuration class for Autofac:
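The original post shows this class as a screenshot. Here’s a hedged sketch of what such a class looks like — the class name AutofacConfig and the RebusConfig.Setup call are assumptions based on the surrounding text, not the post’s exact code:

```csharp
// Sketch of the Autofac startup configuration (names are illustrative).
using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

public static class AutofacConfig
{
    public static void Configure()
    {
        var builder = new ContainerBuilder();

        // Register the Web API controllers for dependency injection.
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

        // Scan this assembly and register any Autofac modules found in it.
        builder.RegisterAssemblyModules(Assembly.GetExecutingAssembly());

        var container = builder.Build();

        // Pass the built container into the Rebus configuration.
        RebusConfig.Setup(container);

        // Register the container with ASP.NET Web API. If you're not
        // injecting dependencies into controllers, you won't need this.
        GlobalConfiguration.Configuration.DependencyResolver =
            new AutofacWebApiDependencyResolver(container);
    }
}
```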


This sets up the Autofac container and then registers it with ASP.Net Web API as its dependency resolver. You know – if you’re just going to use Rebus and you’re not going to inject dependencies into any controllers, then you won’t need that registration.

Take a look at the module-scanning registration. This tells Autofac to scan the assembly and register all the Autofac modules it finds. I’m using that as I’m going to be putting the rest of my container configuration into a module, which we’ll see later.

The built container is then passed into our Rebus configuration, which we’ll also see later. The last thing is to call the configuration when the app starts up:
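This is the usual Application_Start hook in Global.asax; a sketch (AutofacConfig.Configure is an assumed name for the configuration class above, WebApiConfig.Register is the standard Web API template method):

```csharp
// Global.asax.cs (sketch) - call the Autofac setup at application start.
using System.Web;
using System.Web.Http;

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Build the container and start the Rebus bus.
        AutofacConfig.Configure();
    }
}
```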


That’s all the Autofac configuration needed. Let’s configure Rebus next.

Configure the Rebus bus

The following class is responsible for the bus configuration:
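The class appears as a screenshot in the original. The sketch below is a hedged reconstruction against the Rebus API of that era — the class name RebusConfig is from the post, but the queue name, connection string name and message type are my assumptions:

```csharp
// Sketch of the Rebus configuration (names and queue are illustrative).
using System.Configuration;
using Autofac;
using Rebus.Config;
using Rebus.Routing.TypeBased;

public static class RebusConfig
{
    public static void Setup(IContainer container)
    {
        var connectionString = ConfigurationManager
            .ConnectionStrings["AzureServiceBus"].ConnectionString;

        Configure.With(new AutofacContainerAdapter(container))
            // Azure Service Bus as the message transport...
            .Transport(t => t.UseAzureServiceBus(connectionString, "heartbeat"))
            // ...or swap to the file system transport to run locally:
            //.Transport(t => t.UseFileSystem(@"C:\rebus\messages", "heartbeat"))
            // Tell Rebus where to send the messages we give it.
            .Routing(r => r.TypeBased().MapAssemblyOf<HeartbeatMessage>("heartbeat"))
            .Start();

        // Resolve the named heartbeat timer from the container and start it.
        var timer = container.ResolveNamed<System.Timers.Timer>("HeartbeatTimer");
        timer.Start();
    }
}
```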


The container parameter is the Autofac container passed in from the Autofac configuration earlier. We pass that container into the Rebus Autofac adapter, which makes Rebus use our Autofac container and also registers Rebus’ types into our container so we can use them later.
Read more in the Rebus docs here.

You’ll need to add a connection string for your Azure Service Bus – or you can comment out the Azure Service Bus transport line and uncomment the file system transport line to use your file system as the message transport and run locally.

The routing configuration tells Rebus where to send the messages that you give it. Read more about that in the docs here.

The last couple of lines start a timer to give us a simple heartbeat, so we can see that the bus is running when it’s in Azure.

Registering handlers and getting the heart beating

Our heartbeat is going to work by having a simple timer running that publishes a message to the bus every few seconds. Then we have a handler that receives that message and increments a simple heart beat counter. This is going to show us the magic of our bus running in the Azure Web App.

So, the next part is to set up the timer and also register any and all Rebus message handlers into our Autofac container.
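The module is shown as a screenshot in the original; this is a hedged sketch of the two registrations it describes — the module name, timer name and 5 second interval are my assumptions:

```csharp
// Sketch of the Autofac module (names and interval are illustrative).
using System.Reflection;
using System.Timers;
using Autofac;
using Rebus.Bus;
using Rebus.Handlers;

public class EndpointModule : Autofac.Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register every Rebus message handler found in this assembly.
        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
            .AsClosedTypesOf(typeof(IHandleMessages<>));

        // A named timer that sends a heartbeat message every few seconds.
        builder.Register(context =>
            {
                // Resolve the bus up front; the Elapsed handler closes over it.
                var bus = context.Resolve<IBus>();
                var timer = new Timer(5000);
                timer.Elapsed += (s, e) => bus.Send(new HeartbeatMessage()).Wait();
                return timer;
            })
            .Named<Timer>("HeartbeatTimer")
            .SingleInstance();
    }
}
```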


The handler registration picks up all the Rebus message handlers in this assembly by using Autofac’s assembly scanning. A message handler is a class that receives a message from the bus and does things with it.

Rebus has a lot of rich support for controlling how and when this happens – see the docs for more.

Next we add a simple timer that sends a message to the bus every few seconds. I’m using Autofac’s named registration so that I know which timer I’m getting back from the container when I start it up.

The message

In Rebus, messages are just simple POCO types. I am using this one for our heartbeat:
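The original shows the class as a screenshot; a sketch is about as simple as it sounds (the class name HeartbeatMessage is an assumption used throughout these sketches):

```csharp
// The heartbeat message - just a plain POCO with no behaviour (sketch).
public class HeartbeatMessage
{
}
```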


The handler

The Rebus message handler is the class that receives the message. Again, it’s just a very simple class which looks like this:
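A hedged sketch of such a handler, using Rebus’ IHandleMessages<T> interface (the class and message names are assumptions consistent with the other sketches here):

```csharp
// Sketch of the Rebus handler that receives the heartbeat message.
using System.Threading.Tasks;
using Rebus.Handlers;

public class HeartbeatHandler : IHandleMessages<HeartbeatMessage>
{
    public Task Handle(HeartbeatMessage message)
    {
        // All it does is tell the heart to beat.
        Heart.Beat();
        return Task.FromResult(0);
    }
}
```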


All it does is tell the heart to beat! It’s worth noting, in case you are new to Rebus, that you can have more than one handler for this message type and Rebus will take care of getting the message to all your handlers.

The heart

The heart is just a static class that keeps a count of how many times it has beaten and tells us when it was last refreshed.
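Again shown as a screenshot originally; a minimal sketch (names are illustrative, and a real version might want thread-safe increments):

```csharp
// Sketch of the static heart class that tracks the beats.
using System;

public static class Heart
{
    public static int BeatCount { get; private set; }
    public static DateTime LastBeatUtc { get; private set; }

    public static void Beat()
    {
        BeatCount++;
        LastBeatUtc = DateTime.UtcNow;
    }
}
```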


The controller

Just so that we can see the heartbeat, we’re going to add a simple Web API controller to allow access to the internal state.
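A sketch of such a controller, assuming attribute routing is enabled (config.MapHttpAttributeRoutes() in the Web API configuration); the controller name and the Heart members are assumptions from the other sketches:

```csharp
// Sketch of a minimal controller exposing the heartbeat state at /heartbeat.
using System.Web.Http;

public class HeartbeatController : ApiController
{
    [HttpGet]
    [Route("heartbeat")]
    public IHttpActionResult Get()
    {
        return Ok(new
        {
            Beats = Heart.BeatCount,
            LastBeatUtc = Heart.LastBeatUtc
        });
    }
}
```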


Bring it to life

Run the project in Visual Studio and it will open a browser. Navigate to /heartbeat and you’ll see something like this:


Refresh the browser and you’ll see the number of heart beats increases every 5 seconds, which means our Rebus bus is sending and receiving messages. Woohoo – happy days!

An easy way to see the message content (which is JSON serialised by default) is to swap the Rebus transport to use the file transport in your RebusConfig class:


Getting an error?

You might get an error about not being able to resolve the AzureServiceBusTransport. That will be because you haven’t set up the connection string in RebusConfig.


What’s all this doing?

Let’s review how all this is working.

  • We have an ASP.NET Web API app that configures an Autofac container when it starts up.
  • The Autofac container scans the assembly for Autofac modules and it finds a module that does two things.
  • The Autofac module it finds scans the current assembly and then registers all the Rebus handlers that it finds in that assembly.
  • The Autofac module then registers a timer that is configured in the module to send heart beat messages to the bus every few seconds.
  • The Autofac container is then built and passed into our Rebus configuration.
  • The Rebus configuration sets up a Rebus bus that uses Azure Service Bus as its message transport.
  • The Rebus configuration then starts the Rebus bus and makes it available from inside the Autofac container.
  • The Rebus configuration then resolves our timer from the Autofac container and starts it.
  • It’s worth noting here that when the timer starts inside the Autofac container, it resolves the Rebus bus directly from the Autofac container.
  • The timer sends a message to the Rebus bus, which puts it into the Azure Service Bus queue.
  • The Rebus bus is monitoring the Azure Service Bus queue and it detects the message, opens it and passes it to the handler that we’ve built to handle those messages.
  • The message handler then tells the heart to beat.

Deploy to Azure Web Apps

The last part is to publish all this goodness to an Azure Web App. There are plenty of ways to publish an Azure Web App, including publishing from Git. I’m just going to use Visual Studio for that.

Right click the solution and choose Publish:


You might need to sign into your Microsoft Account at this point. Select the Azure App Service target:


Select an App Service, or click New to create one:


Enter the details and click Create:


Then click Publish in the next screen to publish to Azure:


You’ll see something like this (with links for how to set up Git deploy if you’re interested!):


Navigate to /heartbeat to see Rebus running in an Azure Web App:


Getting an error?

Did you set the Azure Service Bus connection string?
Scaling out and keeping it running

If your use case is to have a service bus running in the cloud, then you’ll most likely want it to stay on. Azure will unload a web app if it’s idle for a period of time, but you can use the Always On setting to keep the app loaded at all times.


Note that Always On is not available in the free Azure plan; you need Basic or Standard mode for this. You can read more about that here:

As for scaling, it’s super easy to scale to giddy heights. Read more here:


Saturday, 15 August 2015

Using PowerShell as the Visual Studio Developer Command Prompt


The Visual Studio Developer Command Prompt is a command window that loads up a bunch of useful environmental enhancements when it starts.  This lets you access things like sn.exe from your command window.  However, it loads up in cmd.exe and I prefer using PowerShell 'cos it's proper useful!  This article shows one way of making PowerShell act like the Developer Command Prompt.

Adjust your PowerShell Profile

When PowerShell loads it looks for a profile file under your user profile and loads that if found.  By default, the profile file is:
Note that you can access your documents folder path from PowerShell like so:
If that file isn't there, create it, add the following code and save it:
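The original code is lost here, so the following is a hedged sketch of the usual approach: run the VS environment batch file in cmd.exe, capture the resulting variables and copy them into the PowerShell session. As-VSPrompt is the command named in the post; the helper name and the VS 2015 path in the last line are my assumptions:

```powershell
# Runs a batch file in cmd.exe and imports the resulting environment
# variables into the current PowerShell session.
function Invoke-BatchFile([string] $batchFile) {
    cmd /c "`"$batchFile`" && set" | ForEach-Object {
        if ($_ -match '^([^=]+)=(.*)$') {
            Set-Item -Path "Env:$($matches[1])" -Value $matches[2]
        }
    }
}

# Make PowerShell act like the VS Developer Command Prompt.
function As-VSPrompt { Invoke-BatchFile "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\VsDevCmd.bat" }
```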

Adjust the environment in the last line above according to your VS installation.
Load in PowerShell
Open up a new PowerShell window (existing windows will not have loaded the profile we've just saved).  Type sn and hit Enter to show that you don't have access to sn.exe:
Then type As-VSPrompt and hit Enter.  This will load up the Developer Command Prompt profile.  Type sn and hit Enter again and you'll see sn.exe reporting for duty:

Sunday, 25 January 2015

Installing a Service using Topshelf with OctopusDeploy

I was using OctopusDeploy to install a Rebus service endpoint as a Windows Service from a console application created using Topshelf.

Octopus has built-in support for Windows Services that uses sc.exe under the hood.  I'd used this many times previously without a hitch, but for some reason the install seemed to start before the service was fully removed.  As Topshelf can do all the service set up, I decided to try using that instead.

My deployment step in Octopus was configured to use the standalone PowerShell scripts.  Installing using Topshelf is simple - just call the executable with the install --autostart command line.  Removing a Topshelf service is just as simple - call the executable with the uninstall command line.
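As a sketch (the executable name is a placeholder, not the real service from this deployment), the two calls look like this:

```powershell
# Install the Topshelf service and set it to start automatically
# (run from the deployed folder).
& .\MyService.exe install --autostart

# Remove the service again.
& .\MyService.exe uninstall
```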

The only hard part is finding the executable path for the service so that you can remove the service before re-deploying.  PowerShell doesn't have built-in support for this at time of writing, but you can do that pretty easily using WMI.  The following function is based on David Martin's SO answer and finds the executable path from the service name:
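The function itself didn’t survive extraction; the sketch below follows the WMI approach described (querying Win32_Service for the PathName), with a helper name of my own choosing — see the linked gist for the original:

```powershell
# Finds a Windows service's executable path from its service name via WMI.
function Get-ServiceExecutablePath([string] $serviceName) {
    $service = Get-WmiObject -Class Win32_Service `
        -Filter "Name = '$serviceName'"
    if ($service) {
        # PathName may be wrapped in quotes, so trim them off.
        return $service.PathName.Trim('"')
    }
}
```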

From that point, it's pretty easy to remove the service using Topshelf:
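A sketch of that pre-deployment step, assuming a helper like the WMI function described above (the service name is a placeholder):

```powershell
# If the service is already installed, call its executable with
# Topshelf's uninstall command before deploying the new version.
$path = Get-ServiceExecutablePath "MyService"
if ($path) {
    & $path uninstall
}
```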

An alternative approach would be to use a known installation location each time and derive the executable path using Octopus' powerful variables.

The code for the pre-deployment and  post-deployment tasks can be found in this gist.
The full Topshelf command line reference is here.

Friday, 31 October 2014

Using IIS URL Rewrite

This article is about creating IIS URL Rewrite Rules using Powershell. The Powershell functions that we use apply generally to reading and writing .NET configuration files (web.config, app.config, etc.), and so can be applied to tasks like updating connection strings.
The functions used are:

Although these are powerful, being well documented is not their strongest feature! There are also a couple of simple pitfalls with the URL Rewrite Rules, which I’ll point out. Note that for the following to work, you’ll need to have the IIS URL Rewrite extension installed in IIS. You can install this using the Web Platform Installer, or using Chocolatey.

Files for this article can be found here

The test website

So we can test our rewrite rules we’ll use a simple website structured as follows:

Each leaf folder has two HTML files, 1.html and 2.html and there’s a page of links at the root:

To help test the rewrite rules, the HTML files indicate where they are in the structure:

Introduction to rewrite rules

The rules can be maintained in the IIS Manager by double clicking URL Rewrite in features view:

There is a rich GUI that lets you maintain the rules and which does validation against your input:

The above rule redirects any URL section matching “1.html” to “2.html”. You’ll see this in the site’s config file:

The rules are stored in the config file. The config file that’s changed depends on whether you apply the rules at the level of the virtual directory, website or the server. If the rules apply at the server level they are stored in the main IIS config file here:
After this rule has been applied, recycle the app pool or reset IIS and then when accessing http://localhost/simple-rewrite/1.html you’ll be taken to http://localhost/simple-rewrite/2.html.

Using Powershell

Firstly, make sure you have the WebAdministration module loaded:
Import-Module WebAdministration
To create the above rule using Powershell, run the following script (see CreateRule1.ps1):
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/match' `
    -name url -value '1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/action' `
    -name . -value @{ type="Redirect"; url='2.html'; redirectType="SeeOther" }

The pspath parameter is the website path in the format used by the WebAdministration module. The filter parameter is the XPath to select the element we’re interested in. Here it’s under rules/rule and has a name attribute with the value “Rule 1”.

To remove the rule, run the following script (see RemoveRule1.ps1):
Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'

Note that Clear-WebConfiguration removes the rule using the name selector and will raise an error if the rule is not found. If you want to test whether the rule exists first, use Get-WebConfigurationProperty as follows:
$existing = Get-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]' -name *
if ($existing) {
    Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
        -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'
}

Rule processing order and redirect exclusions

The rules are processed in the order they appear in IIS Manager and the config file. A rule can be set to stop processing of subsequent rules. These two combine to make an effective way to create an exception to a redirect. Say you wanted to redirect all “1.html” URLs to “2.html” except for “~b/c/1.html”. To achieve this, add the following rule above the more general redirect:

Your configuration will look something like:

Using rule conditions

Let’s say you want a rule that applies to POSTs to a particular URL but which only contain certain parameters in the query string. Such as a POST to:

Let’s say we want to detect parameter values of X, Y or Z. You can do this using rule conditions. The completed rule looks like this in IIS Manager:

In the configuration, this rule looks like:

And can be scripted as follows:
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1 Post Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/match' `
    -name url -value "b/2.html"

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name "." -value @{input="{REQUEST_METHOD}"; pattern='POST'}

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name "." -value @{input="{QUERY_STRING}"; pattern='paramA=(X|Y|Z)'}

Notes and gotchas

Rules are relative to the site root

Once you realise that the redirect URLs are relative to the site root, it’s all very obvious. Gaining that realisation can take some time! So for the example site above, the following rule will do nothing:

IIS Manager has code completion for conditions

When you add a condition in IIS Manager, you get code completion after typing the first “{“ (such as in the POST exception rule above):

Files get locked and browsers cache pages

When you’re developing the rules, the config file is being touched by IIS, IIS Manager, Notepad++, Powershell and goodness knows what else. Your super-modern browser is being super-modern helpful and caching the HTML pages. Sometimes you just need to recycle the app pool to see the updated rule. Other times, you need to do an IIS reset, clear your browser cache and restart your tool windows.

Default values are not written

When the rule’s action type is None and it stops processing of other rules, the configuration written by the IIS Manager GUI is like this:

Following the same pattern for creating the rule in a script, you can run something like this (see CreateExceptionRule.ps1):
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1 Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/match' `
    -name url -value 'b/c/1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/action' `
    -name . -value @{ type="None" }

The above script writes the following to the configuration file.

Note that the action element is not written. It seems that the Powershell command knows that <action type="None"> is the default and doesn’t bother to write it, but the IIS Manager GUI writes it anyway. If you didn’t know that, you might spend a lot of time trying to work out why your Powershell isn’t doing what you think it should!

Saturday, 11 October 2014

Simple UI testing with Nightwatch.js

This week I was introduced to Nightwatch.js by my colleague Matt Davey.  Nightwatch provides a clear and simple syntax for writing your UI tests and uses NodeJS and Selenium to run the tests. 

Having played with Canopy for UI testing, the high readability of the test syntax impressed me from the start.  We used it to nail down a reproduction of a bug we were working on in our test environment.

Test scenario

Our test did something like this
  • Logs into a site using known credentials.
  • Clicks through to a place in the UI.
  • Changes and saves a value.
  • Asserts that the new value is being displayed in another area of the UI.

The test syntax

The test syntax looks like this:

The syntax is clean, easy to read and all in one file. One of the CSS selectors is pretty evil, but you can get that from the browser’s context menu in the inspector (which I learnt this week too!):


Setting up the test runner

Setting up to run the test is very simple to do on any development or test machine. The components you need are:
  • NodeJS
  • NPM
  • Nightwatch
  • Java (for the Selenium server)
  • A Selenium Server
You can install Java, NodeJS and Nightwatch using Chocolatey by opening a command window and running:

choco install javaruntime
choco install nodejs.install

npm install -g nightwatch

Note that Chocolatey seems to install the Node packages under Chocolatey's folders, and NPM then places the Nightwatch module into the nodejs.commandline folder under \tools.  In there is a batch file called Nightwatch.cmd that we need to access. So, we need to add the following to our path:

C:\ProgramData\chocolatey\lib\nodejs.commandline.X.Y.Z\tools

(where X.Y.Z is the NodeJS version we’ve installed.)

Then we need to start up a Selenium server.  It would be nice to use Chocolatey here too, but the version of Selenium from the package at time of writing is too old.  So, download the latest Selenium driver (JAR) from here (which is 2.43 at time of writing).

Rename the Selenium JAR file to selenium-server-standalone.jar and save it to C:\selenium\. Then start up a Selenium Server by opening a new command window (as admin) and typing:

java -jar "C:\selenium\selenium-server-standalone.jar"

That’s the setup done, now let’s run the test.

Running the test

Save your test (similar to the code above) to a file, let’s say repro.js, into a folder.  Create a sub-folder called “examples\custom-commands” (which Nightwatch wants to find, not sure why!).
Then open another command window and change to the folder containing the test file and run the following:

nightwatch --test repro.js

Job done!

Monday, 11 August 2014

Testing TypeScript with Jasmine in Visual Studio

This post is about setting up testing for TypeScript with Jasmine in Visual Studio, but it should be pretty much the same using QUnit.

To set this up, start a new MVC project and let’s get Knockout and Jasmine installed into the project, along with their TypeScript definitions:

Install-Package knockoutjs
Install-Package knockout.typescript.DefinitelyTyped
Install-Package JasmineTest
Install-Package jasmine.typescript.DefinitelyTyped

We’re using the Jasmine Nuget package called JasmineTest.  There is a similar one on Nuget called Jasmine-JS, the difference being that JasmineTest will add a controller and view to let you run your tests.  We’ll be using that in a second to let us debug the tests.  Now run and browse to ~/jasmine/run and you’ll see this:


This is the view added by the JasmineTest package.  It is an HTML page (SpecRunner.cshtml) and has the references for the example Jasmine specs that ship with Jasmine.  We are going to be using it to let us debug our tests.


So, let’s write some tests.  For simplicity we’re going to put the model and the tests into the same file.  So we add a TypeScript file called Tests.ts to your /Scripts folder and add the following lines to that file, giving this:
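The original Tests.ts is shown as a screenshot. The sketch below gives the shape of such a file: a simple viewmodel and a Jasmine-style spec for it. Knockout is omitted here for brevity, and tiny stand-ins for Jasmine's describe/it/expect are included only so the snippet is self-contained — in the real project those come from the Jasmine package and its TypeScript definitions:

```typescript
// --- minimal stand-ins for Jasmine's globals (assumption: real code uses Jasmine) ---
function describe(name: string, body: () => void): void { body(); }
function it(name: string, body: () => void): void { body(); }
function expect(actual: any) {
    return {
        toEqual(expected: any): void {
            if (actual !== expected) {
                throw new Error(`Expected ${expected}, got ${actual}`);
            }
        }
    };
}

// --- the model under test (a plain class standing in for a Knockout viewmodel) ---
class CounterViewModel {
    count = 0;
    increment(): void { this.count++; }
}

// --- the specs ---
describe("CounterViewModel", () => {
    it("starts at zero", () => {
        const vm = new CounterViewModel();
        expect(vm.count).toEqual(0);
    });
    it("increments the count", () => {
        const vm = new CounterViewModel();
        vm.increment();
        expect(vm.count).toEqual(1);
    });
});
```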


Running tests with ReSharper

If you have ReSharper installed, then the R# Test Runner will recognise the Jasmine test and allow you to run them.  To set this up for a smooth experience, first install PhantomJS and set the path to phantomjs.exe in Resharper | Options | Tools | Unit Testing | JavaScript Tests.

R# will need to find the .JS files for the .TS references.  R# will look for and use a file that is named the same as the Definitely Typed TS file, but ending with .JS.  For us, this means copying knockout-3.1.0.js from /Scripts to /Scripts/typings/knockout and renaming it to knockout.d.js:


After that you can just run the tests as you would in C#:


Running tests with Chutzpah

You can also use the Chutzpah test runner to run Jasmine tests in Visual Studio.  You can install that from Tools | Extensions and Updates here:


Once installed, you just right click the TypeScript file that contains the tests and select the item Run JS Tests.  This puts the test results into the Visual Studio Output window by default, but there is a Visual Studio extension that gives integration with the Visual Studio Test Explorer.

If you go ahead and run the tests as shown above you’ll get an error.  This is because Chutzpah resolves referenced JS files differently to R#.  For this to run, we need to add a reference to the JS file in the TS test for Chutzpah to pick up.  This is documented here and looks like this:

/// <chutzpah_reference path="knockout-3.1.0.js" />

Debugging tests in Visual Studio

As far as I can tell, there is currently no support from either R# or Chutzpah for debugging TS tests from either test runner in the IDE.  However, we can do that by going back to our SpecRunner.cshtml that was installed with the JasmineTest Nuget package.

Just add the JS references along with a reference to our test file to the HTML shipped with the package:


Note that we are referencing Tests.js instead of the TS file.  Then place a break point in the test in the TS file, run and browse to ~/Jasmine/Run:


Job done!  Source code is here:

Friday, 29 March 2013

Does Windows Azure comply with UK/EU Data Privacy Law?

Update, November 2016

Times have changed and the content of my article from 2013 may no longer be correct due to a recent ruling by the European Court of Justice.  Read more here:


Yes, it does.  The Information Commissioner’s Office in the UK considers that Windows Azure provides adequate protection for the rights of individuals with regard to their data that is held on Windows Azure. 
Given that you have the other relevant measures in place to comply with the Data Protection Act, this means that you can go ahead and use Windows Azure to store your customers’ personal information.  As Microsoft comply with the EU/US Safe Harbor Framework, the data can be considered to remain solely within the EU geographic region.


As developers we often need to store information about individuals, such as their name and email address.  When people share this information with us, they expect that we will treat the information properly and respect their privacy. 
In the UK, the Data Protection Act provides a legal framework to define how companies capture and process information about individuals.  For instance, they may choose to let you send them an email newsletter, but tell you that they do not want to receive them from other companies that you work with.
In the UK, the Information Commissioner’s Office (ICO) is the body responsible for upholding information rights and privacy for individuals.  The ICO can and do fine companies who do not treat individuals’ information properly.  These fines are quite large too!

To the Cloud

As developers we increasingly want to be able to use hosted cloud services to build our web sites and internet enabled applications.  There are a number of choices out there that include large offerings such as Windows Azure and Amazon Web Services, but also more targeted offerings such as MongoHQ.
If we store any information about an individual (in the UK the law relates to living individuals), then any cloud service that we use as developers must ensure that the data about the individuals is protected.

Keep it local

Other member states of the European Union have similar data protection laws and there is a common legal framework to protect the rights of individuals in the EU.
To be able to benefit from the protection that this legal framework gives, the individual’s data has to remain physically within the EU.  If the data were to be held in another country outside of the EU, then the laws of that country would apply to that data.  For example, the US has a very different approach to the protection of individuals’ data than we do in the EU.

Back to the Cloud

Let’s look at how Amazon AWS and Microsoft Azure - two popular US cloud providers - handle this.
Amazon make the statement that any data hosted in one of their EU regions will remain within that region.  You can read that here.  Okay, not much detail in that, but it sounds fine.
Azure talk a little more about this than Amazon and you can read about that here.  If you are eagle eyed, then you will notice that data in Azure EU regions will generally remain in the EU, but may be shipped to the US for disaster recovery purposes.
Oh dear, that sounds like it breaks the condition that data has to remain physically within the EU.

Can I use Azure then?

Yes, you can.  The reason for this - also stated on the Azure Trust Centre Privacy page – is that Microsoft are certified as complying with the EU/US Safe Harbor Framework.
This is a legal framework between the EU and the US Department of Commerce that essentially provides EU compliant data protection for data that is moved to the US under the Safe Harbor Framework.  The ICO deem that the Safe Harbor Framework provides adequate protection for the rights of the individual whose data may be transferred to the US by Microsoft.  You can read about that here.

That’s simple then – why the doubt?

So, if it’s that easy, why am I writing this article in the first place?  Well, I’ve been looking at Azure for a while now and wanting to use it for some of my applications.  The advice I had received in the past was basically that once the data was in the US, other US laws, such as the Patriot Act, could override the Safe Harbor Framework and remove the legal protections provided by the Safe Harbor Framework. 
If that was the case, then I would need to treat it as if it were being shipped outside of the EU and under the jurisdiction of different data protection laws.  Not something that my customers would want to hear!  Also, the reason why I’d been using Amazon AWS.
I recently spun up a build server on an Azure VM and I absolutely loved the experience of using Azure.  I was thinking what a shame it was that I couldn’t use it more and so I got in touch with the venerable Mr Hanselman, Microsoft developer evangelist extraordinaire, to say “please can we use Azure in Europe?” (I had previously tried other routes, but without getting any answers).
Scott kindly took my question back to the Azure team and then came back with a bunch of challenges for me.  The summary of those being that all I needed was already on the Azure Trust Centre Privacy page.  And, he was quite right too! 
I got onto the phone to the ICO and asked them about it, and they confirmed that this provided “adequate protection for the rights of individuals” and that there are exceptions to data protection law to allow circumstances such as the Courts or the Police to have access as part of their investigations – both in the EU as well as the US – and that I could now just focus on complying with the Data Protection Act requirements. 
Awesome – my future is now on Azure!

A note of caution

It’s worth remembering that the location of the data is just one part of the process of protecting your customers’ data.  You need to make sure that you comply with all other aspects of the relevant data protection laws. 
Generally, the more sensitive the information and the more people about whom you hold that sort of information, the less likely you will be able to host it externally.
In the UK, if you’re not sure about how to go about things, contact the ICO for advice – they’re very helpful people!