Friday 15 June 2018

Using Rebus Service Bus from F# with DotNet Core

UPDATE December 2020

There is an improved and updated version of this article here:

This post is about how to use Rebus from F# using DotNet Core.  Code for the below is here.

What is Rebus?

Rebus is a free, open-source service bus written in .NET.  The source code is on GitHub and, whilst it’s absolutely free to use, professional support services are also available if you need them.  It is very similar to other .NET service buses like NServiceBus (commercial) or MassTransit (free).

Why not Functions as a Service?

You can use something like Azure Functions or AWS Lambda and easily knock up a simple message-based architecture. 

Azure Functions gives you a lot of integration points and management tooling out of the box, and you can install the runtime on your own box so you don’t even need to be tied to the cloud.  They’ve even got durability and orchestration now.

It’s all super cool and could well be all you need for most scenarios.  Service buses like Rebus, NServiceBus or MassTransit have a whole lot of additional things that are not available out of the box with FaaS today, such as:

  • Publish/Subscribe (with more than one subscriber!)
  • Support for complex workflows (sagas)
  • Can be used anywhere: web apps, desktop apps, services, the cloud…
  • Groovy stuff like deferred messages

That’s not a full list and I’m not saying FaaS > Service Bus or FaaS < Service Bus.  My reasons for choosing Rebus here are:

  • I’ve been using it for years and trust it.
  • I’m already deploying a .NET Core Web API and want to host my bus in that.

A Service Bus Inside a .NET Core Web API?

Really?  WTF???  I’ve been handling complex, long-running workflows inside an Azure Web App for a couple of years now (read my previous article here).  That implementation uses Rebus with C# on .NET 4.x and has been running like a dream for years.  As it’s in an Azure Web App, you can scale it up or down by flicking a switch.  I love it because it’s so simple!

So, given that I am writing a web API, I thought I’d use Rebus inside that.  My API is written using F#, .NET Core and Giraffe.  I’ve never seen anything about Rebus with F# before, but F# is just .NET and has excellent interoperability with C#.

By the way, I’m using F# because it’s expressive, concise and is a beautiful language for modelling domains.  Read more from the venerable Scott Wlaschin here:

Scott’s book, Domain Modelling Made Functional is also an excellent read.  If you’re coming to F# from C# then I recommend  Get Programming with F# by Isaac Abraham as well as Scott’s website.

Anyway, let’s get started!

Create .NET Core API with Giraffe

Instructions on how to set up a Giraffe app using dotnet new are in Giraffe’s docs here.  The command line I used was: dotnet new giraffe --lang F# -V none.

This will give you a basic, functional .NET Core web API project.  To that, add the following Rebus NuGet packages: Rebus and Rebus.ServiceProvider.

After that, you should be able to run your API and get a response from ~/api/hello.  Here’s the routing configuration at this stage.  Cool, eh?


Configure Rebus

We’re loosely following Rebus’ .NET Core configuration that you can find in their GitHub here.  Instead of using Rebus’ in-memory transport as in the previous link, I’m going to use the file system transport.  This is a great transport to use when hacking on Rebus as you can see your messages as files, and failed messages will be dropped into the “error queue”, which is a folder named “error”.  By default, Rebus serialises messages to JSON with Base64 encoding of the message body, so they’re pretty easy to read. Error messages are added to the message as headers.  Sweet!

Create a Message

Firstly, we need a message that we will send to Rebus.  In Rebus, messages are modelled using .NET classes.  Our message is going to be a very simple command to tell the bus to say hello.  Here it is:

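The original listing was an image; a minimal sketch of roughly what it looks like, assuming the message lives in a Commands module as referenced later:

```fsharp
module Commands =

    /// A command telling the bus to say hello to someone
    [<CLIMutable>]
    type SayHello = { Name : string }
```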

So, this is an F# record type that holds a single string field called Name.  The CLIMutable attribute tells the compiler to generate IL that’s more friendly for the .NET serialisers – read about that on Mark Seemann’s blog here.

Create a Message Handler

Next, we need something to handle those messages.  In Rebus, that’s a class that implements IHandleMessages<MyMessageType>; these are then registered with the bus.  Rebus gives you fine-grained control over how these are handled, but we only need a very simple handler.  Here is what the interface looks like:

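From memory, Rebus’ handler interface is shaped roughly like this (check the Rebus source for the definitive version):

```csharp
public interface IHandleMessages<TMessage>
{
    Task Handle(TMessage message);
}
```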

Here’s what our F# implementation to handle our messages of type Commands.SayHello looks like:

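The screenshot of the handler didn’t survive; a sketch of what it might look like, assuming the RunAsRebusTask conversion helper the article describes:

```fsharp
open Rebus.Handlers

/// Handles Commands.SayHello messages from the bus
type SayHelloHandler() =
    interface IHandleMessages<Commands.SayHello> with
        member __.Handle(message) =
            async {
                // Do the work in an F# async workflow...
                printfn "Hello, %s!" message.Name
            }
            // ...then convert back to the Task that Rebus expects
            |> RunAsRebusTask
```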

This is a class in F# with a default constructor that implements the IHandleMessages<Commands.SayHello> interface.  (A good place to start reading about classes is here.) 

Rebus, being written in C#, uses System.Threading.Tasks.Task for concurrency.  F# has its own support for concurrency using async workflows, and these are not the same thing.  You can, however, convert between the two models.

It’s worth noting that Giraffe works with Task/Task<T> and not F#’s Async. It does this to avoid having to always convert.

If you look at the code above, we are using F#’s Async and then piping that to RunAsRebusTask to convert back to a Task.  Here’s how I’m doing that (using code from Veikko Eeva's comment here):

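That helper was also an image; based on the Async-to-Task conversion the article describes, a sketch would be:

```fsharp
open System.Threading.Tasks

/// Converts an F# Async<unit> into the non-generic Task that Rebus expects
let RunAsRebusTask (computation : Async<unit>) : Task =
    computation |> Async.StartAsTask :> Task
```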

Configure and Start the Bus

Lastly, we need to configure and start Rebus’ bus.  We do that like this:

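The configuration listing is lost; using the Rebus.ServiceProvider package, a sketch might look like the following (the queue name, folder, handler type and exact namespaces are my assumptions, and the configuration API may differ between Rebus versions):

```fsharp
open Microsoft.Extensions.DependencyInjection
open Rebus.Config
open Rebus.Retry.Simple
open Rebus.Routing.TypeBased
open Rebus.ServiceProvider
open Rebus.Transport.FileSystem

let configureRebus (services : IServiceCollection) =
    // Register all Rebus handlers found in this assembly
    services.AutoRegisterHandlersFromAssemblyOf<SayHelloHandler>() |> ignore

    services.AddRebus(fun configure ->
        configure
            // File system transport: messages show up as files in this folder
            .Transport(fun t -> t.UseFileSystem(@"C:\rebus-messages", "giraffe-api"))
            .Routing(fun r ->
                r.TypeBased().MapAssemblyOf<Commands.SayHello>("giraffe-api") |> ignore)
            // Fail after a single delivery attempt while hacking
            .Options(fun o -> o.SimpleRetryStrategy(maxDeliveryAttempts = 1)))
    |> ignore
```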

This is nearly identical to the C# equivalent.  The two notable changes are when we pipe output to ignore, which is needed because only the last expression in a block can return a value in F#, and the use of the different F# lambda syntax fun x -> ...

By default, Rebus will retry handling a message 5 times before it fails.  If you’re hacking on Rebus, you may want only a single exception recorded in your failed message’s headers.  You can configure the number of retries like you see on line 47 (again, a slightly different syntax to what you’d use in C#).


Then just hook that method up in your .NET Core service configuration as follows


Send a Message to the Bus

All good, we’ve got our Rebus bus running and waiting to handle messages.  We just need to send it a message.

In my last article, I used a timer inside the web app to send a message.  This was because I wanted to be able to leave it running for a day or so and come back to see that the bus had stayed running over that time.  This time, I’m going to send a message to the bus as a result of a POST call to the API.

Set the Routing

Change the routing section to add a POST call, like this:

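The routing screenshot is lost, but based on the surrounding text it would be shaped something like this (the exact paths are assumptions):

```fsharp
let webApp =
    choose [
        GET  >=> route "/api/hello" >=> text "Hello world, from Giraffe!"
        // Note: no "fish" (>=>) between routef and its handler
        POST >=> routef "/api/hello/%s" sayHello
    ]
```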

Note that there is no “fish” operator (>=>) between the route and the handler when you use routef.

Create an HTTP Handler

We’re using Giraffe’s routef function to configure a route that passes off to sayHello, whose implementation looks like this:

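The implementation was an image; a sketch of a handler along those lines (the response text is an invention, and line numbers will not match the references below):

```fsharp
open Giraffe
open Microsoft.AspNetCore.Http
open Rebus.Bus

let sayHello (name : string) : HttpHandler =
    fun (next : HttpFunc) (ctx : HttpContext) ->
        task {
            // Resolve the bus registered at startup from the request's services
            let bus = ctx.GetService<IBus>()
            // Send our command to the bus
            do! bus.Send({ Name = name } : Commands.SayHello)
            return! Successful.OK "Message sent - all is well with the world!" next ctx
        }
```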

This function receives the string parameter passed from the routing and returns an HttpHandler.  You can see it uses task { … } starting on line 23, which is Giraffe’s Task implementation mentioned earlier.

Line 25 does the business and sends a new message to the bus.  We then return OK with a message saying the world is good.

On line 24 you can see that it uses an extension method on the ctx parameter (which is of type HttpContext).  This is how you retrieve registered services from inside your HTTP handlers.  If you’re coming from a C# background you may be expecting to see some sort of dependency injection.  Dependencies are more explicit in a functional language – here they come from being passed in as a parameter.  Read more about dependency management in functional programming here.  In an OO codebase, line 24 would be called a Service Locator and lots of people would get very hot and bothered if they saw that.  Don’t worry, it’s fine.  The world isn’t going to end and your codebase will stay nice and maintainable!

All Done – Run it!

Run the project, and then post to our API and you get back a 200 OK with a nice message:


If you look in the console output from the app, you’ll see that our message has been handled by our message handler, which has printed out a nice message for us:


And there you have it – an F# implementation of a Rebus service bus running inside a Giraffe .NET Core Web API app!

Closing Thoughts

You can easily adapt the code above to run in other hosts, such as a console app, a service or a desktop app. 

Remember that you’re not limited to using the file system to transport your messages.  Rebus provides a whole load of transports that you can use, ranging from RabbitMQ and Azure Storage Queues to databases like RavenDB, Postgres and SQL Server.  You can see a list of what’s supported in .NET Core on this thread.

If you’re going to host in an Azure Web App, don’t forget to set the app to be “always-on”, as shown at the end of my previous article.  You can also read a little about how to scale your bus in there too.

Code for the above is here.

Wednesday 23 August 2017

Creating a new Fable React/Redux project

React and Redux

React is Facebook’s JavaScript library for building user interfaces.  It’s a joy to develop with and has a “learn once, write anywhere” philosophy that allows you to use your React skills to build not just JS based UI, but also Node-based server-side apps and mobile apps using React Native.

Redux is a predictable state container for JavaScript apps that is commonly used with React apps.  It brings a whole pile more awesome to the table with things like time travelling debugging.  There’s a bunch of free tutorial videos linked from the project’s home page.

All in all, the developer experience for using React is very joyful.  The only downside is that setting things up can be difficult.  I think this comment applies to a lot of the JavaScript world at the moment, but this is recognised and there is a lot of effort to make getting started easier.  For example, check out the create-react-app project.

I’ve got to say thanks here to @nicklanng for introducing me to the joys of React and Redux – your lunchtime lightning talk changed my dev-life, mate! :)


The Fable Compiler compiles F# into JavaScript.  Its tag line is:

The compiler that emits JavaScript you can be proud of!

Or, in the words of Tomas Petricek,

JavaScript you can take home to meet your mother.

You’ve gotta love that description!

So, basically you can write beautiful, type safe, immutable F# code and Fable will translate it into JavaScript for you.  That’s another level of awesome and I encourage you to take a look at the amazing samples to see what can be done with Fable.  You can even set a breakpoint in Chrome and you’ll see the F# code in the Chrome dev tools!  Oh my…

Whilst things like TypeScript and Facebook Flow are very good to help you write complex JavaScript apps, having the F# compiler behind you takes that support to a whole new level.

Why not Elm?  Elm also brings the benefits of a functional, type-checked language to web development, and it’s certainly an awesome language.  For me, the choice of Fable comes down to the fact that it can keep growing my F# skills whilst doing web development (the used key is the brightest, right?), and I can also leverage the wonderful “learn once, write anywhere” approach of React to write native mobile apps.  (Sure, there’s some talk about using Elm with React Native if that floats your boat.)

You can even stop at breakpoints in Chrome in your F# code - tell me that’s not just totally amazing!



But setting up a Fable React/Redux project isn’t the easiest thing to do, particularly when you’re starting out.  So, if you’d like to take Fable with React and Redux out for a spin, but didn’t know where to get started, read on.

Getting Started

Firstly, head over to Fable’s page about getting set up and install all the shizzle you’ll need (.Net Core, Yarn, etc.).  You can find that here: 

Then follow the steps below to create your first Fable React/Redux project and start hacking:

1 – Create a Project

We’re going to create a project called “fableredux”.  If you want, change that to something else, but make sure you use that name throughout the rest of the instructions.  (You’ll be working mainly in your console.)

dotnet new fable -n fableredux
cd fableredux

2 - React and Redux

Install the NPM modules we’ll need:

yarn add react react-dom fable-core fable-react redux
yarn install (might not be needed)

3 - Configure webpack

Locate the file webpack.config.js.  We’re going to add the externals section to tell webpack about the external dependencies.  If the section is already present, just add the entries for react and redux, as below:

    devServer: {
        contentBase: resolve('./public'),
        port: 8080
    },

    // This section needs React and Redux
    externals: {
        "react": "React",
        "redux": "Redux"
    },

    module: {

4 - Update References

The Fable sample app loads the React and Redux JS dependencies from disk in its index.html file.  We’ll stick with that approach.

Copy index.html and the supporting files in the \lib folder from the Fable Redux sample and replace the files in the project’s \public folder.  You can get those from here:

5 - Add Fable.React

Use Paket to add the Fable.React Nuget package to the project (assumes Paket is on the path).

paket add fable.react

Also, make sure there is an entry for Fable.React in src\paket.references.  Add it if it doesn’t already exist.

6 – Add Some Fable Code

Let’s get the Fable F# code in now.  Replace the contents of App.fs with the contents of the App.fs from the Redux-ToDoMVC from the Fable samples.  You can get that here:

Make sure to change the line that says:

module ReduxTodoMVC

To the following (or whatever you named your project at the start):

module fableredux

7 - Restore and Build

cd src
dotnet restore
dotnet build

You should see a message that the build has succeeded.

8 - Run it!

Now you’re ready to run the project (sometimes I needed to close and re-open my console here as it complained about not finding an executable matching dotnet-fable.  YMMV!):

dotnet fable yarn-start

Then open http://localhost:8080 and enjoy!

Getting Help

The Fable community is awesome so drop by their Gitter room (and thanks to @rkosafo for assistance getting me over some bumps along the way!):

Sunday 19 June 2016

Running Rebus Service Bus in an Azure Web App

What’s all this?

Rebus is a lean service bus running in .NET that is also free and open source. It’s created and is maintained by Mogens Heller Grabe, and it even has stickers! It’s a whole pile of awesome and I warn you that it may change the way you build software forever. You can find more delights in the Rebus wiki. Mogens also wrote an article about using Rebus in a web application, which you can find here:

Rebus is similar to other .NET service buses, like NServiceBus, MassTransit and EasyNetQ.  You can also host these buses in Azure Web Apps (note that NServiceBus is commercial, and EasyNetQ only supports RabbitMQ for message transport and does not support long-running processes (sagas)).

Azure Web Apps are a fully managed service in Azure that lets you easily deploy, run and scale your apps, APIs and websites. They are beautifully simple things to work with and to manage. They really do just work!

Rebus is typically run as a Windows service on a server or VM. Can I really run a server-side service in an Azure Web App, I hear you ask? Yes you can, and that’s just what we’re going to do!

Is there code?

Yes there is! You can get it from here:

Choose your toppings

A quick read through the Rebus docs shows that it supports a whole bunch of message transports, types and storage mechanisms and it has support for most of the current IoC containers. I’m going to use Autofac as our DI container and Azure Service Bus as our message transport medium. I’ll also be using Visual Studio 2015 with Web API 2 (because Rebus doesn’t support ASP.Net Core at time of writing).

Why Autofac?

You don’t need a DI container when working with Rebus as it has one baked right in. I always use Autofac (because it rocks) and I’m going to use it here to show how little configuration you need once you’re set up.

Why Azure Service Bus?

Like other message queues, Azure Service Bus has built-in pub/sub mechanisms and can do lots of wonderful things for you at huge scale. We’re just going to be using it as a message queue and letting Rebus deal with all the message handling, subscriptions, orchestration and all that malarkey.

Why Azure Web Apps?

These beasties are super easy to develop, debug, deploy, manage and scale. You can scale your service to huge capacity using Azure, or keep it pared down to a minimum (free!). The Azure infrastructure takes care of all the maintenance patching for you and will make sure your service is restarted if it stops. In my view, this is probably one of the best developer experiences around!

Guess this has to be SBaaS: Service Bus as a Service!

Create the Web API project

Start by creating a new ASP.NET Web API project:


Select the Empty project template and add references for Web API:


You should end up with something like this:


Add Rebus and Autofac packages

Install the packages by opening the Nuget Package Manager Console and running this:

install-package Rebus.Autofac

install-package Rebus.AzureServiceBus

This will bring in both Rebus and Autofac along with the Rebus package for interop with Autofac and Azure Service Bus. Then run the following command to add the package that allows Autofac to work with ASP.Net Web API 2:

install-package Autofac.WebApi2

Configure the Autofac container

I’m going to use Autofac to start up the Rebus bus when the app starts. Start by creating the following configuration class for Autofac:


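That class was shown as an image; a sketch of the shape it likely had (class and method names are my guesses, so the line numbers mentioned below won’t match):

```csharp
public static class AutofacConfig
{
    public static void Register()
    {
        var builder = new ContainerBuilder();

        // Scan this assembly and register any Autofac modules it finds
        builder.RegisterAssemblyModules(typeof(AutofacConfig).Assembly);

        var container = builder.Build();

        // Pass the container into the Rebus configuration
        RebusConfig.Start(container);

        // Register the container with ASP.NET Web API (only needed if you
        // inject dependencies into controllers)
        GlobalConfiguration.Configuration.DependencyResolver =
            new AutofacWebApiDependencyResolver(container);
    }
}
```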
This sets up the Autofac container and then registers it with ASP.Net Web API on line 21. You know, if you’re just going to use Rebus and you’re not going to inject dependencies into any controllers, then you won’t need that line.

Take a look at line 15. This tells Autofac to scan the assembly and register all Autofac modules it finds. I’m using that as I’m going to be putting the rest of my container configuration into a module which we’ll see later.

Line 19 is where we pass the Autofac container into our Rebus configuration, which we’ll see later. The last thing is to call the configuration when the app starts up:


That’s all the Autofac configuration needed. Let’s configure Rebus next.

Configure the Rebus bus

The following class is responsible for the bus configuration:

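That class itself was an image; a hedged sketch of what it plausibly contained (the queue name, connection string key and timer registration name are assumptions, and the Rebus.Autofac / Rebus.AzureServiceBus APIs may have changed since):

```csharp
public static class RebusConfig
{
    public static void Start(IContainer container)
    {
        Configure.With(new AutofacContainerAdapter(container))
            .Transport(t => t.UseAzureServiceBus(
                ConfigurationManager.ConnectionStrings["AzureServiceBus"].ConnectionString,
                "heartbeat"))
            // Swap in the file system transport to run locally:
            // .Transport(t => t.UseFileSystem(@"C:\rebus-messages", "heartbeat"))
            .Routing(r => r.TypeBased().MapAssemblyOf<HeartbeatMessage>("heartbeat"))
            .Start();

        // Resolve the named heartbeat timer from the container and start it
        var timer = container.ResolveNamed<Timer>("heartbeatTimer");
        timer.Start();
    }
}
```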

The container parameter on line 12 is an Autofac container passed in from the Autofac configuration earlier. On line 14 we pass that container into the Rebus Autofac adapter which will make Rebus use our Autofac container and will also register Rebus’ types into our container so we can use them later. Read more in the Rebus docs here.

You’ll need to add a connection string for your Azure Service Bus. Alternatively, to run locally, you can remove the Azure Service Bus transport lines and uncomment the file system transport line (line 21) to use your file system as the message transport.

Line 22 tells Rebus where to send messages that you give it. Read more about that in the docs here.

Lines 25 and 26 start a timer to give us a simple heartbeat so we can see that the bus is running when it’s in Azure.

Registering handlers and getting the heart beating

Our heartbeat is going to work by having a simple timer running that publishes a message to the bus every few seconds. Then we have a handler that receives that message and updates a simple counter to increase the heart beat count. This is going to show us the magic of our bus running in the Azure Web App.

So, the next part is to set up the timer and also register any and all Rebus message handlers into our Autofac container.


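The module was an image too; a sketch of what it plausibly contained (the timer registration name, timer type and interval are assumptions):

```csharp
public class BusModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register all Rebus message handlers found in this assembly
        builder.RegisterAssemblyTypes(ThisAssembly)
               .AsClosedTypesOf(typeof(IHandleMessages<>));

        // A named timer that sends a heartbeat message every few seconds
        builder.Register(context =>
               {
                   var bus = context.Resolve<IBus>();
                   var timer = new Timer(5000);
                   timer.Elapsed += (sender, args) => bus.Send(new HeartbeatMessage()).Wait();
                   return timer;
               })
               .Named<Timer>("heartbeatTimer")
               .SingleInstance();
    }
}
```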
Lines 15 to 17 register all the Rebus message handlers in this assembly by using Autofac’s assembly scanning. A message handler is a class that receives a message from the bus and does things with it.

Rebus has a lot of rich support for controlling how and when this happens – see the docs for more.

Starting on line 20 we add a simple timer that sends a message to the bus every few seconds. I’m using Autofac’s named registration so that I know which timer I’m getting back from the container when I start it up.

The message

In Rebus, messages are just simple POCO types. I am using this one for our heartbeat:

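The message screenshot is lost; something as simple as this would do (the timestamp property is an optional extra I’ve assumed):

```csharp
public class HeartbeatMessage
{
    // No payload is strictly needed - receiving the message at all is the signal
    public DateTime SentAtUtc { get; set; } = DateTime.UtcNow;
}
```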

The handler

The Rebus message handler is the class that receives the message. Again, it’s just a very simple class which looks like this:

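A sketch of the handler, assuming the heart exposes a Beat method (the original listing was an image):

```csharp
public class HeartbeatMessageHandler : IHandleMessages<HeartbeatMessage>
{
    public Task Handle(HeartbeatMessage message)
    {
        // Tell the heart to beat
        Heart.Beat();
        return Task.FromResult(0);
    }
}
```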

All it does is to tell the heart to beat! It’s worth noting in case you are new to Rebus – you can have more than one handler for this message type and Rebus will take care of getting the message to all your handlers.

The heart

The heart is just a static class that keeps a count of how many times it has beaten and tells us when it was last refreshed.

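A sketch of that class (member names are my guesses):

```csharp
public static class Heart
{
    public static int BeatCount { get; private set; }
    public static DateTime LastBeatUtc { get; private set; }

    public static void Beat()
    {
        BeatCount++;
        LastBeatUtc = DateTime.UtcNow;
    }
}
```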

The controller

Just so that we can see the heartbeat, we’re going to add a simple Web API controller to allow access to the internal state.

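A sketch of such a controller, assuming attribute routing is enabled and a Heart class exposing a count and last-beat time:

```csharp
public class HeartbeatController : ApiController
{
    [HttpGet]
    [Route("heartbeat")]
    public IHttpActionResult Get()
    {
        return Ok(new { Beats = Heart.BeatCount, LastBeatUtc = Heart.LastBeatUtc });
    }
}
```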

Bring it to life

Run the project in Visual Studio and it will open a browser. Navigate to /heartbeat and you’ll see something like this:


Refresh the browser and you’ll see the number of heart beats increases every 5 seconds, which means our Rebus bus is sending and receiving messages. Woohoo – happy days!

An easy way to see the message content (which is JSON serialised by default) is to swap the Rebus transport to use the file transport in your RebusConfig class:


Getting an error?

You might get an error about not being able to resolve the AzureServiceBusTransport. That will be because you haven’t set up the connection string in RebusConfig.


What’s all this doing?

Let’s review how all this is working.

  • We have an ASP.NET Web API app that configures an Autofac container when it starts up.
  • The Autofac container scans the assembly for Autofac modules and it finds a module that does two things.
  • The Autofac module it finds scans the current assembly and then registers all the Rebus handlers that it finds in that assembly.
  • The Autofac module then registers a timer that is configured in the module to send heart beat messages to the bus every few seconds.
  • The Autofac container is then built and passed into our Rebus configuration.
  • The Rebus configuration sets up a Rebus bus that uses Azure Service Bus as its message transport.
  • The Rebus configuration then starts the Rebus bus and makes it available from inside the Autofac container.
  • The Rebus configuration then resolves our timer from the Autofac container and starts it.
  • It’s worth noting here that when the timer starts inside the Autofac container, it resolves the Rebus bus directly from the Autofac container.
  • The timer sends a message to the Rebus bus, which puts it into the Azure Service Bus queue.
  • The Rebus bus is monitoring the Azure Service Bus queue and it detects the message, opens it and passes it to the handler that we’ve built to handle those messages.
  • The message handler then tells the heart to beat.

Deploy to Azure Web Apps

The last part is to publish all this goodness to an Azure Web App. There are plenty of ways to publish an Azure Web App, including publishing from Git. I’m just going to use Visual Studio for that.

Right click the solution and choose Publish:


You might need to sign into your Microsoft Account at this point. Select the Azure App Service target:


Select an App Service, or click New to create one:


Enter the details and click Create:


Then click Publish in the next screen to publish to Azure:


You’ll see something like this (with links for how to set up Git deploy if you’re interested!):


Navigate to /heartbeat to see Rebus running in an Azure Web App:


Getting an error?

Did you set the Azure Service Bus connection string?

Scaling out and keeping it running

If your use case is to have a service bus running in the cloud, then you’ll most likely want it to stay on. Azure will unload a web app if it’s idle for a period of time, but you can configure it to keep the app on at all times.


Note that Always On is not available in the free Azure plan; you need Basic or Standard mode for this. You can read more about that here:

As for scaling, it’s super easy to scale to giddy heights. Read more here:


Saturday 15 August 2015

Using PowerShell as the Visual Studio Developer Command Prompt


The Visual Studio Developer Command Prompt is a command window that loads up a bunch of useful environmental enhancements when it starts.  This lets you access things like sn.exe from your command window.  However, it loads up in cmd.exe and I prefer using PowerShell 'cos it's proper useful!  This article shows one way of making PowerShell act like the Developer Command Prompt.

Adjust your PowerShell Profile

When PowerShell loads it looks for a profile file under your user profile and loads that if found.  By default, the profile file is Microsoft.PowerShell_profile.ps1 in the WindowsPowerShell folder under your documents folder.
Note that you can access your documents folder path from PowerShell like so:
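For example, using the standard .NET call:

```powershell
[Environment]::GetFolderPath('MyDocuments')
```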
If that file isn't there, create it, add the following code and save it:

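The original snippet is lost; the usual trick is to run the Developer Command Prompt batch file in cmd.exe and copy the environment variables it sets into the PowerShell session.  A sketch along those lines (the function names and the VS 2015 path are assumptions):

```powershell
function Invoke-CmdScript($script) {
    # Run the batch file, then dump the resulting environment with 'set'
    # and copy each variable into this PowerShell session
    cmd /c "`"$script`" > nul 2>&1 && set" | ForEach-Object {
        if ($_ -match '^([^=]+)=(.*)$') {
            Set-Item -Path "Env:$($matches[1])" -Value $matches[2]
        }
    }
}

function As-VSPrompt {
    Invoke-CmdScript "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\VsDevCmd.bat"
}
```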
Adjust the environment in the last line above according to your VS installation.
Load in PowerShell

Open up a new PowerShell window (existing windows will not have loaded the profile we've just saved).  Type sn and hit Enter to show that you don't have access to sn.exe:

Then type As-VSPrompt and hit Enter.  This will load up the Developer Command Prompt profile.  Type sn and hit Enter again and you'll see sn.exe reporting for duty:

Sunday 25 January 2015

Installing a Service using Topshelf with OctopusDeploy

I was using OctopusDeploy to install a Rebus service endpoint as a Windows Service from a console application created using Topshelf.

Octopus has built-in support for Windows Services that uses sc.exe under the hood.  I'd used this many times previously without a hitch, but for some reason the install seemed to start before the service was fully removed.  As Topshelf can do all the service setup, I decided to try using that instead.

My deployment step in Octopus was configured to use the standalone PowerShell scripts.  Installing using Topshelf is simple - just call the executable using the commandline install --autostart.  Removing a Topshelf service is just as simple - call the executable using the uninstall commandline.

The only hard part is finding the executable path for the service so that you can remove the service before re-deploying.  PowerShell doesn't have built-in support for this at time of writing, but you can do that pretty easily using WMI.  The following function is based on David Martin's SO answer and finds the executable path from the service name:

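The function itself was lost in extraction; a sketch of the WMI approach (the function name is my own):

```powershell
function Get-ServiceExecutablePath($serviceName) {
    $service = Get-WmiObject -Class Win32_Service -Filter "Name = '$serviceName'"
    if ($service) {
        # PathName may be wrapped in quotes; trim them to get the bare path
        $service.PathName.Trim('"')
    }
}
```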
From that point, it's pretty easy to remove the service using Topshelf:

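Something like this, assuming a helper named Get-ServiceExecutablePath as sketched from the article's description, and a hypothetical service name:

```powershell
$exePath = Get-ServiceExecutablePath 'MyRebusEndpoint'
if ($exePath) {
    # Let Topshelf take the service down cleanly before redeploying
    & $exePath uninstall
}
```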
An alternative approach would be to use a known installation location each time and derive the executable path using Octopus' powerful variables.

The code for the pre-deployment and  post-deployment tasks can be found in this gist.
The full Topshelf command line reference is here.

Friday 31 October 2014

Using IIS URL Rewrite

This article is about creating IIS URL Rewrite rules using PowerShell. The PowerShell functions that we use apply generally to reading and writing .NET configuration files (web.config, app.config, etc.), and so can be applied to tasks like updating connection strings.
The functions used are Add-WebConfigurationProperty, Set-WebConfigurationProperty, Get-WebConfigurationProperty and Clear-WebConfiguration.

Although these are powerful, being well documented is not their strongest feature! There are also a couple of simple pitfalls with the URL Rewrite rules, which I’ll point out. Note that for the following to work, you’ll need to have the IIS URL Rewrite extension installed in IIS. You can install this using the Web Platform Installer, or using Chocolatey.

Files for this article can be found here

The test website

So we can test our rewrite rules we’ll use a simple website structured as follows:

Each leaf folder has two HTML files, 1.html and 2.html and there’s a page of links at the root:

To help test the rewrite rules, the HTML files indicate where they are in the structure:

Introduction to rewrite rules

The rules can be maintained in the IIS Manager by double clicking URL Rewrite in features view:

There is a rich GUI that lets you maintain the rules and which does validation against your input:

The above rule redirects any URL section matching “1.html” to “2.html”. You’ll see this in the site’s config file:

The rules are stored in the config file. The config file that’s changed depends on whether you apply the rules at the level of the virtual directory, website or the server. If the rules apply at the server level they are stored in the main IIS config file, %windir%\System32\inetsrv\config\applicationHost.config.

After this rule has been applied, recycle the app pool or reset IIS and then when accessing http://localhost/simple-rewrite/1.html you’ll be taken to http://localhost/simple-rewrite/2.html.

Using Powershell

Firstly, make sure you have the WebAdministration module loaded:
Import-Module WebAdministration
To create the above rule using PowerShell, run the following script (see CreateRule1.ps1):

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/match' `
    -name url -value '1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/action' `
    -name . -value @{ type="Redirect"; url='2.html'; redirectType="SeeOther" }

The pspath parameter is the website path in the format used by the WebAdministration module. The filter parameter is the XPath to select the element we’re interested in. Here it’s under rules/rule and has a name attribute with the value “Rule 1”.

To remove the rule, run the following script (see RemoveRule1.ps1):
Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'

Note that Clear-WebConfiguration removes the rule using the name selector and will raise an error if the rule is not found. If you want to test whether the rule exists first, guard the call with Get-WebConfigurationProperty as follows:

$existing = Get-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]' -name *
if ($existing) {
    Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
        -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'
}
Rule processing order and redirect exclusions

The rules are processed in the order they appear in IIS Manager and the config file. A rule can be set to stop processing of subsequent rules. These two combine to make an effective way to create an exception to a redirect. Say you wanted to redirect all “1.html” URLs to “2.html” except for “~b/c/1.html”. To achieve this add in the following rule above the more general redirect:

Your configuration will look something like:

Using rule conditions

Let’s say you want a rule that applies to POSTs to a particular URL but which only contain certain parameters in the query string. Such as a POST to:

Let’s say we want to detect parameter values of X, Y or Z. You can do this using rule conditions. The completed rule looks like this in IIS Manager:

In the configuration, this rule looks like:

And can be scripted as follows:
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1 Post Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/match' `
    -name url -value "b/2.html"

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name "." -value @{input="{REQUEST_METHOD}"; pattern='POST'}

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name "." -value @{input="{QUERY_STRING}"; pattern='paramA=(X|Y|Z)'}

Notes and gotchas

Rules are relative to the site root

Once you realise that the redirect URLs are relative to the site root, it’s all very obvious. Gaining that realisation can take some time! So for the example site above, the following rule will do nothing:

IIS Manager has code completion for conditions

When you add a condition in IIS Manager, you get code completion after typing the first “{“ (such as in the POST exception rule above):

Files get locked and browsers cache pages

When you’re developing the rules, the config file is being touched by IIS, IIS Manager, Notepad++, PowerShell and goodness knows what else. Your super-modern browser is being super-modern helpful and caching the HTML pages. Sometimes you just need to recycle the app pool to see the updated rule. Other times, you need to do an IIS reset, clear your browser cache and restart your tool windows.

Default values are not written

When the rule’s action type is None and it stops processing of other rules, the configuration written by the IIS Manager GUI is like this:

Following the same pattern for creating the rule in a script, you can run something like this (see CreateExceptionRule.ps1):
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name "." `
    -value @{name='Rule 1 Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/match' `
    -name url -value 'b/c/1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/action' `
    -name . -value @{ type="None" }

The above script writes the following to the configuration file.

Note that the action element is not written. It seems that the PowerShell command knows that <action type="None"> is the default and doesn’t bother to write it, but the IIS Manager GUI writes it anyway. If you didn’t know that, you might spend a lot of time trying to work out why your PowerShell isn’t doing what you think it should!