Saturday, 15 August 2015

Using PowerShell as the Visual Studio Developer Command Prompt


The Visual Studio Developer Command Prompt is a command window that loads a bunch of useful environment settings when it starts.  This lets you access tools like sn.exe from your command window.  However, it runs in cmd.exe and I prefer using PowerShell 'cos it's proper useful!  This article shows one way of making PowerShell act like the Developer Command Prompt.

Adjust your PowerShell Profile

When PowerShell loads it looks for a profile file under your user profile and loads that if found.  By default, the profile file is:
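For Windows PowerShell that path is (assuming the default documents location):

%UserProfile%\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

PowerShell also exposes this path in the built-in $PROFILE variable, so notepad $PROFILE will open it directly.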
Note that you can access your documents folder path from PowerShell like so:
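For example, using the .NET Environment class:

[Environment]::GetFolderPath('MyDocuments')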
If that file isn't there, create it, add the following code and save it:
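The original listing isn't reproduced here, but a function along the following lines does the job: it runs the Developer Command Prompt batch file in cmd.exe, dumps the resulting environment with set, and copies each variable into the PowerShell session.  The batch file location shown is an assumption for VS 2013 (VS120COMNTOOLS) - adjust it for your installation.

function As-VSPrompt {
    # Location of the Developer Command Prompt batch file (adjust for your VS version)
    $batchFile = Join-Path $env:VS120COMNTOOLS 'VsDevCmd.bat'

    # Run the batch file in cmd.exe, then dump the environment it sets up
    cmd /c "`"$batchFile`" && set" | ForEach-Object {
        if ($_ -match '^(.+?)=(.*)$') {
            # Copy each variable into the current PowerShell session
            Set-Item -Path "Env:$($matches[1])" -Value $matches[2]
        }
    }
}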

Adjust the environment in the last line above according to your VS installation.
Load in PowerShell
Open up a new PowerShell window (existing windows will not have loaded the profile we've just saved).  Type sn and hit Enter to show that you don't have access to sn.exe:
Then type As-VSPrompt and hit Enter.  This will load up the Developer Command Prompt profile.  Type sn and hit Enter again and you'll see sn.exe reporting for duty:

Sunday, 25 January 2015

Installing a Service using Topshelf with OctopusDeploy

I was using OctopusDeploy to install a Rebus service endpoint as a Windows Service from a console application created using Topshelf.

Octopus has built-in support for Windows Services that uses sc.exe under the hood.  I'd used this many times previously without hitch, but for some reason the install seemed to start before the service was fully removed.  As Topshelf can do all the service set up, I decided to try using that instead.

My deployment step in Octopus was configured to use the standalone PowerShell scripts.  Installing using Topshelf is simple - just call the executable using the commandline install --autostart.  Removing a Topshelf service is just as simple - call the executable using the uninstall commandline.
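With a hypothetical service executable name, the two command lines look like this (run from the deployment folder):

MyService.exe install --autostart
MyService.exe uninstall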

The only hard part is finding the executable path for the service so that you can remove the service before re-deploying.  PowerShell doesn't have built-in support for this at time of writing, but you can do that pretty easily using WMI.  The following function is based on David Martin's SO answer and finds the executable path from the service name:
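A sketch of such a function (the function name is my own; it queries the WMI Win32_Service class for the registered command line):

function Get-ServiceExecutablePath($serviceName) {
    # Win32_Service.PathName holds the command line the service was registered with
    $service = Get-WmiObject Win32_Service -Filter "Name = '$serviceName'"
    if ($service) {
        # The path may be wrapped in quotes
        return $service.PathName.Trim('"')
    }
}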

From that point, it's pretty easy to remove the service using Topshelf:
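Assuming the executable path has been resolved into $path (my variable name), the pre-deployment step is just:

if ($path -and (Test-Path $path)) {
    # Let Topshelf remove its own service registration
    & $path uninstall
}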

An alternative approach would be to use a known installation location each time and derive the executable path using Octopus' powerful variables.

The code for the pre-deployment and post-deployment tasks can be found in this gist.
The full Topshelf command line reference is here.

Friday, 31 October 2014

Using IIS URL Rewrite

This article is about creating IIS URL Rewrite Rules using Powershell. The Powershell functions that we use apply generally to reading and writing .NET configuration files (web.config, app.config, etc.), and so can be applied to tasks like updating connection strings.
The functions used are:

Although these are powerful, being well documented is not their strongest feature! There are also a couple of simple pitfalls with the URL Rewrite Rules, which I’ll point out. Note that for the following to work, you’ll need to have the IIS URL Rewrite extension installed in IIS. You can install this using the Web Platform Installer, or using Chocolatey.

Files for this article can be found here

The test website

So we can test our rewrite rules we’ll use a simple website structured as follows:

Each leaf folder has two HTML files, 1.html and 2.html and there’s a page of links at the root:

To help test the rewrite rules, the HTML files indicate where they are in the structure:

Introduction to rewrite rules

The rules can be maintained in the IIS Manager by double clicking URL Rewrite in features view:

There is a rich GUI that lets you maintain the rules and which does validation against your input:

The above rule redirects any URL section matching “1.html” to “2.html”. You’ll see this in the site’s config file:
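That configuration, reconstructed from the equivalent script later in this article, looks something like:

<rewrite>
  <rules>
    <rule name="Rule 1" patternSyntax="ECMAScript" stopProcessing="true">
      <match url="1.html" />
      <action type="Redirect" url="2.html" redirectType="SeeOther" />
    </rule>
  </rules>
</rewrite>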

The rules are stored in the config file. The config file that’s changed depends on whether you apply the rules at the level of the virtual directory, website or the server. If the rules apply at the server level they are stored in the main IIS config file here:
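For reference, that file is:

%windir%\System32\inetsrv\config\applicationHost.config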
After this rule has been applied, recycle the app pool or reset IIS and then when accessing http://localhost/simple-rewrite/1.html you’ll be taken to http://localhost/simple-rewrite/2.html.

Using Powershell

Firstly, make sure you have the WebAdministration module loaded:
Import-Module WebAdministration
To create the above rule using Powershell, run the following script (see CreateRule1.ps1):
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name '.' `
    -value @{name='Rule 1'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/match' `
    -name url -value '1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]/action' `
    -name '.' -value @{ type='Redirect'; url='2.html'; redirectType='SeeOther' }

The pspath parameter is the website path in the format used by the WebAdministration module. The filter parameter is the XPath to select the element we’re interested in. Here it’s under rules/rule and has a name attribute with the value “Rule 1”.

To remove the rule, run the following script (see RemoveRule1.ps1):
Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'

Note that Clear-WebConfiguration removes the rule using the name selector and will raise an error if the rule is not found. If you want to test whether the rule exists first, use Get-WebConfigurationProperty as follows:
$existing = Get-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]' -name *
if ($existing) {
    Clear-WebConfiguration -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
        -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1"]'
}

Rule processing order and redirect exclusions

The rules are processed in the order they appear in IIS Manager and the config file. A rule can be set to stop processing of subsequent rules. These two combine to make an effective way to create an exception to a redirect. Say you wanted to redirect all “1.html” URLs to “2.html” except for “~b/c/1.html”. To achieve this add in the following rule above the more general redirect:

Your configuration will look something like:
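Reconstructed from the scripts used in this article, the pair of rules would be something like this - the specific exception first, with an action of None and stopProcessing set, followed by the general redirect:

<rules>
  <rule name="Rule 1 Exception" patternSyntax="ECMAScript" stopProcessing="true">
    <match url="b/c/1.html" />
    <action type="None" />
  </rule>
  <rule name="Rule 1" patternSyntax="ECMAScript" stopProcessing="true">
    <match url="1.html" />
    <action type="Redirect" url="2.html" redirectType="SeeOther" />
  </rule>
</rules>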

Using rule conditions

Let’s say you want a rule that applies to POSTs to a particular URL but which only contain certain parameters in the query string. Such as a POST to:

Let’s say we want to detect parameter values of X, Y or Z. You can do this using rule conditions. The completed rule looks like this in IIS Manager:

In the configuration, this rule looks like:
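The original listing isn't reproduced here, but matching the script that follows, it would be along these lines:

<rule name="Rule 1 Post Exception" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="b/2.html" />
  <conditions>
    <add input="{REQUEST_METHOD}" pattern="POST" />
    <add input="{QUERY_STRING}" pattern="paramA=(X|Y|Z)" />
  </conditions>
</rule>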

And can be scripted as follows:
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name '.' `
    -value @{name='Rule 1 Post Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/match' `
    -name url -value 'b/2.html'

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name '.' -value @{input='{REQUEST_METHOD}'; pattern='POST'}

Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Post Exception"]/conditions' `
    -name '.' -value @{input='{QUERY_STRING}'; pattern='paramA=(X|Y|Z)'}

Notes and gotchas

Rules are relative to the site root

Once you realize that the redirect URLs are relative to the site root, it’s all very obvious. Gaining that realisation can take some time! So for the example site above, the following rule will do nothing:
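The original example isn't reproduced here, but one classic rule that does nothing is a pattern that repeats the application path - the URL being tested is already relative to the site root, so it never contains that prefix (this reconstruction is mine, not the original listing):

<rule name="Does Nothing" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="simple-rewrite/1.html" />
  <action type="Redirect" url="2.html" redirectType="SeeOther" />
</rule>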

IIS Manager has code completion for conditions

When you add a condition in IIS Manager, you get code completion after typing the first “{“ (such as in the POST exception rule above):

Files get locked and browsers cache pages

When you’re developing, the rules config file is being touched by IIS, IIS Manager, Notepad++, Powershell and goodness knows what else. Your super-modern browser is being super-helpful and caching the HTML pages. Sometimes you just need to recycle the app pool to see the updated rule. Other times, you need to do an IIS reset, clear your browser cache and restart your tool windows.

Default values are not written

When the rule’s action type is None and it stops processing of other rules, the configuration written by the IIS Manager GUI is like this:
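Reconstructed from the script below, the GUI output is something like this - note the explicit action element:

<rule name="Rule 1 Exception" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="b/c/1.html" />
  <action type="None" />
</rule>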

Following the same pattern for creating the rule in a script, you can run something like this (see CreateExceptionRule.ps1):
Add-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules' -name '.' `
    -value @{name='Rule 1 Exception'; patternSyntax='ECMAScript'; stopProcessing='True'}

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/match' `
    -name url -value 'b/c/1.html'

Set-WebConfigurationProperty -pspath 'IIS:\Sites\Default Web Site\simple-rewrite' `
    -filter '/system.webserver/rewrite/rules/rule[@name="Rule 1 Exception"]/action' `
    -name '.' -value @{ type='None' }

The above script writes the following to the configuration file.
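That is, something like this - with no action element at all:

<rule name="Rule 1 Exception" patternSyntax="ECMAScript" stopProcessing="true">
  <match url="b/c/1.html" />
</rule>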

Note that the action element is not written. It seems that the Powershell command knows that <action type="None" /> is the default and doesn't bother to write it, but the IIS Manager GUI writes it anyway. If you didn't know that, you might spend a lot of time trying to work out why your Powershell isn't doing what you think it should!

Saturday, 11 October 2014

Simple UI testing with Nightwatch.js

This week I was introduced to Nightwatch.js by my colleague Matt Davey.  Nightwatch provides a clear and simple syntax for writing your UI tests and uses NodeJS and Selenium to run the tests. 

Having played with Canopy for UI testing, the high readability of the test syntax impressed me from the start.  We used it to nail down a reproduction of a bug we were working on in our test environment.

Test scenario

Our test did something like this:
  • Logs into a site using known credentials.
  • Clicks through to a place in the UI.
  • Changes and saves a value.
  • Asserts that the new value is being displayed in another area of the UI.

The test syntax

The test syntax looks like this:
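The original listing isn't reproduced here, but a Nightwatch test following the scenario above reads something like this (the URL, selectors and credentials are placeholders of mine):

module.exports = {
  'Saved value is reflected elsewhere in the UI': function (browser) {
    browser
      // Log into the site using known credentials
      .url('http://testsite.local/login')
      .setValue('input[name=username]', 'testuser')
      .setValue('input[name=password]', 'secret')
      .click('button[type=submit]')
      // Click through to the right place in the UI
      .waitForElementVisible('#settings-link', 5000)
      .click('#settings-link')
      // Change and save a value
      .clearValue('#display-name')
      .setValue('#display-name', 'New Name')
      .click('#save')
      // Assert that the new value shows up in another area of the UI
      .assert.containsText('#header .user-name', 'New Name')
      .end();
  }
};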

The syntax is clean, easy to read and all in one file. One of the CSS selectors is pretty evil, but you can get that from the browser’s context menu in the inspector (which I learnt this week too!):


Setting up the test runner

Setting up to run the test is very simple to do on any development or test machine. The components you need are:
  • NodeJS
  • NPM
  • Nightwatch
  • A Selenium Server
You can install Java, NodeJS and Nightwatch using Chocolatey by opening a command window and running:

choco install javaruntime
choco install nodejs.install

npm install -g nightwatch

Note that Chocolatey seems to install the Node packages under Chocolatey's folders, and NPM then places the Nightwatch module into the nodejs.commandline folder under \tools.  In there is a batch file called Nightwatch.cmd that we need to access. So, we need to add the following to our path:

C:\ProgramData\chocolatey\lib\nodejs.commandline.X.Y.Z\tools

(where X.Y.Z is the NodeJS version we’ve installed.)

Then we need to start up a Selenium server.  It would be nice to use Chocolatey here too, but the version of Selenium from the package at time of writing is too old.  So, download the latest Selenium driver (JAR) from here (which is 2.43 at time of writing).

Rename the Selenium JAR file to selenium-server-standalone.jar and save it to C:\selenium\. Then start up a Selenium server by opening a new command window (as admin) and typing:

java -jar "C:\selenium\selenium-server-standalone.jar"

That’s the setup done, now let’s run the test.

Running the test

Save your test (similar to the code above) to a file, let’s say repro.js, into a folder.  Create a sub-folder called “examples\custom-commands” (which Nightwatch wants to find, not sure why!).
Then open another command window and change to the folder containing the test file and run the following:

nightwatch --test repro.js

Job done!

Monday, 11 August 2014

Testing TypeScript with Jasmine in Visual Studio

This post is about setting up testing for TypeScript with Jasmine in Visual Studio, but it should be pretty much the same using QUnit.

To set this up, start a new MVC project and let’s get Knockout and Jasmine installed into the project, along with their TypeScript definitions:

Install-Package knockoutjs
Install-Package knockout.typescript.DefinitelyTyped
Install-Package JasmineTest
Install-Package jasmine.typescript.DefinitelyTyped

We’re using the Jasmine Nuget package called JasmineTest.  There is a similar one on Nuget called Jasmine-JS, the difference being that JasmineTest will add a controller and view to let you run your tests.  We’ll be using that in a second to let us debug the tests.  Now run and browse to ~/jasmine/run and you’ll see this:


This is the view added by the JasmineTest package.  It is an HTML page (SpecRunner.cshtml) and has the references for the example Jasmine specs that ship with Jasmine.  We are going to be using it to let us debug our tests.


So, let’s write some tests.  For simplicity we’re going to put the model and the tests into the same file.  Add a TypeScript file called Tests.ts to the /Scripts folder and add the following lines to that file, giving this:
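The original listing isn't reproduced here, but a sketch of what Tests.ts might contain is a trivial Knockout view model plus a Jasmine spec for it (the model and all its names are my invention):

/// <reference path="typings/knockout/knockout.d.ts" />
/// <reference path="typings/jasmine/jasmine.d.ts" />

// A trivial Knockout view model to test
class PersonViewModel {
    firstName = ko.observable("");
    lastName = ko.observable("");
    // Computed property recalculates whenever either name changes
    fullName = ko.computed(() => this.firstName() + " " + this.lastName());
}

describe("PersonViewModel", () => {
    it("combines first and last names", () => {
        var model = new PersonViewModel();
        model.firstName("Sean");
        model.lastName("Smith");
        expect(model.fullName()).toBe("Sean Smith");
    });
});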


Running tests with ReSharper

If you have ReSharper installed, then the R# Test Runner will recognise the Jasmine test and allow you to run them.  To set this up for a smooth experience, first install PhantomJS and set the path to phantomjs.exe in Resharper | Options | Tools | Unit Testing | JavaScript Tests.

R# will need to find the .JS files for the .TS references.  R# will look for and use a file that is named the same as the Definitely Typed TS file, but ending with .JS.  For us, this means copying knockout-3.1.0.js from /Scripts to /Scripts/typings/knockout and renaming it to knockout.d.js:


After that you can just run the tests as you would in C#:


Running tests with Chutzpah

You can also use the Chutzpah test runner to run Jasmine tests in Visual Studio.  You can install that from Tools | Extensions and Updates here:


Once installed, you just right click the TypeScript file that contains the tests and select the item Run JS Tests.  This puts the test results into the Visual Studio Output window by default, but there is a Visual Studio extension that gives integration with the Visual Studio Test Explorer.

If you go ahead and run the tests as shown above you’ll get an error.  This is because Chutzpah resolves referenced JS files differently to R#.  For this to run, we need to add a reference to the JS file in the TS test for Chutzpah to pick up.  This is documented here and looks like this:

/// <chutzpah_reference path="knockout-3.1.0.js" />

Debugging tests in Visual Studio

As far as I can tell, there is currently no support from either R# or Chutzpah for debugging TS tests from either test runner in the IDE.  However, we can do that by going back to our SpecRunner.cshtml that was installed with the JasmineTest Nuget package.

Just add the JS references along with a reference to our test file to the HTML shipped with the package:
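With the stock SpecRunner.cshtml, the additions are just script tags - something like this (the Jasmine references are already in the page; the paths assume the default package layout):

<script src="~/Scripts/knockout-3.1.0.js"></script>
<script src="~/Scripts/Tests.js"></script>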


Note that we are referencing Tests.js instead of the TS file.  Then place a breakpoint in the test in the TS file, run and browse to ~/Jasmine/Run:


Job done!  Source code is here:

Friday, 29 March 2013

Does Windows Azure comply with UK/EU Data Privacy Law?

Update, November 2016

Times have changed and the content of my article from 2013 may no longer be correct due to a recent ruling by the European Court of Justice.  Read more here:


Yes, it does.  The Information Commissioner’s Office in the UK considers that Windows Azure provides adequate protection for the rights of individuals with regard to their data that is held on Windows Azure. 
Given that you have the other relevant measures in place to comply with the Data Protection Act, this means that you can go ahead and use Windows Azure to store your customers’ personal information.  As Microsoft comply with the EU/US Safe Harbor Framework, the data can be considered to remain solely within the EU geographic region.


As developers we often need to store information about individuals, such as their name and email address.  When people share this information with us, they expect that we will treat the information properly and respect their privacy. 
In the UK, the Data Protection Act provides a legal framework to define how companies capture and process information about individuals.  For instance, they may choose to let you send them an email newsletter, but tell you that they do not want to receive them from other companies that you work with.
In the UK, the Information Commissioner’s Office (ICO) is the body responsible for upholding information rights and privacy for individuals.  The ICO can and do fine companies who do not treat individuals’ information properly.  These fines are quite large too!

To the Cloud

As developers we increasingly want to be able to use hosted cloud services to build our web sites and internet enabled applications.  There are a number of choices out there that include large offerings such as Windows Azure and Amazon Web Services, but also more targeted offerings such as MongoHQ.
If we store any information about an individual (in the UK the law relates to living individuals), then any cloud service that we use as developers must ensure that the data about the individuals is protected.

Keep it local

Other member states of the European Union have similar data protection laws and there is a common legal framework to protect the rights of individuals in the EU.
To be able to benefit from the protection that this legal framework gives, the individual’s data has to remain physically within the EU.  If the data were to be held in another country outside of the EU, then the laws of that country would apply to that data.  For example, the US has a very different approach to the protection of individuals’ data than we do in the EU.

Back to the Cloud

Let’s look at how Amazon AWS and Microsoft Azure - two popular US cloud providers - handle this.
Amazon make the statement that any data hosted in one of their EU regions will remain within that region.  You can read that here.  Okay, not much detail in that, but it sounds fine.
Azure talk a little more about this than Amazon and you can read about that here.  If you are eagle eyed, then you will notice that data in Azure EU regions will generally remain in the EU, but may be shipped to the US for disaster recovery purposes.
Oh dear, that sounds like it breaks the condition that data has to remain physically within the EU.

Can I use Azure then?

Yes, you can.  The reason for this - also stated on the Azure Trust Centre Privacy page - is that Microsoft are certified as complying with the EU/US Safe Harbor Framework.
This is a legal framework between the EU and the US Department of Commerce that essentially provides EU compliant data protection for data that is moved to the US under the Safe Harbor Framework.  The ICO deem that the Safe Harbor Framework provides adequate protection for the rights of the individual whose data may be transferred to the US by Microsoft.  You can read about that here.

That’s simple then – why the doubt?

So, if it’s that easy, why am I writing this article in the first place?  Well, I’ve been looking at Azure for a while now and wanting to use it for some of my applications.  The advice I had received in the past was basically that once the data was in the US, other US laws, such as the Patriot Act, could override the Safe Harbor Framework and remove the legal protections provided by the Safe Harbor Framework. 
If that was the case, then I would need to treat it as if it were being shipped outside of the EU and under the jurisdiction of different data protection laws.  Not something that my customers would want to hear!  It was also the reason why I’d been using Amazon AWS.
I recently spun up a build server on an Azure VM and I absolutely loved the experience of using Azure.  I was thinking what a shame it was that I couldn’t use it more and so I got in touch with the venerable Mr Hanselman, Microsoft developer evangelist extraordinaire, to say “please can we use Azure in Europe?” (I had previously tried other routes, but without getting any answers).
Scott kindly took my question back to the Azure team and then came back with a bunch of challenges for me.  The summary of those being that all I needed was already on the Azure Trust Centre Privacy page.  And, he was quite right too! 
I got onto the phone to the ICO and asked them about it, and they confirmed that this provided “adequate protection for the rights of individuals” and that there are exceptions to data protection law to allow circumstances such as the Courts or the Police to have access as part of their investigations – both in the EU as well as the US – and that I could now just focus on complying with the Data Protection Act requirements. 
Awesome – my future is now on Azure!

A note of caution

It’s worth remembering that the location of the data is just one part of the process of protecting your customers’ data.  You need to make sure that you comply with all other aspects of the relevant data protection laws. 
Generally, the more sensitive the information and the more people about whom you hold that sort of information, the less likely it is that you will be able to host it externally.
In the UK, if you’re not sure about how to go about things, contact the ICO for advice – they’re very helpful people!

Wednesday, 23 May 2012

Embedded RavenDB Indexes with Obfuscation

I ran into an issue today with using RavenDB from an obfuscated assembly.  RavenDB started giving some nasty looking errors like:

System.InvalidOperationException: Could not understand query:
-- line 2 col 43: invalid NewExpression

After bashing my head against this for too long, I had run out of ideas and so I posted the problem on the RavenDB group.  One of the really good things about RavenDB is the amazing support from both the Hibernating Rhinos team and the community on this group.  So, pretty soon it was problem solved!

The problem: the obfuscation was renaming the anonymous types that are used when defining indexes from code in RavenDB.

The solution: either put your types into a non-obfuscated assembly or tell the obfuscator to stop renaming anonymous types.  Let’s look at the second option a bit more.

Okay, so how do you detect an anonymous type?  One way is to look at their names.  The compiler gives anonymous types names and puts the text “AnonymousType” into the name. For example:

new { Name = "Sean"}.GetType().Name // Gives: <>f__AnonymousType0`1

Simple enough, but there is a caveat: the naming of anonymous types is an implementation detail and may vary with different compiler implementations.  You cannot rely on this working with different compilers.

So, with that in mind, let’s look at a solution…

Different obfuscators allow you to add obfuscation rules in various ways.  In this repo there is an example of using Eazfuscator with a RavenDB index.  (Note that you will need to install Eazfuscator to be able to run the code.)  All that is needed is to use an assembly scoped ObfuscationAttribute to prevent renaming of any type whose name contains the text “AnonymousType”:

[assembly: Obfuscation(Feature = "Apply to type *AnonymousType*: renaming", Exclude = true, ApplyToMembers = true)]

Bingo, everything works as it should again…happy days!