Simple catch-all AWS budgets

We got caught out recently by unexpectedly high usage of AWS CloudWatch, and realised we’d been spending $1000/month more than expected. After tracking down the cause (one of the team had turned on detailed instance monitoring), I wanted to make sure we had a bit more of a heads-up next time. We had budgets set for all the major services, but not the ‘small’/insignificant ones.

The really simple (and in hindsight, obvious!) solution was just to create a ‘catch-all’ budget covering all the services we didn’t have a separate budget for, even if we’re not currently using them.

I’d probably move this to Terraform next, but this works well for us so far.

BeyondCorp proxy possibilities on AWS, Google Cloud, Azure

It appears there’s now another tool in the arsenal for those looking at implementing a BeyondCorp-style security model, with the arrival of OIDC authentication support in AWS’s Application Load Balancer. It adds to a growing list of possibilities, at least for HTTP-based services. Who needs a VPN anyway?

The options I’m aware of now include:

  • Bitly’s OAuth2 proxy – a simple open-source reverse proxy with OAuth support, written in Go
  • Amazon Application Load Balancer – lets you offload authentication to a separate IdP, and then passes claims via HTTP headers to the proxied application.
  • Google Identity-Aware Proxy – though this only works if the services you are securing live within Google Cloud
  • Azure AD Application Proxy – Microsoft’s answer to the zero-trust model, with a lightweight connector that sits within your internal network and makes outbound connections to the proxy, rather than requiring inbound connectivity.
  • Cloudflare Access – a hosted reverse proxy with support for major identity providers like Azure AD and Okta
  • ScaleFT – a commercial zero-trust platform for securing HTTP-based web access and SSH-based server access, with a high entry cost (starts at $500/month)
  • Pritunl Zero – a freemium SaaS service offering HTTP- and SSH-based proxying.
  • Duo Beyond – Duo Security’s take on the zero-trust model

Any others I’m missing? I’d love to hear about folks’ experiences with these.

Licensing SQL Server in AWS? It’s up to twice as expensive as Azure or Rackspace Cloud.

… and regardless of cloud provider, it’s (probably) costing you 2x what it would on dedicated kit. So AWS could be costing you 4x what it would cost to license on dedicated hardware.

Disclaimer: I am certainly not a SQL Server licensing expert, nor that much of a cloud expert. The purpose of this post is to hopefully prove that I am, in fact, wrong. Please help with this!

Anyone that’s ever had to deal with SQL Server licensing (or indeed any kind of Microsoft licensing) knows what a minefield it is. In the public cloud, all your worries go away (ahem) and you can just wrap the license fee into the monthly cost via the hosting provider’s Service Provider License Agreement (SPLA) with Microsoft.

Looking into it further, however, I realised there are actually big discrepancies between how much the different cloud providers charge you for licensing SQL Server itself:

  • Microsoft Azure: $75–80/core/month
  • Rackspace Cloud: around $100/core/month
  • AWS: anywhere from $136 to $236/core/month

In other words, licensing SQL Server on AWS could cost you 2x what it would on Azure or Rackspace Cloud, and 4x what it would on dedicated hosting (see Cloud vs Dedicated below).

At the moment, I have no good explanation for this. I’m still waiting for a response from AWS, and I’m hoping that I’m missing something here.

To head off the obvious — I realise there are databases with more cloud-friendly licensing. Unfortunately real life gets in the way, so we’re stuck with SQL Server until we can migrate away from it.

AWS, Rackspace Cloud & Azure compared

If you’ve got this far, I’m assuming you want some detail, so here goes.

Assumptions

  • I am ignoring platform-as-a-service offerings entirely (so SQL Azure and RDS are out)
  • I am only looking at SQL Server Standard
  • I am only considering SQL licensing costs, not the cost of the underlying hardware. For cloud providers that don’t explicitly state a licensing cost (only Azure does this), I’m going to take the cost of a Windows VM with SQL Server on it and subtract the cost of a Windows VM.
  • SQL Server is licensed per core, so I’m going to group these calculations by core
  • I’m just looking at monthly pricing with no commitment, as that’s how we’re currently paying for our dedicated kit too.
  • Rackspace has different UK pricing in GBP, but I’ve spot checked the numbers and there was no material difference from their USD pricing, so we’re just going with that.
  • I am looking at prices for Europe-based data centres, in USD. I haven’t checked, but I don’t expect this to vary significantly from US pricing
  • I am assuming 730 hours in a month

Data sources

I have taken the prices from these locations: http://www.rackspace.com/cloud/public-pricing, http://azure.microsoft.com/en-gb/pricing/details/virtual-machines/#Windows, http://azure.microsoft.com/en-gb/pricing/details/virtual-machines/#Sql, and http://aws.amazon.com/ec2/pricing/ (as of 22 Jan 2015).

Medium isn’t the best place for displaying tabular data, so I’ve included an image below, and you can download the spreadsheet here.

Azure is the cheapest, working out at around $75–80 per core (regardless of VM, as they list their SQL prices separately). I’m assuming this is essentially cost price, as I’ve been told elsewhere that a 4-core license via the SPLA scheme is generally $300/month. Not surprising, given it’s Microsoft licensing their own product.

Rackspace Cloud is more expensive, but very consistent, working out at around $100 per core (regardless of VM), so a 25% markup. As a point of comparison, on our dedicated kit we pay £420/month for a 6-core license, which works out at £70 (roughly $106 USD) per core, so this is consistent.

AWS is the big surprise here, with prices ranging from $136 per core at its cheapest to $236 (m series, followed by r series, followed by c series). That’s a whopping 70%-195% markup.
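
To make the derivation concrete, here’s the shape of the calculation described in the assumptions above, as a small F# sketch. The hourly prices below are placeholders for illustration, not the actual 2015 rates from any provider:

// Illustrative sketch only: the hourly prices are placeholders,
// not real prices from any of the providers compared above.
let hoursPerMonth = 730.0

let sqlWindowsVmHourly = 1.36   // $/hour for a Windows VM with SQL Server Standard (hypothetical)
let windowsVmHourly    = 0.80   // $/hour for the same Windows VM without SQL Server (hypothetical)
let cores              = 4.0

// Implied SQL Server licensing cost, per core, per month
let sqlPerCorePerMonth =
    (sqlWindowsVmHourly - windowsVmHourly) * hoursPerMonth / cores

printfn "Implied SQL licensing cost: $%.0f/core/month" sqlPerCorePerMonth
// With these placeholder numbers: (1.36 - 0.80) * 730 / 4 = ~$102/core/month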

Cloud vs Dedicated

I mentioned earlier that there’s another price differential when moving to the cloud. Regardless of which cloud provider you’re using, SQL Server will (probably) cost you 2x what you’re paying for dedicated kit. That’s because SQL Server licensing ignores hyper-threaded cores (PDF) on dedicated kit, but not on virtualized kit, assuming you are only licensing the VM and not the host itself. So if you’re running in a virtualized environment with hyper-threading turned on (Azure, AWS and Rackspace Cloud, for instance), you have two virtual cores for each real processing core, and so you’ll need to pay for twice the number of cores to get (vaguely) equivalent performance. For example, a dedicated box with 6 physical cores needs a 6-core license even with hyper-threading enabled, but a VM exposing those same 12 hyper-threaded cores as 12 vCPUs needs a 12-core license.

Wrapping up

Like I said at the start, I’m no licensing expert, but given the already painful SQL licensing costs, seeing the huge differences we’d be paying in AWS vs other cloud providers is a hard pill to swallow. Please let me know if you spot any mistakes in my calculations!

It would certainly push us away from SQL Server even faster than we were planning, but in the meantime I’d be interested to hear any insights as to why there are such big variations, and such a huge markup at AWS!

Integrating NDepend metrics into your Build using F# Make & TeamCity

NDepend is a static analysis tool that gives you all kinds of code quality metrics, plus tools to drill down into dependencies, and to query and enforce rules over your code base.

There’s a version that integrates with Visual Studio, but there’s also a version that runs on the console to generate static reports, and enforce any code rules you might have written.

I wanted to see how easy it would be to combine all of this and use NDepend to generate actionable reports and metrics on our code base – not just now, but how it shifts over time.

To do this, you need to:

  1. Run your unit tests via a code coverage tool, such as dotCover. This has a command line version bundled with TeamCity which you are free to use directly.
  2. Run NDepend with your code coverage files and NDepend project file
  3. Store NDepend metrics from build-to-build so it can track trends over time

I’ve covered step 1 in my previous post on generating coverage reports using dotCover. I recommend you read that first!

We can then extend this code to feed the code coverage output into NDepend.

Downloading your dependencies

I’ve already covered this in the previous post I mentioned, so using the same helper method, we can also download our NDepend executables from an HTTP endpoint, and ensure we have the appropriate license key.

Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    ensureToolIsDownloaded "nDepend" "https://YourLocalDotCoverDownloadUrl/NDepend_5.2.1.8320.zip"
    CopyFile (toolsDir @@ "ndepend" @@ "NDependProLicense.xml") (toolsDir @@ "NDependProLicense.xml")
)

Generating the NDepend coverage file

Before running NDepend itself, we also need to generate a coverage file in the correct format. I’ve simply added another step to the “TestCoverage” target I defined previously:

DotCoverReport (fun p -> { p with
                              Source = artifactsDir @@ "DotCover.snapshot"
                              Output = artifactsDir @@ "DotCover.xml"
                              ReportType = DotCoverReportType.NDependXml })

Generating the NDepend report

Now we can go and run NDepend itself. You’ll need to have already generated a sensible NDepend project file, which means the command line arguments are pretty straightforward:

Target "NDepend" (fun _ ->
    NDepend (fun p -> { p with
                            ProjectFile = currentDirectory @@ "Rapptr.ndproj"
                            CoverageFiles = [ artifactsDir @@ "DotCover.xml" ] })
)

I’m using an extension I haven’t yet submitted to F# Make, but you can find the code here; just reference it at the top of your F# Make script using

#load @"ndepend.fsx"
open Fake.NDepend

After adding a dependency between the NDepend and EnsureDependencies targets, we’re all good to go!
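
In F# Make, target dependencies are declared with the ==> operator. A minimal sketch, assuming the target names used in this post (including TestCoverage in the chain is my assumption, since the NDepend target consumes the coverage file that target produces):

// Download the tools and generate the coverage file before running the NDepend analysis
"EnsureDependencies"
  ==> "TestCoverage"
  ==> "NDepend"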

Recording NDepend trends using TeamCity

To take this one step further, and store historical trends with NDepend, we need to persist a metrics folder across analysis runs. This could be a shared network drive, but in our case we actually just “cheat” and use TeamCity’s artifacts mechanism.

Each time our build runs, we store the NDepend output as an artifact – and restore the artifacts from the previous successful build the next time we run. This used to be a bit of a pain, but as of TeamCity 8.1 you can reference your own build configuration’s artifacts, which allows for incremental-style builds.

In our NDepend configuration in TeamCity, ensure the artifacts path (under general settings for that build configuration) includes the NDepend output. For instance

artifacts/NDepend/** => NDepend.zip

Then go to Dependencies and add a new artifact dependency. Select the same configuration in the drop-down (so it’s self-referencing), and select “Last Finished Build”. Then add a rule to extract the artifacts and place them in the same location that NDepend will run from during the build, for instance

NDepend.zip!** => artifacts/NDepend

TeamCity Report tabs

Finally, you can configure TeamCity to display an NDepend report tab for the build. Just go to “Report tabs” in the project (not build configuration) settings, and add NDepend using the start page “ndepend.zip!NDependReport.html” (for instance).

Hope that helps someone!

Code coverage using dotCover and F# make

I’ve previously depended a little too much on TeamCity to construct our build process, but have been increasingly shifting everything to our build scripts (and therefore source control).

We’ve been using F# Make – an awesome cross-platform build automation tool, in the spirit of make and rake.

As an aside (before you ask): The dotCover support in TeamCity is already excellent – as you’d expect – but if you want to use these coverage files elsewhere (NDepend, say), then you can’t use the out-of-the-box options very easily.

Downloading your dependencies

We’re using NUnit and MSpec to run our tests, and so in order to run said tests, we need to ensure we have the test runners available. Rather than committing them to source control, we can use F# make’s support for restoring NuGet packages.

RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"

DotCover is a little trickier, as there’s no NuGet package available (the command-line exe is bundled with TeamCity). So we use the following helper and create an F# Make target called “EnsureDependencies” to download our dotCover and NDepend executables from an HTTP endpoint:

let ensureToolIsDownloaded toolName downloadUrl =
    if not (TestDir (toolsDir @@ toolName)) then
        let downloadFileName = Path.GetFileName(downloadUrl)
        trace ("Downloading " + downloadFileName + " from " + downloadUrl)
        let webClient = new System.Net.WebClient()
        webClient.DownloadFile(downloadUrl, toolsDir @@ downloadFileName)
        Unzip (toolsDir @@ toolName) (toolsDir @@ downloadFileName)

Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"
)

Generating the coverage reports

Next up is creating a target to actually run our tests and generate the coverage reports. We’re using the DotCover extensions in F# Make that I contributed a little while back. As mentioned, we’re using NUnit and MSpec which adds a little more complexity – as we must generate each coverage file separately, and then combine them.

Target "TestCoverage" (fun _ ->

  let filters = "-:*.Tests;" // exclude test assemblies from coverage stats
  // run the NUnit tests via dotCover
  !! testAssemblies
      |> DotCoverNUnit (fun p -> { p with
                                      Output = artifactsDir @@ "NUnitDotCover.snapshot"
                                      Filters = filters }) nunitOptions
  // run the MSpec tests via dotCover
  !! testAssemblies
      |> DotCoverMSpec (fun p -> { p with
                                      Output = artifactsDir @@ "MSpecDotCover.snapshot"
                                      Filters = filters }) mSpecOptions
  // merge the code coverage files
  DotCoverMerge (fun p -> { p with
                                Source = [artifactsDir @@ "NUnitDotCover.snapshot"; artifactsDir @@ "MSpecDotCover.snapshot"]
                                Output = artifactsDir @@ "DotCover.snapshot" })
  // generate an HTML report
  // you could also generate other report types here (such as NDepend)
  DotCoverReport (fun p -> { p with
                                Source = artifactsDir @@ "DotCover.snapshot"
                                Output = artifactsDir @@ "DotCover.htm"
                                ReportType = DotCoverReportType.Html })
)

All that’s left is to define the dependency hierarchy in F# make:

"EnsureDependencies"
==> "TestCoverage"

And off you go – calling your build script with the “TestCoverage” target should run all your tests and generate the coverage reports.

SSL Termination and Secure Cookies/requireSSL with ASP.NET Forms Authentication

If you’re running a HTTPS-only web application, then you probably have requireSSL set to true in your web.config like so:

<httpCookies requireSSL="true" httpOnlyCookies="true" />

With requireSSL set, any cookies ASP.NET sends with the HTTP response – in particular, the forms authentication cookies – will have the “secure” flag set. This ensures that they will only be sent to your website when being accessed over HTTPS.

What happens if you put your web application behind a load balancer with SSL termination? In this case, ASP.NET will see the request coming in as non-HTTPS (Request.IsSecureConnection always returns false) and refuse to set your cookies:

“The application is configured to issue secure cookies. These cookies require the browser to issue the request over SSL (https protocol). However, the current request is not over SSL.”

Fortunately, we have a few tricks up our sleeve:

  1. If the HTTPS server variable is set to ‘on’, ASP.NET will think we are over HTTPS
  2. The HTTP_X_FORWARDED_PROTO header will contain the original protocol running at the load balancer (so we can check that the end connection is in fact HTTPS)

With this knowledge, and the rewrite module available in IIS 7 upwards, we can set up the following:

    <rewrite>
        <rules>
            <rule name="HTTPS_AlwaysOn" patternSyntax="Wildcard">
                <match url="*" />
                <serverVariables>
                    <set name="HTTPS" value="on" />
                </serverVariables>
                <action type="None" />
                <conditions>
                    <add input="{HTTP_X_FORWARDED_PROTO}" pattern="https" />
                </conditions>
            </rule>
        </rules>
    </rewrite>

You’ll also need to add HTTPS to the list of allowedServerVariables in the applicationHost.config (or through the URL Rewrite config)

        <rewrite>
            <allowedServerVariables>
                <add name="HTTPS" />
            </allowedServerVariables>
        </rewrite>

With thanks to Levi Broderick on the ASP.NET team who sent me in the right direction to this solution!

AppData location when running under System user account

As it took far too much Googling to find this, if you need to access the AppData folder for the System account, go here:

C:\Windows\System32\config\systemprofile\AppData\Local
C:\Windows\SysWOW64\config\systemprofile\AppData\Local

I hit this because we needed to clear the NuGet package cache for a TeamCity build agent which was running as a service under the System account.

Get ASP.NET auth cookie using PowerShell (when using AntiForgeryToken)

At FundApps we run a regular SkipFish scan against our application as one of our tools for monitoring for security vulnerabilities. In order for it to test beyond our login page, we need to provide a valid .ASPXAUTH cookie (you’ve renamed it, right?) to the tool.

Because we want to prevent cross-site request forgery on our login pages, we’re using the AntiForgeryToken support in MVC. This means we can’t just post our credentials to the login URL and fetch the cookie that is returned. So we use a script to fetch a valid authentication cookie before calling SkipFish with its command-line arguments.
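
In outline, the script requests the login page, pulls the __RequestVerificationToken value out of the form, posts it back along with the credentials, and captures the .ASPXAUTH cookie that comes back. A rough sketch of that flow, shown here in F# using HttpClient (the /Account/Login path and the UserName/Password field names are assumptions about the login form, and the regex is deliberately crude):

open System
open System.Net
open System.Net.Http
open System.Text.RegularExpressions

// Illustrative sketch only: fetch the login page, extract the anti-forgery token,
// post the credentials, then read the auth cookie out of the cookie container.
let getAuthCookie (baseUrl: string) username password =
    let cookies = CookieContainer()
    use handler = new HttpClientHandler(CookieContainer = cookies)
    use client = new HttpClient(handler)

    // 1. GET the login page; this sets the anti-forgery cookie and embeds the form token
    let loginUrl = baseUrl + "/Account/Login" // assumed login path
    let loginPage = client.GetStringAsync(loginUrl).Result
    let token = Regex.Match(loginPage, "__RequestVerificationToken.+?value=\"([^\"]+)\"").Groups.[1].Value

    // 2. POST the credentials together with the anti-forgery token
    let form =
        dict [ "__RequestVerificationToken", token
               "UserName", username   // assumed form field name
               "Password", password ] // assumed form field name
    use content = new FormUrlEncodedContent(form)
    client.PostAsync(loginUrl, content).Result |> ignore

    // 3. The forms authentication cookie now lives in the cookie container
    cookies.GetCookies(Uri(baseUrl))
    |> Seq.cast<Cookie>
    |> Seq.tryFind (fun c -> c.Name = ".ASPXAUTH")

The cookie’s name and value can then be passed to SkipFish on the command line.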

Using Gulp – packaging files by folder

GulpJS is a great Node-based build system following in the footsteps of Grunt, but with (in my opinion) a much simpler and more intuitive syntax. Gulp takes advantage of Node’s streams, which are incredibly powerful, but it does mean that to get the most out of Gulp you need some understanding of what is going on under the covers.

As I was getting started with Gulp, I had a set of folders, and wanted to minify some JS files grouped by folder. For instance:

/scripts
/scripts/jquery/*.js
/scripts/angularjs/*.js

and want to end up with

/scripts
/scripts/jquery.min.js
/scripts/angularjs.min.js

and so on. This wasn’t immediately obvious at the time (I’ve now contributed this example back to the recipes), as it requires some knowledge of working with underlying streams.

To start with, I had something like this:

var gulp = require('gulp');
var path = require('path'); // needed for path.join below
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

gulp.task('scripts', function() {
    return gulp.src(path.join(scriptsPath, 'jquery', '*.js'))
      .pipe(concat('jquery.all.js'))
      .pipe(gulp.dest(scriptsPath))
      .pipe(uglify())
      .pipe(rename('jquery.min.js'))
      .pipe(gulp.dest(scriptsPath));
});

Which gets all the JS files in the /scripts/jquery/ folder, concatenates them, saves them to a /scripts/jquery.all.js file, then minifies them, and saves it to a /scripts/jquery.min.js file.

Simple, but how can we do this for multiple folders without manually modifying our gulpfile.js each time? Firstly, we need a function to get the folders in a directory. Not pretty, but easy enough:

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

This is JavaScript after all, so we can use the map function to iterate over these.


   var tasks = folders.map(function(folder) {

The final part of the equation is creating the same streams as before. Gulp expects us to return the stream/promise from the task, so if we’re going to do this for each folder, we need a way to combine them. The concat function in the event-stream package will combine streams for us, and ends only once all its combined streams have completed:

var es = require('event-stream');
...
return es.concat(stream1, stream2, stream3);

The catch is that it expects streams to be listed explicitly in its arguments list. If we’re using map then we’ll end up with an array, so we can use the JavaScript apply function:

return es.concat.apply(null, myStreamsInAnArray);

Putting this all together, we get the following:

var fs = require('fs');
var path = require('path');
var es = require('event-stream');
var gulp = require('gulp');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

gulp.task('scripts', function() {
   var folders = getFolders(scriptsPath);

   var tasks = folders.map(function(folder) {
      return gulp.src(path.join(scriptsPath, folder, '/*.js'))
        .pipe(concat(folder + '.js'))
        .pipe(gulp.dest(scriptsPath))
        .pipe(uglify())
        .pipe(rename(folder + '.min.js'))
        .pipe(gulp.dest(scriptsPath));
   });

   return es.concat.apply(null, tasks);
});

Hope this helps someone!

Forms Authentication loginUrl ignored

I hit this issue a while back, and someone else just tripped up on it, so I thought it was worth posting here. If you’ve got loginUrl set in your Forms Authentication configuration in web.config, but your ASP.NET Forms or MVC app has suddenly started redirecting to ~/Account/Login for no apparent reason, then the new simpleMembership(ish) provider is getting in the way. At the moment this seems to happen after updating the MVC version, or after installing .NET 4.5.1.

Try adding the following to your appSettings in the web.config file:

<add key="enableSimpleMembership" value="false"/>

which resolved the issue for me. Still trying to figure out with Microsoft why this is an issue.