Change a blank project into a web project

Project Selection in Microsoft Visual Studio (image © Microsoft)

Whilst I was writing a WebApi OWIN tutorial, I started in an empty Solution with a blank Project and then wanted to change the blank project into a web project. There’s no way to do this via the GUI and I really wanted all the nice F5 tools. Rather than having to deploy to a website or IIS host each time, I’d rather start up a server at the press of a button. That way I can gain confidence in the startup of the app if I haven’t written tests yet.

There are a few ways to do this:

  1. Modify the project metadata. You edit the blank project’s project file and add the Guids that mark it as a web project. Visual Studio looks for the Guids that indicate the project is a web project and hooks up the tools.
  2. Create a new web project and drag all the files over to it. I much prefer this, as the project Guids and extra data (e.g., whether to use IIS or IIS Express, and the hostname) are set up for you.

There’s a Stack Overflow question around this which includes a list of the Guids you can specify for the different types of project.
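For the first option, the change boils down to adding a ProjectTypeGuids element to the .csproj. Here’s a rough sketch – the first Guid below marks an ASP.NET Web Application and the second a C# project, but do verify them against the Stack Overflow list for your project type:

<!-- In the first <PropertyGroup> of the .csproj (a sketch; check the Guids for your project type) -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

Unload the project, edit the file, then reload it, and Visual Studio should treat it as a web project.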

On becoming AWS Certified

AWS Certified Developer Associate

There are a few reasons people take exams and try to gain professional certifications:

  • To prove to themselves they can do something
  • To add something to their CV
  • To prove to others that they once knew a few things about something
  • To get a promotion
  • To get noticed

I decided to take the AWS Certified Developer exam last week because I wanted to see what the exam was like, and it’s also part of my training budget at JustEat.

I’m happy to say I passed and I gained the basic AWS Certified Developer Associate qualification. This is the foundation level, and there are a few routes I can take to become an AWS certified professional. Amazon have recently updated their AWS Certification tracks on their site, so I’ll be following one of those.

I can recommend some of the resources at CloudGuru.com and PluralSight.com, as they lay out the topics you need to revise better than the official exam guidelines do.

I’ve collected a top-level set of the exam subjects you need to revise (.zip), in markdown format (.md).

BuildStuff 2014

I’ve just come back from an awesome conference in Lithuania – BuildStuff 2014.

This was the first full-week conference away from home I’ve been to, and we had a lot of fun. We worked hard during the day and explored the city at night.

JustEat sponsored one of the social events, and it was great to see so many familiar faces from Europe and further afield.

I don’t think I’d do a full week of conference again, though, unless I was particularly stuck for ideas or in need of inspiration. Twitter and other social media channels are good enough for keeping up to date, but conferences are great for socialising and are a good platform for announcements of new technologies – and sometimes you need that.

The best part of the week for me was the debugging .NET applications talk, and then the full-day workshop, by Sasha Goldshtein. I’d recommend it to any .NET developer. We worked our way through how .NET allocates memory and where items are stored, then on to tools like procdump and a deep dive into Visual Studio’s debugging capabilities, which turn out to be great once you get to grips with them.

Here’s a link to Sasha’s Build Stuff 2014 .NET Garbage Collection tips from his talk.

Here’s a link to Sasha’s Build Stuff 2014 Debugging .NET Applications Workshop on his blog on MSDN.

Leaving easyJet and starting at JustEat

It’s my final week at easyJet. I’ve worked on one of the web teams for over a year and a half now – and I’ve learnt a lot.

I’ve learnt a lot about great teamwork. I’ve met a lot of great people. I’ve helped deliver real software and real features to one of the UK’s busiest e-commerce platforms.

If you were ever considering going to work for an airline, I can recommend it, though you’ll have a few challenges:

  • Massive code bases
  • Monolithic architectures
  • Giant databases with change control
  • Risk management (it’s a big website)
  • Agile Scrum slowness

Niggles aside, it’s a really great place to work. easyJet invest in themselves and their IT – after all, their tech is what got them to where they are today. There are so many people who want to push the platform forward, break it down and make it better for the future.

easyJet has:

  • Great people
  • Great teamwork ethics
  • So much scope for change!
  • So many different and exciting challenges
  • Investment and opportunities

I’m sad to leave easyJet and Team Koi. I’ve made friends for life there, and know the door is open if I want to return in the future.

My next role is as a Senior .NET Developer at JustEat.co.uk – a very exciting role on a very fast moving platform – and I’m really looking forward to the challenge.

I’m wishing all the guys at easyJet the very best for the future.

 

easyJet Recent Searches

Recent Searches screenshot

Last night we re-released Release 87 of the easyJet.com booking funnel and we’re pretty happy with it. My team delivered the Recent Searches panel at the front of the site – and there are lots of other changes too.

Recent Searches will remember up to four of your searches – either from the homepage, or when you are using the flight grid – and populate this panel for you.

It’s fully accessible (to WCAG standard A) and supports tab ordering for those who prefer navigation by keyboard only.

MVC API Routes by Type in C#

From MVC 3 onwards, you can constrain routes based on your own rules.

Web API’s default route is /api/{controller}/{id}; it selects the controller from the controller name and the action from the HTTP verb.
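For reference, here’s the default route registration you get in a new Web API project (in WebApiConfig.cs); the custom route we add below sits alongside it:

config.Routes.MapHttpRoute(
  name: "DefaultApi",
  routeTemplate: "api/{controller}/{id}",
  defaults: new { id = RouteParameter.Optional }
);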

I wanted a way to route anything with /api/versions/<controller> to a specific collection of controllers, based on a naming convention of my choice. I decided to use a constraint based on interfaces, but you could implement the match using Regex or any number of ways. As long as you implement an IRouteConstraint, you can decide your own route logic.

Let’s create an empty marker interface which we’ll use as a flag. We only want the route to apply to an API controller that implements this interface.

public interface IVersionApi
{ }

Here’s an example of one of our API controllers, which implements the IVersionApi marker interface.
This will be hit when I call /api/versions/FlightTracker, and it still supports the default HTTP verb mapping (so Get() will be hit for an HTTP GET request).

public class FlightTrackerController : ApiController, IVersionApi
{
  private readonly IFlightTrackerVersionRepository _flightTrackerVersionRepository;
  private readonly IConfiguration _configuration;
  public FlightTrackerController()
  {
    _configuration = new Configuration();
    _flightTrackerVersionRepository = new FlightTrackerVersionSqlRepository(_configuration);
  }
  public IEnumerable Get()
  {
    var result = _flightTrackerVersionRepository.Get();
    return result;
  }
}

Here’s the additional Web API route registration. Note the constraints entry, where everything gets tied together.

config.Routes.MapHttpRoute(
  name: "Versions",
  routeTemplate: "api/versions/{controller}/{id}/{format}",
  defaults: new { id = RouteParameter.Optional, format = RouteParameter.Optional },
  constraints: new { abc = new TypeConstraint(typeof(IVersionApi)) }
);

The magic happens in our TypeConstraint class. It gets a list of all the types which implement IVersionApi, then compares the requested controller (e.g., a request for “/api/versions/FlightTracker” will match “FlightTrackerController”) with each of those types and returns a bool. We implement the MVC IRouteConstraint interface – see Creating a Custom Route Constraint.

public class TypeConstraint : IRouteConstraint
{
  private readonly Type _type;

  public TypeConstraint(Type type)
  {
    _type = type;
  }

  public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection)
  {
    var controller = values["controller"] as string;

    if (string.IsNullOrWhiteSpace(controller))
      return false;

    // Find every loaded type that implements the marker interface.
    var types = AppDomain.CurrentDomain.GetAssemblies()
      .SelectMany(s => s.GetTypes())
      .Where(_type.IsAssignableFrom);

    // Match on the naming convention, e.g. "FlightTracker" -> "FlightTrackerController".
    var matchWith = controller + "Controller";

    return types.Any(o => o.Name == matchWith);
  }
}

Hope you find it useful!

Flow – Workflow template based software

Flow is my experiment in workflow template based software.

Rigid software is a pain to customise

I used to work in a company whose source of revenue was sales of a single software package. This software package was designed to record customer information, then schedule and manage jobs to service the customers’ equipment. The package was kept generic in order to fit as many domains as possible. This confused lots of customers, because the items they sold weren’t described in their own language. For example, a company selling or maintaining houses and equipment would have to refer to houses as “products” and equipment as “parts”.

Every day there would be requests for additional functionality, or for a change in its behaviour, because it didn’t mirror the way a customer worked closely enough. For example, customers sometimes needed to send an email or calculate a field elsewhere after something had been added to the system. If we added an additional step for one customer, it would affect another customer who wanted the opposite. Where do you stop when you start to alter the software for one customer but need it to stay generic? That is still a problem for my old company.

Like clothes, we need something simple as a general fit to get started, but then as we mature we can get something tailored which would suit us better and be less wasteful.

How would this work? Let’s take this example of adding a new customer. We collect the data from a form and store it in the database.

Flow 1

If this was traditional software, the process couldn’t be altered after it had been shipped, because the routines are hard-coded. What happens if your biggest customer comes to you now asking you to alter this process, because it’s important for them to send an email to the Customer Team every time a new customer is added to the system? It looks like you’ve got to go back and recode the software, then ship it with a set of feature toggles so it sends emails:

Flow 2

Imagine if the software we shipped was a system of workflow templates. If a customer wanted an additional custom action, they could add extra steps wherever they needed to. Customers who just wanted off-the-shelf software wouldn’t need to do a thing, as they’d never make any changes.

Templated actions or their properties could be locked at the discretion of the template developer, leaving everything else open to customisation. Locking actions is mainly useful when updating software versions: an upgrade would update all the standard actions – the supported ones – and wouldn’t remove any custom actions. If actions were removed by the developer, this would cause a few merge issues, so that needs thinking about.

Flow

I think I’d quite like to create a workflow-template-based solution. I’m going to concentrate on something web based to start with, designed to be portable to any platform. I’m going to write it in C#, probably hosted in Azure to begin with. It’s also going to be designed so it can be taken offline, and it needs a way to be updated, tested, and rolled back.
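To make that a little more concrete, here’s a minimal C# sketch of how a workflow template might be modelled. Every name here (IWorkflowStep, SendEmailStep and so on) is hypothetical – it illustrates the idea, not a finished design:

using System.Collections.Generic;

public interface IWorkflowStep
{
  // Standard steps can be locked by the template developer.
  bool IsLocked { get; }
  void Execute(IDictionary<string, object> context);
}

// A hypothetical custom action a customer might insert after "store customer".
public class SendEmailStep : IWorkflowStep
{
  public bool IsLocked { get { return false; } }

  public void Execute(IDictionary<string, object> context)
  {
    // e.g. look up the Customer Team's address list and send a templated email
  }
}

public class Workflow
{
  private readonly List<IWorkflowStep> _steps = new List<IWorkflowStep>();

  // Custom steps can be inserted anywhere between the standard ones.
  public void InsertStep(int index, IWorkflowStep step)
  {
    _steps.Insert(index, step);
  }

  public void Run(IDictionary<string, object> context)
  {
    foreach (var step in _steps)
    {
      step.Execute(context);
    }
  }
}

An upgrade would then only need to replace the steps marked as standard, leaving any customer-inserted steps in place.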

Actions

  • Collecting data
    This would need forms, form templates, and tie in with validation
  • Validating data
    This would be a custom rules engine linked to any step
  • Storing data
    Where do you store data? How do you set this up? NoSQL would be perfect for this. If we were going to store to a relational database, then mappings would have to be maintained.
  • Sending emails (to customer, internally)
    These need lists of emails, HTML templates, etc.
  • Push notifications
    Send info to phones and push enabled devices in real time. That would be neat.
  • Showing information to the user
    If the user is doing something important, or needs to accept a message first, we’d need a way to show it to them.

Right, enough for tonight, back to work in the morning.

Understanding the 2013 Microsoft Azure Offerings

The Intro to Windows Azure page on the Azure site gives a very good and clear overview of the different compute, data, networking, analytics, messaging, caching and identity solutions on offer.

  • Compute/Execution Models
  • Data storage
  • Networking
  • Business Analytics
  • Underlying apps (Messaging, Caching, Identity)
  • APIs (Marketplace, Store, Media, etc.)

Compute

Virtual Machines – These are entire virtual machines backed with APIs and golden images. VHDs can be used and transferred from onsite to cloud as needed. Anything you can think of using a server for can be done with Virtual Machines and they’re completely customisable. You have the overhead of maintaining them though.

Web Sites – These are managed sites, much like you’d get from a hosting provider like GoDaddy or Fasthosts. They’re managed by Azure and can run on shared instances (on the same box as other companies’ websites) or standard instances (one website per box), and the standard tier can be scaled. They provide APIs and support for Node.js, PHP and Python, and traffic is distributed and managed by Azure Web Sites. Once a site is running, it can be scaled out automatically or manually. They also support MySQL, and apps such as WordPress, Joomla and Drupal. These are ideal for small websites created in Visual Studio and uploaded automatically or via the API. Websites don’t give you admin access to the box, so you can’t remote in – Cloud Services do give you that option.

Cloud Services – These are among the most flexible solutions: web apps (web facing, hosted by IIS) and/or compute roles hosted in a managed execution environment (think automatically started console app). These roles are managed by Azure and can be scaled automatically or manually. They are restarted approximately once per day and are automatically updated to the latest version of the operating system as releases become available. You can remote to these boxes as an administrator.

Mobile Services – These are several APIs and tools which accelerate mobile development, including native client libraries for Windows, iOS, Android and HTML. There are REST APIs for push notifications (SendGrid and Pusher) for each platform, and tools to provision storage and databases. These tools have been created as components, so you don’t have to worry about creating any underlying infrastructure yourself.

Data

In all cases, when data is stored it is replicated across three different computers in the Azure datacentre to provide high availability. You access the data using RESTful-backed client libraries, so it can be reached from anywhere with internet access (from your instances, or from a phone or computer). Access can be secured with passwords or certificates, or put behind your own protected endpoint.

SQL Databases – These are MS SQL databases but have a slightly reduced set of commands that can be used with them. They are scalable and are very powerful and fast. See SQL Server Feature Limitations (Windows Azure SQL Database) for more information on the reduced feature set. There are always workarounds for the slight limitations, so don’t be put off.

Tables – These are key/value NoSQL tables stored in Microsoft Azure Table Storage, which is a RESTful API backed with native client libraries. They’re very easy to use for storing and finding data, and they can be searched very quickly.
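As a rough sketch of how Tables are used from .NET with the 2013-era storage client library (the entity, table name and connection string here are made up for illustration):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity – PartitionKey and RowKey together form the primary key.
public class CustomerEntity : TableEntity
{
  public CustomerEntity() { }

  public CustomerEntity(string country, string email)
  {
    PartitionKey = country; // groups related rows together
    RowKey = email;         // unique within the partition
  }

  public string Name { get; set; }
}

class Program
{
  static void Main()
  {
    var account = CloudStorageAccount.Parse("<your storage connection string>");
    var table = account.CreateCloudTableClient().GetTableReference("customers");
    table.CreateIfNotExists();

    // Store an entity.
    table.Execute(TableOperation.Insert(
      new CustomerEntity("UK", "someone@example.com") { Name = "Alex" }));

    // Point lookups on (PartitionKey, RowKey) are the fast path.
    var result = table.Execute(
      TableOperation.Retrieve<CustomerEntity>("UK", "someone@example.com"));
    var customer = (CustomerEntity)result.Result;
  }
}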

Blobs – This is key/value storage for unstructured binary data (files). These are slower to search but can store anything.

Networking

Virtual Networking – You can extend your current network on demand to your own part of Azure, and you can bring up or tear down instances and storage as needed. Useful for small networks or providing intranet apps.

Traffic manager – Like AKAMAI and other routing providers, when you have global applications you may wish to shape traffic differently for each country or user profile. If your applications are stored in different datacentres around the world, Traffic manager allows you to direct client requests to certain datacentres.

Analytics

SQL Database Reporting – This is SQL Server Reporting Services (SSRS) hosted on a Windows Azure virtual machine. Used in combination with Microsoft Office, it gives you a powerful way to create reports and charts and import them directly into Excel.

HD Insight – This is Microsoft’s implementation of Hadoop (MapReduce). HD Insight spreads the job across a configurable number of nodes and uses the Hadoop Distributed File System (HDFS) to share data. HD Insight clusters can be created and destroyed on demand, so you will only be charged when you need them.

Messaging

Queues – You can think of Azure Queues as high-availability distributed ordered lists, like a distributed MSMQ. There are RESTful-backed client APIs for them, and you can query and add items on the fly. Queues can be created and destroyed programmatically. They are extremely quick and are great for sending information from a source which does not need an acknowledgement, or where the volume will change over time. These are effectively store-and-forward queues, so you can get one or more Worker Roles to grab the data later and work with it.
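Here’s a minimal sketch of that store-and-forward pattern using the 2013-era .NET storage client; the queue name and connection string are placeholders:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Program
{
  static void Main()
  {
    var account = CloudStorageAccount.Parse("<your storage connection string>");
    var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
    queue.CreateIfNotExists();

    // Producer: drop a message on the queue and move on – no acknowledgement needed.
    queue.AddMessage(new CloudQueueMessage("new order: 12345"));

    // Consumer (e.g. a Worker Role): grab the message later and process it.
    CloudQueueMessage message = queue.GetMessage();
    if (message != null)
    {
      // ... process the message, then delete it to acknowledge ...
      queue.DeleteMessage(message);
    }
  }
}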

Service Bus – This is similar to Queues but also provides a publish-and-subscribe model (queues just push and pop). The Service Bus has the notion of Topics, which clients can subscribe to. Service Bus therefore allows one-to-many communication, and it also allows direct one-to-one communication via its Relay Service, where messages can get through firewalls because both endpoints connect out to secure cloud endpoints. easyJet use this to connect their halo systems from secure backends to terminals in the airport.
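And a sketch of the publish-and-subscribe model with Topics, using the 2013-era Service Bus client library; the topic and subscription names are placeholders, and both are assumed to exist already:

using Microsoft.ServiceBus.Messaging;

class Program
{
  static void Main()
  {
    var connectionString = "<your Service Bus connection string>";

    // Publisher: one message goes to the topic...
    var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
    topicClient.Send(new BrokeredMessage("order 12345 created"));

    // ...and every subscription on the topic gets a copy.
    var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
      connectionString, "orders", "audit");
    BrokeredMessage message = subscriptionClient.Receive();
    if (message != null)
    {
      // ... process, then acknowledge ...
      message.Complete();
    }
  }
}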

Caching

Application Caching – There are a number of solutions here: single-instance in-memory caching, multiple-instance in-memory caching, or single/multiple filesystem-backed caching. Caching is key/value and entries can expire. You can have a single instance reserve a percentage of its space for caching, have multiple instances each reserve a percentage of space, or reserve an entire instance for distributed caching. See more at Windows Server AppFabric Caching Features.
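As a small sketch of the key/value API – this assumes the Windows Azure Caching NuGet package is installed and the cache settings are already configured in the application config:

using System;
using Microsoft.ApplicationServer.Caching;

class Program
{
  static void Main()
  {
    // Reads the cache client settings from the application config.
    var cache = new DataCacheFactory().GetDefaultCache();

    // Key/value storage, with an optional expiry.
    cache.Put("customer:12345", "Alex", TimeSpan.FromMinutes(10));

    var value = (string)cache.Get("customer:12345");
  }
}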

CDN – A Content Distribution Network allows content to be cached closer to the end user. So if there’s a server that’s physically closer to the user, the content can come from that server instead of one on the other side of the world. Think Akamai.

Identity

Windows Azure Active Directory – This is AD in Azure for cloud applications and it comes with a RESTful API. This isn’t full blown AD, but if you need AD, you can use a virtual machine with AD installed on it.

Windows Azure AD Access Control – This provides a third-party identity store and can be run standalone with certificates, or extended to work with third-party identity providers such as Facebook, Google and others. Access Control acts as an intermediary for the other providers and consolidates them all into a single access model and API. You can use this approach to provide single sign-on with many providers in your app. Standalone identity providers can be excessively complex, because they must self-sign their own certificates and can be difficult to diagnose.

I hope that gives you an overview of the different offerings. See more info at the Windows Azure fundamentals page.

Executing Remote Powershell Commands

When you begin administering multiple computers, you’ll eventually want to automate the tasks you repeat more than once. Instead of logging into each computer, starting powershell and running the same command, you can use one powershell console locally and send commands to multiple remote computers from that single local console.

In addition to running locally, Powershell can run remotely using WinRM (Windows Remote Management) which uses a management protocol called WSMan. When your local computer talks to another via Powershell remoting (WinRM over WSMan), a remote session is created, the command is executed, then the remote session is normally terminated. Remote sessions can be opened at any time, and a remote session can be persisted if necessary – more about that later.

If you want to execute a command on another machine, you have a couple of options:

  • Start a new Powershell session to the remote machine, execute your code, then close the connection
  • Use Invoke-Command, which runs script in a temporary session
  • Use one of the few Powershell commands that do not run over WinRM and specify the -ComputerName argument
    According to “Get-Help about_Remote”, these commands do not need WinRM to execute:

    • Get-Counter
    • Clear-EventLog
    • Limit-EventLog
    • New-EventLog
    • Get-EventLog
    • Remove-EventLog
    • Get-HotFix
    • Restart-Computer
    • Get-Process
    • Show-EventLog
    • Get-Service
    • Stop-Computer
    • Get-WinEvent
    • Test-Connection
    • Get-WmiObject
    • Write-EventLog

Powershell Sessions

Powershell uses the concept of sessions: a session is the context within which code is executed, and it acts as the outer scope for variables.

To execute code on a remote machine, you must first start a remote session and then enter it. Sessions can be created and entered using Enter-PSSession, and exited using Exit-PSSession. See New-PSSession, Remove-PSSession, Get-PSSession, and Get-Command *pssession* for further commands.

To enter a new session, run Enter-PSSession and specify the name of the remote machine. This immediately puts the current powershell console into the execution context of the remote machine:

PS K:\powershell> Enter-PSSession computername
[computername]: PS C:\Users\AlexanderW\Documents>

Note how the [computername] shows we are running in the context of the remote machine.

To leave a session, use Exit-PSSession – “exit” would also work here.

[computername]: PS C:\Users\AlexanderW\Documents> Exit-PSSession
 PS K:\powershell>

Note how we are now back to our local session – there is no [computername] at the last prompt.

PS Sessions can be created and entered as necessary, and can also be stored in variables, letting you switch between sessions too.

PS> $computerName = "computername"
PS> $session1 = New-PSSession $computerName
PS> $session2 = New-PSSession $computerName
PS> Get-PSSession | Format-List

ComputerName           : computername
ConfigurationName      : Microsoft.PowerShell
InstanceId             : 876b8836-2325-43e5-8ba9-9c1ca25764af
Id                     : 7
Name                   : Session7
Availability           : Available
ApplicationPrivateData : {PSVersionTable}
Runspace               : System.Management.Automation.RemoteRunspace
State                  : Opened
IdleTimeout            : 7200000
OutputBufferingMode    : Block

ComputerName           : computername
ConfigurationName      : Microsoft.PowerShell
InstanceId             : 792d71f8-b4f3-4488-9b35-b2952aa21e90
Id                     : 6
Name                   : Session6
Availability           : Available
ApplicationPrivateData : {PSVersionTable}
Runspace               : System.Management.Automation.RemoteRunspace
State                  : Opened
IdleTimeout            : 7200000
OutputBufferingMode    : Block

PS C:\Users\AlexanderW> (Get-PSSession).length
2

PS> Get-PSSession | Remove-PSSession

PS> (Get-PSSession).length
0

Powershell Remoting with Invoke-Command

Check out the documentation for Invoke-Command by using the Get-Help command. Use the -Online switch to open the related MSDN web page.

Get-Help Invoke-Command -Online

To get a list of logged-on users locally, we’d use the following command to query WMI for the logged-on users. We then pipe this through a Select-Object filter and extract the account name.

Get-WmiObject Win32_LoggedOnUser | Select-Object { "$($_.Antecedent)".Split(",")[1].Substring(5) }

We can then use the -ScriptBlock argument with Invoke-Command to execute the same command on a remote machine and get its list of logged-on users.

Invoke-Command -ComputerName <computername> { Get-WmiObject Win32_LoggedOnUser | Select-Object { "$($_.Antecedent)".Split(",")[1].Substring(5) } }

Executing local functions and variables on the remote machine

Functions cannot be passed as references so you must redeclare them in the scope of the command you wish to execute. You can pass variables, and I’ll show you how to do that below.

The following will not work because the -ScriptBlock is executing in a new context on the remote machine, and has no idea what the SayHello function is.

PS K:\powershell> function SayHello() { Write-Host "hello world" }
PS K:\powershell> Invoke-Command -ComputerName <computer name> -ScriptBlock { sayHello }
The term 'sayHello' is not recognized as the name of a cmdlet, function, 
script file, or operable program. Check the spelling of the name, or if a path 
was included, verify that the path is correct and try again.
 + CategoryInfo : ObjectNotFound: (sayHello:String) [], CommandNot 
 FoundException
 + FullyQualifiedErrorId : CommandNotFoundException
 + PSComputerName : <computername>

To fix this, we re-declare the function inside the ScriptBlock and then we can call it.

PS K:\powershell> Invoke-Command -ComputerName <computername> -ScriptBlock { function SayHello() { Write-Host "hello world" }; sayHello; }
hello world

To prove this is running in the context of the remote machine, we’ll add the hostname to the output:

PS K:\powershell> Invoke-Command -ComputerName <computername> -ScriptBlock { function SayHello() { Write-Host "hello world from" (hostname) }; sayHello; }
hello world from <computername>

Variables declared locally can be passed through to the ScriptBlock via the $Using:<variable name> accessor.

PS K:\powershell> $exampleVariable = "hello world";
PS K:\powershell> Invoke-Command -ComputerName <computername> -ScriptBlock { Write-Host $Using:exampleVariable; }
hello world