
Winter is coming…

Hey blogsters!

Does anyone else feel like this year just flew by? It’s December again, holy bleep!

Personally, it’s been a crazy busy year, especially since my last blog post. I’ve moved continents again; currently I’m living and working in Paris, France, on some of the sweetest tech I’ve ever tasted.

There’s a time to talk the talk, and there’s a time to walk the walk. This year’s been a slow blogging year, but only because I’ve been so busy walking the walk. One day, who knows when, I’d love to be back with a bunch of blog posts. Time will tell if those will be titled along the lines of “The best 5 things to make your startup successful”, or “The 5 pitfalls to avoid on your way to success”… Hah!

Anyways, whether you read this post in 2015 or later, I hope you are all doing absolutely amazing!

merry force

Jan

Google search tip: excluding old web pages

When searching for anything IT related, Google/Bing often return Stackoverflow and CodeProject pages that are old. Really old. Like 2008 old, which often is way too old to still be accurate.

Thankfully, Google has a very simple trick: using the ‘advanced search’ you can limit the search results to pages that have been created/updated in the last ‘hour’, ‘day’, ‘week’, etc. Only a fixed number of options are available through the advanced search UI, but Google’s query string has all the power you need… Simply append &as_qdr=y2 to any search URL to limit the search to pages that have been created/updated in the last 2 years (full example below).
Possible values for the as_qdr query filter are:
– d(x), where (x) is the number of days, for example &as_qdr=d14
– w(x), where (x) is the number of weeks, for example &as_qdr=w8
– y(x), where (x) is the number of years, for example &as_qdr=y1
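For example, a plain search URL with the filter appended (the search terms here are just an illustration):

    https://www.google.com/search?q=web+api+odata+inlinecount&as_qdr=y2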

Ironically, I found this tip in a blog post from 2007.

@Bing: Come on my friend, you’ll never win this way…

Happy 4th anniversary!

Today I met with a customer who asked me how much LightSwitch experience I have. So I opened my blog’s dashboard, sorted the posts by ascending date, and there it was: my first post ever… Debugging your custom LightSwitch (Shell) Extension, dated July 27th, 2011. What a crazy coincidence: today, to the date, it’s been exactly 4 years since I transmitted my first words into the blog-o-sphere. Happy 4th anniversary, guys ‘n gals!

How to make Visual Studio always run as an administrator

There’s a gazillion reasons why you might want to run Visual Studio as an administrator.
Mine was when installing a NuGet package that tried to run a PowerShell script on install, and complained about the execution policy not being set correctly.
Normally, NuGet automatically sets the PowerShell execution policy for the Visual Studio process itself so that it can run any script. Which might be dangerous, but hey, if we were shy of danger we’d be doing something boring like being race pilots and not something adventurous like being developers. What developer would run the same track twice, manually! The dullness…

Anyways, a process cannot set its own PowerShell execution policy if it’s not running as an administrator.
Long story short, not running as administrator === not automatically being able to run PowerShell install scripts.

Luckily, I was doing a coding session with a very bright partner of mine, and she told me this trick:
– press Windows+E and go to C:\Program Files (x86)\Microsoft Visual Studio XX.X\Common7\IDE
– right-click devenv.exe and hit: troubleshoot compatibility
– let it detect issues for a bit…
– troubleshoot program
– click: “the program requires additional permissions”
– yes, save these settings

From now on, VS will always and forever run as an administrator!

Ka-tching!
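For the curious: as far as I can tell, all the troubleshooter really does is record a compatibility flag for devenv.exe in the registry. A minimal C# sketch of the same tweak (the install path is an assumption; adjust XX.X to your version):

    // Hedged sketch: writes the RUNASADMIN compatibility flag the troubleshooter appears to store.
    using Microsoft.Win32;

    class MakeVisualStudioElevated
    {
        static void Main()
        {
            // Assumed default install path; adjust XX.X to your Visual Studio version.
            const string devenv = @"C:\Program Files (x86)\Microsoft Visual Studio XX.X\Common7\IDE\devenv.exe";

            using (var layers = Registry.CurrentUser.CreateSubKey(
                @"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"))
            {
                layers.SetValue(devenv, "RUNASADMIN"); // the flag the compatibility wizard sets
            }
        }
    }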

Forget P@ssw0rds, use phrases that motivate you!

It’s a good practice to change your passwords every so often. If you’re working at a company with IT staff, they’ll probably force you to change your password every month. This is supposed to increase security, but in reality it decreases security, as people build a system for themselves to help remember what this month’s password was: J@n*05/2015

Since people change their password but not their own formula, changing the password adds absolutely no security: even a first grader can hack the current mutation given an earlier password.

So here’s the idea: stop using a fixed formula, but use phrases that motivate you!

Every month, pick one small goal for yourself. Just a small thing. Something you want to be better at, something you need to build up some courage to do, something small you want to change. Then, motivate yourself by using that as a password. Since you’ll be typing it multiple times per day, you’ll motivate yourself to actually go do/change/forget/forgive it. It’s like your little personal life coach, whispering a single small goal you can accomplish this month in your ear, multiple times per day.

Use this system for a year, and I promise you you’ll be happy you did. Here’s a couple to spark your inspiration:
GoHome@5PM
YouAre110%Sexy
Ask$10Raise
Lose5%WeightBeforeSummer
SendSurprise2Mom&Dad
NoMailsBefore11AM
2Coffees/day=enough
Blog1+/month
Me+Mojo->Run30Minutes
6<hoursOfSleep<9
CheckIn1+/day
EveryEstimate+2
EveryEstimate*4

… Yea, had to learn the hard way on that last one…

Feel free to inspire others too, share your favorite motivational password in the comments below!

Changing the ASP.Net web.config connectionstrings at runtime

So here’s an interesting challenge we had to overcome today: every developer in the team has small variations in his/her workstation setup. Instead of micro-managing each installation (we all know how developers love to be micro-managed, and even more so how they love to be told how to set up their workstation), we thought we’d take a stab at changing the connection strings from the web.config at runtime.

Wait, aren’t there a million posts on how to do that already? Well, yes, but most of them use web.config transformations (which aren’t run when you debug locally), or actually save the web.config (physically changing the file on disk so that the next developer who gets the default web.config is screwed).

What we wanted to do, is actually load the default web.config, but, per developer, change some of the in-memory values.
Turns out, all you need is a little reflection:

#if CUSTOMCONNECTIONSTRINGS
        // Requires: using System.Configuration; using System.Reflection;
        private static void SetConnectionString(string name, string connectionString)
        {
            // The ConnectionStrings collection is read-only at runtime; flip its private
            // read-only flag through reflection...
            typeof(ConfigurationElementCollection)
                .GetField("bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(ConfigurationManager.ConnectionStrings, false);
            var connection = ConfigurationManager.ConnectionStrings[name];
            // ...do the same for the individual entry (the flag lives on the
            // ConfigurationElement base type)...
            typeof(ConnectionStringSettings).BaseType
                .GetField("_bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic)
                .SetValue(connection, false);
            // ...and patch the in-memory value; the web.config on disk is never touched.
            connection.ConnectionString = connectionString;
        }
#endif

Then, in your Global.asax Application_Start method, before doing anything else:

#if CUSTOMCONNECTIONSTRINGS 
#if BOB
  SetConnectionString("LocalSqlServer", "wow");
  SetConnectionString("DefaultConnection", "much monkey patch");
  SetConnectionString("AndAnother", "very connectionstring");  
#endif   
#endif

Finally, each developer goes to the configuration manager (the dropdown next to ‘Debug’), creates a new configuration based on ‘debug’, then in the project properties > Build > adds the conditional compilation symbols BOB, CUSTOMCONNECTIONSTRINGS.
Now each developer can run his/her own configuration to match how they’ve set up their own system, the code that does the monkey patching of the connection strings is not even included in the release output, and the actual web.config file is never modified and will always contain the default values.
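For completeness, the section being patched is the standard connectionStrings block in the web.config; something like this (the names match the placeholders used above):

    <connectionStrings>
      <add name="DefaultConnection"
           connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>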

Github: the social coding experience

A couple of months ago, I joined an open source project called aurelia as a core team member. Like many open source projects, the project uses github for its source control.

I’ve been a Microsoft stack lover all my tech life, thus until recently my only visits to github were when someone (for reasons unknown to me at that time) hosted a sample on github. My only experience with git was clicking on the ‘download as zip’ button in github to grab that sample.
Microsoft really never trained me to think otherwise.
I’m a big believer in ‘sharing is caring’ though, which is why a lot of my blog posts come with inline code samples, samples on MSDN or an extension on codeplex. Imagine where we, the LightSwitch community, would be now if LightSwitch had had its source openly available from the start.

About 5-6 weeks ago, I started working on the aurelia validation plugin. It was an eye-opener, to say the least. After only a week of building out some core components, another aurelia team member created a pull request (pull requests are like a request to merge a provided changeset) to implement translations for the validation messages. Great, I thought at the time, a team working together on a project.
Yet, it was more than that. That same week, someone outside of the team submitted a pull request to turn the repository into a JSPM package so it can be easily installed using the JSPM package manager. Soon after, someone fixed some small typos in the documentation. More ‘language packs’ in Mexican Spanish, Swedish, Turkish and other languages arrived that week. Some bugs were reported as issues, with clear code samples on how to reproduce them and sometimes even code to fix the issue, and another issue was opened simply to discuss an integration strategy with another open source validation plugin.
Someone even wrote additional unit tests.
Unit tests!!!
Someone willingly sacrificed personal time to write… unit tests…

I slowly grew to realize the amazing truth: open source projects are not just projects where the source is publicly visible. Github isn’t just a source control website. The open source community, and github in particular, is also, and perhaps most importantly, about the social coding experience. Working together with a variety of people to accomplish common goals, to share the creation of something awesome, to share and intensify the joy of our common passion.

Earlier this year, I was talking to some Microsoft folks and they were so excited about their recent announcement that the Microsoft server stack is going completely open source.
I didn’t get the big deal at that point.
I use the technology already, and if there’s something I’d like to do differently, there usually is a configuration option to bend it to my will, or I reverse engineer the sources to see if I can monkey patch it, and carry on with my task at hand.
Yet now I understand: open source is not about having their source in the open, it’s about having an open invitation to join their coding experience.

Let’s hope that everyone and every team at Microsoft truly gets that too. Let’s hope that their next products, or versions of existing products, embrace the same love for the social coding experience. Let’s hope that Microsoft can teach their somewhat traditional B2B LOB introvert application developer flock to embrace the social coding experience too.

Because, after all, what a beautiful experience it turns out to be.

Supporting OData $inlinecount & json verbose with Web API OData

OData, the open data protocol is an awesome protocol for exposing data from your server tier, because it allows the caller to use special query arguments to filter, sort, select only particular columns, request related entities in a single call, and do paging.
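To make that concrete, here are a few illustrative query strings (the endpoint is hypothetical; the query options are standard OData):

    /api/people?$filter=age gt 50          // only people over 50
    /api/people?$orderby=name&$top=10      // the first 10, sorted by name
    /api/people?$select=name,age           // only these two columns
    /api/people?$expand=addresses          // include a related entity in the same call
    /api/people?$skip=25&$top=25           // the second page of 25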

Basically, this means you end up with an “open” data service API, where you literally just expose data and leave it up to the client to dictate the specific use case. Whether you want to do that for your own client is kinda negotiable, but when you’re building an application where you really want to support and nurture users building 3rd party integration tools, OData is the perfect candidate for an “I don’t know beforehand what scenarios you want to accomplish” API.

Furthermore, creating an OData read service is really simple: you take the Microsoft.AspNet.WebApi.OData NuGet package, you expose an IQueryable in your Controller, and you slap on the [EnableQuery] attribute:

    public class QueryController : ApiController
    {
        // this.dbContext is assumed to be your data access layer (an Entity Framework context, for example)

        [EnableQuery]
        [HttpGet]
        public IQueryable People()
        {
            return this.dbContext.People;
        }
    }

So here’s the problem: suppose there are 100 people in the database, with ages evenly divided from 1 to 100. The caller requests all people with age > 50 ($filter=age gt 50). We also applied a page size (which you should really always do, to avoid self-inflicted DDoS attacks) of 25 maximum records in a single response. At this point, we do not want to just send back 25 records: we also want to inform the caller that we have applied a page size and there are really 50 people matching the search criteria, and wouldn’t it be nice if we could also inform the caller how to get the next page?

The good news is: according to the OData spec, you can! By returning an “OData verbose” response (“verbose” being the opposite of “light”, which is the new OData default response), you can send back a result containing not only the actual results but also additional metadata, like the number of people that matched your search criteria and how to get the next page of results.

The really bad news is: the Web API OData implementation does not support the $inlinecount query parameter (which instructs the server to send back the count after filtering but before paging). OUCH!

Weirdly, after following a dozen blog posts (like this really good one) I stumbled upon the fact that this is only partly true… The Web API OData implementation does in fact support the $inlinecount query parameter, however it does not in any way support actually sending back the JSON verbose format where the caller actually gets to see the result of that query parameter…
Wait, whot?
A caller can send the $inlinecount, the EnableQueryAttribute (which really does all the heavy work) will correctly handle it, but instead of properly sending the count to the client it will simply keep it in memory and send only the results back. Same story with the link to the next page of records, when you implement a PageSize.
So the good news is: to re-enable the $inlinecount, or in other words: send back a more verbose response to the user, you can make your own EnableQueryAttribute:

using Newtonsoft.Json;
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Filters;
using System.Web.Http.OData;
using System.Web.Http.OData.Extensions;
using System.Web.Http.OData.Query;

namespace Lobsta.webapi
{
    internal class ODataVerbose
    {
        public IQueryable Results { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public long? __count { get; set; }

        [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
        public string __next { get; set; }
    }
    public class QueryableAttribute : EnableQueryAttribute
    {
        public bool ForceInlineCount { get; private set; } 
        public QueryableAttribute(bool forceInlineCount = true, int pageSize = 25)
        {
            this.ForceInlineCount = forceInlineCount;
            // Enables server paging by default (unless a page size was already configured)
            if (this.PageSize == 0)
            {
                this.PageSize = pageSize;
            }
        }
        public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
        {
            // Enables inlinecount by default, by appending it to the query string if the caller didn't ask for it
            // (note: the keys returned by GetQueryNameValuePairs include the '$' prefix)
            if (this.ForceInlineCount && !actionExecutedContext.Request.GetQueryNameValuePairs().Any(c => c.Key == "$inlinecount"))
            {
                var requestUri = actionExecutedContext.Request.RequestUri.ToString();
                if (string.IsNullOrEmpty(actionExecutedContext.Request.RequestUri.Query))
                    requestUri += "?$inlinecount=allpages";
                else
                    requestUri += "&$inlinecount=allpages";
                actionExecutedContext.Request.RequestUri = new Uri(requestUri); 
            }

            //Let OData implementation handle everything
            base.OnActionExecuted(actionExecutedContext);

            //Examine if we want to return fat result instead of default
            var odataOptions = actionExecutedContext.Request.ODataProperties();  //This is the secret sauce, really.
            object responseObject;
            if (
                ResponseIsValid(actionExecutedContext.Response) 
                && actionExecutedContext.Response.TryGetContentValue(out responseObject)
                && responseObject is IQueryable)
            {
                actionExecutedContext.Response =
                    actionExecutedContext.Request.CreateResponse(
                        HttpStatusCode.OK,
                        new ODataVerbose
                        {
                            Results = (IQueryable)responseObject,
                            __count = odataOptions.TotalCount,
                            __next = (odataOptions.NextLink == null) ? null : odataOptions.NextLink.PathAndQuery
                        }
                    );
            }
        }

        private bool ResponseIsValid(HttpResponseMessage response)
        {
            return (response != null && response.StatusCode == HttpStatusCode.OK && (response.Content is ObjectContent));
        }
    }
}

Note: this is highly opinionated sample code; it always uses a page size of 25, and always returns the inlinecount… Change it to your liking, for example by checking whether the requested format is verbose JSON, to be OData spec compliant.
Finally, replace the ‘EnableQuery’ attribute with our custom one:

    public class QueryController : ApiController
    {

        [Queryable]         
        [HttpGet]
        public IQueryable People()
        {
            return this.dbContext.People;
        }
    }

Putting it to the test, I called: /api/query/people?$orderby=name&$filter=age gt 50&$inlinecount=allpages again and now correctly receive my requested metadata:

{
 "Results":[
   {"age":51,"name":"Anna"} /* More results were included of course */
 ],
 "__count":50,
 "__next":"/api/query/people?$orderby=name&$filter=age%20gt%2050$inlinecount=allpages&$skip=25"
}

Coding tip #2: fail fast, don’t fail, don’t worry

Facebook recently pulled the plug on one of their data centers…
On purpose.

The idea was to investigate how well they could recover from live failures.

We developers, and beginning developers especially, sometimes have this weird notion that code should be perfect and withstand any storm. Truth is, something can and will always go wrong at some point in time, and we should stop fearing it.

The first, most noticeable form of something going wrong is an exception being thrown. Beginning developers will often shy away from exceptions. They’re cryptic, and they’re more likely to happen in the middle of a demo than while developing…

Fail fast

Hence, out of fear of introducing new exceptions by actually throwing one, beginning developers start writing code like this:

public object getValue(string key){
  if(key == "CurrentUser")
    return SomeContext.User.Name;
  if(key == "CurrentTeam")
    return SomeContext.Team.Name;
  return "Not found";
}

Peaceful, right? No matter what value you ask for, no exception shall ever leave this method.

The unfortunate thing here is that if the calling logic is flawed somewhere, you might only find out much, much later in the process.
Say the above piece of code is called by some EmailTaskPreparer, which retrieves “current_user” (note the subtly wrong key, silently answered with “Not found”) to create an instance of a task. That task is put on a queue; one hour later a worker process picks it up and processes it by getting the current user’s email address, then sending an email.
One day later, you get a bug report that there are undeliverable emails hanging around the system, and you get to embark on the pleasant adventure of backtracking every possible piece of code that sends emails, puts email-tasks on queues, and builds those tasks.

The key lesson is: fail fast. If something is wrong, throw an exception on the spot instead of returning a peaceful default value.
The calling logic will still be just as flawed, but at least now you end up with a bug report stating that an InvalidKeyArgument exception was thrown when the EmailTaskPreparer called ‘getValue’. That will be easy to find and fix, and will give you more time to actually get some real work done.
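A fail-fast version of the earlier snippet could look like this (SomeContext and the exact exception type are just illustrative):

public object getValue(string key){
  if(key == "CurrentUser")
    return SomeContext.User.Name;
  if(key == "CurrentTeam")
    return SomeContext.Team.Name;
  // Fail fast: an unknown key is a bug in the caller, so surface it on the spot.
  throw new ArgumentOutOfRangeException("key", key, "Unknown key");
}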

Don’t fail

Obviously, learning to code is all about learning to take everything in moderation. The next rule of thumb is to understand that exceptions are for ‘exceptional situations’ only.
When you have an abundance of exceptions being thrown all over your code, you’ll soon end up with a lot of try-catch blocks, and eventually a code base that has two new problems:
– the code becomes less readable (at the bottom of your try block are a bunch of alternative code paths that make your logic harder to follow)
– the code becomes slower (the compiler can perform fewer optimizations, because it needs to make sure it can handle your expected exceptional code paths)

To address the first, consider adding logic to your classes that can pre-approve an operation. This is why an iterator has a ‘hasNext()’ function and a command has a ‘canExecute()’ function: you can ask if you should expect something to go wrong, and decide how to handle that on the spot, instead of hundreds of lines lower in a catch block, as in the sketch below. It’ll make your code much more readable. Don’t fail if you could have avoided it.
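A minimal sketch of that pre-approval pattern (the types here are hypothetical, mirroring the canExecute idea):

public interface IOrderCommand
{
    bool CanExecute(Order order);
    void Execute(Order order);
}

public class OrderProcessor
{
    public void Process(IOrderCommand command, Order order)
    {
        // Ask before acting: no exception thrown, and the alternative path
        // lives right here instead of hundreds of lines lower in a catch block.
        if (command.CanExecute(order))
            command.Execute(order);
        else
            ScheduleForRetry(order); // hypothetical recovery, decided on the spot
    }

    private void ScheduleForRetry(Order order) { /* ... */ }
}

public class Order { }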

Don’t worry

Finally, there are very few use cases where you actually want to catch an exception. If you take the two previous rules to heart, exceptions will only occur when something really unexpected happens. In literature, exceptions are considered ‘final’ (the code below where you throw an exception will not execute) because they signify the system has entered a state from which recovery is not expected to be possible and execution should not continue.
Hence, if an exception occurs that you could not possibly have avoided and there’s no way you can recover from it, why bother catching it?
Don’t worry. Really, you should only catch exceptions in a limited number of cases:
– you could not avoid it (no ‘hasNext’, ‘canExecute’, etc.) but you still know how to recover from it. For example: reschedule the task for later execution
– you want to hide exception details: a general catch-all block that catches any exception, logs it, and throws a new exception that hides any internals specific to the current layer of your application. For example, you catch the SQL exception (“Connection failed for user ‘Bob’ with password ‘Bob123’”), log it, and only throw a new generic DatabaseOperationFailedException (sketched below)
Beyond the above use cases and a couple of others, catching and handling exceptions should not be a part of the majority of your code base.
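Here’s a sketch of that second case, hiding layer internals behind a generic exception (the repository and exception type are hypothetical, named after the example above):

using System;
using System.Data.SqlClient;

public class DatabaseOperationFailedException : Exception { }

public class CustomerRepository
{
    public Customer GetById(int id)
    {
        try
        {
            return LoadFromDatabase(id); // hypothetical data access call
        }
        catch (SqlException ex)
        {
            Log(ex); // the gory details ("Connection failed for user 'Bob'…") stay private
            throw new DatabaseOperationFailedException(); // layers above see only a generic failure
        }
    }

    private Customer LoadFromDatabase(int id) { /* ... */ return null; }
    private void Log(Exception ex) { /* ... */ }
}

public class Customer { }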

Don’t worry, all systems will fail at one point or the other, just try to make sure that when yours fails, you’ll have a precise stacktrace and a clean codebase to help you trace the cause (or, that you know how to plug a data center back in).

Understanding aurelia.io development prerequisites from a .NET perspective (NPM, JSPM, GULP, KARMA, WTH)

There is this awesome javascript framework called aurelia.io that I’ve been working on and using for a little while now. At a certain point, I feel you’ll owe it to yourself (if not to yourself, then at least to me) to download one of the sample apps and play around a bit.

Some time ago, when I was trying to do just that, I felt overwhelmed at first. I had limited experience with javascript, HTML and CSS. Aurelia itself looked (is) really easy to grasp, especially since it has a very C# + XAMLesque feel to it. The learning curve still felt steep though, but only because of the surrounding open source stack: NPM, JSPM, GULP, KARMA and the likes. I was like: WTH are all of these? I wish I had taken 3 minutes to read an absolute beginner’s blog post like this one to bring my vocabulary up to speed.

Your system

There are some things you’ll need to set up on your system before making the switch to this (or any) open source project…

Git

Git is a free and distributed version control system. In .NET speak: yes, it is exactly the same as TFS, but it’s totally different. Whenever you want to pull or clone a project (‘check out’) from a server (like github), you’ll need git so your OS understands what the hell pull or clone even means.
Additionally, you’ll often need to execute a command (more about that later). The normal cmd.exe or hipster mac OS alternative will at one point or the other drop the ball in understanding those commands, whereas installing git gives you a ‘git bash’ which looks and acts exactly like your command prompt, but better understands how everything in this OSS stack fits together.

NodeJS

NodeJS is a platform based on Chrome’s javascript runtime. You could compare this to installing the .NET runtime on your system. To run your aurelia apps, users do not need to install NodeJS first, because your app will run in the browser and use that stack. However, when you are developing apps, you do need a runtime for your tooling to execute in. And that runtime, will be NodeJS.

IDE

You’ll also need an IDE. Your IDE should be an integrated bundle that can do three things: manage your project, execute commands and help you write code.

Unfortunately, the Visual Studio ultimate super duper bundle fails the third requirement, as it does not (yet) support ES6 syntax. I know! What a shocker, right?

My first alternative was going back to a simple text editor. According to stackoverflow’s 2015 survey, SublimeText is the second favorite one. Once set up with an ES6 plugin for next-generation javascript syntax highlighting and intellisense, and an integrated terminal/command prompt, I realized going back to a text editor was the exact opposite of taking a step back. It’s really, really, really fast. Like: really fast!

A little later, we received a free OSS license for JetBrains’ WebStorm IDE. One of the biggest, noteworthy advantages for me was the integrated (visual) support for git & github. After using it for about 2 months now, it has my love and my official seal of approval, and I doubt I’ll go back to VS for javascript development…

Package managers

Managers… Plural.

NPM

NPM is a javascript package manager. In aurelia, we use NPM because of its ability to install packages not only in your project, but also in your tooling. Think of it as Visual Studio’s “plugins”, only you install the plugins in your build process/tooling instead of your IDE.

JSPM

JSPM is also a javascript package manager. In aurelia, we use JSPM because of its ability to manage dependencies not only in your project but also in your app while it’s running. Suppose your project uses package A and B. Both A and B have a dependency on package C, but they depend on a different major version: A uses C1 and B uses C2. Whoa, now what? Actually, JSPM will download both version 1 and 2 of C, then when you run your project it will load both C1 and C2 in a different ‘scope’. If some component in A needs something of C, it’ll get the C1 version, whereas a component in B will get the C2 version. WHOA! Think of it as a (really) advanced NuGet.

NPM Libraries

Remember: NPM is used to install tooling, so these libraries are your development tools.

Gulp

Using NPM, you can install Gulp. Gulp runs automated development tasks. gulp watch, for example, is the equivalent of Visual Studio’s F5. It will run several automation tasks like cleaning the code, building it, starting a NodeJS server (like VS starts an IISExpress) and serving up your website (the actual watch part). While you are developing, you can keep the gulp watch command running and any change you make in your code will trigger the complete clean, build & run task chain (in about 0.3 secs) so that your website is automatically refreshed and you can see your changes.

Wait: build? Is javascript built? Javascript itself is not built; it’s a scripting language that’s executed by the browser’s javascript runtime. However, you’ll often use a superset of javascript (like TypeScript or Dart; the aurelia team just codes in ES6/ES7, the next and next-next javascript specs). Because your browser does not understand those, your TS/Dart/ES6 is compiled down (called ‘transpiling’) to plain vanilla javascript that works in any browser.

We also use gulp for a number of other tasks, including building documentation, releasing, etc.

Karma

Karma is your test runner. It performs a number of tasks to build the code (like gulp), scan a directory for all test cases and run them lightning fast. Just like gulp watch, you can trigger your test runner with karma start and keep the command running. Any code change will trigger a rerun of affected tests (again: wow) so you can instantly check what you’re breaking/fixing. (More breaking than fixing in my case, but whatev)

Time to play

Armed with this knowledge, I hope you find it easier to go play with an aurelia sample. Set up git and NodeJS from the System chapter, then launch a git bash. Navigate to a working folder:

cd .. #go up one directory, or
cd someFolderName # go into a directory

Now, execute

git clone https://github.com/aurelia/app-contacts.git

This will clone (download) the contents of a sample app into a folder called app-contacts. When done, navigate into that folder

cd app-contacts

Armed with the knowledge in this article, you should now be able to follow this little guide, and have the app running locally without screaming WTH once. 😉