Binary Thoughts


Acceptance Testing With SpecFlow and Selenium

| Comments

I’m an avid believer in testing. TDD helps drive design, and having a test suite available to verify behavior while maintaining an application is worth a million bucks. However, even the most complete unit test suite still doesn’t guarantee that the integration between different components is perfectly done, nor does it test the value a system delivers from a user perspective. Does a feature really do what a user expects it to do? Does your application fulfill its functional and non-functional requirements? Does it even run?

Acceptance testing tests features as if a user were interacting with them. This means testing the system through the user interface, a system driver layer, or a combination of the two. This could be useful in a couple of ways:

Acceptance testing can be implemented as a form of BDD – in a user story we should be able to express the requirements in a format that we can use to write an executable specification.
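As a sketch, such an executable specification might be written in Gherkin, the given/when/then format that SpecFlow consumes (the feature, amounts, and wording here are invented for illustration):

```gherkin
Feature: Account withdrawal
  As an account holder
  I want to withdraw cash
  So that I can get money when the bank is closed

  Scenario: Withdraw with sufficient funds
    Given my account has a balance of R100
    When I withdraw R20
    Then my account balance should be R80
```

SpecFlow binds each Given/When/Then line to a step definition method, so the same plain-language scenario doubles as a regression test.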

I quite liked the following simple distinction between what BDD and TDD aim for:

TDD implements and verifies details - did I build the thing right?
BDD specifies features and scenarios - did I build the right thing?

To enable BDD, RSpec does a beautiful job of setting up context, and Cucumber guides you down the given/when/then path. Both (as well as other testing frameworks) can be used to implement acceptance tests, but Cucumber only really makes sense if you have a business-facing person looking at your specs. If not, writing your tests in English is a complete waste of time.

Unfortunately, the .NET world does not quite have the same amount of tooling support for this kind of thing. This StackOverflow question gives a nice summary of the current state of BDD libraries for .NET. For the xSpec (context / specification) flavour, we have NSpec, Machine.Specifications, and NSpecify. Quite frankly, because of the constraints of a statically typed language, the syntax of all of these libraries sucks a little. We could try to use RSpec with IronRuby, but it would add an extra, unneeded paradigm to a project.

Saving the Environment With Vagrant


I’ve been playing a bit with Vagrant, an easy way to set up virtual machines for development purposes. Vagrant provides a simple command line interface paired with a setup script (VagrantFile) in Ruby to provision virtual machines, share folders between the host and the virtual machine and to provide port forwarding. I’ve been using it to set up some Linux development environments on Windows, as well as just for keeping my development machine clean, and running different versions of otherwise incompatible packages/gems.

To illustrate how easy Vagrant makes this process, let us set up a Solr development environment. After downloading and installing Vagrant, we can initialise a project with the init command:
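A minimal sketch of the commands, assuming the gem-based distribution Vagrant used at the time:

```shell
gem install vagrant   # Vagrant was distributed as a Ruby gem back then
vagrant init          # creates a VagrantFile in the current directory
```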

This command creates a VagrantFile with some Ruby code in it that acts as configuration for your virtual machine. Vagrant builds a virtual machine from a base box. A list of publicly available boxes can be found here. For this machine, we can base it on the Ubuntu Lucid 32-bit image:
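A minimal VagrantFile for this might look as follows (a sketch assuming Vagrant 1.0-era syntax and the publicly hosted lucid32 box; the forwarded port is Solr’s default):

```ruby
Vagrant::Config.run do |config|
  # Base the VM on the Ubuntu Lucid 32-bit box; the URL lets Vagrant
  # download the box automatically if it isn't installed yet.
  config.vm.box     = "lucid32"
  config.vm.box_url = "http://files.vagrantup.com/lucid32.box"

  # Forward Solr's default port so it is reachable from the host.
  config.vm.forward_port 8983, 8983
end
```

With that in place, `vagrant up` boots the machine and `vagrant ssh` drops you into it.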

Now Blogging on Octopress


A week ago, I wrote:

Sigh, I can’t seem to make up my mind about the platform to host my blog on. You can find my new blog over at

Will post the details soon over there. This blog will self-destruct in a week or so.

My first blog was at Blogger, back when Google had just bought it. What pained me then was the lack of control – I couldn’t customize it to do exactly what I wanted to. After that, I tried Wordpress, where I was happy for quite some time, but the workflow of editing posts in HTML or via the rich text editor didn’t work out – I realised that I was spending more time formatting posts (images, code, etc.) than writing them.

This drove me to Posterous, where I could write in Markdown, have gist support, and post by email. This helped me to blog more regularly since it lowered the friction in posting considerably. There were a few bugs here and there, but nothing too dramatic. Over time, I was more and more convinced that I should move again due to:

  • Death by a thousand paper cuts – little things that just worked in other blogging platforms were not functional (or plain non-existent) in Posterous.
  • The acquisition of Posterous by Twitter puts the platform at risk of stagnating.
  • Lack of control (again).

I’ve decided to finally bite the bullet and host my own instance of a blogging platform that I can control. This blog is now running on Octopress, the “blogging framework for hackers”, and I’ve never been happier.

Moving to Git


Git, the distributed source control system, is fast becoming the de facto standard, at least in the open source arena. Although it comes with a bit of a learning curve, the advantages of using Git over more traditional SCMs like Subversion and TFS far outweigh the investment of time necessary to learn it.


A quick introduction to Git, with a basic command reference. If you are familiar with Git, you probably won’t learn anything new. If not, please continue reading.

The Pros and Cons

Why switch to Git?

  • It’s fast. Repository history is stored locally, so we can diff and traverse history at a whim without having to involve the server.
  • Cheap local branches. Ever wanted to branch per feature? Git is your friend.
  • Merges are almost magical. Git does a much better job of merging than any other SCM I’ve seen.
  • Since commits are done locally, you can commit as often as you like. Because history for your local repository can be rewritten, commits become more of a checkpoint than something permanent.
  • Awesome command line flow. Although Git has plenty of commands available, you only need a couple to get the job done on a daily basis. Git smoothly integrates with several command line and GUI tools, and you can easily jump to the GUI for more complex operations like merging.
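That daily command-line flow really is just a couple of commands; here is a sketch of a typical local cycle (the repository, file, and branch names are made up):

```shell
# Everything below is local -- no server involved at any point.
cd "$(mktemp -d)"
git init demo
cd demo
git config user.email "dev@example.com"   # identity is needed before committing
git config user.name "Dev"

echo "notes" > readme.txt
git add readme.txt
git commit -m "Initial commit"            # commits are cheap local checkpoints

git checkout -b feature/search            # cheap local branch per feature
echo "search notes" >> readme.txt
git commit -am "Describe search"

git checkout -                            # back to the branch we started on
git merge feature/search                  # fast, local merge
git log --oneline                         # full history without a network round-trip
```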

The only downside to Git is with binary assets – if you store and change binary assets in your repository, it will quickly grow to an unwieldy size.

How Small Is Small Enough


As developers, we are very good at breaking up components into sub-components – sometimes exceedingly so. When confronted with a larger than average chunk of work, we make a choice: either consider the work as indivisible, to be delivered in its entirety, or break it up into smaller pieces. Most of us already experience a sense of foreboding when confronted with a piece of work that is too large. Whether you are estimating in hours or in story points, the question is: when breaking up work items into smaller pieces – how small is small enough?

Delivering software, one piece at a time

By now, we know that the best way to write software is to evolve it bit by bit. Doing work in small chunks allows us to

  • free our minds of details that are not relevant to the here and now
  • make small, certain steps towards a goal
  • measure progress as soon as it happens
  • get feedback as soon as possible

Too much of a good thing can also be bad for you – too many small tasks could cause the overhead of managing these tasks to become significant in itself, and measuring progress could become a nightmare. Also, when estimating on very fine-grained (low-level) tasks, local safety typically becomes a problem.

The Humble Story Point


I’ve had some interesting discussions lately on the management of work through user stories. A lot of teams, especially those just starting to use agile techniques, seem to have quite a bit of uncertainty around some common topics:

  • The theory behind story points and why they are preferred over estimations in hours
  • Why story points and velocity are self-correcting measures
  • The sizing process and appropriate sizes for stories

The problems with sizing in units of time

I’ve had the opportunity to work on several waterfall projects. I feel blessed to have been on these teams because I now recognize that:

  • Most estimations in hours are completely thumb-sucked. Given that software development is an incredibly complex beast, how can we possibly forecast time to be spent on a specific task in an accurate fashion?
  • Work breakdown structures are frequently used to aid in estimations, even though they exacerbate inaccuracies of estimation at lower levels.
  • As teams typically contain members at different skill levels, how long a task takes depends on which members of the team actually do the work. An estimate based on time does not make sense in this context.
  • Traditional project management focuses on delivering projects on time, sometimes sending teams on a death march and severely compromising on quality.
  • Waterfall based software teams are held to completely inaccurate estimations. The Gantt chart proves to be completely inflexible as the focus of the traditional project manager is on staying true to the original plan.

It should be obvious that I’m not a big believer in time-based estimations. Apart from the above, one should also watch out for the problem of local safety:

If asked how long a small task will take, a developer will naturally not want to deliver it late. After all, this is a small task, it should be on time or the developer wouldn’t be professional. Hence, the estimate will include a buffer to absorb the developer’s uncertainty about the task.

The estimate given by the developer will depend on how confident that developer is that he has understood the task and knows how to perform it. If there is uncertainty in the understanding of the task, then the estimated time to completion is likely to be significantly longer. No one wants to be late with such a small task.

Complexity in Software


In a discussion with a former colleague of mine on the organization of components and on system boundaries, we focused on the complexity inherent to software building. It hit me that we can learn a little from physics here.

The law

The first law of thermodynamics states that

energy can be transformed, i.e. changed from one form to another, but cannot be created or destroyed.

This law, in my mind, can be applied to software development quite generally:

Complexity in software can be transformed, i.e. changed from one form to another, but cannot be created or destroyed.

I was about to claim this law as my own, but while writing this I found that it had already been named back in 2003. Matt’s first law of software complexity states:

The underlying complexity of a given problem is constant. It can be hidden, but it does not go away.


Complexity is conserved by abstractions. In fact, apparent complexity can be increased by abstractions, but the underlying complexity can never be reduced.

Greener Pastures


This post is more than two months late, but I’ve been at DStv Online since November 2011.

Intervate has been incredibly good to me, and is still on my recommended list of employers. I have had incredible growth there, and I thank them for that – but with consideration of some quality of life issues I’ve been having, and a need to stretch my skillset a little, I’ve decided to move on.

So far, I’ve found my new environment stimulating and challenging. I’ve spent most of my career in finance (aka banks) – the media industry is completely new to me so I’m learning a lot there. The people are eager to experiment and learn on both technical and process levels, and I can say that I have never felt at home quite so quickly at any of my previous jobs. This video is a good indication of the type of environment they have.

Some perks:

  • Exciting, pioneering work. Not (as a friend put it) writing Excel over and over again.
  • Public facing sites. You know that “What do you do for a living?” question? I have an answer for people who are not complete geeks. I can show them.
  • I work close to my house. No highway travelling. Period. No sitting in traffic. Actually, I might even downgrade to one of these soon.

Nothing quite like a bit of change.

Blog Moved


In preparation for my yearly “I really need to update that dang blog, yo!” New Year’s resolution, I’ve followed the advice of Simon Cropp and moved my old blog to Posterous.

If you are subscribed to my FeedBurner feed, there should be no changes necessary from your side to receive updates from me.  Please update any other bookmarks to my site.

Wordpress has served me well, but has become a bit of an antiquity when compared to some of the modern blogging platforms.  Some of the features here are very encouraging.

Here’s to a new year of fresh content!


Feature Request for SQL Server Management Studio


An “Are you sure you want to do this?” confirmation when you attempt to run a query that contains a destructive statement (delete/update) without a where clause. With the hordes of database tables being accidentally wiped in the world, why hasn’t this been done yet? This feature would surely remove a lot of the fear of pressing F5*.

* Those who are not afraid of running a delete query on a production database clearly haven’t lived long enough yet. On a similar note, those who trust their code still need to experience taking down a couple of servers with it.
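Until SSMS grows such a confirmation, one defensive habit is to wrap a destructive statement in an explicit transaction and inspect the damage before making it permanent (a sketch; the table name is invented):

```sql
BEGIN TRANSACTION;

-- Oops: no WHERE clause -- this would wipe the whole table.
DELETE FROM dbo.Customers;

-- @@ROWCOUNT reports how many rows the previous statement touched.
SELECT @@ROWCOUNT AS RowsDeleted;

-- Not what we wanted: undo it. (Use COMMIT only once the count looks right.)
ROLLBACK TRANSACTION;
```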