Posts

  • Using Sonar Cloud on .NET Core with Travis

    Note to self since I just spent a frustratingly long time on this. In order to analyze your .NET Core project with Sonar on Linux (using Travis in my case since it’s an open source project), the following is required:

    • Have a machine with Mono and .NET Core installed
    • Download the MSBuild Sonar Analyzer and ensure that the scripts are executable
    • Execute it perfectly with just the right number of parameters
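
    The download step might look something like this on the build machine (a rough sketch only: the URL is a placeholder for whichever scanner release you pick, the layout inside the zip varies between versions, and paths should match wherever your build runs from):

    • mkdir -p tools/sonar
    • curl -sSL -o /tmp/sonar-scanner-msbuild.zip "<scanner-msbuild-zip-url>"  # placeholder URL
    • unzip -q /tmp/sonar-scanner-msbuild.zip -d tools/sonar
    • chmod +x tools/sonar/sonar-scanner-*/bin/*  # the bundled scanner scripts need the executable bit on Linux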

    For reference, some build steps out of a Travis YAML file I’ve been working on for NGenerics (https://github.com/ngenerics/ngenerics):

    • mono ../tools/sonar/SonarQube.Scanner.MSBuild.exe begin /n:NGenerics /k:ngenerics-github /d:sonar.login=${SONAR_TOKEN} /d:sonar.host.url="https://sonarcloud.io" /d:sonar.cs.vstest.reportsPaths="**/TestResults/*.trx" /v:"2.0"
    • dotnet build
    • dotnet test NGenerics.Tests --logger:trx
    • dotnet test NGenerics.Examples --logger:trx
    • mono ../tools/sonar/SonarQube.Scanner.MSBuild.exe end /d:sonar.login=${SONAR_TOKEN}

    Notes:

    • The CLI scanner does not work for C# code at the time of writing. Only the MSBuild scanner is supported.
    • The MSBuild scanner currently requires Mono as the SonarQube team is still in the process of migrating some dependencies.
    • ${SONAR_TOKEN} is an environment variable injected by Travis that contains an authentication token. The VSTest reports are generated from NUnit tests via the “--logger:trx” parameter passed to dotnet test.
    • The property sonar.login needs to be passed to the “end” step of the MSBuild scanner since that information is not persisted to disk when running “begin”.

    Hope this saves someone some time - the feedback cycle is incredibly long as the Sonar scan tends to break only on the last step.

    READ MORE >

  • Still alive

    I’m still alive. And so will this blog be once again.

    It’s an interesting feeling to notice that your last post is dated 4 years (4 YEARS!) ago.

    I’ve done the migration from Octopress to vanilla Jekyll now and in the spirit of doing the smallest thing possible, this is the MVP.

    Since my last post I have made the leap to Software Development Manager, and then to General Manager - Technology and Development at DStv Digital Media. It’s been an interesting journey and I have some scars to show and some stories to tell.

    For now, let’s consider this a kickstart of a habit before New Year’s resolutions kick in.

    Hold thumbs. Let’s see how this goes.

    READ MORE >

  • Acceptance testing with SpecFlow and Selenium

    I’m an avid believer in testing - TDD helps drive design, and having a test suite available to verify behavior while maintaining an application is worth a million bucks. However, even the most complete unit test suite still doesn’t guarantee that the integration between different components is perfectly done, nor does it test the value a system delivers from a user perspective. Does a feature really do what a user expects it to do? Does your application fulfill its functional and non-functional requirements? Does it even run?

    Acceptance testing tests features as if a user was interacting with them. This means testing the system through the user interface, a system driver layer, or a combination of the two. This could be useful in a couple of ways:

    Acceptance testing can be implemented as a form of BDD - in a user story we should be able to express the requirements in a format that we can use to write an executable specification.

    I quite liked the following simple distinction between what BDD and TDD aim for:

    TDD implements and verifies details - did I build the thing right? BDD specifies features and scenarios - did I build the right thing?

    To enable BDD, RSpec does a beautiful job in setting up context and Cucumber guides you down the given-when-then path. Both (as well as other testing frameworks) can be used to implement acceptance tests, but Cucumber only really makes sense if you have a business facing person looking at your specs. If not, writing your tests in English is a complete waste of time.

    Unfortunately the .NET world does not quite have the same amount of tooling support for this kind of thing. This StackOverflow question gives a nice summary of the current state of BDD libraries for .NET. For the xSpec (context / specification) flavour, we have NSpec, Machine.Specifications, and NSpecify. Quite frankly, because of the constraints of a statically typed language, the syntax of all of these libraries sucks a little. We could try and use RSpec with IronRuby, but it would add an extra, unneeded paradigm to a project.

    READ MORE >

  • Saving the environment with Vagrant

    I’ve been playing a bit with Vagrant, an easy way to set up virtual machines for development purposes. Vagrant provides a simple command line interface paired with a setup script (VagrantFile) in Ruby to provision virtual machines, share folders between the host and the virtual machine, and provide port forwarding. I’ve been using it to set up some Linux development environments on Windows, as well as just for keeping my development machine clean, and running different versions of otherwise incompatible packages/gems.

    To illustrate how easy Vagrant makes this process, let us set up a Solr development environment. After downloading and installing Vagrant, we can initialise a project with the init command:
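
    In its simplest form:

    • vagrant init  # creates a Vagrantfile in the current directory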

    This command creates a VagrantFile with some Ruby code in it that acts as configuration for your virtual machine. Vagrant builds a virtual machine from a base box. A list of publicly available boxes can be found here. For this machine, we can base it on the Ubuntu Lucid 32 bit image:
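
    Roughly, that means fetching the box and pointing config.vm.box at it in the VagrantFile (the URL below was a commonly referenced location for the box at the time; substitute any base box you prefer):

    • vagrant box add lucid32 http://files.vagrantup.com/lucid32.box  # download the base box once
    • vagrant up  # boot the VM once config.vm.box = "lucid32" is set in the VagrantFile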

    READ MORE >

  • Now blogging on Octopress

    A week ago, I wrote:

    Sigh, I can’t seem to make up my mind about the platform to host my blog on. You can find my new blog over at www.riaanhanekom.com.

    Will post the details soon over there. This blog will self-destruct in a week or so.

    My first blog was at Blogger, back when Google had just bought it. What pained me then was the lack of control - I couldn’t customize it to do exactly what I wanted to. After that, I tried Wordpress, where I was happy for quite some time, but the workflow of editing posts in html or via the rich text editor didn’t work out - I realised that I was spending more time formatting posts (images, code, etc.) than writing them.

    This drove me to Posterous, where I could write in Markdown, have gist support, and post by email. This helped me to blog more regularly since it lowered the friction in posting considerably. There were a few bugs here and there, but nothing too dramatic. Over time, I was more and more convinced that I should move again due to:

    • Death by a thousand paper cuts - little things that just worked in other blogging platforms were not functional (or plain non-existent) in Posterous.
    • The acquisition of Posterous by Twitter puts the platform at risk of stagnating.
    • Lack of control (again).

    I’ve decided to finally bite the bullet and host my own instance of a blogging platform that I can control. This blog is now running on Octopress, the “blogging framework for hackers”, and I’ve never been happier.

    READ MORE >

  • Moving to Git

    Git, the distributed source control system, is fast becoming the de facto standard, at least in the open source arena. Although it comes with a bit of a learning curve, the advantages of using Git over more traditional SCMs like Subversion and TFS far outweigh the investment of time necessary to learn it.

    TL;DR

    A quick introduction to Git, with a basic command reference. If you are familiar with Git, you probably won’t learn anything new. If not, please continue reading.

    The pros and cons

    Why switch to Git?

    • It’s fast. Repository history is stored locally, so we can diff and traverse history at a whim without having to involve the server.
    • Cheap local branches. Ever wanted to branch per feature? Git is your friend.
    • Merges are almost magical. Git does a much better job of merging than any other SCM I’ve seen.
    • Since commits are done locally, you can commit as often as you like. Because history for your local repository can be rewritten, commits become more of a checkpoint than something permanent.
    • Awesome command line flow. Although Git has plenty of commands available, you only need a couple to get the job done on a daily basis. Git smoothly integrates with several command line and GUI tools, and you can easily jump to the GUI for more complex operations like merging.

    The only downside to Git is with binary assets - if you store and change binary assets in your repository, it will quickly grow to an unwieldy size.
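
    To give a sense of that daily flow, a minimal branch-per-feature cycle might look like this (branch and commit names are just examples):

    • git checkout -b my-feature  # create and switch to a feature branch
    • git add .  # stage your changes after editing
    • git commit -m "Implement my feature"  # commit locally, as often as you like
    • git checkout master  # switch back to the main branch
    • git merge my-feature  # merge the feature branch in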

    READ MORE >

  • How Small is Small Enough

    As developers, we are very good at breaking up components into sub-components - sometimes exceedingly so. When confronted with a larger than average chunk of work, we make a choice: either consider the work as indivisible, to be delivered in its entirety, or break it up into smaller pieces. Most of us already experience a sense of foreboding when confronted with a piece of work that is too large. Whether you are estimating in hours or in story points, the question is: when breaking up work items into smaller pieces - how small is small enough?

    Delivering software, one piece at a time

    By now, we know that the best way to write software is to evolve it bit by bit. Doing work in small chunks allows us to

    • free our minds of details that are not relevant to the here and now
    • make small, certain steps towards a goal
    • measure progress as soon as it happens
    • get feedback as soon as possible

    Too much of a good thing can also be bad for you - too many small tasks could cause the overhead of managing these tasks to become significant in itself, and measuring progress could become a nightmare. Also, when estimating on very fine-grained (low-level) tasks, local safety typically becomes a problem.

    READ MORE >

  • The Humble Story Point

    I’ve had some interesting discussions lately on the management of work through user stories. A lot of teams, especially those just starting to use agile techniques, seem to have quite a bit of uncertainty around some common topics:

    • The theory behind story points and why they are preferred over estimations in hours
    • Why story points and velocity are self-correcting measures
    • The sizing process and appropriate sizes for stories

    The problems with sizing in units of time

    I’ve had the opportunity of working on several waterfall projects. I feel blessed to have been in these teams because I now recognize that:

    • Most estimations in hours are completely thumb-sucked. Given that software development is an incredibly complex beast, how can we possibly forecast time to be spent on a specific task in an accurate fashion?
    • Work breakdown structures are frequently used to aid in estimations, even though they exacerbate inaccuracies of estimation at lower levels.
    • As teams typically contain members at different skill levels, how long a task takes depends on which members of the team actually do the work. An estimate based on time does not make sense in this context.
    • Traditional project management focuses on delivering projects on time, sometimes sending teams on a death march and severely compromising on quality.
    • Waterfall based software teams are held to completely inaccurate estimations. The Gantt chart proves to be completely inflexible as the focus of the traditional project manager is on staying true to the original plan.

    It should be obvious that I’m not a big believer in time-based estimations. Apart from the above, one should also watch out for the problem of local safety:

    If asked how long a small task will take, a developer will naturally not want to deliver it late. After all, this is a small task, it should be on time or the developer wouldn’t be professional. Hence, the estimate will include a buffer to absorb the developer’s uncertainty about the task.

    The estimate given by the developer will depend on how confident that developer is that he has understood the task and knows how to perform it. If there is uncertainty in the understanding of the task, then the estimated time to completion is likely to be significantly longer. No one wants to be late with such a small task.

    READ MORE >

subscribe via RSS