I’m still alive. And so will this blog be once again.
It’s an interesting feeling to notice that your last post is dated 4 years (4 YEARS!) ago.
I’ve done the migration from Octopress to vanilla Jekyll now and in the spirit of doing the smallest thing possible, this is the MVP.
Since my last post I have made the leap to Software Development Manager, and then to General Manager - Technology and Development at DStv Digital Media. It’s been an interesting journey and I have some scars to show and some stories to tell.
For now, let’s consider this a kickstart of a habit before New Year’s resolutions kick in.
Hold thumbs. Let’s see how this goes.
I’m an avid believer in testing - TDD helps drive design, and having a test suite available to verify behavior while maintaining an application is worth a million bucks. However, even the most complete unit test suite still doesn’t guarantee that the integration between different components is perfectly done, nor does it test the value a system delivers from a user perspective. Does a feature really do what a user expects it to do? Does your application fulfill its functional and non-functional requirements? Does it even run?
Acceptance testing tests features as if a user was interacting with them. This means testing the system through the user interface, a system driver layer, or a combination of the two. This could be useful in a couple of ways:
- As a smoke test for a deployed application.
- As part of your continuous integration build.
- As part of your deployment pipeline strategy.
I quite liked the following simple distinction between what BDD and TDD aim for:
To enable BDD, RSpec does a beautiful job of setting up context, and Cucumber guides you down the given-when-then path. Both (as well as other testing frameworks) can be used to implement acceptance tests, but Cucumber only really makes sense if you have a business-facing person looking at your specs. If not, writing your tests in English is a complete waste of time.
Unfortunately the .NET world does not quite have the same amount of tooling support for this kind of thing. This StackOverflow question gives a nice summary of the current state of BDD libraries for .NET. For the xSpec (context / specification) flavour, we have NSpec, Machine.Specifications, and NSpecify. Quite frankly, because of the constraints of a statically typed language, the syntax of all of these libraries sucks a little. We could try to use RSpec with IronRuby, but that would add an extra, unneeded paradigm to a project.
I’ve been playing a bit with Vagrant, an easy way to set up virtual machines for development purposes. Vagrant provides a simple command line interface paired with a setup script (a Vagrantfile, written in Ruby) to provision virtual machines, share folders between the host and the virtual machine, and provide port forwarding. I’ve been using it to set up Linux development environments on Windows, as well as to keep my development machine clean and to run different versions of otherwise incompatible packages/gems.
The `vagrant init` command creates a Vagrantfile with some Ruby code in it that acts as configuration for your virtual machine. Vagrant builds a virtual machine from a base box. A list of publicly available boxes can be found here. For this machine, we can base it on the Ubuntu Lucid 32 bit image:
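The original snippet was lost in the blog migration, so as a sketch only, a minimal Vagrantfile for such a box might look like this (Vagrant 1.x-era syntax; the folder paths are made-up examples):

```ruby
# Illustrative Vagrantfile sketch - not the original snippet from this post.
Vagrant::Config.run do |config|
  # Base the VM on the publicly available Ubuntu Lucid 32-bit box.
  config.vm.box = "lucid32"
  config.vm.box_url = "http://files.vagrantup.com/lucid32.box"

  # Forward port 80 on the guest to 8080 on the host.
  config.vm.forward_port 80, 8080

  # Share a folder between the host (./src) and the guest.
  config.vm.share_folder "src", "/home/vagrant/src", "./src"
end
```

With this in place, `vagrant up` provisions the machine and `vagrant ssh` drops you into it.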
A week ago, I wrote:
Sigh, I can’t seem to make up my mind about the platform to host my blog on. You can find my new blog over at www.riaanhanekom.com.
Will post the details soon over there. This blog will self-destruct in a week or so.
My first blog was at Blogger, back when Google had just bought it. What pained me then was the lack of control - I couldn’t customize it to do exactly what I wanted to. After that, I tried Wordpress, where I was happy for quite some time, but the workflow of editing posts in HTML or via the rich text editor didn’t work out - I realised that I was spending more time formatting posts (images, code, etc.) than writing them.
This drove me to Posterous, where I could write in Markdown, have gist support, and post by email. This helped me to blog more regularly since it lowered the friction in posting considerably. There were a few bugs here and there, but nothing too dramatic. Over time, I was more and more convinced that I should move again due to:
- Death by a thousand paper cuts - little things that just worked in other blogging platforms were not functional (or plain non-existent) in Posterous.
- The acquisition of Posterous by Twitter puts the platform at risk of stagnating.
- Lack of control (again).
I’ve decided to finally bite the bullet and host my own instance of a blogging platform that I can control. This blog is now running on Octopress, the “blogging framework for hackers”, and I’ve never been happier.
Git, the distributed source control system, is fast becoming the de facto standard, at least in the open source arena. Although it comes with a bit of a learning curve, the advantages of using Git over more traditional SCMs like Subversion and TFS far outweigh the investment of time necessary to learn it.
A quick introduction to Git, with a basic command reference. If you are familiar with Git, you probably won’t learn anything new. If not, please continue reading.
The Pros and Cons
Why switch to Git?
- It’s fast. Repository history is stored locally, so we can diff and traverse history at a whim without having to involve the server.
- Cheap local branches. Ever wanted to branch per feature? Git is your friend.
- Merges are almost magical. Git does a much better job of merging than any other SCM I’ve seen.
- Since commits are done locally, you can commit as often as you like. Because history for your local repository can be rewritten, commits become more of a checkpoint than something permanent.
- Awesome command line flow. Although Git has plenty of commands available, you only need a couple to get the job done on a daily basis. Git smoothly integrates with several command line and GUI tools, and you can easily jump to the GUI for more complex operations like merging.
The only downside to Git is with binary assets - if you store and change binary assets in your repository, it will quickly grow to an unwieldy size.
As developers, we are very good at breaking up components into sub-components - sometimes exceedingly so. When confronted with a larger than average chunk of work, we make a choice: either consider the work as indivisible, to be delivered in its entirety, or break it up into smaller pieces. Most of us already experience a sense of foreboding when confronted with a piece of work that is too large. Whether you are estimating in hours or in story points, the question is: when breaking up work items into smaller pieces, how small is small enough?
Delivering software, one piece at a time
By now, we know that the best way to write software is to evolve it bit by bit. Doing work in small chunks allows us to
- free our minds of details that are not relevant to the here and now
- make small, certain steps towards a goal
- measure progress as soon as it happens
- get feedback as soon as possible
Too much of a good thing can also be bad for you - too many small tasks could cause the overhead of managing these tasks to become significant in itself, and measuring progress could become a nightmare. Also, when estimating on very fine-grained (low-level) tasks, local safety typically becomes a problem.
I’ve had some interesting discussions lately on the management of work through user stories. A lot of teams, especially those just starting to use agile techniques, seem to have quite a bit of uncertainty around some common topics:
- The theory behind story points and why they are preferred over estimations in hours
- Why story points and velocity are self-correcting measures
- The sizing process and appropriate sizes for stories
The problems with sizing in units of time
I’ve had the opportunity of working on several waterfall projects. I feel blessed to have been in these teams because I now recognize that:
- Most estimations in hours are completely thumb-sucked. Given that software development is an incredibly complex beast, how can we possibly forecast time to be spent on a specific task in an accurate fashion?
- Work breakdown structures are frequently used to aid in estimations, even though they exacerbate inaccuracies of estimation at lower levels.
- As teams typically contain members at different skill levels, how long a task takes depends on which members of the team actually do the work. An estimate based on time does not make sense in this context.
- Traditional project management focuses on getting projects done on time, sometimes sending teams on a death march and severely compromising on quality.
- Waterfall based software teams are held to completely inaccurate estimations. The Gantt chart proves to be completely inflexible as the focus of the traditional project manager is on staying true to the original plan.
It should be obvious that I’m not a big believer in time-based estimations. Apart from the above, one should also watch out for the problem of local safety:
If asked how long a small task will take, a developer will naturally not want to deliver it late. After all, this is a small task, it should be on time or the developer wouldn’t be professional. Hence, the estimate will include a buffer to absorb the developer’s uncertainty about the task.
The estimate given by the developer will depend on how confident that developer is that he has understood the task and knows how to perform it. If there is uncertainty in the understanding of the task, then the estimated time to completion is likely to be significantly longer. No one wants to be late with such a small task.
In a discussion with a former colleague of mine on the organization of components and on system boundaries, we focused on the complexity inherent to software building. It hit me that we can learn a little from physics here.
The first law of thermodynamics states that
energy can be transformed, i.e. changed from one form to another, but cannot be created or destroyed.
This law, in my mind, can be applied to software development quite generally:
Complexity in software can be transformed, i.e. changed from one form to another, but cannot be created or destroyed.
I was about to make this law my own, but on writing this I found that someone else had named it back in 2003. Matt’s first law of software complexity states:
The underlying complexity of a given problem is constant. It can be hidden, but it does not go away.
Complexity is conserved by abstractions. In fact, apparent complexity can be increased by abstractions, but the underlying complexity can never be reduced.
This post is more than two months late, but I’ve been at DStv Online since November 2011.
Intervate has been incredibly good to me, and is still on my recommended list of employers. I have had incredible growth there, and I thank them for that - but with consideration of some quality of life issues I’ve been having, and a need to stretch my skillset a little, I’ve decided to move on.
So far, I’ve found my new environment stimulating and challenging. I’ve spent most of my career in finance (aka banks) - the media industry is completely new to me, so I’m learning a lot there. The people are eager to experiment and learn on both technical and process levels, and I can say that I have never felt at home quite so quickly at any of my previous jobs. This video is a good indication of the type of environment they have.
Some perks:
- Exciting, pioneering work. Not (as a friend put it) writing Excel over and over again.
- Public facing sites. You know that “What do you do for a living?” question? I have an answer for people who are not complete geeks. I can show them.
- I work close to my house. No highway travelling. Period. No sitting in traffic. Actually, I might even downgrade to one of these soon.
Nothing quite like a bit of change.
If you are subscribed to my FeedBurner feed, there should be no changes necessary from your side to receive updates from me. Please update any other bookmarks to my site.
Wordpress has served me well, but has become a bit of an antiquity when compared to some of the modern blogging platforms. Some of the features here are very encouraging.
Here's to a new year of fresh content!
An "Are you sure you want to do this?" confirmation when you attempt to run a query that contains a destructive statement (delete/update) without a where clause. With the hordes of database tables being accidentally wiped in the world, why hasn't this been done yet? This feature would sure remove a lot of the fear of pressing F5.*

\* Those who are not afraid of running a delete query on a production database clearly haven't lived long enough yet. On a similar note, those who trust their code still need to experience taking down a couple of servers with it.
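The check such a feature would need is simple enough; here's a rough Python sketch of the idea (a real tool would parse the SQL properly instead of pattern-matching):

```python
import re

def is_dangerous(sql: str) -> bool:
    """Return True for a destructive statement (DELETE/UPDATE) that has
    no WHERE clause - the case that should trigger an 'Are you sure?'
    confirmation before the query runs."""
    # Strip line comments and normalise whitespace.
    stripped = re.sub(r"--.*", "", sql)
    stripped = " ".join(stripped.split()).strip().lower()
    destructive = stripped.startswith(("delete", "update"))
    return destructive and " where " not in f" {stripped} "

print(is_dangerous("DELETE FROM Orders"))                # True
print(is_dangerous("DELETE FROM Orders WHERE Id = 42"))  # False
print(is_dangerous("SELECT * FROM Orders"))              # False
```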
Lean Software Development: An Agile Toolkit, Mary Poppendieck and Tom Poppendieck. An excellent resource on how the lean principles from product manufacturing can be applied to software development. A good overview of the content can be found in their 2002 paper here (pdf) and in the Wikipedia article on Lean.
Each chapter in the book describes several “thinking tools” for the 7 Lean Principles.
Background: We’ve started a weekly patterns & practices meeting at work with some of our senior developers, where our discussions and actions will hopefully bring some improvement to the current development environment. Once a week one of us gets the opportunity to demonstrate a new topic - very much akin to a knowledge transfer session, but more fine-grained and at a higher level than just technology. Gareth Stephenson suggested we blog about the content for the benefit of others, which I think is not a bad idea at all. Furthermore, I’ve never been able to find a good, down-to-earth resource that explains the benefits of DI and how to get started with it. This post is a recording of a short presentation I gave on DI and the patterns that emerge in its usage - I hope it proves useful.
.NET Developers Don’t Do Patterns
There’s a surprising number of .NET developers that are still not familiar with Dependency Injection (DI), or with patterns in general. In the Java world, File –> New Project would give you a template with Spring baked in – when working with .NET, that kind of guidance is missing from the toolset. Perhaps some exposure to MEF in .NET 4.0 will entice .NET developers to look further than just the Microsoft toolset, in the same way that Entity Framework finally exposed the majority of developers to ORMs, with NHibernate getting quite a bit of adoption as a result. DI and Inversion of Control go hand-in-hand, but be careful not to confuse the two – DI is a tool, whereas Inversion of Control is a pattern.
A World Filled With Legacy Code
We’ll get to why we would use a container soon, but consider the following code in the meanwhile:
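The original C# sample didn't survive the migration; as a stand-in, here's a minimal Python sketch of the same idea (all names hypothetical): a class that creates its own dependency versus one that has it injected.

```python
class SqlOrderRepository:
    """The 'real' dependency - imagine this talks to a database."""
    def find(self, order_id):
        return {"id": order_id, "source": "sql"}

class OrderServiceHardWired:
    """Legacy style: news up its own dependency, so it cannot be
    tested without a real database."""
    def __init__(self):
        self._repository = SqlOrderRepository()

class OrderService:
    """DI style: the dependency is passed in, so a test can inject
    a fake repository instead."""
    def __init__(self, repository):
        self._repository = repository

    def get_order(self, order_id):
        return self._repository.find(order_id)

class FakeRepository:
    """A stand-in used in tests - no database required."""
    def find(self, order_id):
        return {"id": order_id, "source": "fake"}

service = OrderService(FakeRepository())
print(service.get_order(1)["source"])  # fake
```

The injected version is what makes a container worthwhile later: something has to construct the object graph, and the container takes that job over.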
The Passionate Programmer: Creating a Remarkable Career in Software Development (Chad Fowler) and The Pragmatic Programmer (Andrew Hunt and David Thomas).
I wish I had read The Pragmatic Programmer earlier in my career as it provides an excellent overview of many good practices in software development. Reading this a couple of years ago would have avoided/shortened many a learning experience.
A warm welcome to David Schmitt, who has joined the NGenerics team. David's bio: David is a Debian Admin and .NET Developer living and working in Vienna, Austria. He's a partner in a small tech company startup at http://dasz.at/. His current interest with NGenerics is the implementation of the Immutable namespace. As stated in his bio, David will be working on implementing a selection of immutable data structures.
- NGenerics now has a Silverlight version in this release. The assemblies have been compiled against Silverlight 3.
New data structures
- A single-valued (i.e. not KeyValuePair) version of the RedBlackTree.
- A generic version of the priority queue whose key can be anything implementing IComparable, or any object paired with an IComparer instance.
- Added the Converter pattern under NGenerics.Patterns.Conversion.
- Added the Specification pattern under NGenerics.Patterns.Specification.
New Extension Methods
- ConvertTo, Serialize, Deserialize, and DeepCopy extension on Object
- ForEach extension on IEnumerable<T>
- IsSimiliar extensions on double
- Missing list extensions in Silverlight
Content migration for NGenerics went well - but please note the following changes:
- All downloads for releases have been moved to the new project site.
- Due to Google not being willing to add the MS-PL license to Google Code, we've had to re-license NGenerics under the LGPL. Practically the same license, just more wordy.
- A User Voice site has been set up for feature requests - please log (and vote for) any features you would like to see in NGenerics. We'll get to these in order of the number of votes attached to them.
- Discussions are now hosted on the ngenerics-users Google group.
The home of NGenerics has moved from its original project site on CodePlex to Google Code. We're still in the process of migrating some of the content, like the downloads, but for the most part we're up and running. If you have any issues or feature suggestions, please submit them on the Google Code project site.
General Data Structures
The Priority Queue data structure has the same basic behavior and operations as the classic queue found in .NET. A queue applies a first-in, first-out (FIFO) approach to a list of objects, and is typically used in message processing where messages are processed in order of arrival. Queue implementations usually have the following operations available:
- Enqueue - Adds the object at the back of the queue.
- Dequeue - Fetches and removes the object at the front of the queue.
- Peek - Fetches the object at the front of the queue for inspection without removing it.
- IsEmpty - Provides an indication of whether the queue contains any items.
- Count - The number of items in the queue.
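The original code samples from this post are gone (NGenerics itself is C#), but the operations above can be sketched in Python on top of a binary heap:

```python
import heapq

class PriorityQueue:
    """A minimal min-priority queue sketch: the item with the smallest
    priority is dequeued first; equal priorities keep FIFO order."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities stay FIFO

    def enqueue(self, item, priority):
        heapq.heappush(self._heap, (priority, self._counter, item))
        self._counter += 1

    def dequeue(self):
        """Fetch and remove the front item."""
        return heapq.heappop(self._heap)[2]

    def peek(self):
        """Inspect the front item without removing it."""
        return self._heap[0][2]

    def is_empty(self):
        return not self._heap

    @property
    def count(self):
        return len(self._heap)

q = PriorityQueue()
q.enqueue("low", 5)
q.enqueue("high", 1)
q.enqueue("medium", 3)
print(q.dequeue())  # high
print(q.peek())     # medium
print(q.count)      # 2
```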
Cool, didn't know you could do that:
Of course, this doesn't play all that well with C#:
Maybe I should finally tackle that CLI specification...
Incredibly simple, yet it took me a while to craft the correct search terms to come up with this solution. I can't even find that forum post now, so hopefully this entry will save someone else some time.
In a lot of solutions there exists a base class that takes the inheritor as a type reference to be able to do some work on it - you'll typically see this pattern in fluent interfaces and classes that utilize reflection. In our current solution we generate NHibernate entities on the fly, which all need to inherit from a base class that looks something like this:
Utilizing TypeBuilder, we can build new classes as follows:
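The C# snippets didn't survive the migration, but the core idea - generating a subclass of a base class at runtime - can be sketched in Python with the built-in `type` function (names are hypothetical; in C# the base would take the inheritor as a generic type parameter, which Python doesn't need since `type(self)` is always the concrete class):

```python
class EntityBase:
    """Stand-in for the NHibernate-style base class: it can inspect
    the concrete subclass (the 'inheritor') via type(self)."""
    def describe(self):
        return f"entity of type {type(self).__name__}"

def build_entity_type(name, fields):
    """Dynamically create a new subclass of EntityBase with the given
    attributes - the rough Python analogue of using TypeBuilder."""
    attrs = {field: None for field in fields}
    return type(name, (EntityBase,), attrs)

Customer = build_entity_type("Customer", ["name", "email"])
c = Customer()
print(c.describe())         # entity of type Customer
print(hasattr(c, "email"))  # True
```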
General Data Structures
Looking at too many tree structures is bound to make your eyes bleed, so we'll take a bit of a break to discuss one of the simpler data structures NGenerics offers - the ObjectMatrix. The ObjectMatrix is a representation of a 2-dimensional array of objects - like, for example, a game board:
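The original C# sample is lost; here's a tiny Python sketch of the same idea - a fixed-size 2-D grid of objects addressed by row and column:

```python
class ObjectMatrix:
    """A minimal 2-D grid of objects, sketching the idea behind
    NGenerics' ObjectMatrix (which is C#)."""
    def __init__(self, rows, columns, default=None):
        self.rows = rows
        self.columns = columns
        self._cells = [[default] * columns for _ in range(rows)]

    def __getitem__(self, pos):
        row, col = pos
        return self._cells[row][col]

    def __setitem__(self, pos, value):
        row, col = pos
        self._cells[row][col] = value

# A 3x3 game board.
board = ObjectMatrix(3, 3, default=".")
board[1, 1] = "X"
board[0, 2] = "O"
print(board[1, 1])  # X
print(board[2, 0])  # .
```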
Thanks to a contribution from Andre van der Merwe (blog, twitter), NGenerics now features Tarjan's strongly connected components algorithm on its Graph implementation. By using Tarjan's algorithm, we're able to find the strongly connected components (read: cycles) in a directed graph. You can invoke it via the FindCycles method on the Graph<T> class, available from build #126.96.36.199764 (30 Jun 09 13:40).
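For reference, the algorithm itself is compact; here's a Python sketch (NGenerics' implementation is C# and lives on Graph<T>):

```python
def tarjan_scc(graph):
    """Tarjan's strongly connected components algorithm.
    graph: dict mapping node -> list of successor nodes.
    Returns a list of components, each a list of nodes."""
    index = {}       # discovery order of each node
    lowlink = {}     # smallest index reachable from the node's subtree
    on_stack = set()
    stack = []
    components = []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of a strongly connected component.
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            components.append(component)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return components

# a -> b -> c -> a forms a cycle; d hangs off it.
cycles = tarjan_scc({"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []})
print([sorted(c) for c in cycles if len(c) > 1])  # [['a', 'b', 'c']]
```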
General Data Structures
[If you haven't read my previous post on General Trees, it might be a good idea to do so before reading further.]
A Binary Tree is a refined version of the General Tree that limits the maximum number of children each node can have - two, to be exact, hence the name. Child nodes are thus referred to as the Left and Right child nodes. The BinaryTree<T> class in NGenerics provides an implementation of a Binary Tree, which, when simplified, looks something like this:
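The simplified C# snippet is lost, but the shape of the structure is easy to sketch in Python: a node with data plus optional Left and Right subtrees.

```python
class BinaryTree:
    """A node with data and at most two children - a Python sketch
    of NGenerics' BinaryTree<T>."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

    @property
    def degree(self):
        """Number of children actually present (0, 1, or 2)."""
        return sum(child is not None for child in (self.left, self.right))

    def depth_first(self):
        """In-order traversal: left subtree, node, right subtree."""
        if self.left:
            yield from self.left.depth_first()
        yield self.data
        if self.right:
            yield from self.right.depth_first()

tree = BinaryTree(2, left=BinaryTree(1), right=BinaryTree(3))
print(list(tree.depth_first()))  # [1, 2, 3]
print(tree.degree)               # 2
```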
I've found podcasts a good way to spend your time when you're stuck in traffic - here's a list of my favorite developer podcasts, in alphabetical order:
- .NET Rocks - still one of the best podcasts out there. The banter is fun to listen to, and they have a wide variety of guests and topics on .NET. Also presents a huge (but fun) challenge in catching up on the more than 400 episodes recorded.
- 43 Folders - a productivity podcast, recommended.
- Alt.NET Podcast - infrequent releases, but some interesting topics.
- Deep Fried Bytes - good topics and friendly banter in a Southern flavour.
- Elegant Code - good topics, nice variety of guests.
- HanselMinutes - possibly my top podcast. Scott Hanselman provides a technical (and sometime hilarious) view on a variety of developer topics.
- Get Scripting - useful if you're into PowerShell scripting. The audio quality is not that great though.
- Herding Code - falls into the same class as Deep Fried Bytes and Elegant Code.
- Linux Outlaws - newscast for everything Linux.
- On Open Source - variety of topics on different platforms/languages, infrequent releases in audio format.
- Polymorphic Podcast - infrequent releases, but some good content in most of the episodes.
- Pragmatic Podcast - I just love the format of the Pragmatic Podcast - it plays like a news reel.
- Rookie Designer - originally downloaded it for my wife, but now I'm listening to it. All about the life of a designer (web / graphics).
- Ruby On Rails Podcast - ruby on rails.
- Software Engineering Podcast - more formal discussions on computer science topics. The accent takes some getting used to, but the content is worthwhile.
- Sparkling Client - short and sweet Silverlight podcasts.
- The ASP.NET Podcast - (mostly) short podcasts on ASP.NET and web development in general - sound quality gets in the way sometimes.
- The Java Posse - newscast on all things Java.
- The Startup Success Podcast - podcast on what it takes to build an IT startup - definitely recommended.
- The Thirsty Developer - wide variety of topics, good content.
General Data Structures
[Note: This post feels a little like CS101 - but I felt it necessary to discuss the basic tree concepts before we move on to some of the more specialized trees, like search trees.]
Visually, we can represent it, well, as a tree:
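The diagram and C# snippet from the original post are gone; a minimal Python sketch of a general tree - a node with any number of children - looks like this:

```python
class GeneralTree:
    """A node with an arbitrary number of children - a Python sketch;
    NGenerics' version is GeneralTree<T> in C#."""
    def __init__(self, data):
        self.data = data
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def breadth_first(self):
        """Visit nodes level by level, starting at this node."""
        queue = [self]
        while queue:
            node = queue.pop(0)
            yield node.data
            queue.extend(node.children)

root = GeneralTree("world")
africa = root.add(GeneralTree("africa"))
europe = root.add(GeneralTree("europe"))
africa.add(GeneralTree("south africa"))
print(list(root.breadth_first()))
# ['world', 'africa', 'europe', 'south africa']
```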
I share the same sentiment as Ayende on the visibility of open source project usage - page views and downloads are not useful in determining usage. If you do use NGenerics in your projects (whether public or private), you can let us know on the Ohloh project page. If you're using it in a public/open source project, drop us a line so that we can add a link to you in our Hall Of Fame.
If you're interested in trying out NGenerics on the Silverlight platform, you can find the assemblies under NGenerics Trunk/Artifacts. Be sure to let us know if you find any bugs/pain points/things that can be improved on this particular platform.
More for my own records than anything else... I'm a big fan of the about:blank home page, but sometimes corporate policies dictate (and enforce) the company intranet site as IE's home page. To start IE without going to the set home page, add the nohome command line parameter like so:
Something that I find a use for in almost every project I work on, is the HashList (also known as a MultiMap in the Java world) that's been added in NGenerics 1.2. Simply put, a HashList is a multi-valued dictionary (one key, multiple values), that uses something like a Dictionary<TKey, IList<TValue>> under the covers. It still retains dictionary semantics, but handles the creation and destruction of the key/list pairs itself.
For example, adding a couple of items to the HashList will have the following result:
| Key | Values |
|-----|--------|
| 4 | Cow, Horse |
| 2 | Chimpanzee |
| 8 | Spider |
Removing the cow and the horse will result in the list (and the key) being removed from the dictionary, affecting both the key count and the value count. The HashList implements IEnumerable<KeyValuePair<TKey, IList<TValue>>>, so items can be traversed in grouped order. Sure beats the heck out of having all that list management code sitting in your code base.
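The behavior described above is easy to sketch in Python (NGenerics' HashList is C#; this is just an illustrative multimap with the same clean-up-on-removal semantics):

```python
class HashList:
    """A multi-valued dictionary sketch: one key maps to a list of
    values, and empty lists (and their keys) are removed automatically."""
    def __init__(self):
        self._data = {}

    def add(self, key, value):
        self._data.setdefault(key, []).append(value)

    def remove(self, key, value):
        values = self._data[key]
        values.remove(value)
        if not values:          # last value gone: drop the key too
            del self._data[key]

    @property
    def key_count(self):
        return len(self._data)

    @property
    def value_count(self):
        return sum(len(v) for v in self._data.values())

    def __iter__(self):
        """Traverse items in grouped (per-key) order."""
        return iter(self._data.items())

h = HashList()
h.add(4, "Cow"); h.add(4, "Horse")
h.add(2, "Chimpanzee"); h.add(8, "Spider")
h.remove(4, "Cow"); h.remove(4, "Horse")   # key 4 disappears entirely
print(h.key_count, h.value_count)  # 2 2
```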
I'll say it once more - it's not worth it.
Your aim should be at about 70% code coverage - as long as you know in your heart of hearts that everything has been tested that should have been tested.
What's vitally important (and the whole reason why I run coverage tools in the first place) is not the warm, fuzzy feeling I get in my tummy when I see a high number - it's the easy identification of untested code and pathways that yields the value.
The build script packages the main NGenerics assemblies in a zip file, so you always have access to the latest build from source. You can find those under "Artifacts".
We're still waiting for a response from the CodeBetter team on supporting Silverlight on the server - if we get that, the latest Silverlight version of NGenerics will be available as well. If you own an open source project and want CI for your project, find James Kovacs' post here.
About a year ago, the VisualSVN guys gave the NGenerics team a complimentary license for their product. After a year with it, I wouldn't be caught dead using Subversion as an SCC without it!
Get it here, and have a look at their VisualSVN Server product while you're there - truly fantastic stuff.
The Specification pattern has been added to NGenerics. In my previous post on the Specification pattern, we explored creating specification functionality using extension methods. In NGenerics it has instead been implemented with the operator methods (And, Or, Xor) on the actual interface, with an abstract class forming the base of all specifications. The deal-breaker for the extension method approach was the need to add the operators &, | and ^ in order to trim down the syntax a little. With the Specification pattern in NGenerics, you can now do this:
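The original C# snippet is gone, but the same composition style can be sketched in Python, where operator overloading plays the role of the C# &, | and ^ overloads (names hypothetical):

```python
class Specification:
    """Base class providing the composition operators - a Python sketch
    of the pattern described above (NGenerics' version is C#)."""
    def is_satisfied_by(self, candidate):
        raise NotImplementedError

    def __and__(self, other):
        return PredicateSpecification(
            lambda c: self.is_satisfied_by(c) and other.is_satisfied_by(c))

    def __or__(self, other):
        return PredicateSpecification(
            lambda c: self.is_satisfied_by(c) or other.is_satisfied_by(c))

    def __xor__(self, other):
        return PredicateSpecification(
            lambda c: self.is_satisfied_by(c) != other.is_satisfied_by(c))

class PredicateSpecification(Specification):
    """Wraps a plain predicate function as a specification."""
    def __init__(self, predicate):
        self._predicate = predicate

    def is_satisfied_by(self, candidate):
        return self._predicate(candidate)

is_adult = PredicateSpecification(lambda person: person["age"] >= 18)
is_named_bob = PredicateSpecification(lambda person: person["name"] == "Bob")

spec = is_adult & is_named_bob
print(spec.is_satisfied_by({"name": "Bob", "age": 30}))  # True
print(spec.is_satisfied_by({"name": "Bob", "age": 10}))  # False
```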
NGenerics 1.3 has finally reached production status - you can download it here.
With it released, we can finally start working on several exciting new features for 1.4. Watch this space!
Just installed a bunch of them - ahh, it's good to have them back.
13 September 2008 - Update: We've just found the little girl in the picture's first teeth!
- (F) Platform Limitation - The licenses granted in sections 2(A) & 2(B) extend only to the software or derivative works that you create that run on a Microsoft Windows operating system product.

Wow. Can't believe that you can still get away with open-sourcing code and then choosing a license like that. There's a great need for .NET application developers and consumers to be cross-platform compatible - limiting yourself to a Windows environment limits your audience and adoption. Hats off to Miguel and the Mono team for keeping Mono alive and well.
One of the walls I hit the most in C# when designing classes is the lack of support for multiple inheritance, which makes that one spot for inheriting from a very valuable spot indeed. For the purposes of this discussion, we'll start with a simple implementation of the specification pattern:
The evaluation is all set, but now we feel the need to add the boolean operators And, Or, and Not. Most developers follow the route of changing the interface to add And, Or and Not, creating a base (abstract) class (let's call it SpecificationBase) that implements the operators and leaves the IsSatisfiedBy method abstract. The downside of this approach is that concrete implementations of specifications are forced to give up their one empty slot for inheritance. An alternative is to leave the interface as is, and implement extension methods that operate on the interface - no inheritance needed.
The downside of this implementation is that you need to import the namespace in order to use the methods. Although most people will probably do this automatically anyway, it might not be clear what operations the class has to offer.
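Since the original C# snippets are lost, here's the extension-method alternative sketched in Python: free functions that operate on anything exposing `is_satisfied_by`, so concrete specifications keep their inheritance slot free (names hypothetical):

```python
class OnSpecialSpecification:
    """A concrete specification with no base class - its single
    inheritance slot stays free."""
    def is_satisfied_by(self, product):
        return product.get("on_special", False)

class InStockSpecification:
    def is_satisfied_by(self, product):
        return product.get("stock", 0) > 0

class _Combined:
    """Internal wrapper returned by the combinators below."""
    def __init__(self, predicate):
        self._predicate = predicate

    def is_satisfied_by(self, candidate):
        return self._predicate(candidate)

# Analogues of the And/Or/Not extension methods: plain functions
# operating on anything with an is_satisfied_by method.
def and_(left, right):
    return _Combined(lambda c: left.is_satisfied_by(c) and right.is_satisfied_by(c))

def or_(left, right):
    return _Combined(lambda c: left.is_satisfied_by(c) or right.is_satisfied_by(c))

def not_(spec):
    return _Combined(lambda c: not spec.is_satisfied_by(c))

spec = and_(OnSpecialSpecification(), InStockSpecification())
print(spec.is_satisfied_by({"on_special": True, "stock": 3}))  # True
print(spec.is_satisfied_by({"on_special": True, "stock": 0}))  # False
```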
My question is this : In the context of a reusable class library like NGenerics, is using extension methods in this way evil?
The best Firefox addon ever.
1) Get features out there quicker.
2) Decrease the time to get feedback on what we've done.
3) Keep the momentum.
4) Generate fewer bugs per release, as the scope of each release is reduced.

NGenerics 1.3 is more or less fixed in scope now. All the release version will add is some more unit tests, bug fixes, and updates to documentation. As always, you can request new features for version 1.4 here.
I'm a little bit late with this announcement - a month or so. Welcome to the world, little one!
Cayla Rose Hanekom was born on the 4th of June 2008 in Johannesburg, South Africa, weighing a whopping 4.1 kg.
You would think that the integration between Visual Studio (using the TFS client) and TFS would be better than the Visual SourceSafe integration. Nope - the same annoying bugs that were present in the Visual SourceSafe integration have rocked up in TFS, so I'm assuming the old codebase has just been extended.

After several unsuccessful check-ins (read: breaking the build) while working disconnected on NGenerics, I've finally given up on TFS. Well, not really, but as soon as the SVN bridge brings out a stable version, I'm switching. Congratulations to the CodePlex team for trying to fill the gap - although it took SubSonic abandoning TFS to get to this point.
I'm one of the converted in terms of test-driven development. One of the main "aha" moments for me was not writing unit tests - it was writing testable code. Any code that's testable is sure to be of a higher quality and easier to maintain than typical spaghetti code.

That brings me to the second part of the title - mocks. The art of mocking objects has taken off over the last couple of years. However, I feel that there is a real danger of abusing mocks in order to test code that is not written in a testable way. I've seen many examples where mocking is used in such abundance that what you're testing is the mock object's functionality - not your code. In cases like these you'd be much better off refactoring than mocking dependencies.

So here's my recommendation: focus on writing testable code, and try to minimize the use of mocks. Only use them when you've got no other way of testing your code.
As part of version 1.3 of NGenerics, I've finally removed most of the sealed keywords from the classes (where it made sense). It took me a while, but I've accepted the following rule (and Microsoft is of the same opinion, it seems): never seal a class unless there's a very good reason to do so (like it being internal, security reasons, etc). Programmers using your library will use it in many creative ways you can't even imagine, and sealing a class limits that creativity. Also see the post here.
I've recently left Avision to join Intervate. Although I miss Avision and its people tremendously, I felt that I had reached my maximum growth point there and needed some new challenges. I've been an employee at Intervate for two months now - and it's a great company: Microsoft Gold Partner, intelligent and competent people, exciting projects and learning opportunities. At the moment I'm somewhat of a contractor - but that should pass pretty soon.

Other than that, I'm working hard to get the 1.3 release of NGenerics out the door. A lot of focus is being placed on the quality of the library - thus a couple of breaking changes have been introduced (like the changing of namespaces). After this release, however, NGenerics will remain pretty stable (and backwards compatible). As we go along and change it, we'll start marking methods as deprecated whenever changes break compatibility with previous releases.
Yeah, NGenerics 1.2 has been released! If you haven't checked it out yet, do so now - no decent programmer should go without a toolbox of data structures and algorithms. Some new features in NGenerics 1.2 include:
Well, ok, if you say so... You are Superman.

Superman 85%, Green Lantern 60%, The Flash 60%, Iron Man 55%, Wonder Woman 47%.

You are mild-mannered, good, strong and you love to help others.
I've posted an article on generic data structures on CodeProject. At the moment, it provides the following data structures:

A couple of sorting algorithms have also been implemented. If you get a chance, go check it out and give me some feedback.
I'm back from a holiday of doing absolutely nothing on Ballito's beautiful coastline (KwaZulu-Natal, South Africa)...
Back to reality...
Wow. Biztalk is always full of surprises.
Stephen W. Thomas has an explanation here.
There have been some long gaps between blog entries - simply due to the limited time I can set aside for writing.
What am I up to these days?
- Still working my full time day job as a systems developer (I actually think it qualifies as a day-night job).
- Writing (and fixing) articles for CodeProject.
- Doing lots and lots of R&D (new frameworks / applications / general cool stuff are emerging at an incredible rate these days). I blame this one on Microsoft with all of their pre-release software.
- Spending lots of time on my studies (B.Sc. Hon in Information Systems). Didn't think it would take up this much time when I started it, but I'm almost there...
- Working on the next big thing, which will hopefully provide that handy additional cash flow.
- Spending time with the family ;)
- Trying to get some actual sleep in between.
I'm taking a break soon to spend a couple of days doing nothing but lying on a beach. LiveWriter is coming along, and I'll post a pic of South Africa's beautiful coastline.
I've grown interested in chess programming this year, and have started building my own chess engine that will speak the WinBoard protocol. Fascinating stuff, chess programming. Initial development is difficult, so I'm starting off with the basics:
- A simple bitboard representation.
- A simple implementation of the Alpha-Beta algorithm.
- Implementation of the WinBoard protocol.
When that's done there will be ample room for improvement (there are only 20 or so algorithms I still need to implement and try out, and then you can still play around with different representations of the board).
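The Alpha-Beta part of the plan can be sketched in miniature. This is a hedged illustration in Python rather than real engine code: the game tree is just nested lists standing in for searched positions, and the leaf numbers are stand-in static evaluations - a real engine generates moves and evaluates boards instead.

```python
import math

def alpha_beta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax with alpha-beta pruning over an explicit tree of nested lists."""
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                   # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                   # alpha cutoff
                break
        return value

# Root to move maximizes; each child list is minimized by the opponent:
# min(3,5)=3, min(6,9)=6, min(1,2)=1 -> the root picks 6.
best = alpha_beta([[3, 5], [6, 9], [1, 2]])     # -> 6
```

The pruning never changes the result - it only skips branches that provably can't affect it (here, the second leaf of the last subtree is never visited).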
Chess engines can compete against each other on sites like these - I'd love to do this soon.
Wow, this actually works! (Testing with LiveWriter.)
Got my first CodeProject article submitted! Yes, I know, it's really small (and still a work in progress), but it's a start! The URL is going to change soon; I'll update this post with the new link.
Sure, we all know how to do that by now, right?

Funny thing is, all the blog entries I've seen (like the ones here and here) end up creating a new map, usually situated in the root of your assembly (or even worse, they create temporary orchestrations to do it). To move the map to another folder in your project, you then have to edit the XML inside it, since it contains relative paths to the mapped schemas (and watch out for those namespaces).

Here's an easier way to do it:
- Create your empty map exactly where you want it.
- Add a transforms shape and open up the properties.
- Instead of choosing new map, choose existing map.
- Select your newly created (empty) map.
The Biztalk Editor will modify the map to take on the new message types - multiple ones, if you selected them.
Dispose those objects!

If you're using the Sharepoint object model anywhere, make sure that you dispose of any objects that you create (not those that the Sharepoint object model creates for you, like the result of calling GetContextWeb()). These objects have a small memory footprint visible to the Garbage Collector, but hold a large amount of unmanaged resources that are effectively invisible to it.

If you don't dispose of them, the GC will not be in any rush to free up the memory used by these objects, since it can't (or won't) see the bigger picture. The result? OutOfMemoryExceptions during any operation that creates a lot of these objects without disposing of them. (Side track: this is exactly what happens with the SqlCeCommand objects I moaned about here.)

See the full details in Microsoft's "Best Coding Practices" article for Sharepoint. One thing's for certain - we're not in a completely managed world yet...
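The underlying discipline - release unmanaged resources deterministically instead of trusting the garbage collector - translates to any language. Here's a small Python analogy (the NativeHandle class is entirely hypothetical, standing in for an SPWeb-like wrapper; the context manager plays the role of C#'s using/Dispose):

```python
class NativeHandle:
    """Hypothetical wrapper around a scarce unmanaged resource."""
    open_handles = 0                      # stands in for unmanaged memory the GC can't see

    def __init__(self):
        NativeHandle.open_handles += 1    # "allocate" the unmanaged resource

    def close(self):
        NativeHandle.open_handles -= 1    # release it deterministically

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()                      # runs even if the body raises
        return False                      # don't swallow exceptions

def do_work():
    # Like C#'s using(...): the handle is freed the moment the block exits,
    # not whenever a collector eventually notices the tiny managed wrapper.
    with NativeHandle() as handle:
        pass                              # ... use the handle ...

for _ in range(1000):
    do_work()
```

If do_work relied on garbage collection instead, a tight loop like this could hold a thousand unmanaged allocations alive at once - which is exactly the OutOfMemoryException pattern described above.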
I've been porting one of our applications to .NET CF 2.0, Windows Mobile 5.0 and SQL Mobile over the past couple of weeks.

On .NET CF 2.0: very impressive. The .NET Compact Framework 1 was a bit lacking in functionality - it really felt like Microsoft shipped an unfinished product. .NET CF 2, however, provides a lot of new functionality: Pocket Outlook access, phone and messaging capabilities, new controls, etc. Basically everything the guys at OpenNETCF had in their previous version is now included in the framework itself. They've released a new version of their library that adds even more functionality - download it and use it if you work with the Compact Framework. Working with generics is an absolute godsend - I can't tell you how much the incessant casting of objects chewed on my nerves...

I'm a bit disappointed on the SQL Mobile front - it seems that all the old features (read "bugs") are still there, like the failure to clean up SqlCeCommand objects. However, being able to make multiple connections to the database and remove my singleton connection manager class is great! (Side track: note that opening a connection to the database is a real bottleneck. For batches of statements - Microsoft, when can we get batch transactions on Mobile? - like bulk inserts, it's best to keep a connection open and reuse the same command. You'll definitely notice the speed difference if you don't.)

Microsoft has added support in the Visual Studio IDE for designing SQL Mobile databases (no more SQL queries painfully typed in using a stencil on the device). It works great, gets deployed onto your device, and you can even query it! What I'm missing (and it's driving me nuts) is support for reordering columns (I'm adding a new column and I want it next to the primary key), renaming tables (the only way I've found so far is to recreate the entire table), and stored procedures. OK, the stored procedures are going a bit too far, but it would be nice, wouldn't it?
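The "keep the connection open and reuse the command" advice looks like this in code. A sketch using Python's sqlite3 as a stand-in for SQL Mobile (an analogy, not SqlCe code; the table and rows are made up): one open connection, one reused parameterized statement, one transaction around the whole batch.

```python
import sqlite3

# One connection for the whole batch - not one per insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")

rows = [(i, i * 0.5) for i in range(1000)]

# Reuse a single parameterized INSERT for every row, inside one transaction,
# instead of reconnecting and re-preparing per row.
with conn:  # commits on success, rolls back on error
    conn.executemany("INSERT INTO readings (id, value) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

The per-row alternative (open connection, prepare statement, insert, close, repeat) pays the expensive connection setup a thousand times over - which is exactly the slowdown described above.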
I've been dabbling with SQL 2005 quite a lot these days, and two things really impress me: SSIS and Reporting Services. However, it seems that actually deploying an SSIS package can be a bit difficult.

I'm missing something about calling a web service from SSIS... It seems that even when you change the HTTP connection manager's setting to a new value (say, when deploying to a live environment), it still reads the WSDL file you provided at design time and calls the wrong web service. (Ethereal is the coolest network tracing tool ever, btw.)

And deployment! There doesn't seem to be a way (enlighten me if you know one) to specify a username and password for your HTTP connection manager at any time but design time - i.e. they're saved with your SSIS package. Now comes the problem: by default, SSIS encrypts sensitive data like usernames and passwords in the package XML. The default encryption type is "Encrypt Sensitive With User Key", which basically means that the file is encrypted with your user token - no other user can run your package.

Set this to "Encrypt Sensitive With Password", deploy and run it - it works! Now try to schedule the SSIS package with a SQL Agent job - it fails. Try to run it manually - it fails, stating that it can't decrypt the data even though you've specified the password for the package. (Side track: no useful information comes out of running an SSIS package interactively. If you want to know what actually went wrong, use dtexec.)

One final try: storing the package in SQL Server, which leaves it unencrypted and relies on database security to protect sensitive information. There seems to be no way to deploy a package to SQL Server storage (unencrypted) from Visual Studio (again, enlighten me if you know how). Instead, open SQL Server Management Studio and find the "Save a copy" option under the File menu - there you can select a SQL server to deploy to.

Deployment issues aside, great work by Microsoft on SSIS and Reporting Services!
Yeah! Finally got my new toy - an i-mate JAMin with (very important) Windows Mobile 5.0. I'll be playing around with the new SQL Mobile and .NET Compact Framework 2.0 on this device - expect a storm of new posts.
Amazing how little time you get as a developer for doing the things you really want to do (and here I am, sitting in front of a computer again ;)). This blog has gone a bit stale, and I've made a quarter-year resolution to put new energy into it.

Projects have been keeping me extremely busy these last couple of weeks - specifically an integration project (using Biztalk, of course). It's one of those projects where everything goes wrong and every unforeseen circumstance comes to the party. Hell, by this time I'm an expert at crisis management. Luckily, none of these problems are unsolvable (but that's where the time issue comes in).

So how does a systems developer end up making money and moving forward in life? Not that money is my main goal, but hey, it would be nice to retire at 35, wouldn't it? I probably wouldn't stop doing what I do, though... Anyway, enough with the ramblings - let's get some real content on this blog!
If your Biztalk orchestration is published as a web service and something goes awry in the orchestration, the specific exception might not be propagated to your web service - instead it throws an ApplicationTimeOutException. I was prepared to just live with it, until I found the following jewel here. Turns out you can have custom SOAP faults returned from your orchestration. Who would've known?
I'm no Scott Woodgate, but I think I know a fair amount about Biztalk. Still, it seems I never appreciated what the Listen shape actually does (you don't see it mentioned very often, btw). Turns out this little shape can solve many of the problems we're currently having - mainly correlation issues, like situations where the message you're waiting for might never arrive, leaving your orchestration dehydrated, waiting forever.

The shape works like a parallel branch, but with one major difference: only one leg of the shape needs to complete for the orchestration to continue, while in a parallel branch all branches need to complete. Add it to your orchestration, put a receive shape on one branch, and a delay shape followed by a terminate shape on the other. If a message is not received within 1 day (or whatever your delay is configured for), the orchestration will terminate. If a message is received, the orchestration continues as normal and the termination never occurs.
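The Listen shape's first-branch-wins semantics map neatly onto racing two tasks. A rough Python asyncio analogy (not BizTalk code - the queue stands in for the correlated receive port, and all names are invented for illustration):

```python
import asyncio

async def wait_for_message(message_queue, timeout_seconds):
    """Race a receive branch against a delay branch; first to finish wins."""
    receive = asyncio.create_task(message_queue.get())
    delay = asyncio.create_task(asyncio.sleep(timeout_seconds))
    done, pending = await asyncio.wait(
        {receive, delay}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()                      # the losing branch never completes
    if receive in done:
        return receive.result()            # message arrived: continue normally
    return None                            # delay fired: "terminate" the instance

async def demo():
    queue = asyncio.Queue()
    await queue.put("order-123")           # message arrives before the delay fires
    return await wait_for_message(queue, timeout_seconds=1.0)

result = asyncio.run(demo())
```

The key property mirrors the shape: unlike a parallel branch (asyncio.gather, where everything must finish), FIRST_COMPLETED lets one leg win and cancels the other.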
Here we go - my first post...

I'm currently working as a Systems Developer for a company in Johannesburg, specializing in Microsoft technologies (C#, Sharepoint, Biztalk, SQL, the whole stack). This blog is meant to be a place where I can get my thoughts into the open and get opinions from the whole wide world (www). It's meant as a learning experience not only for the people out there, but for me too.

This blog will be dominated by Biztalk, mobile development, Sharepoint, SQL and general C# .NET topics, with a little C++, AI and theoretical computing sprinkled on top. I'm breaking a bottle of champagne against this blog and naming it "Binary Thoughts".