WPF Toolkit ported to .Net 4.0 and VS 2010

Today I needed to make use of the WPF Toolkit from Codeplex in a .Net 4.0 WPF application.  Those who have used the WPF Toolkit will know that it was built for .Net 3.5 WPF, and a number of features within the toolkit got folded into the .Net 4.0 WPF framework (most notably a DataGrid control and the VisualStateManager).

Unfortunately the fact that these features got folded into WPF itself means that there are a number of naming clashes between the toolkit and WPF 4.  This has been reported on the forums a few times and there is also a thread in which someone from MS China states “the WPF Toolkit will not be updated to 4.0 anytime soon”.
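
To make the clash concrete, here is a minimal sketch of the sort of thing that breaks (the namespace details are from memory of the 3.5-era toolkit, so treat them as assumptions rather than gospel):

// Assumption: the 3.5 toolkit shipped its VisualStateManager in the
// System.Windows namespace, which is where WPF 4 now defines its own.
using System.Windows;

public static class StateHelper
{
    public static void ShowAsNormal(FrameworkElement element)
    {
        // With both WPFToolkit.dll and WPF 4's PresentationFramework.dll
        // referenced, this fails to compile with error CS0433:
        // "The type 'VisualStateManager' exists in both ..."
        VisualStateManager.GoToState(element, "Normal", true);
    }
}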

I really needed to use the charting controls from the toolkit and I didn’t want to deal with naming conflicts, so I forked the WPF Toolkit from Codeplex into a Bitbucket repo, upgraded it to .Net 4.0 and VS 2010 and removed all the bits which conflict with WPF 4.  The following have been removed:

  • VisualStateManager
  • DataGrid
  • Calendar
  • DatePicker

I have also dropped out the Visual Studio design-time support.  You can grab the code or the compiled assemblies from Bitbucket.

Enjoy!

July 21 2011

How to focus on the wrong thing when writing code

A recent discussion on our mailing list revolved around the use of regions, and whether stripping them out of a codebase was a useful first task when joining a new project.  The conclusion was, sensibly, that there were likely bigger fish to fry (especially since region outlining can easily be disabled in your IDE) and so regions should be left alone.  Along the way, however, someone voiced the opinion that software developers have a professional responsibility to adhere to existing standards.

Consistency in code is a great aid to readability – no doubt about it – but often someone needs to be the first to write a unit test, for example.  The benefit of automated testing outweighs any benefit from uniform code by orders of magnitude.  Similar arguments apply to the use of IoC, good code design, proper consideration of class / method / variable names, refactoring, SOLID principles and so on.  All of these things are simply more important than whether you group all your methods into a region named “Methods”.  And unfortunately, none of them can be adequately codified into standards – it’s simply not possible.

Curiously, coding standards are so mechanical that they can actually be captured by software and fixes can be automatically applied.  These are the only coding standards that I personally ever bother to try and apply on a project.  I want development tools to be worrying about capitalisation, the presence or absence of underscores at the start of variable names, positioning of braces, and so on.  I do not want developers to concern themselves with such issues – the time and brain power of a good developer is simply too valuable to waste on such niceties.
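
As a purely illustrative example (the class and its members are hypothetical), here is the kind of change such a tool can make entirely on its own:

// Before: mechanical issues only – capitalisation, an inconsistently
// applied leading-underscore convention, and cramped brace layout
public class orderProcessor {
    private int _Count;
    public void process() { if (_Count > 0) { /* ... */ } }
}

// After: the same code once the tool has applied the standards
public class OrderProcessor
{
    private int _count;

    public void Process()
    {
        if (_count > 0)
        {
            // ...
        }
    }
}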

Adherence to coding standards is just like tidying the decks of a ship.  It’s a good thing to do … just make sure that you aren’t on a Titanic that is slowly sinking into the deep!

How would you like that developed, sir? Tactically or strategically?

Here’s a number of scenarios which I’ve seen played out repeatedly in different settings.  They all have a common root – see if you can spot it:

Business user: “Ever since you developers adopted agile your releases have been buggy!  Agile is rubbish!”

Product Owner: “Great!  You’ve shown me exactly what I want!  Let’s launch tomorrow!”
Developer: “Oh no … it will take at least a month to get this ready for launch.”

Product Owner: “That POC is spot on!  Let’s start developing the next feature.”
Developer: “But it’s a POC … there’s a bunch of stuff we need to do to turn it into production-ready code.”

Project Manager: “The velocity of the team is far too low.  We should cut some of the useless unit testing stuff that you guys do.”

So what’s the common link?  Right … quality!  Or more specifically, different views around levels of quality.

Now in agile terms, quality is best represented in the definition of done.  This definition should codify exactly what you mean when you say “this story is done”, or “how long until it is done?”.  Scrum itself doesn’t provide any specific guidance around the definition of done, but it does say that it’s important for the team to agree on one.

It’s important to note that the definition of done should not be specific to a given story.  So my story about bank transfers may have a number of acceptance criteria around how to debit and credit various accounts, but even if my code works I might not be done because I haven’t had my peer review yet.

With that all said, here is what I see is going wrong in the above scenarios:

  • The team have set their definition of done below the business user’s expectations (which are probably unstated).
  • The team have set their definition of done below the product owner’s expectations – the product owner is expecting it to include all release tasks.
  • The product owner doesn’t appreciate that there is a difference between POC code and code suitable for a long-term solution.
  • The project manager either doesn’t appreciate the benefits of unit tests, or thinks that the team have set their definition of done too high.

There are numerous good discussions and articles on the web about a definition of done (StackOverflow question, another question, an article, and another, and a HanselMinutes podcast), but I’d like to propose the idea that we should have some overall quality levels.  For instance, it doesn’t make sense to develop a strategic, long-term solution in the same way as a prototype.  So here’s what I propose as some overall quality levels:

  • Spike – Written to prove one single technical question.  Should never be extended unless that technical question needs to be explored further.
  • Prototype – Written as a quick-and-dirty demonstration of some functionality.  Can be used for on-going demonstrations, and so may need to be extended, but should never be deployed into production.
  • Tactical – Written to fulfil a specific, limited business requirement.  Life expectancy should be in the order of 2-3 years, after which it ought to be decommissioned and replaced.
  • Strategic – Written in response to on-going and continued business requirements.  Will be expected to evolve over time to meet changing business needs and emerging technologies.

And in terms of what I think these mean for a definition of done, here is my strawman (additional steps will most likely apply for a specific project, depending on the nature of the project):

[Quality levels grid]

So the next time you start a project, make up your own grid like this (or use this list, I don’t mind) and use it to have a detailed discussion with your product owner and scrum master.  It may surprise them to find that you are thinking about these issues, and their position on quality may well surprise you too!

The Template Method pattern gone wrong (or how to stop your overridden methods getting called)

The definition of the “Template Method pattern” may or may not be familiar to you, but the following bit of code certainly will be:

public abstract class BroadcasterBase 
{ 
    public void SendMessage(Message msg) 
    { 
        // Perform some validation and pre-processing on msg 

        // An unqualified call is virtual, so the most derived override
        // of SendMessageInternal will run
        SendMessageInternal(msg); 
    } 

    protected virtual void SendMessageInternal(Message msg)
    {
        // Some default behaviour
    }
}

public class NetworkBroadcaster : BroadcasterBase
{
    protected override void SendMessageInternal(Message msg) 
    {
        // Send the message, now that it has been validated and all pre-processing is complete
    }
}

For the pedants amongst us (which includes me), the Template Method pattern can be defined as:

A template method defines the program skeleton of an algorithm. One or more of the algorithm steps can be overridden by subclasses to allow differing behaviours while ensuring that the overarching algorithm is still followed.

Now this works well until it starts becoming over-engineered and we start having more than one base class. 

public abstract class BroadcasterBase 
{ 
    protected virtual void SendMessageInternal(Message msg)
    {
        // Some default behaviour
    }
}

public abstract class ValidatingBroadcaster : BroadcasterBase
{
    public void SendMessage(Message msg) 
    { 
        // Perform some validation and pre-processing on msg 
        base.SendMessageInternal(msg); 
    } 
}

public class NetworkBroadcaster : ValidatingBroadcaster
{
    protected override void SendMessageInternal(Message msg) 
    {
        // Send the message, now that it has been validated and all pre-processing is complete
    }
}

And at this point it’s broken!  The code inside NetworkBroadcaster will never be executed.  It all comes down to one very small bit of code … the use of the “base” keyword in ValidatingBroadcaster.  That keyword makes the call non-virtual and binds it to a specific class at compile time – in this case BroadcasterBase.SendMessageInternal.  Eric Lippert puts it nicely:

a base call is a non-virtual call to the nearest method on any base class, based entirely on information known at compile time.
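
For completeness, the fix is simply to drop the qualifier – an unqualified (or “this.”) call is virtual, so it dispatches to the most derived override.  A minimal sketch of the corrected class from above:

public abstract class ValidatingBroadcaster : BroadcasterBase
{
    public void SendMessage(Message msg) 
    { 
        // Perform some validation and pre-processing on msg 

        // No "base." here: this call is virtual, so
        // NetworkBroadcaster.SendMessageInternal now runs as intended
        SendMessageInternal(msg); 
    } 
}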

I ran into this problem with the Microsoft Prism framework when extending the eventing mechanism.  Specifically I wanted to derive from CompositePresentationEvent and override the event publication mechanisms.  This particular problem prevented me from doing that.

There are probably two important lessons from this:

  1. Don’t go blithely putting “base.” in front of method calls, e.g. because something like StyleCop thinks you should
  2. Just because you have a base class with some overridable methods, don’t assume they will get called (the repro below shows just how easily this happens)
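
If you want to see the difference for yourself, here is a self-contained console repro (the class names are hypothetical, but the semantics are exactly those described above):

using System;

public abstract class Base
{
    protected virtual void Step()
    {
        Console.WriteLine("Base.Step");
    }
}

public abstract class Middle : Base
{
    public void Run()
    {
        base.Step();  // non-virtual: always Base.Step, bound at compile time
        Step();       // virtual: dispatches to Derived.Step at run time
    }
}

public class Derived : Middle
{
    protected override void Step()
    {
        Console.WriteLine("Derived.Step");
    }
}

public static class Program
{
    public static void Main()
    {
        new Derived().Run();
        // Prints:
        //   Base.Step
        //   Derived.Step
    }
}
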
June 27 2011