Thanks to this being Posterous’ last day I’m spending the evening moving my blog to Octopress, hosted on Github.
Fortunately my strategy of creating very few posts has made this task a little less arduous than it might otherwise have been.
I decided to write this post because it was getting really hard to discuss 140 characters at a time on Twitter, and I just want to get some thoughts down before I go looking for other people’s solutions to the problem.
Over the last couple of years I’ve learned a lot about ReSTful web APIs (the kind that use HATEOAS, not the namby-pamby “it’s HTTP therefore it’s ReST” variety) and have also been intrigued by the simplicity with which CQRS+ES addresses scalability. I’ve played with both in personal projects, and done a bit of each at work too (although not as much as I’d like), and have wondered how to bring the best of both together. At DDD10 yesterday I attended Neil Barnwell’s CQRS and Event Sourcing… how do I actually DO it? and Jacob Reimers’ Taking REST beyond the pretty URL. In the latter Neil asked this very question, which got me thinking about it again.
After a bit of discussion on Twitter, Neil identified the problem as this:
CQRS with event sourcing only really comes into its own with a task-based UI; instead of simply updating (in the CRUD sense) a customer’s address you would send a command saying the customer is moving to a different address, or perhaps a different command if merely correcting a typo in their current address. This captures the intent as well as the change, which allows for far more interesting things to happen later on as that intent is captured in the event raised as a result of processing the command.
HATEOAS in a ReSTful web API decouples the client from the server. The client doesn’t need to know what the application rules allow it to do - instead the server guides the client along a path by telling it which possible next steps it might like to take, just as a website guides a user by providing links to click in the browser. If the server business logic changes then the client doesn’t necessarily need to be updated; the server will just change the links it provides to the client to reflect the new valid next steps that the client could take.
In my mind CQRS+ES doesn’t allow for that loose coupling between client and server because the client needs to know about the commands it can send, and they go far beyond the HTTP verbs GET, PUT, DELETE and - arguably - POST. This is the problem that Neil pointed out.
Commands are simply messages with all of the information needed for the command to be executed. So in order to distinguish between a MoveToNewAddress command and a FixTypoInAddress command (badly worded examples, but hey-ho) the client needs to know about each of them and what parameters they require. If these change then either the client needs to change to match or the server needs to maintain support for old versions of the commands. If we stick to the ReSTful style then only the HTTP verbs are allowed and, as we shouldn’t represent verbs as resources, the client can’t discover new commands by being given a new link to follow.
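As a rough sketch of what two such command messages might look like (the property names here are invented for illustration, not taken from any real system):

public class MoveToNewAddress
{
    public int CustomerId { get; set; }
    public string NewAddress { get; set; }
    public DateTime MovingDate { get; set; }
}

public class FixTypoInAddress
{
    public int CustomerId { get; set; }
    public string CorrectedAddress { get; set; }
}

The client has to know about both types, and how their parameters differ, before it can send either of them, which is exactly the kind of knowledge a HATEOAS client isn’t supposed to need.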
On the read side of CQRS+ES things aren’t so bad because the server can represent entities as resources to support a ReSTful API, but it’s not obvious how PUT and DELETE could work on those resources while still capturing the intent.
My initial thought was that you could represent the commands themselves as resources and POST them to a collection:
POST /commands
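Fleshed out a little (the URIs, payload and response here are entirely made up for illustration), the exchange might look something like this:

POST /commands HTTP/1.1
Content-Type: application/json

{ "command": "MoveToNewAddress", "customerId": 42, "newAddress": "1 New Street", "movingDate": "2013-04-01" }

HTTP/1.1 202 Accepted
Location: /commands/9001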
The response could be a command resource allowing the client to poll to see if it has completed yet, but that could be hard to do depending on how the command pipeline is implemented. I seem to remember one of Greg’s articles on CQRS+ES suggested a UI with a list of outstanding commands, but I don’t think this is common practice because it would often be hard to get this information without adding a lot of complexity.
Alternatively it could redirect you to the related resource, but the fact that it could be hours before the command is processed (if at all, which is why commands should be idempotent) means that some of the business logic would have to be duplicated in the HTTP facade. So I’m not a big fan of that idea.
Jacob has another suggestion.
It could be that this is the answer, but as it’s solving the problem one command at a time I’m not sure yet. For example, instead of having a single address resource for a customer that we PUT a new address to (thereby failing to capture the intent), we could have a collection of addresses that we POST a new address resource to if the customer moves, or we could amend a mistake in an existing address with a PUT on the resource for that address. It’s debatable whether or not that is adequate to capture the intent, but maybe with some more tweaks it could be.
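Sketching that out with made-up URIs, the two intents might map to something like this:

POST /customers/42/addresses      (the customer has moved; a new address resource is created)
PUT /customers/42/addresses/7     (a mistake in an existing address is corrected in place)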
I’d be interested to hear how other people have tackled this…
Following on from my mocking framework comparison, I changed Twiddler (my pet Twitter client project) from Moq to NSubstitute, and from xUnit.net’s built-in Asserts to Should.Fluent, and I think both changes greatly improved readability of my tests.
Full diffs are here for comparison:
What’s the collective noun for mocking frameworks? If there is one, then .NET has it!
I’ve used Moq for years but I’m always keen to make my tests more readable, so I thought it was time to compare some of the modern alternatives and see how they perform in a test fixture plucked almost at random from a project I recently worked on. So, here are the same tests using fakes from Moq, NSubstitute and FakeItEasy, all of which are available from the NuGet gallery. If there’s another hot framework you think compares well then let me know and I’ll try it out too.
Disclaimer: I’m quite familiar with Moq but this is my first time with NSubstitute and FakeItEasy, so I’m not necessarily using the best option for these frameworks. If you spot something that would be better done in another way, fork the Gist on GitHub and let me know!
I use xUnit.net which works slightly differently to most of the other test frameworks: it creates a new instance of your test fixture class for each test, which allows you to use field initializers and a constructor to set up your fakes. I take advantage of this in the following snippets.
Most of the time we can use a LINQ query to set up a fake with Moq. This leads to nice and clean test setup, as it can be done from a field initializer if you have setup that applies to all the tests in the fixture:
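The original snippets aren’t reproduced here, but a Moq LINQ-to-Mocks field initializer looks roughly like this (IClock and its UtcNow property are stand-ins I’ve invented rather than the real Twiddler types):

readonly IClock _clock = Mock.Of<IClock>(c => c.UtcNow == new DateTime(2013, 3, 30));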
Quite succinct, no constructor needed, but a little bit ugly.
The instantiation of the fakes is very similar to Moq, but there is no equivalent to the LINQ setup so we need a constructor:
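With NSubstitute the same invented setup would be something like this (TimelineFixture is just a placeholder name for the test class):

readonly IClock _clock = Substitute.For<IClock>();

public TimelineFixture()
{
    _clock.UtcNow.Returns(new DateTime(2013, 3, 30));
}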
I find this very readable, and it’s probably easier to understand than the LINQ setup.
Slightly different take on instantiation, and again the setup needs to be done in a constructor:
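And a FakeItEasy version of the same invented setup might be:

readonly IClock _clock = A.Fake<IClock>();

public TimelineFixture()
{
    A.CallTo(() => _clock.UtcNow).Returns(new DateTime(2013, 3, 30));
}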
The setup reads well, but is verbose when compared to NSubstitute.
In the following snippets we have a field referencing the fake object, and we want to verify that a method was called on it.
We get the Mock for the fake object and verify:
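Assuming a fake ITimelineService in a field called _timeline with a Refresh() method (again, names invented for illustration), the Moq version is along these lines:

Mock.Get(_timeline).Verify(t => t.Refresh());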
Getting the Mock degrades readability a bit, but not bad.
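The NSubstitute equivalent of the same invented check is just:

_timeline.Received().Refresh();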
Wow. Couldn’t really be any shorter, could it? My only criticism is that it doesn’t scream “ASSERTION!!!” to me.
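And FakeItEasy would express it along these lines:

A.CallTo(() => _timeline.Refresh()).MustHaveHappened();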
I like the way the fake is incorporated into the call, it’s much less intrusive than Moq. Ending with MustHaveHappened makes a pretty clear statement that verification is happening here.
In this test I want to check that the presenter raises its PropertyChanged event when the model’s PropertyChanged event is raised. I’m making use of a handy extension method from Caliburn.Testability that lets me write AssertThatChangeNotificationIsRaisedBy([property]).When([something happens]). In this case [something happens] is going to be the PropertyChanged event being raised on the model, and we’re going to see how that is done with the different mocking frameworks.
In the interests of staying DRY, I usually make an extension method of my own to raise PropertyChanged, but I won’t here so we can see how the frameworks work!
Again we have to get the Mock, then we call Raise on it:
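Something like this, assuming the presenter and a fake model in fields called _presenter and _model, and a property called Name (all invented for illustration):

_presenter.AssertThatChangeNotificationIsRaisedBy(p => p.Name)
    .When(() => Mock.Get(_model).Raise(m => m.PropertyChanged += null, new PropertyChangedEventArgs("Name")));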
Still not keen on getting the Mock, and it’s a shame we have to write += null just to make a valid expression. Bit long and nasty.
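The NSubstitute version, with the same invented names, would be along these lines:

_presenter.AssertThatChangeNotificationIsRaisedBy(p => p.Name)
    .When(() => _model.PropertyChanged += Raise.Event<PropertyChangedEventHandler>(_model, new PropertyChangedEventArgs("Name")));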
So close to being very nice, but spoiled by having to supply the generic argument to Raise. This isn’t always the case, but as the PropertyChanged event is declared with a delegate it’s necessary here. Still, it reads fairly well, certainly better than having += null in the middle.
A lot shorter than NSubstitute and more readable than Moq; I think that’s quite good, although at first glance the Now on the end seems a bit odd.
Each of these frameworks has a lot more to offer than I’ve touched on here, covering just about anything you could do to an object (and probably a few things you wouldn’t want to!). I wanted to see what the basic, everyday scenarios look like as those are the ones that really matter to me, and after this I will definitely try NSubstitute out on a real project.
It’s amazing how far mocking frameworks have come in the last couple of years!
There’s a good reason why the test-driven development cycle says you should always watch a test fail before you write the production code that makes it pass. I was taught a lesson in this today, “school of hard knocks” style…
I have a simple class which implements INotifyPropertyChanged and has a property:
public class MyClass : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged = delegate { };
}
To test that the PropertyChanged event is raised at the appropriate time I use Caliburn’s handy (and very readable) PropertyHasChangedAssertion:
[TestFixture]
public class MyClassTestFixture
{
    [Test]
    public void Name_WhenSet_RaisesPropertyChanged()
    {
        var test = new MyClass();
        test.AssertThatChangeNotificationIsRaisedBy(x => x.Name);
    }
}
At the time I obviously thought this was too simple to worry about, saw that the test passed as expected and moved on. All good… Or so it seemed! Fast forward a week or two. I started to get some strange errors – not test failures – in my MSBuild output:
error : Internal error: An unhandled exception occurred.
error : System.Exception: No context was provided to test the notification, use When(Action affectProperty) to provide a context.
error : at Caliburn.Testability.Assertions.PropertyHasChangedAssertion`2.Finalize()
Not only is the message a bit cryptic without any context (e.g. a test), but the error was intermittent. Oh joy! After a bit of detective work (more than you might think!) I realised that this is because Caliburn’s PropertyHasChangedAssertion checks that you called its When(Action affectProperty) method in its finalizer:
~PropertyHasChangedAssertion()
{
    if(!_isValidAssertion)
        throw new Exception(
            "No context was provided to test the notification, use When(Action affectProperty) to provide a context.");
}
While this makes the test extremely readable (which is, of course, extremely important), if you forget to call When() then whether and when you get an error is up to the non-deterministic finalization gods. An easy one to fix:
[Test]
public void Name_WhenSet_RaisesPropertyChanged()
{
    var test = new MyClass();
    test.AssertThatChangeNotificationIsRaisedBy(x => x.Name)
        .When(() => test.Name = "New name");
}
But I didn’t get the feedback that I should have done from doing TDD properly, and wasted time as a result.
Here is how TDD is supposed to be performed:

1. Write a test for the behaviour you want.
2. Run the test and watch it fail.
3. Write the simplest code that makes it pass.
4. Run the test and watch it pass.
5. Refactor.
Because the code was trivial, I skipped the second step and didn’t check that the test failed before I carried on and implemented the property. If I hadn’t skipped this step, there’s a good chance that the following sequence of events would have occurred:

1. I run the test, expecting it to fail.
2. The test passes anyway, even though the property doesn’t raise PropertyChanged yet, because without a When() call the assertion never actually asserts anything.
3. Surprised, I look at the test and spot the missing When() while it’s still fresh in my mind.
Maybe I would have got the error when I ran the test, and that would have given me another clue about the cause of the problem.
In the week or two since I made the mistake I could have carried on to use AssertThatChangeNotificationIsRaisedBy in tens or hundreds of other tests, which would have made it much harder to find the one with the missing When() call. I was lucky that there were only a few uses in my tests.
I hope so! When time is short it can be hard to make yourself go through these steps over and over again, but they are all there for a reason – to stop us writing code that doesn’t do what we think it does. I will be trying especially hard to stick to the steps, but we’ll have to wait and see how it goes.
Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce is a book about test-driven development. Here are a few notes on my experiences of following its methods.
I was fortunate enough to start work on a new desktop application in the middle of last year, around the time I read through the freely-available online version of the book before it was finally published in November. This was an ideal opportunity to put TDD into practice so I started by building a “walking skeleton” using Prism, CruiseControl.NET, WiX, Gallio, MbUnit, NCover and White as a wrapper around UI Automation for the acceptance tests, and took it from there. I’ll admit that there was a slow start (WPF/Prism and White/UI Automation were new to me too) but development speed has been steadily increasing ever since, and now I’m able to get what feels like a lot done each day. And that’s pretty much every day; it’s been a long time since I’ve had to halt progress for a significant amount of time in order to squash a bug or redo a chunk of work.
I’m still learning. It’s easy to slip back into changing code then updating the tests to match, and I do find myself doing that sometimes. I’m also finding it hard to perform only one refactoring step at a time (oh, let me just rename that class while I’m here…), and the acceptance tests can be brittle and sometimes feel like a burden to write. But what doesn’t kill you makes you stronger, right? It’s getting noticeably easier as I learn and improve, and every bit of pain along the way has been worth it.
For me, yes. Test-driven development feels so right that I don’t think I could ever go back to hacking stuff together without building the safety net of tests to fall back on as I go. I am sure that my design is much better than anything I have produced before, and that I have far fewer bugs than usual, too :) So this experience has been nothing short of (professional) life-changing. I have read similar stuff before, but GOOS was the one that finally made me “get it.”