Use Case Driven Scrum

In today’s software development community, Use Cases are often frowned upon.  A quick Google search for “Use Cases Scrum” shows that they are usually pitted against User Stories and quickly lose the fight.  I believe in Use Cases because they force stakeholders and the development team to have the right discussions in a structured way.  They also expose many things you would not otherwise think about when writing requirements in other ways.

But the art of writing Use Cases is dying.  “Uncle Bob” Martin has said that it shouldn’t take longer than 15 minutes to teach someone how to write use cases[1].  He’s wrong and unfortunately hyperbolic.  But these are the Agile times we live in, when everything invented before the Protestant-like reformation is looked upon as sacrilege.

I believe in Scrum.  I think it can wholly benefit organizations with small teams that need to be more nimble or agile.  But I don’t think Scrum is mutually exclusive with Use Cases.  Here is the definition of a product backlog from the Bible of Scrum, The Scrum Guide:

The Product Backlog is an ordered list of everything that might be needed in the product and is the single source of requirements for any changes to be made to the product.

Notice it says requirements.  The Scrum Guide does not say how to write requirements (User Stories come from XP), it just says that they need to be in the Product Backlog.

So this is where my proposal for Use Case–Driven Scrum starts.  Put your Use Cases in the Product Backlog.  Now, one of the criticisms of Use Cases is that they are too much documentation and take too long to write.  Well, don’t write them out then!  Just start by identifying the Use Cases you should do (give only their titles).  For example, put the Use Case “Log into system” into the backlog, but don’t bother to detail it out at first.

Scrum practitioners know that undefined product backlog items belong at the bottom of the backlog; as they move up in priority, they become better groomed, as the following picture illustrates.


This leads to the second part of my proposal.  Refine the Use Cases as they move up the backlog.  Add the basic flow or maybe the primary actors.  This becomes part of your Product Backlog grooming.

Finally, most full use cases, with all their basic and alternative flows, will not fit into one sprint.  So the final part is to break them down into scenarios that will fit into one sprint.  Mind you, use case flows and scenarios are not the same!  The basic flow is always a scenario, but mixing in the alternative flows is where it gets interesting. :)
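To make the flows-versus-scenarios distinction concrete, here is a minimal sketch (with hypothetical names and a deliberately simplified model) of a use case whose basic and alternative flows are combined into sprint-sized scenarios.  The basic flow alone is always a scenario; each alternative flow branches off the basic flow at some step:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    title: str
    basic_flow: list = field(default_factory=list)
    # Alternative flows keyed by the basic-flow step index where they branch off.
    alternative_flows: dict = field(default_factory=dict)

    def scenarios(self):
        """Yield one scenario per flow combination: the basic flow first,
        then one scenario per alternative flow mixed into it."""
        yield list(self.basic_flow)
        for branch_step, alt_steps in self.alternative_flows.items():
            yield self.basic_flow[:branch_step] + alt_steps

# The "Log into system" use case from the backlog example above.
login = UseCase(
    title="Log into system",
    basic_flow=["Enter credentials", "Validate credentials", "Show dashboard"],
    alternative_flows={2: ["Reject credentials", "Show error message"]},
)

for scenario in login.scenarios():
    print(scenario)
```

Each yielded list is a candidate product backlog item small enough for one sprint; a real use case would of course have many more alternative flows, preconditions, and extension points than this toy model captures.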

The tactics of breaking product backlog items up really depends on the tool you use for tracking your work.  Spreadsheets, Rally, and Team Foundation Server all have different ways to do this.  I hope you’ve enjoyed this article and would love to hear your feedback below.  Good luck in your journeys of software development!


Software Engineering is all about Modelling

There is a great article over at O’Reilly entitled “Striking parallels between mathematics and software engineering”.  I’ve never really thought about the parallels between Math and Software Engineering.  I’ve thought about Civil Engineering, Medicine, and the Law; but not Mathematics.  To summarize, the author says that Mathematics is really about modelling, and that is what we do in software engineering continually, especially when following object-oriented paradigms.  It is striking and has opened my eyes to a whole other avenue to explore when it comes to Software Engineering.  Just thought I’d share :)

Turing Software

I have founded a new Software Engineering company, Turing Software, LLC.  Please head over there to obtain services such as consulting, training, and custom application development.  I’m also going to start blogging over there, so if you enjoy these articles please continue to read them over there.  I’ll also post here, but it will be more of a personal nature and probably less frequent.  Thanks for reading my blog!

Azure Development Virtual Machines

So I tried the new virtual machines on Azure for Visual Studio.  I’ve always dreamed of using a VM to do my development on, but never really trusted it because Visual Studio (VS) is such a performance hog.  Well, here are my results.  I downloaded “ImageResizer” from Codeplex, a popular C# program, and then built it on my local machine and on the Visual Studio VM.  My local machine runs 64-bit Win 8.1 Pro with an Intel i5 4670K CPU @ 3.4 GHz and 8 GB of RAM.  It runs VS Ultimate 2012.  The Azure VM has an AMD Opteron Processor 4171 HE at 2.10 GHz and 3.5 GB of RAM on 64-bit Windows Server 2012 R2 Datacenter.  It is running VS Pro 14 CTP (the latest and greatest).

Now, the results.

My local machine built it in ~1.1 seconds.  

The VM built it in ~3.1 seconds.

A factor of about 3.  Not great, but not that bad either.  I could see myself doing it, maybe… there are lots of advantages (clean machine, always running the latest and greatest, etc.).  But it still feels like it’s on the cusp of prime time.
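If you want to run a similar comparison yourself, here is a rough sketch of how a build can be timed from a script.  The msbuild invocation in the comment is an assumption about the setup, not the exact command used above:

```python
import subprocess
import time

def time_build(cmd):
    """Run a build command and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises if the build fails
    return time.perf_counter() - start

# Hypothetical usage on a machine with msbuild on the PATH:
# elapsed = time_build(["msbuild", "ImageResizer.sln", "/t:Rebuild"])
# print(f"Build took {elapsed:.1f} s")
```

For a fair comparison, run the build a few times on each machine and compare warm (second-and-later) builds, since the first build pays one-time caching costs.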

Latest ACM Turing Award Winner Announced!

Leslie Lamport, a Principal Researcher at Microsoft Research, has been announced by the Association for Computing Machinery (ACM) as the winner of the 2013 Turing Award.  The Turing Award is the equivalent of the Nobel Prize in computing.  I always look at these prize winners as “Gods” of the computing world, and I think it’s important that we remember and honor our history if we are to progress as a profession.  He won it for his work in distributed computing, which I can tell you from my graduate course is some very difficult stuff.  He also created LaTeX, among other things.  Congratulations!

Official News Release

What is Software Engineering in the 21st Century?

In February of 2001, 17 people met at a ski resort in Utah to discuss how to move the Software Engineering discipline forward.  They were frustrated that their more lightweight, adaptive methods were not being tried while they watched heavier methodologies continue to fail with over-budget and over-schedule projects.  They correctly surmised that a revolution would be needed to make the Software Engineering community hear and, more importantly, implement their solutions.

They could have left in disagreement and disarray over non-essential questions like: What’s better, Scrum or Feature Driven Development?  Fortunately for the Software Engineering field, they didn’t.  They wrote a Manifesto which is shown below:

We are uncovering better ways of developing software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Their Manifesto worked beyond their wildest dreams!  Like any revolution, there was resistance, especially from the establishment.  But after 10 years, it was clear that they had won the war.  The establishment could not resist any more, and all discussions of Software Engineering now have to include Agile practices.

After the American Revolution, the American founding fathers were faced with a whole new set of problems that were in many respects bigger and more complex than winning the revolution in the first place.  The Agile Revolution finds itself today in a similar situation.  Questions such as, “What is the best way to scale Agile to an entire Enterprise?” are what need to be answered today.  The Agile movement would be wise to not throw away everything that came before it in Software Engineering, but rather mix and modify it to set our discipline on a new course.  This is what the Americans did when they took the King concept, mixed it with democracy, and came up with the modern presidency.  I’m looking forward to the next 10 years to see where this mixture leads us!

Why is not working? (from a technical perspective)

Most web applications have an architecture like the one below.  There are of course nuances and exceptions, but for the layperson, this will suffice.

The “Presentation Layer” handles all the graphical packaging of content in the web pages presented back to the user.  This article in the Atlantic has a good description of the Presentation Layer for  This is definitely NOT the problem, as pages of plain content come back very fast without problems.  Clicking 90% of the links in the sitemap brings back a page in under a second.  BUT, a web application with just a good Presentation Layer is like a book with a nice cover design and nice pictures inside; no one will care if the text is not good.  The rest of the web application is the text, and it seems to be horribly bad at the moment.

The next part of the system is the “Business Logic” layer.  In, this is called the “Data Hub” and is described here.  There is a tremendous amount of coordination between different web services (Social Security Administration, IRS, Insurers, etc.) to make sure you get the insurance you’re supposed to.  Unfortunately, this is where the software engineers for have the least control over what happens, because they are dependent on these other services to relay data back to them quickly.

Finally, we have the “Data Access Layer” and “Data Source”.  This is where all the data is stored (e.g. your name, address, age, etc.).  The data that has to collect, and then connect to other relevant pieces, is tremendously complex, and it is very possible that many of the problems lie here as well.  Fortunately, this is one place where you can “throw more servers at the problem” to alleviate performance problems somewhat.
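The three layers described above can be sketched in a few lines of code.  This is a toy model with hypothetical function names and fake data, not anything resembling the real system; it only shows how a request passes down through the layers and how a slowdown in any one of them delays the whole response:

```python
def data_access_layer(user_id):
    # Data Access Layer / Data Source: fetch the stored applicant record.
    fake_db = {42: {"name": "Jane Doe", "age": 34}}
    return fake_db[user_id]

def business_logic(user_id):
    # Business Logic ("Data Hub"): coordinate services and apply rules.
    # A stand-in rule; the real hub calls out to the SSA, IRS, insurers, etc.
    record = data_access_layer(user_id)
    record["eligible"] = record["age"] >= 18
    return record

def presentation_layer(user_id):
    # Presentation Layer: package the result as HTML for the browser.
    record = business_logic(user_id)
    return f"<h1>Welcome, {record['name']}</h1><p>Eligible: {record['eligible']}</p>"

print(presentation_layer(42))
```

Note that the presentation layer can only be as fast as the slowest call beneath it, which is why a snappy front end is no guarantee that the application as a whole performs well.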

While the answer to the question of why is failing is not entirely clear, I hope you have gained an appreciation for the complexity of this very important web application and how one problem in any of its parts can make the whole application slow down.  Unfortunately, I predict that many of these problems will not be fixed quickly because of their logical complexity.  Throwing servers at the system will only alleviate a small percentage of the problems and ultimately does not substitute for quality software.  Throwing more people at the problem violates one of the few laws we have in Software Engineering — “adding manpower to a late software project makes it later”.

