Thursday, September 18, 2014

How our scrum team went from a blank page to a reasonable plan in a half day

Iterative development is a lot easier when you have something upon which to iterate.  Unfortunately, that isn't always the case. 


Pulling items from the backlog is one thing, but starting from scratch on a new development project has always been difficult for our team.  The approach we took with our current project isn't revolutionary -- it is really just a flavor of a release planning process.  But I was so pleased with how quickly it came together (roughly 4 hours from a blank whiteboard to a release plan, an almost fully planned sprint, and a prioritized backlog) that I thought I'd share.  

The steps we followed:


  1. Based on what we had on hand--an existing site we were migrating/uplifting, several discussions with the clients, some initial feedback from a survey of the current site's users--we white-boarded the epic-level bodies of work for the new development. These varied widely in the size of the effort required to deliver them.
  2. Once we'd exhausted our knowledge and thought we had all the "cards" required to deliver the project, we put them all in a Google Spreadsheet along the top row.  
  3. Then, collectively, we all went into the spreadsheet and filled in (under each feature or other body of work) what user stories we could immediately conceive as required to deliver that feature/function.  Some of these were spikes or proof-of-concept user stories that would inevitably lead to other stories, but we filled in all we could imagine it would take to deliver that body of work.
  4. This process led to a lot of discussion about each feature as we looked over our stories and those of our teammates. This discussion took most of the time we spent together.  It took very little time to populate a large number of user stories in this live, collaborative approach. We prompted each other to expand on ideas or create new stories we hadn't considered solely on our own.  Also, because we worked on multiple features simultaneously, there was a synergy that came from considering multiple facets of the system at once and how they might interact.
  5. Having all the stories in front of us, we talked with the product owner and defined a set of staged deliveries of the project, framing up four releases until a minimum viable product could be released to the client. We understood this could change as we developed the project, but it was a conceptual starting point we could revisit after the first release.
  6. We didn't try to put every user story into a release, though the release for some of them was implied.  We simply shaded all of the user stories that weren't going to be in the initial release in one color, and shaded those needed to meet the product owner's initial release in a different color.  The ones that weren't ready to go make up our backlog and are being captured in our tracking tool (we use Rally). 
  7. We then worked to decompose those initial release user stories to make sure they were sized appropriately for the sprint(s), did planning poker, and copied them into Rally with our definition of done and placeholders for acceptance criteria (which we'll fill in tomorrow).
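The "shading" step above is really just a partition of the story grid. As a rough sketch (the feature names, stories, and data structures here are all hypothetical -- our actual planning lived in a spreadsheet and Rally), it amounts to:

```python
# Hypothetical sketch of the spreadsheet: features along the top row,
# candidate user stories listed under each feature.
stories_by_feature = {
    "Search": [
        "As a user, I can search by keyword",
        "Spike: evaluate search back-end options",
    ],
    "Accounts": [
        "As a user, I can reset my password",
    ],
}

# The shading step: mark which stories the product owner needs in the
# initial release. Membership here is purely illustrative.
initial_release = {"As a user, I can search by keyword"}

release_1, backlog = [], []
for feature, stories in stories_by_feature.items():
    for story in stories:
        target = release_1 if story in initial_release else backlog
        target.append((feature, story))

# release_1 feeds sprint planning; backlog goes into the tracking tool.
```

The point of the color-coding (or this partition) is that nothing is lost: every story either enters the first release or lands, prioritized later, in the backlog.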

What made this seem productive and fast?


I think the key steps were taking our time in step #1, which helped us really get our arms around scope, then working simultaneously and collaboratively in steps #3-4, which helped us get a lot of user stories defined in a very short period of time.  

We'll learn in time--with demos and stakeholder feedback, releases and retrospectives--whether our approach gave us a good starting point for success or let us overlook key project needs.  But out of the gate, it felt very productive and the team feels positive about the work ahead.

Tuesday, September 16, 2014

Questions QA/Test can ask to ensure a better user story

When teams are immature in Agile, they sometimes start working on user stories that are more like user vignettes.


While Agile texts/blogs may show beautiful examples of fully elaborated user stories, it is common that, for teams struggling with Agile, user stories won't be in that form on planning day.

If your Product Owner is working ahead of the team and filling in the details, definition of done, and conditions of satisfaction/acceptance criteria, that is great. But I have seen--and, at conferences, heard from teams at many other companies--that one-liner user stories remain prevalent in the daily lives of Agile teams.

Ideally, the entire team will discuss the one-liner either in planning or shortly thereafter and add these conditions of satisfaction to the user story.  If this conversation doesn't happen, one or more of the following issues is certain to follow:
  • The developer(s) will make assumptions about the scope and requirements for the story--a recipe for a wasted sprint, since the resulting code is unlikely to meet expectations.
  • Unit tests will be faulty, because they can't be congruous with acceptance criteria that were never stated.
  • If you have QA resources, they will have to wait until later in the sprint to start work, and they'll still be working from their assumptions about the user story rather than an explicit, documented understanding of how this code should behave.



If you are in this situation, it definitely indicates a deeper problem with your Agile planning processes.  But, given that, can anything be done? 


Early in our team's Agile journey, our QA/test team independently worked to ensure these criteria were discussed and documented.  Since a QA team needs to write test cases (either for automated or manual testing), they can own raising the right questions with the development team and making sure the answers get documented in the user story.  

For each user story, QA team members would work either with the team or the individual user story owner to put flesh around the skeletal user story.  Some questions our QA team asked were:
  • Are there any sorts of requirement documents, wireframes, or other sources of information you're using to develop this story? (these can be discussion points for expected behaviors)
  • If this user story is successful, what specifically will be different about the behavior of the application?  What new behaviors will be available? What behaviors will no longer be available?
  • What other behaviors of the system will be altered by this change?  
  • Are there other system behaviors that might be broken based on the parts of the code you're touching for this change?
  • Will this be English-only or localized? If English-only, will it be localized later? (a striking number of times, this question has alerted developers to consider localization in the design when it might have been overlooked before)
These are all questions a tester needs to ask to understand how to validate a user story. And based on these sorts of questions, the tester and developer can sketch out a set of acceptance criteria that can be documented on the user story and then reviewed by the product owner. 
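One lightweight way to keep these questions from slipping through is to treat them as a readiness checklist a story must pass before test cases are written. This is a hypothetical sketch--the field names are invented for illustration and aren't part of Rally or any other tracking tool's API:

```python
# Hypothetical readiness checklist distilled from the questions above.
# Field names are invented; adapt them to your own tracking tool.
CHECKLIST = [
    "reference_docs",       # wireframes, requirement docs, other sources
    "expected_behavior",    # what specifically changes if the story succeeds
    "affected_behaviors",   # other behaviors altered or at risk of breaking
    "localization",         # English-only vs. localized (now or later)
]

def missing_criteria(story: dict) -> list:
    """Return the checklist items this story hasn't answered yet."""
    return [item for item in CHECKLIST if not story.get(item)]

# Example: a one-liner story with only one question answered so far.
story = {"expected_behavior": "Search results paginate 20 per page"}
# missing_criteria(story) flags the remaining questions to raise
# with the developer before test cases are written.
```

The output isn't a verdict on the story--it's the agenda for the conversation between the tester and the story owner.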

It is definitely an end-around on the process, but if your team is struggling to get all the details in place during planning, it can have a practical benefit for your maturing team.