
Bug Bash, Break the App session: these are just different names for the same practice, in which we crowdsource* testing to the project teams.

I have been following this practice of crowdsourcing testing to the project team (and occasionally beyond it) for a while now, and I have found it the easiest and most cost-effective way to run exploratory and usability testing before each iteration's release.

A team that has built a feature will always want someone else to look at it before release; honest, unbiased feedback is very valuable at that point, and it saves the team a lot of time and money. Throughout the iteration, teams test the stories and try to catch as many defects as possible before they reach the final product. But, like the other team members (developers, business analysts, UX), the testers have been 100% focused on the feature, and there is a real possibility that they know it too well and overlook the obvious.

What is needed is a fresh pair of eyes, given a mission to play with the newly developed feature and report what they think. Treat them as an end user seeing the application for the first time and ask: what do you think of it? First impressions last, so this quickly surfaces anything that isn't intuitive, doesn't work as expected, or simply looks bad and shouldn't survive into production.

On the other hand, if the same task is given to an outside team of onshore/offshore testers or usability experts, they will certainly get the job done, but at additional cost to the company. Crowdsourcing it internally is the faster and cheaper option.

How does it work

It is a time-boxed activity: teams are given approximately 30–40 minutes to play with another team's new feature and provide feedback within that timeframe. The relevant team members then review the feedback and act on it. A few things need to be taken care of:

  • The exercise should ideally take 30 minutes and should never exceed an hour; everyone's time is precious.
  • Before the exercise, tell the teams involved what they are going to look at, and give precise guidance to provide focus, while still allowing people the freedom to do as they wish within that guidance.
  • Everyone available can join in.
  • All testing is performed on the same test environment.
  • Freeze the build.
  • If it’s a web app, decide which browsers and devices will be used and divide them among the testers.
  • No discussion is allowed between team members.
  • Feedback can consist of bugs found, questions, comments, general feedback, whatever they like.
  • The remit is to “play with the feature and see what you find/think”.
  • After 30 minutes, everyone returns to their normal work and sends their feedback to the task owner.
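The browser/device split mentioned above can be sketched as a simple round-robin assignment. This is a minimal illustration in Python; the participant names and browser/device combinations are hypothetical, so substitute your own team and test matrix.

```python
from itertools import cycle

def assign_targets(participants, targets):
    """Assign browser/device combinations to participants round-robin,
    so every target gets at least one tester before any is repeated."""
    pool = cycle(targets)
    return {person: next(pool) for person in participants}

people = ["Asha", "Ben", "Chloe", "Dev"]
targets = ["Chrome/desktop", "Safari/iPhone", "Firefox/desktop"]

for person, target in assign_targets(people, targets).items():
    print(f"{person}: {target}")
```

With more participants than targets, the extra testers simply wrap around to the start of the list, which is usually what you want: popular browsers get double coverage for free.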

If the company has an existing tool for collecting feedback, use it. I have used Mingle, Jira, and a spreadsheet on various occasions; the constraint is that time is wasted if people don't already know the tool. The main objective here is to collect feedback, so if someone prefers to write on post-its, let them, and collect the notes at the end. That way the whole time box can be spent on testing.

Benefits of this practice:

  • It’s a short and time-boxed activity where technical people provide focused information on a new feature and its integration with all others in the system.
  • It also acts as an unofficial demonstration for each team and as a learning exercise for all.
  • This activity typically uncovers functional glitches, recommendations for future work, and overall opinions.
  • Participants also build knowledge of the features being built across the organisation.

One thing to note is the timing of this activity within the iteration: it should not be done too close to a release, or you will not have time to incorporate the feedback. If something really bad is found, its resolution can be scheduled for the same or the next iteration; importantly, it has been prevented from going live.

*crowdsourcing: It is a distributed problem-solving and production model. In the classic use of the term, problems are broadcast to an unknown group of solvers in the form of an open call for solutions. Users—also known as the crowd—submit solutions. Solutions are then owned by the entity that broadcast the problem in the first place—the crowdsourcer.