When you use WebMock or VCR (or both) together with Cucumber, you will find that your tests are unable to record, because WebMock and VCR block all outbound HTTP requests by default.
You will see an error somewhat like this:
… __identify__ path so it knows when it has finished booting.
There are a few ways to tackle this problem, depending on your situation.
1. If you are only using WebMock
If you don’t need it, remove it from the Gemfile. If you do need it, then you may need to configure it more precisely for your needs; one thing that worked for me is adding this snippet to my env.rb:
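The snippet itself is missing from this copy of the post; a minimal version matching the description, using WebMock’s documented `disable_net_connect!` option, would be:

```ruby
# features/support/env.rb
require 'webmock/cucumber'

# Block outbound HTTP, but let Capybara reach the local app server.
WebMock.disable_net_connect!(allow_localhost: true)
```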
By doing so, you allow real web access to your localhost.
2. If you are using both VCR and WebMock
If you’re using VCR, you don’t need to configure WebMock yourself with the require 'webmock/cucumber' and WebMock.allow_net_connect! lines; VCR takes care of any necessary WebMock configuration for you.
VCR includes support for ignoring localhost requests so that it won’t interfere with this. The Relish docs cover this in detail, but in short you can use this snippet in your VCR configuration to get it working.
Create vcr.rb inside the features/support folder and add this:
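The snippet did not survive in this copy; a minimal configuration based on VCR’s documented options (the cassette directory path is just an example) would look like:

```ruby
# features/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'features/cassettes' # example path
  c.hook_into :webmock                          # VCR configures WebMock for you
  c.ignore_localhost = true                     # don't record Capybara's local requests
end
```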
If this doesn’t work, then follow the detailed documentation at Relish to get VCR working with Cucumber.
If you are using Cucumber with Rails 3 or Rails 4, you might face this issue.
You have most likely followed the Cucumber guide to set up Cucumber with Rails 4.
When you run cucumber, it throws the error “No known ORM was detected! Is ActiveRecord, DataMapper, MongoMapper, Mongoid, or CouchPotato loaded? (DatabaseCleaner::NoORMDetected)”.
What causes this problem
When you follow the Cucumber guide, it asks you to add this snippet to your Gemfile:
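The snippet is missing from this copy; the guide of that era suggested roughly the following (gem versions omitted):

```ruby
# Gemfile
group :test do
  gem 'cucumber-rails', require: false
  gem 'database_cleaner' # the culprit if your app has no database
end
```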
And when you execute the command to generate the Cucumber setup (rails generate cucumber:install), it creates env.rb inside the features/support folder.
The database-related entries in the Gemfile and env.rb prevent your Cucumber tests from running and always throw an error about not finding the database; the relevant snippet of env.rb is shown below.
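For reference, the generated env.rb contains a block along these lines (paraphrased from the cucumber-rails generator), and it is these lines that raise the error when no ORM is loaded:

```ruby
# features/support/env.rb (excerpt generated by rails generate cucumber:install)
begin
  DatabaseCleaner.strategy = :transaction
rescue NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end
```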
How to resolve
If you are not using any database in your application and are using Rails 3.x or 4.x with the latest version of Cucumber, you will have to do the following things.
- Comment out the lines shown above in env.rb.
- Uninstall DatabaseCleaner by running the command “gem uninstall database_cleaner”.
- Also remove database_cleaner from your Gemfile, so that the next time you run bundle install, this problem will not haunt you.
- Some of the group forums also suggest setting “DatabaseCleaner.strategy = :none”, but it didn’t work in my case, as :none is not a valid input for strategy; it only accepts :transaction and :truncation. So I would rather comment those lines out or delete them.
Hope this helps; my motivation for documenting this was that I was unable to find a straightforward solution anywhere.
Moving between jobs and setting up a new machine can be quite painful at times. I recently moved to a new assignment and faced this challenge, so to spare others the pain, here is a summary of the steps I followed for a smooth installation.
1. Install Xcode and the Command Line Tools
If you don’t have Xcode installed, I suggest installing it from the Mac App Store and then installing the Command Line Tools via Xcode Preferences -> Downloads.
2. Install Homebrew
- ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
Now, run brew doctor to check everything is as expected before we continue:
- brew doctor
If you get the error: No such file or directory – /usr/local/Cellar
Run the following:
- sudo mkdir /usr/local/Cellar
- sudo chown -R `whoami` /usr/local
3. Install RVM
- curl -L get.rvm.io | bash
After this is complete, execute the following in order to use RVM in your current shell:
- source ~/.rvm/scripts/rvm
4. Setup ~/.profile
Add the following to your ~/.profile in order to source RVM every time you run Terminal.app:
- [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
5. Check RVM Requirements
- rvm requirements
If you have any missing required packages, you will need to install them before continuing by running brew install for each of them.
If you need to install apple-gcc42 and get an error: No available formula for apple-gcc42
Run the following:
- brew tap homebrew/dupes
If you get the error Already tapped! after running this, repair the tap instead:
- brew tap --repair homebrew/dupes
Now you can continue and install apple-gcc42:
Take the package names from the “Missing required packages: …” line above; for example, I can now execute the following:
- brew install autoconf automake libtool pkg-config apple-gcc42 libyaml readline libxml2 libxslt libksba openssl sqlite
6. Run brew doctor again to make sure everything checks out
Execute the following:
- brew doctor
If you get an error or warning about your PATH, such as:
- Warning: /usr/bin occurs before /usr/local/bin
Open your ~/.profile and add the following line to the top (or bottom, it shouldn’t matter):
- export PATH=/usr/local/bin:$PATH
Or augment your .bash_profile to have:
- export PATH="/usr/local/bin:/usr/local/sbin:~/bin:$PATH"
Now run: brew doctor
7. It’s now time to install Ruby 2.0.0:
First run the following commands:
- rvm get head
- rvm requirements
If you don’t get any errors you can finally install Ruby 2.0.0:
- rvm install 2.0.0
To set as your current version of Ruby run the following command:
- rvm use 2.0.0
To make it the default Ruby:
- rvm --default use 2.0.0
Now every time you open Terminal.app Ruby 2.0.0 will be default. You can always check which version of Ruby you have using the following command:
- ruby -v
and where it’s located executing the command:
- which ruby
You are now set up to run Ruby 2.0.0 on Mac OS X 10.8 Mountain Lion.
Most commonly known as a Bug Bash or a Break-the-App session, these are all just names for this practice where we crowdsource* the testing to the project teams.
I have been following this practice of crowdsourcing the testing to the project team (and occasionally outside it) for a while now, and I have found it to be the easiest and most cost-effective way to do exploratory and usability testing prior to the release each iteration.
A team that has built a feature would always like someone else to look at it before release; it gets them honest, unbiased feedback, which is very valuable, and it also saves a lot of time and money for the project teams. Throughout the iteration, teams test the stories, trying to capture as many defects as possible and prevent them from reaching the final product. But just like the other team members (developers, business analysts, UX), they have been 100% focused on the feature, and there is a possibility that they know it too well and overlook the obvious.
What is needed is a fresh pair of eyes, given a mission to play with the feature that was developed and report what they think. An end user, someone playing with the application for the first time, asked: what do you think of it? Since first impressions last, this surfaces anything that isn’t intuitive, doesn’t work as expected, or just looks bad and should not live out its life in production.
On the other hand, if the same task is given to an external team of onshore/offshore experts or usability specialists, it will certainly get the job done, but it is an additional cost to the company; if you choose to crowdsource it internally instead, it is a faster and cheaper solution.
How does it work
It is a time-boxed activity where teams are given approximately 30-40 minutes to play with another team’s new feature and provide feedback within that timeframe. The relevant team members then review the feedback and act on it. A few things need to be taken care of:
- The exercise should ideally take 30 minutes and should not exceed an hour, as time is precious.
- Before the exercise, tell the teams involved what they’re going to look at, and guide them precisely to provide focus, but also allow people the freedom to do as they wish within that guidance.
- Everyone available can join in.
- All testing is performed on the same test environment.
- Freeze the build.
- If it’s a web app decide browsers and devices to be used and divide them accordingly.
- No discussion is allowed between team members.
- Feedback can consist of bugs found, questions, comments, general feedback, whatever they like.
- The remit is to “play with the feature and see what you find/think”.
- After 30 minutes, everyone returns to their normal work and sends their feedback to the task owner.
If the company has an existing tool to collect feedback, it can be used. I have used Mingle, Jira, and spreadsheets on various occasions. The constraint is that it may waste time if people don’t already know these tools, and the main objective here is to collect feedback; so if someone prefers to write on post-its, allow it and collect them at the end, so that the time blocked can be utilised completely for testing.
Benefits of this practice:
- It’s a short and time-boxed activity where technical people provide focused information on a new feature and its integration with all others in the system.
- It also acts as an unofficial demonstration for each team and as a learning exercise for all.
- This activity mostly uncovers glitches in functionality, recommendations for future work, and overall opinions.
- Users performing this activity will also build knowledge about the features being built in the organisation.
One thing to note is the timing of this activity within the iteration: it should not be done too close to a release, or you will not have time to incorporate the feedback. If something really bad is found, it can be scheduled for resolution in the same or the next iteration, and, importantly, it has been prevented from going live.
*crowdsourcing: It is a distributed problem-solving and production model. In the classic use of the term, problems are broadcast to an unknown group of solvers in the form of an open call for solutions. Users—also known as the crowd—submit solutions. Solutions are then owned by the entity that broadcast the problem in the first place—the crowdsourcer.
Attributes of shared understanding
- Clear business requirement
- Concise & Precise
- Well-defined features
- Well-defined scenarios
- Clear business goals
Qualities of good acceptance test
- Closely coupled with business requirement
- Shared understanding
- Should restrict to scenarios
- Use of examples
- Data should not be part of the tests
- Should be in line with business requirements
- Single source of truth
- Avoid technical jargon
- Consistent terminology should be used
- Easy to access
Dev Box testing: the name comes from the practice where an initial round of testing is done on the developer’s (Dev) machine (Box) before the developer checks the code into the source repository and marks the task (story or defect) as development complete.
I feel this is one of the crucial steps towards getting faster feedback: every task done by the developers gets tested on their machine by BAs and QAs, and feedback is provided on the spot. It sounds very simple, and it is if followed properly; otherwise it can turn out to be chaotic.
How should it be done?
There is no formal process for how it should be done, so it is best left to the team. A few ways I have done it so far:
- The developer demonstrates the functionality implemented and runs through the changes made, and their impact on other functional areas, for the BA and QA who took responsibility for that defect or story.
- The developer hands over his/her machine to the QA or BA to run through scenarios, mainly happy-path scenarios, and make sure the goal of the task is accomplished.
- The time spent on this should range between 10 and 15 minutes, based on the complexity of the story or defect implemented.
Where does it fall within the iteration?
Why teams should follow:
- Faster feedback: it significantly reduces the time to find a defect and get it fixed. In the usual iteration model where this process is not followed, feedback comes very late; a defect found then goes into the priority queue and whichever developer is available at that time fixes it, so the turnaround time is quite high.
- BA gets an opportunity to ensure that implementation meets business requirements.
- Developers and QAs can share knowledge and context around how QAs test complex stories beyond the UI (e.g. with service calls, databases, etc.), and developers can share better ways of doing the same thing.
- Most importantly, if issues are found developer can quickly fix them and get it tested faster.
- Time utilisation: developers can utilise the time while the build runs locally before check-in.
Innovation and failure go hand in hand, fearing failure stifles creativity and progress. If you’re not failing, you’re not going to innovate.
We have always read that testing should start early in the development life cycle; here is some analysis of why.
Irrespective of whether you use the Waterfall model or Agile, we usually fall into the trap of bringing in testing as the last phase. The benefit of the Agile methodology is that you fail faster than on waterfall projects, but still not as fast as you could.
Usual software development life cycle:
- Planning: requirements are expressed, relevant people are contacted, a few meetings take place. Then the decision is made to do the project.
- Analysis & Design: the BA does the analysis and designs are prepared.
- Code/Build: now developers write the code and hand it over to QA, in the waterfall model after months, and in Agile mostly at the end of the iteration.
- Testing: Now it’s your turn: you can start testing.
The earlier you find a bug, the cheaper it is to fix it.
If you are able to find a bug in the requirements-gathering phase, the fix costs nearly nothing, whereas bugs found in the testing phase or after release can be 100 times more expensive to fix.
Conclusion: Start testing early & Collaborate
This is what you should do:
- Make testing part of each phase.
- Start test planning the moment the project starts.
- Start finding the bug the moment the requirements are defined.
- Keep on doing that during analysis and design phase.
- Make sure testing becomes part of the development process.
- And make sure all test preparation is done before you start final testing.
“I have not failed, I’ve just found 10,000 ways that won’t work” - Thomas Edison
I have been working in distributed teams for close to 5 years now; here are a few things I have learned about them.
A distributed team (also known as a geographically dispersed team) is a group of individuals who work across time, space and organizational boundaries.
Working with distributed teams gives companies access to talent that they may not otherwise have access to locally. Additionally, companies gain experience working with different global markets. Moreover, a project can be completed faster if people in different time zones are continuously working on it. On top of that, companies can obtain significant cost savings by working in a distributed team environment.
Advantages of Distributed teams
- Minimal infrastructure
- Cost Savings
- Work-life balance
- Individual control
Forming the Fully Distributed Team
- Shared ownership from the start
- Decide architecture together
- Get to know the client and domain
- Form personal relationship
- Communication is the key
Distributed Team Meetings
- Video conferencing is a must for all meetings: daily standups, planning, and retrospectives
- The same rules apply to all teams
- Planning poker over video or digital tool
- A digital wall, kept up to date at all times.
- You get an extended day when you work in distributed teams; a project can be completed faster if people in different time zones are continuously working on it
How to start with Distributed Teams
- Start with one location, bring a few people onshore, and then have those people go back and set up a distributed team.
- The onshore team can measure velocity across a few iterations, and then roughly the same velocity should continue.
- Quality stays the same, as you still write unit tests and acceptance tests and get stories tested within the iteration.
Understand that not everything can be distributed
- Enterprise architecture often does not.
- Software architecture distributes easily enough.
- Initial reluctance to communicate extra
- Culture makes it hard to get aligned, misunderstandings about priority and value
- Local team taking aggressive ownership
- Not enough context information offsite
- Both sides need to adjust
When to start with distribution
- Get your local organisation capable of running Agile projects
- Get quality up with XP practices
- Stop thrashing, focus people (Stop trying to do too much)
- Think of scaling distributed team
- Working successfully in a distributed way is all about handling the ‘distance’ between people
- The classical approach, with more detailed instructions and control, is not suited to knowledge workers
- Agile can tie people together across distances
- Agile benefits (Time to market, performance, quality ) mixed with offshoring benefits is a killer combo
- Introducing Agile and distribution at the same time is often too much to take in
Fully distributed teams have more value than localised agile teams.
- Skype for continuous video conferencing
- Swapping people onshore & offshore – share context
- Mingle for digital wall
- Go for Continuous Integration
- Showcases on join.me
This question is often raised by testing teams: what should we automate? Or should we automate everything?
I would recommend teams find the answer themselves by asking what value all the automated tests will provide versus the cost and time to be invested in building the automation suite. We automate to get faster feedback and shorten the testing time, which leads to a huge saving in time and money for the team.
The automation requirements define what needs to be automated, looking at various aspects. The specific requirements vary by product, time, and situation, but I will still try to sum up a few generic tips.
Test cases to be automated
- Tests that need to be run with every build of the application (sanity check, regression)
- Tests that use multiple data values for the same actions (data driven tests)
- Complex and time-consuming tests
- Tests requiring a great deal of precision
- Tests involving many simple, repetitive steps
- Testing needed on multiple combinations of OS, DBMS & Browsers
- Creation of Data & Test Beds
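To illustrate the data-driven case above: the same check runs once per data row, so extending coverage is just a matter of adding rows. The `discounted_total` pricing rule here is purely hypothetical, invented for the example:

```ruby
# Hypothetical pricing rule, used only to illustrate data-driven testing:
# orders of 100 or more get a 10% discount.
def discounted_total(amount)
  amount >= 100 ? (amount * 0.9).round(2) : amount
end

# One table of inputs and expected outputs drives many checks of the same action.
CASES = [
  { input: 50,  expected: 50    },  # below threshold: no discount
  { input: 100, expected: 90.0  },  # at threshold: 10% off
  { input: 250, expected: 225.0 },  # above threshold: 10% off
]

CASES.each do |c|
  actual = discounted_total(c[:input])
  raise "#{c[:input]} gave #{actual}, expected #{c[:expected]}" unless actual == c[:expected]
end
puts "all #{CASES.size} cases passed"
```

The same table-driven shape is what Cucumber’s Scenario Outlines give you at the acceptance level.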
Test cases not to be automated
- Usability testing – “How easy is the application to use?”
- One-time testing
- “ASAP” testing – “We need to test NOW!”
- Ad hoc/random testing – based on intuition and knowledge of application
- Device Interface testing
- Back-end testing