As a version’s release date approaches, and bugs and untested features are looming large, there is a temptation (not usually in the QA department) to recruit people from outside QA to help the testing effort. This, I’m going to argue, is a bad idea. It relies on false assumptions, is not cost-effective, and is easily surpassed by the alternative: bringing in extra testers.
The False Assumptions
There are two false assumptions at the base of this idea: that testing is easy, and that all employees know the product well enough to test.
The truth of testing is that following a well-written test script is easy; seeing everything that’s wrong in and around a bad script is hard. A non-tester is unlikely to spot a badly designed test, or to correctly follow a badly written one. And if you’re this stressed for time, it’s a fair bet that your test scripts – if they even exist – are incomplete and out of sync with last-minute development changes.
More than that, a non-tester will see almost nothing except what the script says. UI wrinkles? Slow, cumbersome processes? You can’t expect them to spot these. You know this; it’s the reason you hire testers in the first place, and prefer those with training and experience. Even worse, how a non-tester will interpret the tests in a hastily written script is anyone’s guess. I’ve seen people who didn’t understand that a Submit button that gets them to a 404 page is a bad thing, because the script simply said “Submit” and the button worked; they didn’t guess that they were also supposed to check that the form submission itself succeeded. Hastily written scripts are full of these hidden assumptions.
And if you don’t have test scripts, then sitting a non-tester in front of the product and saying “test this” is a guaranteed failure. Chances are they’ll never get further than seeing if the buttons work and if they can write in the input fields. Your business logic will remain undisturbed by them, as will your data integrity.
The second assumption, that all employees know the product well enough to test, is more understandable but still, usually, wrong. Developers know their little corner of the product and, if pressed, may recall a few features they once helped with. At some places I’ve worked, testers had to generate data for the developers who worked on the data-crunching processes; the developers didn’t know where in the interface to do it. And they certainly can’t test their own features – any blind spots they had while writing the code they’ll have while testing it. How could they possibly test the whole product?
As for the rest: project managers are often so fuzzy about features that pre-date their own involvement with the project that it’s a form of genius that they manage to work with the product at all. Most of the other departments have little to do with the product in their day-to-day; your only real hope is technical support, but their knowledge runs backwards: you can ask them to re-test old features, but they don’t know the new features.
Is Something Better Than Nothing?
Still, you might argue, something is better than nothing at all. I’ll certainly not go so far as to say that non-testers can’t find any bugs; anyone can find a few bugs, if given enough time and especially if given testing scripts, however badly written. But a little is only better than nothing if its cost is lower than its benefit. Are your randomly collected employees really capable of working independently, or will they require constant help from the professional testers and the developers?
Most likely, you’re actually slowing testing down by using non-testers. They’ll have too many questions, their bug reports will contain too many false positives, their reproduction instructions will be unclear (even when based on a testing script), they’ll deviate from the plan, and so on. These issues distract the testers and waste their time, so that in total the number of good tests performed on the version falls, rather than rises.
Something Is Better Than Nothing
The obvious solution is to postpone deployment, but that’s often contractually impossible. So you’re better off bringing in project-based testers. They have no knowledge of your product, but they know how to explore an unfamiliar product, how to follow a test script or work without one, and how to describe a bug. They should surpass your non-testers within a couple of days of hard work.
Try to recognize the need as early as possible – pay attention to the balance between the version’s content and the testing time, and act as soon as the balance is lost. You can find freelancers online or through your friends in the industry, or you can contact an outsource company.
In either case, be *very* specific in your requirements, because you don’t have time to interview dozens of candidates and you don’t have time to work around serious knowledge gaps. If you have time to prepare and check a written test, don’t make it a random one – give them a section of your product on a laptop and see what they can do.
Don’t let an outsource company desperate for your business talk you into accepting green testers for this project. I’ve seen it happen, and the result is always the same: no matter how good their training, if they have no real experience testing they can’t give you what you need in this situation. Green testers are a good long-term investment, but if you’re looking for someone to dive straight into a two-week project, move on to the next outsource company or fish for freelancers.
I don’t think you’d let your graphic designers handle technical support, or your project managers help out IT when the servers go down; letting them do your QA is not a world apart from that. You are setting them, and the product, up to fail. If you can’t postpone deployment, or cut back on the version’s content, bring in extra testers. They’ll give you a much better product, and a much smaller headache.