Quality Assurance (QA) Testing is an extremely important part of the development process. I would argue that it's as important as software development itself, because low-quality software is about as desirable as a headache. Granted, this may be debatable. What isn't debatable, however, is that without a unified QA Testing effort, software quality will suffer. Unfortunately, when schedules get tight, many teams are tempted to cut back on QA Testing to meet demanding deadlines. There are many ways to expedite development, but cutting QA should never be one of them.
Devoting the right amount of QA Testing to a software project is paramount to its success. But QA Testing is not an informal practice; it’s an organized effort, and there are right ways and wrong ways to do it. So… what is the right way to perform QA Testing? How much time & effort should be dedicated to QA? Who are the right people to perform QA? These are questions that we will examine throughout this article.
For the remainder of this article, QA Testing and QA will be used interchangeably.
More Human than Human
Software is an extremely human thing, because software is designed for humans to use (this is a hope rather than a certainty, because most developers suck at Usability!). Therefore, software must be designed to be tolerant of human mistakes AND of human creativity, because people will always find new and creative ways to use your software. This is why it's so important for the right people to perform the QA: so that the software is subjected to tests that approximate the actions normal end-users would take. And of course, Software Developers are (usually) NOT normal end-users: we know too much about how software works, and this is a terrible bias when performing QA.
Certainly, Software Developers must test their work extensively to ensure that it doesn’t crash, that it doesn’t interfere negatively with other parts of the software, that it meets all requirements, that it functions properly, etc. Developer testing is but the first step in the QA process. There are many more steps, most of which should be done by dedicated QA Testers (non-developers): this includes Usability Testing, Boundary Testing, Integration Testing, Functional Testing, Deployment Testing, among others. That’s a lot of testing!
Who’s the best person to perform QA?
Software Developers are extremely biased, because they know exactly how they programmed the software to work. When it comes to QA, this bias has a very negative impact. The right person to perform QA is someone who is not biased (i.e. doesn't have an inside scoop on how the software should work); has high empathy (they can place themselves in the shoes of end-users); is very organized (documents every test they run, whether it fails or succeeds); knows about good Usability Design and can apply that knowledge (to discern good Usability from bad); has great communication skills (can communicate suggestions and errors effectively, and to the right people); and is very (VERY) patient, because QA can be a tedious and repetitive task.
QA requires much patience, much organization, and much communication. A good QA tester will have all of these skills, plus possess the skills necessary to run, install, deploy, tear-down, etc. the software being tested. It’s a unique combination of skills and aptitudes. If the person has patience and empathy, then the rest can be learned.
What is the right amount of QA?
From many years of practical experience in the Software Development industry, I've seen the results of companies that don't do any QA whatsoever (the result is bug-infested code that is never truly done, because it's always in debug mode). And I've also seen the results of companies that do way too much QA (paralysis from over-testing, and never launching anything for fear that it isn't QA'd enough).
Both extremes are bad, though it's better to err on the side of doing too much rather than not enough. So what is the sweet spot? There is certainly no such thing as a one-size-fits-all solution, though a rule of thumb can be generalized. The complexity of the software and of the features being QA'd should provide an inkling of the amount of QA necessary. Furthermore, I find that a 1-to-1 (1:1) ratio of time spent on QA and on Software Development is good practice, though this won't always hold: there may be situations where a 15-minute modification requires several hours of QA (but never the reverse!). The situation will make clear what is required.
Generally speaking, you should spend at least as much time doing QA as doing Software Development. Do this, and you’ll see the quality of your software improve drastically.
Organizing for QA
Good QA isn’t just about occasionally asking someone to provide feedback. Like everything else in Software Development, it requires discipline and organization. The best QA efforts work in tandem with Software Development, and continuously test every feature, new & old, that gets created, modified, or removed. It’s important for such efforts to be documented and cataloged for future reference.
It's important to know what features, functions, and results each part of the software can potentially have, in order to determine discretely whether or not they are accomplished, whether they work well, and whether they fail well when wrong data is input (crashing is not an acceptable way of failing, nor is staying silent; appropriate feedback must be delivered when error conditions are encountered).
Different Types of QA
Developer Testing
As stated earlier, the first step in the QA process is Developer Testing. When a developer writes any code, that developer is expected to test it: to verify that it doesn't crash, that it works well in its surrounding environment, that it doesn't negatively impact other parts of the software, and that it conforms to the requirements.
It sounds arduous, but in reality it's a rote activity for most developers: write a little bit of code, test it, write more code, test it, and so on, ad nauseam. You get the picture. Great developers have this ingrained in their DNA: no untested code ever gets committed to the repository.
Unit Testing
Unit Testing is a higher form of Developer Testing. Instead of testing the code in general, each Unit Test is very specific: it exercises one particular Function in isolation.
Effective Unit Testing requires crafting effective Unit Tests, which can range from trivial to complex. For each functional unit being developed, a Unit Test is devised that exercises a certain aspect (or aspects) of it, and is then run using specific inputs and use cases.
For example, if there’s a function that withdraws money from a certain account, then a Unit Test will be written to make sure that the money is withdrawn correctly. There’ll be another Unit Test to make sure that the remaining amount plus the withdrawn amount is equal to the original amount, and another Unit Test to make sure that there’s enough balance to withdraw (or fail otherwise).
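The withdrawal scenario above can be sketched with Python's built-in `unittest` module. The `Account` class here is a hypothetical stand-in, invented only so the tests have something to run against:

```python
import unittest


class Account:
    """A minimal, hypothetical account model used only for illustration."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Fail loudly (not silently) when there isn't enough balance.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount


class WithdrawTests(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        acct = Account(100)
        acct.withdraw(30)
        self.assertEqual(acct.balance, 70)

    def test_remaining_plus_withdrawn_equals_original(self):
        acct = Account(100)
        withdrawn = acct.withdraw(30)
        self.assertEqual(acct.balance + withdrawn, 100)

    def test_insufficient_balance_fails(self):
        acct = Account(10)
        with self.assertRaises(ValueError):
            acct.withdraw(50)
```

Saved to a file, this would be run with `python -m unittest` — each method is one of the three Unit Tests described above.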
That's a lot of Unit Tests, and they all get executed in a certain order; sometimes some Unit Tests depend on others to do their job. The effectiveness of Unit Testing depends on the effectiveness of the person who writes the tests, so passing Unit Tests don't mean the software is bug-free. In fact, Unit Testing is just a step to ensure the quality of the code, not the quality of the software as a whole.
There are many existing libraries for Unit Testing that make a developer's life easier, and some can even generate Unit Tests to a certain degree. But almost always, Unit Testing is the domain of Software Developers.
Functional Testing
Functional Testing is a system-wide kind of test, done to make sure that every feature in the software conforms to the requirements and the expected outputs. The name may be confusing, because it reminds us of the "functions" used in programming; however, this is not the case. Functional Testing refers to a high-level requirements test of the software as a whole, to make sure that each requirement is met and each feature performs its functions properly.
Functional Testing is performed by QA Testers, and is usually done one feature at a time. Functional Testing may be to QA Testers what Unit Testing is to Software Developers: a series of piecemeal tests to ensure the proper functioning of every component, from the smallest to the largest as a whole.
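Requirement-driven functional checks can be sketched as a simple checklist runner. The `login` feature and the requirements below are hypothetical, invented only to show the pattern of pairing each requirement with a concrete check:

```python
def login(username, password):
    """Stand-in feature: accepts one known user, rejects everything else."""
    return username == "alice" and password == "s3cret"


# Each entry pairs a written requirement with an executable check of it.
requirements = [
    ("valid credentials are accepted", lambda: login("alice", "s3cret") is True),
    ("wrong password is rejected", lambda: login("alice", "wrong") is False),
    ("unknown user is rejected", lambda: login("bob", "s3cret") is False),
]

for description, check in requirements:
    status = "PASS" if check() else "FAIL"
    print(f"{status}: {description}")
```

In practice a QA team would keep such a checklist per feature, so every requirement has at least one test tracing back to it.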
Usability Testing
User Interfaces (UI) are what we Humans use to communicate with software. It's where we load information, input data, perform actions to transform that data, and finally create & receive the final output.
Creating software that conforms to the functional requirements is relatively simple. Creating software that conforms to the functional requirements while being easy to use is very complicated. It requires understanding how people use computers, applying standard practices where appropriate, and organizing everything on the screen so that it's easy to find: deciding what to keep on the screen and what to put into menus, how to break up different views / dialogs / forms, and so on.
A QA Tester performs Usability Testing to make sure that the software is easy to understand, easy to use, and obvious. When bad UI is encountered, the QA Tester submits a report that identifies the problem and suggests several solutions, which can then be implemented by the Development Team.
Boundary Testing
Software is made for Humans. Humans make mistakes. Hence, software must be built to be tolerant of Human mistakes, and handle them gracefully without crashing.
Boundary Testing makes sure that, even when operating under sub-optimal conditions, the software still performs in a consistent and predictable manner, without crashing.
Some examples are:
- If a user inputs letters where numbers are necessary
- If a user inputs more text than can fit, or less text than is necessary
- If a user uploads the wrong kind of file, or the wrong size of file
- If a user doesn’t fill in a required field
- If a user inputs the wrong value(s)
- If a user operates the software in strange conditions (over a VPN network; in a low-memory environment; using Microsoft IE; using too small or too large screen dimensions; etc.)
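Several of the cases above can be exercised against a single input validator. This `validate_age_field` function is a hypothetical example, not from any real project; the point is that every bad input yields specific feedback rather than a crash or silence:

```python
def validate_age_field(raw):
    """Validate a hypothetical 'age' form field, returning (ok, message).

    Crashing is not an acceptable way of failing, nor is staying silent:
    every bad input produces a message the user can act on.
    """
    if raw is None or raw.strip() == "":
        return False, "Age is required."
    if not raw.strip().isdigit():
        return False, "Age must be a whole number."
    age = int(raw)
    if not 0 <= age <= 130:
        return False, "Age must be between 0 and 130."
    return True, ""


# Boundary-style probes: letters where numbers are needed, empty input,
# negative and out-of-range values, and finally a valid value.
for raw in ["abc", "", None, "-5", "200", "42"]:
    ok, message = validate_age_field(raw)
    print(repr(raw), ok, message)
```

A QA Tester running Boundary Tests is doing exactly this kind of probing by hand (or via a test script), for every input the software accepts.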
Integration Testing
Let's say the individual components have already passed every QA test. But when we combine them, will they perform as well as they do on their own? The interactions between components are just as important, and need their own testing once everything is integrated as a whole.
Even small changes added at later stages can disrupt the interactions already happening in our software.
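As a sketch, consider two hypothetical components that each pass their own unit tests, plus an integration test that checks they agree with each other when combined:

```python
def serialize_user(name, age):
    """Component A: turn a user record into one line of text."""
    return f"{name},{age}"


def parse_user(line):
    """Component B: turn a line of text back into a record."""
    name, age = line.split(",")
    return name, int(age)


def test_round_trip():
    """Integration test: combining A and B must give back the original.

    A name containing a comma could pass each component's own tests yet
    break this round trip; that is exactly the kind of defect only
    integration testing catches.
    """
    original = ("Ada", 36)
    assert parse_user(serialize_user(*original)) == original


test_round_trip()
print("integration round trip OK")
```

Neither component is wrong in isolation; the integration test exists to verify the contract *between* them.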
Load / Stress Testing
It's one thing to use the software in isolation. But what if 1,000, or even 1,000,000, users are simultaneously making use of it? There are tools to help simulate this load, and QA Testers use them to make sure that the software withstands the stress.
Another type of Stress Testing is to use the software in a way that tries to max out the available computer memory (RAM). Well-written software will handle the situation gracefully without crashing; lazily-written software won't handle it at all, and will simply crash.
It’s important to test the software to its limits, to make sure that it works well in Good, Bad, and Ugly scenarios.
Deployment Testing
Much of this testing is first performed inside a safe testing environment before the product is released to real users. However, this doesn't guarantee that the software will perform the same way when it reaches the stage we call "production". Moving our software from the test environment to the actual environment in which it will operate can result in missing components, for example.
Deployment testing is a must to ensure the final product will perform as needed when we release it to the world.
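A common form of Deployment Testing is a post-deployment "smoke check" script. Everything below is illustrative: the file names and environment variable are hypothetical, chosen only to show the pattern of verifying that nothing went missing in the move to production:

```python
import os

# Hypothetical deployment requirements, for illustration only.
REQUIRED_FILES = ["app.py", "config.ini"]
REQUIRED_ENV = ["DATABASE_URL"]


def smoke_check(root, env):
    """Return a list of problems found in the deployed environment.

    An empty list means the deployment passed the smoke test; anything
    else should block the release until fixed.
    """
    problems = []
    for name in REQUIRED_FILES:
        if not os.path.exists(os.path.join(root, name)):
            problems.append(f"missing file: {name}")
    for var in REQUIRED_ENV:
        if var not in env:
            problems.append(f"missing environment variable: {var}")
    return problems


print(smoke_check(".", os.environ))
```

Real deployment tests also exercise the running system (health endpoints, a trivial end-to-end transaction), but the principle is the same: verify the production environment, not just the code.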
General Breakage Testing
This kind of testing resembles Load/Stress Testing because it's all about creating situations where the software may break. These situations can include, for instance: repeatedly issuing a command (or several commands); starting and cancelling operations repeatedly; uploading many, many files; interrupting the internet connection in the middle of an operation; refreshing the page while the software is processing; etc.
And when things break, report them in great detail, so that developers can recreate the scenario and handle it.
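The "start and cancel repeatedly" case can be sketched as a hammer test against a hypothetical `Job` object; the assertion inside the loop is what turns random abuse into a real test:

```python
class Job:
    """A hypothetical long-running operation with two legal states."""

    def __init__(self):
        self.state = "idle"

    def start(self):
        if self.state == "idle":
            self.state = "running"

    def cancel(self):
        if self.state == "running":
            self.state = "idle"


def hammer(job, cycles=10_000):
    """Repeatedly start and cancel the job, checking state each cycle.

    Any state outside the two legal ones would indicate breakage of
    exactly the kind General Breakage Testing is meant to surface.
    """
    for _ in range(cycles):
        job.start()
        job.cancel()
        assert job.state in ("idle", "running")


job = Job()
hammer(job)
print("survived; final state:", job.state)
```

Real breakage testing does this by hand or through UI automation, against the actual product rather than a toy object.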
How we do QA at LionMane Software
At LionMane Software, we have a QA team working in parallel with our developers, so that everything our developers create gets properly tested, documented, and even improved with suggestions & feedback. We use several tools to organize ourselves, which I'll describe below.
It all starts with GitHub. For every project, we have a respective repository. And for every change, feature, bugfix, etc. that gets done, we have a respective Issue (read: task) within GitHub to track it. All code commits are associated with a specific GitHub Issue, so that there isn't a single code change that can't later be traced back to a specific work order.
We build on top of this with a tool called ZenHub, a Chrome extension for GitHub. It lets us use Kanban-style boards with our GitHub Issues, which is extremely helpful because we can now organize every Issue (task) into specific pipelines. Of these pipelines, the most important are: Backlog, In Progress, and Internal QA.
Backlog pipeline
These are tasks that are about to be worked on. Our development team is responsible for picking Issues (in order) from the Backlog pipeline and moving them to the In Progress pipeline whenever they start working on them, and our project managers are responsible for keeping the Backlog pipeline fully loaded with the next Issues that need to be worked on.
In Progress pipeline
These are tasks that are actively being worked on. When a developer finishes working on a task, she/he moves that Issue to the Internal QA pipeline.
Internal QA pipeline
These are tasks that have "completed" development and are now in the process of being QA'd. Once an Issue makes it to the Internal QA pipeline, it becomes the responsibility of our QA team. Our Issues are documented in such a way that all pertinent information about expected usage & expected results is there, so our QA team can get to work ASAP. Each Issue gets extensively tested in a wide variety of environments (Windows, Mac, Linux, etc.) as well as conditions (high or low internet connectivity, high or low memory, etc.). When a feature fails and problems become apparent, these get documented directly in the respective Issue, with screenshots and any other information necessary to explain the problem. The Issue then gets labelled "QA Failed" and is moved back into the Backlog pipeline, so that the Development Team can take it up again and fix the problems.
This process goes on until no more problems can be found, at which point the Issue gets labelled "QA Approved" and is ready to proceed to production. That is how we do it at LionMane Software, and we've found it to be an extremely effective way of organizing our QA effort. Over the several months since we implemented this system, we (and our clients) have noticed a steady increase in quality: fewer bugs, better compliance with requirements, and greater ease of use.