Since we're on a bit of a theme here, I thought I'd continue to explore the topic of testing.
For years now, I have looked for the optimum way to do quality assurance for a complex product (like 3D software components). Having managed both a QA and a development team and worked in both roles as well, I have seen the problem from multiple points of view.
From reading books and forums and talking to software engineers outside Spatial, the traditional standalone QA tester group still seems to be the most common approach.
We had such a group for many years at Spatial but found it to have some problems:
- Development tends to rely somewhat on the safety cushion of knowing that another group is checking their work. While Spatial developers take pride in product quality and work to overcome this tendency, it is a natural one, reinforced by the rest of the organization, which sees two separate groups with separate responsibilities and applies pressure accordingly.
- The schedule seems to naturally separate into a waterfall structure, with implementation done first and QA done second. Despite best intentions, this usually leads to schedule compression and quality compromises at the end of the release cycle.
- For an extremely complex product (3D modeling and CAD translation libraries), the people who develop the product are the ones who understand its behavior best. Often, requirements are not well understood until our developers work together with customer developers to find the line between our components and their applications. Injecting an additional person, no matter how competent, into this dialog and expecting them to keep up has always been hard. This can also be seen in the pattern I discussed in my last post - our best tests come from customers rather than from us.
What is the alternative?
Rather than a standalone group, we made each development team, which now includes a QA engineer, responsible for the overall quality of its output. This had a number of benefits. First, I think ownership of output quality did increase. Our testing has definitely advanced since then, both in coverage and in level of automation. Another benefit was that the scheduling problem was eliminated entirely, because the team includes testing in all of its planning.
However, we have found that this approach also has one big drawback: QA work is always done in the context of whatever project the development team is implementing. Which is great, because the project gets full attention! But unfortunately nobody has time to sit down with the overall product and play with it, poke it, break it, create samples, and assess its usability and consistency as a whole. In my opinion, Agile completely fails to answer this question. While it places a huge emphasis on developer testing (or at least XP does), which is great, I've only read passing references to the fact that you also need system-level QA … no further clarification. Where do you put it? How do testers get really involved in the process? How do they keep up with the highly technical developer-to-developer discussions without transitioning into development roles themselves (which has happened in a few cases)?
So what IS the answer?
I've actually become pretty comfortable in the belief that there is no perfect answer (which, if you know me, you'd realize is not an easy conclusion for me to accept). I think a healthy tension can be created by accepting and being mindful of the limitations of each approach and oscillating back and forth between embedded and standalone testing. A good example: our RADF team develops a framework on top of our components for application development. Is that really so different from product-level, standalone testing? We have somehow restarted our standalone QA efforts without even knowing it!