Blog post summary: We need to talk about testing

I liked this “We need to talk about testing” post from Dan North. It’s about what testing actually means and how programmers and testers can work together. A summary (or copy & paste of the parts that I found most interesting, with some comments) below…

The purpose of testing is to increase confidence for stakeholders through evidence.

Test-driven development

Test-driven development (TDD) is a method of programming where the programmer writes small, executable code examples to guide the design of an API or feature.

These code examples can later also be used as tests to prevent the programmer from introducing regressions.
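As a rough illustration of what such a programmer-facing example might look like (the PriceCalculator class and the JUnit 5 test below are my own hypothetical sketch, not from North's post): the example is written first and drives the shape of the API, then doubles as a regression check once the implementation exists.

```java
import java.util.List;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A small executable example, written before the implementation exists.
// Deciding how PriceCalculator is called here is the design work.
class PriceCalculatorTest {

    @Test
    void totalsLineItems() {
        assertEquals(30, new PriceCalculator().total(List.of(10, 20)));
    }

    @Test
    void emptyOrderCostsNothing() {
        assertEquals(0, new PriceCalculator().total(List.of()));
    }
}

// The simplest implementation that makes the examples pass; in TDD it is
// written only after the examples above, which then serve as regression tests.
class PriceCalculator {
    int total(List<Integer> lineItems) {
        return lineItems.stream().mapToInt(Integer::intValue).sum();
    }
}
```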

While these “programmer tests” give the programmer useful feedback, the executable examples don’t necessarily make good tests. In fact, North argues that the purpose and essence of testing have been diluted by such automated tests. TDD (and BDD, etc.) do not replace testing; they are primarily design and development techniques.

Why don’t we just automate all the testing?

Unit tests are typically single-sample tests of functionality (what the code does) rather than of other important dimensions such as security, accessibility, or compliance. They are more coding guides than tests.

Even with good unit testing coverage, we still need testers to help us think about testing. Understanding risk and its potential impact along multiple dimensions, and surfacing that all-important information, is a full-time discipline in its own right.

The axis of automated vs. manual is one of the least interesting to obsess about. Automation is just one of many useful tools in a good tester’s tool belt.

So we should write automated tests, especially for anything we are likely to do again and again. But the insights and feedback a good tester can provide make hands-on testing a valuable and essential ongoing activity.

Is test coverage a useful metric?

(Note, I mixed some of Dan North’s thoughts with my own here…)

Low test coverage tells you something about your project – specifically, that you have few automated tests. That in itself is not necessarily a concern, however, if we know that we are verifying the code in other ways.

And while high test coverage is good to aim for, it doesn’t in itself tell you a lot. You could, for example, have 100% test coverage without a single test assertion (which I guess you could call code coverage theatre). The tests may also tell you nothing about code quality, edge conditions, security vulnerabilities, or regulatory compliance.
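To make that concrete, here is a hypothetical JUnit 5 sketch (the DiscountService names are my own invention, not from either post): the test executes every line of the method it calls, so a coverage tool reports 100% for it, yet it contains no assertion and would pass whatever the method returned.

```java
import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    // Executes every line of applyDiscount, so line coverage is 100%,
    // but there is no assertion: this test passes no matter what
    // applyDiscount returns.
    @Test
    void coverageTheatre() {
        new DiscountService().applyDiscount(100.0, 0.25);
    }
}

class DiscountService {
    double applyDiscount(double price, double rate) {
        double discount = price * rate;
        return price - discount;
    }
}
```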

What does it mean to “shift testing left”?

[Note that my own thinking on shifting left is…

Your tests should find an issue as early in the process as possible. For example, if you find an issue in:

  • pre-release manual testing, can you turn it into an automated browser-based test?
  • a browser-based test, can you turn it into an integration test?
  • an integration test, can you turn it into a unit test?

Each of those is a shift to the left.
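As a hypothetical illustration (the validator and its names below are my own, not from either post): suppose a slow browser-based checkout test catches the fact that discount codes pasted with trailing whitespace are rejected. Reproducing that as a unit test moves the feedback about as far left as it can go.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class DiscountCodeValidatorTest {

    // Originally found far to the "right" by a slow end-to-end browser test:
    // codes pasted with trailing whitespace were rejected at checkout.
    // Captured here as a unit test, the same issue now gives feedback in
    // milliseconds on every build.
    @Test
    void acceptsCodeWithTrailingWhitespace() {
        assertTrue(new DiscountCodeValidator().isValid("SAVE10 "));
    }
}

class DiscountCodeValidator {
    boolean isValid(String code) {
        return code != null && code.trim().matches("[A-Z0-9]+");
    }
}
```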

Unit tests (at the furthest “left”) tend to run faster and more frequently. Browser-based tests are slower and generally run less frequently. Manual testing (at the furthest “right”) is the slowest and most expensive, and is done very infrequently.

The further left we go, the earlier and faster the tests tend to run, and hence the sooner the team gets the feedback it needs.]

North used to think shifting left meant starting all these testing activities earlier in the process, but now realizes it is more than that: it means doing different things. Shifting left on testing means thinking about architecture and design differently, and considering different stakeholders early and continually, which in turn means shifting left on security, accessibility, and all the other dimensions of quality we should care about. So shifting left on testing motivates all kinds of assurance activities.

We aren’t doing traditional testing earlier; we are setting ourselves up to never need it at all.

Concluding thoughts

We should all be thinking about how testing and testers fit into software development. The purpose and principles of testing can mesh well with agile delivery methods. If we sideline testing and testers, we miss out on the opportunity to build genuinely high-performing teams. By reintroducing some of these ideas, we can write better-quality software faster and more sustainably, and enjoy the rare win-win-win of “better, faster, cheaper.”

