Wasting Your Time By Not Writing Tests

Automation is the key to successful testing. This post explains why other ways of software verification are a waste of time.

This topic is described in more detail in my book:
"Practical Unit Testing
with TestNG and Mockito"

If You Don't Test Then ...?

Even if you don't write tests, you surely perform some other operations to verify that your code works properly. In order to make sure that the code does what you expect it to do, and to find bugs, you can:

  • have debugging sessions (with the help of your great IDE),
  • add a lot of log messages so you can later browse the log files,
  • click through the user interface of your application,
  • perform frequent code reviews.

All of the above techniques have their legitimate uses. Visual inspection is useful. A debugger and logs can sometimes save your life. Clicking through the GUI will help you see the application the way your users do. Code reviews will help you find various weaknesses in your code. But if these techniques are the only ways you verify the proper functioning of your code, then you are doing it wrong.

Time Is Money

The main problem is that all these actions are very time-consuming. Even if your experience allows you to use the debugger very effectively, or your strong Linux shell skills make finding stuff in tons of logs trivial, it still takes time. Clicking through the GUI can't really be accelerated: you wait for the application to fetch data, for the browser to render it, and for your brain to locate the right place to click. And a decent code review can't be done in two minutes.

The time factor is crucial. It simply means that you will not have time to repeat the process. You will check once that it works, and voila - finished. When you come back to this code (e.g. to add or update functionality) you will skip the already tested part (because it simply hurts to think that you would have to do the log browsing again).

Remember, your time and skills are too precious to waste on simple, repeatable tasks that can be done more effectively by machines.

Your Brain Is Not Good Enough, Sorry

The second problem is that you are trusting your senses, your judgement, and your honesty here. This brings a few problems:

  • you can overlook something (e.g. miss a line in the log files, forget to check everything that should be checked, etc.),
  • you can misunderstand or forget the verification criteria and accept failed tests as passed,
  • you can fool yourself into believing that it works even when it doesn't.

Yes, I know, you don't make mistakes, so you cannot possibly miss a single log line, and of course you are 100% honest with yourself… Well, but if you don't make mistakes, then how did the bug you are looking for happen in the first place? And about honesty… I can speak only for myself, but it happens that I see only the things I want to see and ignore any signals that speak against my wishful thinking. Has it ever happened to you? No? Good boy!

What makes it even more painful is that clicking through the GUI again and again, or browsing log files for the n-th time, is so boring! Your mind will scream at you "get me out of here, I want to do some coding!" and it will likely tell you that "everything works fine" just so it can move on to more interesting tasks.

Some Conclusions

In short, verification methods that are not automated suffer from the following:

  • they are time-consuming, and as such, they are the first candidates to be abandoned when a deadline is getting near,
  • the criteria of verification might not be clear, and the result of verification can be skewed by human error,
  • they are boring, which makes people do them sloppily or avoid them altogether,
  • they might be hard to repeat in exactly the same way (it is easy to omit some steps in the configuration or execution phase of a test),
  • it might be hard to deduce from log files where the source of a bug is, and sometimes a long investigation is required to find it,
  • they are usually not included in the build process and are run some time after new features or changes were introduced into the software, which makes the feedback they give less valuable (in other words, it costs much more to repair damaged parts that were not discovered right after they were damaged).

You Need A Safety Net Of Automated Tests


My experience is that most errors occur when the code is changed, not when it is written for the first time. When you implement new functionality, the list of requirements is usually clear. You simply implement them one by one, making sure that everything works fine. This is easy. More problems emerge when you are asked to introduce some changes. Oops, the original list of requirements is long gone, the people who wrote the original code are not around anymore, and your manager assumes that "adding this small piece of functionality shouldn't take long, should it?". And then, if you don't have a safety net of automated tests, you are in trouble. While adding new functionality or changing existing code, you are likely to break something that used to work fine. They call these regression bugs. Automated tests make it much harder for them to creep in.
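
To make this concrete, here is a rough sketch of what a single safety-net test could look like with TestNG. The PriceCalculator class is made up purely for illustration and inlined so that the sketch compiles on its own.

    import static org.testng.Assert.assertEquals;

    import org.testng.annotations.Test;

    // A minimal "safety net" check. PriceCalculator is a hypothetical
    // example class, inlined here so the sketch is self-contained.
    public class PriceCalculatorTest {

        static class PriceCalculator {
            double grossPrice(double netPrice, double vatRate) {
                return netPrice * (1 + vatRate);
            }
        }

        @Test
        public void shouldAddVatToNetPrice() {
            PriceCalculator calculator = new PriceCalculator();
            // 100 net with 23% VAT should give 123 gross
            assertEquals(calculator.grossPrice(100.0, 0.23), 123.0, 0.000001);
        }
    }

The assertion is trivial on purpose; the value comes from the fact that it is executed automatically, with every build, without anybody having to remember to do it.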

Hi Tomek, I like your posts

Hi Tomek,

I like your posts and totally agree with your comments, and I feel automation and manual testing in the right proportion is the way to go ahead...

This surely makes it a candidate for my blog, which has the best articles and posts from across the web.

Do mail me at aditya_kalra@ymail.com and let me know if it is fine if I add one of these to my blog.

My blog: http://go-gaga-over-testing.blogspot.com/

Best Regards,
Aditya Kalra

Automation of testing is hard work, let a tester help you.

The first time I read this post I thought: "Is this man serious? Is this a joke and a bit of cynical blogging here?"
But after reading it a few times I started to understand (I think) what it is about.

It is about a developer that is seriously thinking about testing and doing it well, even when the deadlines are creeping up on the project. That's great and should be admired.

The recognition that checking your own work will in the end have its flaws because of the 'blind spots'. My work is great, it is working and I will not see my own faults. A human mind has this built in to one degree or another. Unconsciously cheating yourself, because of being proud of your own work.

Although some questions arise when reading this post:
- Writers let their work be reviewed by others, airplane pilots have checklists and check each other. A lot of people will forget to ask their co-workers to check their work. Even testers will forget to do this. But that should be a big recommendation in this blog: Let your work be checked by a co-worker. He/she will find it less painful to check your work, and will check the blind spots for you, because it is not his/hers. It will be less boring for the other person. Wouldn't you agree?

- Secondly, why wait for a tester until you are finished? Ask the tester to check your work NOW, not when you're finished. The tester will love to check the logs a few times. Shouldn't you ask a tester to work at the same time, on the same code you're working on?

- The brain is not good enough for repeating the same checks? Well, it isn't indeed, but isn't the brain much better at creative thinking and able to find more issues than a repeated script? A repeated script will only repeat the same thing, while a brain will find more because it will change the tests every time slightly. Isn't that a better way to find new defects?

- The requirements are usually clear? I would say it is completely different. In general (in the IT world) the requirements are not clear, they're not specific enough or they do not cover the business needs. Clarity of requirements is not the norm, wouldn't you agree?

- Even if the requirements are clear and the programming exactly represents the requirements, a check should also be done with the users / business to ensure the requirements are representing them. Many defects found later on in IT systems are there because of differing ideas about the product and miscommunication. Isn't this another reason to get a tester with you to work on that?

- My own experience is that the first time code is written, it has many defects the first time it runs. And when changing the code later on, the same repeated tests will not find new defects. Only new automated tests will find new defects, or a creative tester's brain.

- If the requirements are not updated, how can you be sure you are programming the right thing and writing test scripts that will check the right behaviour?

- Isn't writing automated tests the same as writing manual tests, but with more work on automation and keeping the automated scripts updated?

I know it can be done, good automation of tests and making sure they will work with every change, but this is almost a second project within your project and will take more time than expected. When deadlines are coming your way, automated testing will also be forgotten, and the only safety net you have then is the tester, with all his limitations from being brought in too late.

Automated testing by the developer is great, and it should be encouraged, but also let a good software tester help you with some testing principles and known pitfalls of software testing.

He can even help you to deal with your manager and get you some more time to do your work ;-)

Just some thoughts on this post from a tester's perspective ;-)

very interesting comment, a lot to think and talk about

First of all, thank you very much for your comment! I read it with real pleasure, and I appreciate you sharing your thoughts on this subject.

I think that I was not clear enough in my post, and took it for granted that my readers would guess what was on my mind. :) First of all, I concentrated on the work of the development team. I just didn't mention what happens in the later phases.
I agree that the work of developers should be tested by testers, but for me this is an additional point, not a crucial one. Automated tests are crucial; a human tester is a bonus. A valuable one, but only a bonus.

For some kinds of applications (like the ones my team develops) there isn't really much for human testers to do. We code backend solutions. There is no UI layer, only some REST web services or an OSGi API that you can call. A tester would have to have a really deep understanding of the technology to help us. And frankly, I feel we would spend hours supporting his attempts to sort out various technical issues before he would be able to test anything. And btw, I don't think there is much room here for creative exploratory testing.

But I have also coded some applications that did have a frontend. And you know what, thanks to automated tests (using Canoo WebTest or Selenium) we were really successful at delivering working stuff. We had some testers, and I admit they helped to discover some bugs (mainly related to browser-specific issues, but also in the verification logic and some corner cases). But without automated tests that ran regression checks every few hours, we wouldn't have been able to proceed with development. The feedback loop with automated tests is very short. That is crucial: knowing right away that something is wrong. Invaluable indeed.

>But after reading it a few times I started to understand (I think) what it is about.
>It is about a developer that is seriously thinking about testing and doing it well, even when the
>deadlines are creeping up on the project. That's great and should be admired.
You understood me right. This is exactly it. Please see the "definition of done" (http://kaczanowscy.pl/tomek/2010-05/what-is-done) that we have on our team. One of the points there is that a thing is not done until it is tested. Because the software we are working on right now is backend only, we have only developer-level tests (unit & integration). If we were also implementing some UI, we would also have some tests "from the user's point of view".

> [..] Let your work be checked by a co-worker. [..] Wouldn't you agree?
I would! In fact, this is also one of the points on our "what-is-done" checklist. A colleague of mine must check my code (production code & tests) before a task can be considered "finished".

>Secondly, why wait for a tester until you are finished? Ask the tester to check your work
>NOW, not when you're finished. The tester will love to check the logs a few times. Shouldn't
>you ask a tester to work at the same time, on the same code you're working on?
Well, I have never worked like this and find it hard to imagine that it would work well (considering the applications that I work with - mainly backend stuff).

>The brain is not good enough for repeating the same checks? Well, it isn't indeed, but isn't the
>brain much better at creative thinking and able to find more issues than a repeated script? A
>repeated script will only repeat the same thing, while a brain will find more because it will
>change the tests every time slightly. Isn't that a better way to find new defects?
The brain is definitely much better for creative tasks. And I don't neglect the importance of human-conducted exploratory tests. But I think it is a waste of brainpower to check twenty times that "if I add a new client John Doe then he will appear within the search results for 'John' but not for 'Jim'". This should be automated (see the sketch at the end of this answer). Howgh!
The idea of changing "the tests every time slightly" is new to me. I don't think it could be easily applied to the applications I work on right now, but for more frontend-oriented stuff it might work well. Sounds interesting, but let me ask you a counter question. What is the cost of running the (same) regression tests every few hours (or after each commit), compared to paying a human tester to perform these slightly changing tests every few days? And I don't think any tester could do regression testing more than a few times, because he would die of boredom. And btw, I already stressed the importance of a short feedback loop. No. I prefer automated stuff. :)
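
Just to show what I mean by "this should be automated", a sketch of such a check written with TestNG could look like the code below. The ClientRepository class is a made-up in-memory stand-in; a real test would talk to whatever search component the application actually uses.

    import static org.testng.Assert.assertFalse;
    import static org.testng.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    import org.testng.annotations.Test;

    // Sketch of the "John Doe" check. ClientRepository is hypothetical,
    // inlined here only to keep the example self-contained.
    public class ClientSearchTest {

        static class ClientRepository {
            private final List<String> clients = new ArrayList<>();
            void add(String name) { clients.add(name); }
            List<String> search(String query) {
                return clients.stream()
                              .filter(name -> name.contains(query))
                              .collect(Collectors.toList());
            }
        }

        @Test
        public void newClientShouldAppearOnlyInMatchingSearches() {
            ClientRepository repository = new ClientRepository();
            repository.add("John Doe");

            assertTrue(repository.search("John").contains("John Doe"));
            assertFalse(repository.search("Jim").contains("John Doe"));
        }
    }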

>The requirements are usually clear? I would say it is completely different. In general (in the IT
>world) the requirements are not clear, they're not specific enough or they do not cover the
>business needs. Clarity of requirements is not the norm, wouldn't you agree?
At the level that I'm concerned with, the requirements have to be clear before you start coding. Sorry, but you can't write a line of code without having a clear idea of what this piece of code should do. And the first thing you do is write the requirements down in the form of a test. That way you know when your work is finished: when the test passes, you are done (well, that is a simplification - you also need to do some static code checking, write documentation etc.). This is the test-first approach, and it works wonders.
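
To give you an idea of what "writing the requirements down in the form of a test" looks like, here is a small sketch using TestNG and Mockito. The UserService and MailServer names are invented just for this example; what matters is the shape of the test, which states the expected behaviour before any production code exists.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.testng.annotations.Test;

    // The requirement "a welcome mail is sent after registration",
    // written down as a test. All names here are hypothetical.
    public class UserServiceTest {

        interface MailServer {
            void send(String address, String message);
        }

        static class UserService {
            private final MailServer mailServer;
            UserService(MailServer mailServer) { this.mailServer = mailServer; }
            void register(String email) {
                // this line was written only after the test below had failed
                mailServer.send(email, "Welcome!");
            }
        }

        @Test
        public void shouldSendWelcomeMailAfterRegistration() {
            MailServer mailServer = mock(MailServer.class);
            UserService service = new UserService(mailServer);

            service.register("john@example.com");

            verify(mailServer).send("john@example.com", "Welcome!");
        }
    }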

>- Even if the requirements are clear and the programming exactly represents the
>requirements, a check should also be done with the users / business to ensure the
>requirements are representing them. Many defects found later on in IT systems are there
>because of differing ideas about the product and miscommunication. Isn't this another
>reason to get a tester with you to work on that?
Agreed. But I never said there should be no tester. I just didn't mention what happens in the later phases. I concentrated on the work of the development team.

>My own experience is that the first time code is written, it has many defects the first
>time it runs.
Was this code written in a test-first manner?

>And when changing the code later on, the same repeated tests will not find new defects.
>Only new automated tests will find new defects, or a creative tester's brain.
Not true. If you change the code and it breaks the old functionality, the automated tests will spot it right away.
The value of automated tests lies not in their creativity, because there is not much of it, but in the fact that they fend off regression bugs like no human tester can.

>If the requirements are not updated, how can you be sure you are programming the right
>thing and writing test scripts that will check the right behaviour?
Going back to test-first: if the requirements change, then you change the tests. Then you update the code so that the new (changed) tests pass.

>Isn't writing automated tests the same as writing manual tests, but with more work on
>automation and keeping the automated scripts updated?
If you run them only once, then automated tests require more work. But if you execute them 1000 times, then… Well, it is obvious, I think.

>I know it can be done, good automation of tests and making sure they will work with every
>change, but this is almost a second project within your project and will take more time than
>expected. When deadlines are coming your way, automated testing will also be forgotten, and
>the only safety net you have then is the tester, with all his limitations from being brought in
>too late.
If you go test-first you can't forget about tests, because you basically write no code without having a failing test. But I understand and partially agree with you - if quality is not taken seriously, then automated tests are the first to be sacrificed for short-term gains (alas!).

>Automated testing by the developer is great, and it should be encouraged, but also let a good
>software tester help you with some testing principles and known pitfalls of software testing.
>He can even help you to deal with your manager and get you some more time to do your
>work ;-)
Agreed!

>Just some thoughts on this post from a tester's perspective ;-)
Thanks once again! That was really interesting. You gave me a lot to think about.

--
Cheers,
Tomek

Thank you for answering my questions, Tomek

Little time over here, quick answer:

Something of a paradigm shift for me, of course, after reading your fine answers; a completely different situation than I've ever met. So thanks for explaining some more. I understand the situation and can see your approach being very important in your context.

> What is the cost of running the (same) regression tests every few hours (or after each commit), compared to paying a human tester to perform these slightly changing tests every few days? And I don't think any tester could do regression testing more than a few times, because he would die of boredom. And btw, I already stressed the importance of a short feedback loop. No. I prefer automated stuff. :)
Doing regression testing 10 times a day would indeed be too much to do every day, even for a super-tester. Maybe automated scripts that run the regression and generate their own variations with some changes in them. But that's a completely different line of work, and creating such an automated structure would be a full-time job for a while. Interesting enough for some research, though (if there were time to do that).

>My own experience is that the first time code is written, it has many defects the first time it runs.
> Was this code written in a test-first manner?
No, it wasn't ;-) TDD would improve the first run in theory, and apparently also in practice, according to your experience. That's good news.

Thank you for answering my lonnnggg comment, hope to hear more on your blog about testing, maybe some practical tips for developers and automation of scripting?

Rob van Steenbergen, tester from The Netherlands.

is this really a time waste?

How can you know what to test in advance? It is hard to know what will go wrong and what your code will look like after refactoring, etc.

Regression tests are tests that make sure previous bugs don't come back, but how do you find them in the first place? Of course, by using those debugging sessions, log messages, etc.

Maintaining a large test suite is more time-consuming than testing when needed, and no matter how big your test suite is, it will never cover all possible cases, so you still have to use the old approach to find those bugs.

In theory everything sounds fine, but in practice you need to ship, and for that you need to get the thing done. Adding another thing to maintain takes you a step further away from that; is it worth it?

[quote]How can you know what

[quote]How can you know what to test in advance?[/quote]

If you do not know what you should do, how can you start coding?

In the Test-Driven Development cycle, you first write the test (what should this functionality do?), compile it, run it, and make sure the test fails. Second, you code the functionality as simply as you can, compile it, and run the test, which should now pass. Finally, you can refactor the test and/or the code to improve it.
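
For illustration, one turn of this cycle could look something like the sketch below (TestNG; the Greeter class is invented just for the example).

    import static org.testng.Assert.assertEquals;

    import org.testng.annotations.Test;

    // Step 1 (red):   write the test below, add a Greeter with a greet()
    //                 stub that returns null, run the test, watch it fail.
    // Step 2 (green): implement greet() in the simplest way that makes
    //                 the test pass.
    // Step 3:         refactor the test and/or the code, keeping the test green.
    public class GreeterTest {

        static class Greeter {
            String greet(String name) {
                return "Hello, " + name + "!";
            }
        }

        @Test
        public void shouldGreetUserByName() {
            Greeter greeter = new Greeter();
            assertEquals(greeter.greet("Alice"), "Hello, Alice!");
        }
    }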

For mid-sized and large projects, test automation is crucial and a long-term investment. When you have thousands of classes and APIs that are modified day by day, automated tests give you faster feedback on potential regressions/bugs. This helps you save time and keep your energy for implementing requirements instead of exhausting debugging and manual testing tasks.

"Automation is the key to successful testing"

"Automation is the key to successful testing" - I would tend to disagree with this. Automation is a great tool which can be used for regression and doing it quickly when it's done correctly. Bad Automation just wastes everyone's time and you're probably better off doing it manually. To me, the key to successful testing is having a tester take the time and effort to thrash about in the software, not tools and processes.

"you don’t have a safety net of automated tests you are in trouble" I'd be a bit cautious of using your Automated tests as “safety net”. Just because your Automation didn’t catch any bugs, doesn’t mean there aren’t any. True, that it would ensure a lot of the stuff still works after functionality has been changed but I wouldn't blindingly trust what the results say. Automation does make it harder for bugs to creep in, but only in the functionality that’s been automated. What about the stuff that hasn’t?

Regards,

Adam
http://testing.gobanana.co.uk
@brownie490

surely automation can be done wrong too

Hello Adam,

you are right that automation can also be done wrong. That is obvious. In my post, I spoke about "automation done right", which in my opinion is not so hard to achieve (hmm... that is a bold statement) and really makes a difference.

I don't like that you try to put me on the "tools and processes" side. I never said anything against having a tester do the final checks. Yet I believe that having a safety net of tests (at all levels) is crucial for the team to develop at a good pace. Later on, a tester should do his job, and surely some bugs will be found. But not many of them, because all the major paths and all the major requirements were covered by tests. Automated tests (the safety net) give you rapid feedback, which is something worth dying for. :)

And regarding your last comment: "Automation does make it harder for bugs to creep in, but only in the functionality that’s been automated. What about the stuff that hasn’t?" Well, if you take the TDD approach seriously (which is sometimes VERY HARD, I admit), then the amount of untested stuff is near zero. The real problem is finding the right balance between automatically tested and untested parts when the software is not easily tested automatically.

--
Cheers,
Tomek

This used to be my blog. I moved to http://tomek.kaczanowscy.pl a long time ago.
