I may be repeating myself, but today we'll think a little about testing outside the box and how it relates to a company's reputation on the market.
Most of you, my Dear Readers, work on big software projects. Not all of you work in agile ones. As I found out recently, there are still tons of projects that, in the field of testing, follow very structured and organized test plans, test cases, and so on. That is all great, because without your work major bugs would never be discovered.
Today I would like to highlight edge-case testing – all those “what if…” situations that may not be spotted in regular test scenarios. You can always take a step back to see the purpose of your job. I mean it. When you are rushing through project work, there is always too little time to look around. It is possible that you are sometimes so focused on your in-company, in-project testing goals that you miss some important factors that make software GOOD.
My purpose today is to convince you that it is sometimes good to take a step back and look at the product, to test outside your comfort zone or, possibly, to ask somebody else to pair with you in testing. Such activities can improve the overall product quality and make live users – like me – happy and calm.
When I was at the beginning of my career as a software tester, I thought (and I believe I was also told so) that if I tried hard enough, my product would be bug-proof and bug-free. The more projects I took part in, the more I realized that this is not necessarily true.
At least some of the ISTQB statements are right – software will never be bug-free, and you can never stop testing. There will always be something to test, something to improve. On the other hand, some of us testers often fall into the trap of checking: no matter whether we write automated test scripts or just repeat a series of activities on a well-known product, we unintentionally suffer from the pesticide paradox in our projects – the same tests, run over and over again, eventually stop finding new bugs.
I have been in such a situation several times by now: after a period of really hard project work, I thought I had done everything I could to improve the quality. Imagine how surprised I was when the bugs came back from live customers.
Not thinking outside the box is one of the worst testers’ sins. Let me give you a recent example. Some of you may have heard that Apple has been experiencing issues with its product updates. There were situations reported online about devices catching fire after an update (wow).
It sounds spectacular, yet not as breathtaking as this reply from Apple Support (wow):
I believe the end user could have been upset, or even frightened.
Does it mean that Apple doesn’t test the update process for its own software? I don’t believe that. But, on the other hand, even such big companies seem to omit some edge cases from their testing process in the great rush to introduce innovation.
There is more from Apple Support this week. One of my favorite Polish travel vloggers – BezPlanu – posted a message on Facebook:
This Sunday I was supposed to present the last chapter from Venezuela. Unfortunately, using Chile’s fast internet, I decided to install the newest macOS update – Mojave – on my MacBook. My computer had been requesting it for two weeks. I ended up losing everything – including a half-finished new film (plus part of my Lima recordings and part of my Santiago recordings) – none of which had been archived on an external disc. After 4 days of fighting and dozens of phone calls with Apple consultants and other specialists, I was told that “it happens when updating” and “you have to back up everything”, and that it is apparently normal that a computer worth 22 000 PLN cannot handle an OS update…
A piece of advice from Apple Care was to back up your data before a software update. Sure.
And again, this doesn’t mean that they never tested it – maybe they did – but it affects the reputation of the whole product. Product reputation is not only a matter of marketing; it reflects the work of all the employees – developers, testers, and everyone else involved in creating the software.
It also reminded me of a similar situation I had with my Samsung a few Android updates ago. My SIM card died after the update. The ‘funny’ fact was that no one had reported such a situation online before. I have simply got used to such situations in my tester’s life. Lesson learned – always do a backup, in case somebody omitted update tests for your device or SIM card model.
Why am I pointing this out? Because I believe that, as software testers and quality evangelists, we are responsible for thinking outside the box and trying to put ourselves in the user’s shoes. What would the user do with our software? What is the craziest thing that comes to mind? What if I…? And so on. This may allow us to avoid some expensive mistakes. I think we should always consider even the least likely “what ifs” – especially when we work for a well-known company, because the equation is simple: the better known the company, the more expensive our mistakes. They cost money and – most harmfully – reputation.
A tiny final example, from last week.
I’ve started attending German classes to improve my language skills. One of the tasks on the course was to change the language on my mobile phone to German. The fun began. Not only did some of my applications start to throw errors – some even got lost in the UI layer.
It seems that Instagram does not deal properly with German labels, which can be long and awkward for a UI, especially a mobile UI. Fun! But embarrassing. But fun 🙂
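Overflow bugs like this can often be caught before anyone switches a real device to German, using a technique called pseudolocalization: artificially inflating every UI label the way a long translation would, then flagging anything that no longer fits. Here is a minimal sketch in Python – the label list, the 1.4× expansion factor, and the 20-character width limit are all hypothetical assumptions for illustration, not values from Instagram or any real app:

```python
# Pseudolocalization sketch: pad UI labels the way long German
# translations would, then flag anything that would overflow a
# (hypothetical) 20-character mobile label slot.

MAX_LABEL_WIDTH = 20  # assumed UI constraint, purely illustrative


def pseudolocalize(label: str, expansion: float = 1.4) -> str:
    """Simulate translation growth (German text is often ~30-40% longer)
    by padding the label and bracketing it so truncation is visible."""
    padding = "~" * max(0, int(len(label) * expansion) - len(label))
    return f"[{label}{padding}]"


def find_overflowing_labels(labels):
    """Return (original, pseudolocalized) pairs that exceed the width limit."""
    overflows = []
    for label in labels:
        pseudo = pseudolocalize(label)
        if len(pseudo) > MAX_LABEL_WIDTH:
            overflows.append((label, pseudo))
    return overflows


if __name__ == "__main__":
    ui_labels = ["Like", "Share", "Notification settings"]
    for original, pseudo in find_overflowing_labels(ui_labels):
        print(f"may overflow: {original!r} -> {pseudo!r}")
```

A check like this in CI will not prove the German build renders correctly, but it cheaply surfaces the labels worth looking at on a real device before users do it for you.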
To sum up, I would like to encourage you: in your own projects, go to work tomorrow and test something unexpected – a backup, an installation, a simulated network change. Do something potentially crazy, something outside your range of responsibilities. Pair with a non-technical friend or let somebody else test your software (if possible). Such actions may, from time to time, save your company’s reputation and some money.
Good luck bug hunters!
In case of any comments – stalk me on Twitter.