The best of: September #neverstoptesting

Hi Folks!

Let me cheer you up with a bunch of my daily crashes – more of them, as always, at @kingatest under the #neverstoptesting hashtag.

1. 500 and 400 errors

Why, oh why, is it always the same? Would you be so kind as to read the UI heuristics and apply them in your work?

You don’t have to be overly creative in such activities – but at least do not show the user a raw error page stating that there is a 500 at your end when there is a 500 at your end. Pretty pleeeese!
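The heuristic can be sketched in a few lines of Python. This is a hypothetical helper, not taken from any of the sites above – it maps raw status codes to honest, user-friendly messages instead of leaking server internals:

```python
# Illustrative sketch only: translate raw HTTP status codes into
# friendly messages. The wording and names are invented for this example.

FRIENDLY_MESSAGES = {
    400: "Something in the request looks wrong - please check the form and try again.",
    404: "We can't find that page - it may have been moved.",
    500: "Something went wrong on our side - we're on it. Please try again later.",
}

def render_error(status_code: int) -> str:
    """Return a user-facing message; never echo a raw stack trace or '500'."""
    return FRIENDLY_MESSAGES.get(
        status_code,
        "Something unexpected happened - please try again.",
    )

print(render_error(500))
```

The point is not the exact wording – it is that the user never sees the server’s internals, and the message admits whose end the problem is on.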

Gmina Miękinia

2. UI errors

Found in some online shops and, as usual, on Polish government websites 😀 Adorable, aren’t they? CSS is awesome.


4. Winter is coming

ZIMA means WINTER in Polish.

I think you liked this one the most in September. The famous quote speaks to all of us, doesn’t it?

The “list” comes from a website displayed on a mobile device. The funny part – the empty dots carry over to the subpages as well!

Sudety info

5. Disappearing “Add to bucket”

Heuristics again!

What is the problem here? The chatbot popup overlaps the “Add to cart” button. There is an × icon that should close the popup, but unfortunately, it does nothing.

At this point, it is not possible to buy anything – not the best outcome for the retailer, right?


6. Disappearing images

IKEA deserves a separate chapter this month, or maybe even a post of its own. The number of issues on the PL website of this retailer left me fuming.

For now – disappearing images.

IKEA Polska

Did you spot something interesting last month? Share it on Twitter using #neverstoptesting and @kingatest
Have a lovely October!

Testing Essentials: What to test when you run out of time?

This question came from a job interview.

The answer can vary greatly depending on the application and project.

From the seven testing principles, we know that exhaustive testing is impossible (skip the memes) unless the system or functionality is very simple. In practice, rather than attempting to test everything, you should target your testing effort using risk analysis, testing techniques, and prioritization.
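One common way to make that targeting concrete is a simple risk score: likelihood of failure times impact on the user. A rough sketch, with invented feature names and 1–5 scores:

```python
# Risk-based prioritization sketch. Feature names and scores are
# made up for illustration: (likelihood of failure, impact on user), each 1-5.

features = {
    "checkout payment": (4, 5),
    "login": (2, 5),
    "newsletter signup": (3, 1),
    "product search": (3, 4),
}

# Test the highest-risk areas first: risk = likelihood * impact.
by_risk = sorted(
    features,
    key=lambda f: features[f][0] * features[f][1],
    reverse=True,
)
print(by_risk)  # ['checkout payment', 'product search', 'login', 'newsletter signup']
```

However you assign the numbers, the ordering – not the absolute values – is what tells you where the limited hours should go.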

A few days ago Michael Bolton wrote on Linkedin:

As testers, it’s our job to ask “What could possibly go wrong?” and then to perform experiments to show that it can happen. It’s not really our job to show that “it works on my machine”; any programmer worth her salt has already seen the product working.

Michael Bolton

And this is what we do – we think about quality, we try to break as much as we can before the software leaves our comfort zone – the dev environment. We test, try, create new test cases, and explore risk areas.

But what if, on top of a complex system, the tester is also severely limited in testing time? What if the tested functionality is critical or extremely important from the perspective of the entire system?

1st – You don’t have the Time Turner

Not many of us have a Time Stone like Dr. Strange. Perhaps only he had one. So if you hear the sentence “this functionality is critical – you have 3 hours for testing”, it usually means exactly that: you have 3 hours for testing, and this time cannot be stretched.
Count how many people are in your team. Divide the tasks. Plan your assignments and get them done!
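The divide-the-tasks step can be sketched as simple arithmetic: split the fixed budget across test areas in proportion to their priority. The 3-hour budget comes from the example above; the task names and weights are invented:

```python
# Hypothetical time-boxing sketch: split a fixed 3-hour budget across
# test areas proportionally to their priority weight. Weights are invented.

BUDGET_MIN = 3 * 60  # the "3 hours for testing" from the example

tasks = {"critical path": 5, "main edge cases": 3, "cosmetics": 1}
total_weight = sum(tasks.values())

# Integer minutes per task, proportional to weight.
allocation = {name: BUDGET_MIN * w // total_weight for name, w in tasks.items()}
print(allocation)  # {'critical path': 100, 'main edge cases': 60, 'cosmetics': 20}
```

With more than one tester, the same allocation divides across people instead of hours – the point is that the budget is decided up front, not discovered at the deadline.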

Only you, the tester, know your system, and only you can assess what is most critical in a given situation.

2nd – FOCUS

Many factors distract us in our daily work – e-mails, matters not related to the project, private problems, smartphones. If you are in a time-limited and high-risk situation – your attention must be focused.
A single minute can feel like forever if you concentrate on the NOW.

3rd – first things first

Prioritize the areas of the software that have the biggest impact on your users. What will you be testing, and why exactly is THIS important for your application?

What happens if you don’t test it?

What will be the impact of errors in this area on the user?

4th Design a set of tests to check the functionality

Of course, I don’t mean three hours of planning – rather, a division of tasks over the time you have.

5th Log results

Record what you are doing – just as in exploratory testing. Note down questions, doubts, errors and their criticality, and possible discrepancies with the documentation. Record defects in the tool your team uses (Jira, Bugzilla, HP ALM, etc.).
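A minimal note structure for that kind of session logging might look like the sketch below. The fields mirror the points above; the class and field names are illustrative only – in practice the defects would end up in Jira, Bugzilla, or whatever your team uses:

```python
# Illustrative session-note structure for logging while testing under
# time pressure. All names are invented for this example.

from dataclasses import dataclass

@dataclass
class SessionNote:
    area: str          # which part of the system
    observation: str   # what was seen
    kind: str          # "question", "doubt", "defect", or "doc-mismatch"
    severity: str = "n/a"  # only meaningful when kind == "defect"

notes = [
    SessionNote("checkout", "Raw 500 page shown on payment timeout",
                "defect", "critical"),
    SessionNote("checkout", "Spec is silent on retry behaviour", "question"),
]

# At the end of the session, the defects are what get filed formally.
defects = [n for n in notes if n.kind == "defect"]
print(len(defects))  # 1
```

Even a flat list like this is enough to hand over questions and doc mismatches alongside the formally filed bugs.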

6th Prioritize defects based on their severity

In a situation where time for testing and fixing defects is short, it is important to determine which of the errors found are critical to the operation of the application and which can be resolved later.

It is worth starting with the so-called low-hanging fruit – important bugs that are relatively easy to fix – and then moving on to the most important but more complicated ones to repair and retest.
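That ordering can be expressed as a sort key: highest severity first, and among equally severe bugs, the cheapest fix first. The severity and effort values below are invented for illustration (higher severity = worse, higher effort = harder to fix):

```python
# Sketch of the fix ordering described above. IDs, severities, and
# effort estimates are made up for this example.

defects = [
    {"id": "D-1", "severity": 3, "effort": 1},  # important, easy: low-hanging fruit
    {"id": "D-2", "severity": 3, "effort": 4},  # important, hard
    {"id": "D-3", "severity": 1, "effort": 1},  # minor
]

# Highest severity first; within equal severity, lowest effort first.
fix_order = sorted(defects, key=lambda d: (-d["severity"], d["effort"]))
print([d["id"] for d in fix_order])  # ['D-1', 'D-2', 'D-3']
```

The low-hanging fruit (D-1) lands at the front, while the equally severe but costly D-2 waits its turn.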

7th Have the highest severity defect fixed


In one of the projects I participated in, a defect about an incorrect shade of green on the “Send” button, reported by the test team as an “Enhancement”, was considered “Critical” by the client, because the success of the entire sales process the user went through on the website depended on it.

So, verify the real priority of the bugs you find, and then decide which should be fixed first.

And the most important – don’t forget about RETESTING.

8th Collect lower-impact defects to resolve them later

Many times I have encountered this situation: when the manager / PO / PM says “test this critical functionality in no more than X hours” – and it is Friday afternoon or the end of an important development phase – it usually comes with “and do not report any defects”, or, in the alternative version, “and we will fix the reported bugs later”.

Even the ISTQB syllabus reminds us that the belief that finding and fixing a large number of defects will ensure a successful system is incorrect. For example, very thorough testing of all specified requirements and fixing every defect found may still leave us with a system that is difficult to use, does not meet users’ requirements and expectations, or has worse parameters than competing solutions.

You have to accept the fact (I know it’s hard, breathe) that not all defects will be fixed.

Some of them will not even be found! It’s important not to give up and NOT to blame each other for failures.

My final thoughts

I understand terms such as “contractual penalty”, “good of the project”, and “bonus for the manager for delivering the project on time”, but I am a software tester, usually quite stubborn in my views on product quality. That’s why I often suffer from selective deafness: I report defects, I insist on fixing them, and I negotiate for time that is not there.

I encourage you to care about the quality of the software that passes through your hands. It may not be you who is embarrassed when errors appear in production.

But the end user may well suffer from a crashing application.

Do you have experience with testing under heavy pressure?