Reflections on Testing

Some Insights gained as a Test Engineer

Note: This post was also included in Software Test Weekly. Many other amazing testing-related articles are listed there!

  • Testing is a matter of imagination. Testing requires critical thinking and suspicion. Testing cannot be automated, just as programming cannot be automated.
  • Checks can be automated. Automated Testing should be called Automated Checking.
  • Testing does not improve quality, nor does it guarantee quality. Testers collect evidence and draw inferences to inform confidence in the product.
  • Mindsets fundamentally guide people’s decision-making process and the resulting stream of actions.
  • Some very experienced engineers still insisted on writing solid unit tests for every implementation method, even after reading Kent C. Dodds’s Testing Trophy.
  • Some developers strongly believe that all tests should be automated, even when features and product directions are likely to change
  • Some experienced non-technical testers without insight into APIs and architectures only gain sufficient confidence after manually executing the whole test suite on every single browser/device before each release. It takes more than programming skills and testing skills to convince them otherwise.
  • Testers are not frictional gateways that slow down release cadence. When done right, they light the way toward speedy, reliable releases.
  • Exploratory testing does not mean going freestyle without keeping any structured records. Session-based test management, like everything, takes lots of practice to do properly. Scripted testing can be a helpful tool for guiding tests.
  • Everything is human-centric: agile, DevOps, development, testing. We use tools and processes; we are not bound by them.
  • Tacit knowledge vs. explicit knowledge: just as reading a design patterns book does not make a junior dev a master of abstracting code, having a super detailed test suite does not make a person good at testing.
  • My prior testing experience does not make me more effective at finding my own bugs.
  • Testers’ mindsets and developers’ mindsets are very different. It takes effort and time to switch mindsets, or to wear different thinking hats.
  • Testing: defocused. “Under what other conditions could it break?”
  • Development: focused. “How do I build this?”
  • The critical distance helps people in each role adopt the appropriate mindset when doing their jobs: a “Ship it!” mindset can make people blind to bugs.

Reflecting on my work and interview experiences, where has my testing background proved useful?

  • Got hired at Lightbox thanks to my testing experience and GitHub projects. After 2 months of manual regression testing with the team, I was able to assess the risks and effort and propose a testing strategy that reduced the effort by 50%.
  • In an interview with a startup, I was asked “Do you think we should automate all of the tests? How many of the tests should be automated?”
  • In another interview, I was given the scenario: our product sells well and we are looking to expand the team and add more features. What would you do?
  • In one other interview, I was asked to share a topic that I know well, with examples from my past experience. Naturally, I picked Testing.

Some Heuristics on Setting Testing Strategies

  • It is a trade-off between risk and effort. Confidence shows diminishing returns as testing effort keeps increasing. Find the areas where effort is duplicated for little gain in confidence, e.g. ask questions such as: is cross-browser testing necessary for this particular suite of test cases?
  • Always get buy-in from key stakeholders to reach consensus on the risk vs. confidence trade-off of the proposed test strategy.
  • For the most bang for the buck, start with automated e2e tests on the happy paths if the system lacks any form of automated tests. They catch regressions in the most fundamental parts of the system.
  • e2e tests are usually slow and flaky. It is best to write each test as one full workflow with multiple state changes, mimicking real-life user behaviour, instead of as isolated individual test cases with independent states.
  • Write specifications by example for e2e tests. They can later evolve into living documentation for the application.
  • If it is too difficult to set up a full e2e environment in which the front end talks to the back end in the nightly build, mock the services, and justify the effort vs. risk. Tests should run on the nightly build automatically; some tests running are better than none.
  • Writing automated tests should be part of the Definition of Done (DoD). An independent tester can usually assess the quality and thoroughness of the tests better than devs who have already spent a good chunk of their time reviewing the code changes.
  • Be wary of the statement “We should automate all the tests.” When? At which level? If functionality is likely to change in the near future, are you still going to double the effort and automate the tests? Always make decisions based on the risks vs. effort of the particular context.
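
The “one full workflow” heuristic can be sketched as follows. This is a minimal illustration, not a real e2e test: `CartApp` and its methods are hypothetical stand-ins for an application driven through a tool such as Playwright or Selenium.

```python
class CartApp:
    """In-memory stub standing in for the system under test."""
    def __init__(self):
        self.items = []
        self.logged_in = False

    def log_in(self, user):
        self.logged_in = True

    def add_to_cart(self, item):
        assert self.logged_in, "must log in first"
        self.items.append(item)

    def checkout(self):
        assert self.items, "cart must not be empty"
        return {"status": "confirmed", "items": list(self.items)}


def test_happy_path_workflow():
    """One test walks the whole user journey, carrying state between
    steps, instead of three isolated tests that each rebuild state."""
    app = CartApp()
    app.log_in("alice")        # step 1: authentication
    app.add_to_cart("book")    # step 2: state change in the cart
    receipt = app.checkout()   # step 3: depends on steps 1 and 2
    assert receipt["status"] == "confirmed"
    assert receipt["items"] == ["book"]


test_happy_path_workflow()
```

The trade-off: a failure mid-workflow hides later steps, but one journey-shaped test exercises the realistic sequence of state changes that isolated cases miss.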
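
“Specification by example” can be as simple as a table of concrete input/output pairs that non-developers can read and review. The discount rule below is invented purely for illustration.

```python
# Each row is one concrete example of the business rule,
# readable by product owners as well as developers.
EXAMPLES = [
    # (cart_total, expected_discount)
    (50, 0),
    (100, 10),
    (200, 20),
]


def discount(cart_total):
    """Hypothetical rule: 10% off orders of 100 or more."""
    return cart_total // 10 if cart_total >= 100 else 0


for total, expected in EXAMPLES:
    assert discount(total) == expected
```

As the rule evolves, new rows are added first (the specification), then the code is changed until the table passes again — which is how the examples become living documentation.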
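
The “mock the services” fallback might look like the sketch below: instead of a real back end, the HTTP layer is replaced with a stub. `fetch_profile` and its URL are illustrative, not a real API; `unittest.mock.Mock` comes from the standard library.

```python
from unittest.mock import Mock


def fetch_profile(http_get, user_id):
    """Thin client function under test; the HTTP getter is injected
    so the nightly build can swap in a mock for the real back end."""
    resp = http_get(f"https://api.example.test/users/{user_id}")
    resp.raise_for_status()
    return resp.json()


def test_fetch_profile_with_mocked_backend():
    # Fake response object standing in for the back end's reply.
    fake_resp = Mock()
    fake_resp.json.return_value = {"id": 42, "name": "Alice"}
    http_get = Mock(return_value=fake_resp)

    profile = fetch_profile(http_get, 42)

    assert profile["name"] == "Alice"
    http_get.assert_called_once_with("https://api.example.test/users/42")


test_fetch_profile_with_mocked_backend()
```

This catches regressions in the front-end logic cheaply; the remaining risk (contract drift between mock and real back end) is exactly the effort-vs-risk call the bullet above asks you to justify.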

Some Questions to Ponder

  • The guidelines for setting testing strategies are pretty straightforward. I think that if people put in some time and really thought about what they want to get out of testing, they would come up with something similar. Why do I still have an edge in this case? What makes my understanding of testing stand out? Perhaps people are too busy with their daily activities to think thoroughly about testing?
  • Likewise, how can I always take a step back and make assessments (of designs, tools, etc.) from the fundamentals?
  • How can I further solidify my edge in testing?
  • Are there other skills, like testing, that I can develop to add value?
