Monday, December 26, 2011

Being an excellent tester


Today, I'd like to talk about how important it is to be an excellent tester.

I believe software testing is the most stereotyped, misunderstood, and misjudged discipline in the software industry. I've talked to (or argued with) many different people (PMs, Devs, Testers, Managers, and Executives in the software industry) about software testing, and I've found that people hold many different opinions about it. Of course, people have their own perspectives on software, but I think there is more to it than people's perspectives.

We need to help them understand that software testing is not simple. Good domain/product knowledge does not guarantee better testing results; it is generally a good thing, but it can easily put testers in a box of assumptions, and fresh eyes and different perspectives can lead to better testing results on some projects. Test automation does not guarantee better testing either, and the ROI of test automation is sometimes very ambiguous. Furthermore, testers are often confused about software development coding patterns and practices when they write test code. Some coding and design practices overlap between test automation and production code, but there are quite a few differences (different purposes, the fact that automation code is never used by our users, etc.). There are dozens of different yet equally valuable testing strategies; not only can you not apply all of them in every project, but choosing the right testing strategy is hard. Metrics around software testing are very complex as well: the main factors in a project (resources, timeline, budget, business goals, features, etc.) keep changing, and testing metrics seem to go up and down randomly. Testing process does not guarantee better testing results either. Poor testing most likely remains poor whether it is done in an Agile process or a waterfall process, and testers need to clearly understand the value that different processes provide.

So basically what it boils down to is YOU. Testing results are determined by your testing ability, your thinking process, your strategy, your execution, your dedication and your learning. 

Let's take it a step further. Do you think a tester can accelerate the software development process? (I stole a story from one of the GTAC speakers here.) Do you remember when we had 5.25 inch floppy disks? And 3.5 inch disks? I don't think there was much technical evolution in moving from the 5.25 inch to the 3.5 inch disk, but a lot of protection was added by the thick plastic cover. I think excellent testers can help a project team deliver the 3.5 inch disk product when it would otherwise have come up with the 5.25 inch disk product. Software testing should not be passive or submissive work. Can you challenge your architect? Can you challenge the development team? I also believe software testing should be proactive: you can save 3-4 hotfixes. Software testing is fun and rewarding if you believe it is the most cognitive and intellectual activity you've ever done in your life.

Happy Holidays

Saturday, December 3, 2011

Test automation design (Chain of verifiers)

Today, I'd like to write something related to test automation design. I call it the "Chain of Verifiers". It would be nice if I could get some feedback on this design. I've gotten good feedback on it from one test architect, but that does not mean it is good. Let me know if you have any comments.

The motivation for this pattern is to come up with a simple, robust, yet flexible and scalable test automation design, especially in an SOA test automation context. The basic idea of the Chain of Verifiers pattern is to separate each test case execution into three logical pieces: "Test Operation", "Gathering Verification Source", and "Test Verification".

Test Operation
Test Operation covers the activity or activities required for each test case (i.e. creating a request, sending the request, getting the response).
All the logic to construct the request, send it, parse the response, etc. belongs to this part.
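The Test Operation step above might look like the following minimal sketch. The function names and the stubbed service call are hypothetical illustrations, not part of the pattern itself; in real automation, send_request would go over the wire.

```python
# A minimal sketch of the Test Operation step: build the request, send it,
# and parse the response. The service call is stubbed out for illustration.

def build_request(user_id):
    """Construct the request payload for the operation under test."""
    return {"action": "get_user", "user_id": user_id}

def send_request(request):
    """Send the request to the service under test (stubbed here)."""
    return {"status": "ok", "user_id": request["user_id"], "name": "alice"}

def parse_response(raw_response):
    """Parse the raw response down to the fields verification will need."""
    return {"status": raw_response["status"], "name": raw_response["name"]}

def run_test_operation(user_id):
    """One test operation: construct, send, and parse."""
    request = build_request(user_id)
    response = send_request(request)
    return request, parse_response(response)

request, parsed = run_test_operation(42)
```

Keeping all request construction and response parsing in this one place means the later verification steps never touch wire-level details.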

Gathering Verification Source
The gathering verification source part is responsible for gathering all the data or information that will be used in verification.
This part should also take care of parsing data: if logs are generated during the operation, they can be parsed into objects or data here. (It is better to do it here so that verifiers do not have to parse the logs or convert them to the class types they need.)
Expected results are not strictly required in this part, though they can be included when some input used to create the request is itself the expected value. The main focus, however, is gathering the data or info that will be used in verification.
Create one container (i.e. VerifySource) that holds all the data gathered from the operation (request/response pairs, inputs used, expected results in some cases, parsed logs).
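A VerifySource container like the one described could be sketched as below. The field names are illustrative assumptions; the point is one object holding everything verifiers may need, with logs parsed once up front.

```python
# A sketch of the VerifySource container: one object carrying the
# request/response pair, the inputs used, optional expected results,
# and logs already parsed so verifiers never re-parse them.
from dataclasses import dataclass, field

@dataclass
class VerifySource:
    request: dict
    response: dict
    inputs: dict = field(default_factory=dict)
    expected: dict = field(default_factory=dict)
    parsed_logs: list = field(default_factory=list)

# Gathering step: split each raw log line into (level, message) once,
# so every downstream verifier gets ready-to-use data.
raw_logs = ["INFO request received", "INFO response sent"]
source = VerifySource(
    request={"user_id": 42},
    response={"status": "ok"},
    inputs={"user_id": 42},
    parsed_logs=[line.split(" ", 1) for line in raw_logs],
)
```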

Executing Test Verification
Executing Test Verification is the part that holds the logic for which verifications need to run (based on each test case's purpose) and actually verifies the results.
Selecting verifiers and executing verifiers are decoupled. (I will explain more details later on.)
VerifierFactory: responsible for gathering verifiers and creating a chain for a certain aspect of verification. (It is refined based on each test case's needs.)
Verifiers: each is responsible for verifying one logical part that a test case needs.

Overview


More Details of Executing Test Verification
  • VerifierFactory holds the logic for gathering the set of verifiers that each test case intends to run, which means the VerifierFactory logic needs to be iterated on throughout the test automation development process.
  • VerifierFactory can return two kinds of verification process (similar to the Decorator pattern):
    • Sequential verifier chain: a verifier failure fails the test immediately (i.e. cases where the following verifiers are meaningless to perform, such as when the response contains an error that was not expected)
    • Distinctive verifier set: execute all the verifiers in the chain and return the "AND" of each verifier's result. This is used when one verifier's failure does not necessarily cause the following verifiers to fail.
  • Sequential and distinctive sets of verifiers can both be used in the same test code.
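The two verification modes and the factory could be sketched as follows. All class and method names here are illustrative assumptions, not a fixed API; the verification source is a plain dict to keep the sketch self-contained.

```python
# A sketch of the Chain of Verifiers execution: each Verifier checks one
# logical aspect, the VerifierFactory selects verifiers per test case, and
# two run modes implement the sequential chain vs. the distinctive set.

class Verifier:
    """Verifies one logical part of a test case; returns True on success."""
    def verify(self, source):
        raise NotImplementedError

class StatusVerifier(Verifier):
    def verify(self, source):
        return source.get("status") == "ok"

class NameVerifier(Verifier):
    def verify(self, source):
        return source.get("name") == "alice"

def run_sequential(verifiers, source):
    """Sequential chain: stop at the first failure, since later
    verifiers would be meaningless to perform."""
    for v in verifiers:
        if not v.verify(source):
            return False
    return True

def run_distinctive(verifiers, source):
    """Distinctive set: run every verifier and AND the results."""
    return all(v.verify(source) for v in verifiers)

class VerifierFactory:
    """Selects which verifiers a given test case needs."""
    def for_test_case(self, name):
        if name == "happy_path":
            return [StatusVerifier(), NameVerifier()]
        return [StatusVerifier()]

factory = VerifierFactory()
source = {"status": "ok", "name": "alice"}
result = run_sequential(factory.for_test_case("happy_path"), source)
```

Because verifier selection (the factory) and verifier execution (the run functions) are separate, the same verifiers can be reused across test cases in either mode.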

Benefits of Chain of verifiers pattern
  • It follows the Open-Closed principle (open for extension, closed for modification).
  • Each verifier can be implemented and run independently.
  • Independent implementation effort can easily be distributed across different testers, which means that even a new person joining the team can write verifiers for a given part without fully understanding the entire automation or project.
  • Each verifier can be used for hybrid test execution (manual execution + automated verification).
  • It's easy to add, remove, and change verifiers in the factory as things change.
  • If a verification detail changes, you only change that Verifier (fewer side effects from the code change).
  • If the verification selection logic changes, you only change the VerifierFactory.
  • It's easy to test one verifier (e.g. a newly added one) by itself by changing the VerifierFactory.
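The last benefit could look like the following sketch: temporarily narrowing the factory to a single verifier to exercise a newly added one in isolation. The `only` parameter and the LatencyVerifier are hypothetical illustrations.

```python
# A sketch of testing one newly added verifier by itself: the factory is
# temporarily narrowed to return only that verifier.

class LatencyVerifier:
    """Newly added verifier: check the recorded latency is within budget."""
    def verify(self, source):
        return source["latency_ms"] <= 200

class VerifierFactory:
    def __init__(self, only=None):
        # 'only' narrows the factory to a single verifier class, making it
        # easy to exercise a new verifier without the rest of the chain.
        self.only = only

    def for_test_case(self, name):
        if self.only is not None:
            return [self.only()]
        return []  # ...full per-test-case verifier selection would live here

factory = VerifierFactory(only=LatencyVerifier)
verifiers = factory.for_test_case("any_case")
result = all(v.verify({"latency_ms": 150}) for v in verifiers)
```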

Friday, December 2, 2011

Software Testing knowledge management

Today, I'd like to discuss software testing from an interesting perspective.

That is Knowledge Management. I should say this idea came from one of James Bach's lectures. He did not explain it in detail, but I did some research of my own and came up with this. One paper that was interesting was this. (Tacit knowledge vs. explicit knowledge)

I'm not going to explain the details of tacit knowledge and explicit knowledge here, but it's a pretty short paper, so you can read it all within 10-15 minutes.

Here are some common arguments that I hear from other testers.
"You should document all your work (test plan, detailed test case steps, etc.) because if you get hit by a car one day, we lose all the work you've done!"
"You should document all your work so that the next person who takes over can easily start his/her work."

If you read the paper I linked, you can see that the comments above are related to explicit knowledge management, though I'm not sure they are the best examples of it.

The paper clearly lays out the pros and cons of each kind of knowledge management and concludes that we need to find the best possible hybrid (tacit + explicit) knowledge management. I agree with that.

So, let's break down some software testing work and analyze it from the perspective of tacit vs. explicit knowledge. Let's not consider process (waterfall, V-model, W-model, or agile) for this.

  • Understand project requirements
  • Review and test spec (generated by PM)
  • Review and test dev design (generated by Dev)
  • Come up with test strategy
  • Create test plan
  • Generate test cases
  • Manual test execution
  • Design test automation
  • Test automation implementation and execution
  • Report bugs
  • Debugging, investigation, and troubleshooting
  • Time management (deadline)
  • Learn and use supporting tools (testing related)

Think about the list for a moment. Which ones would you put in tacit knowledge management, and which ones in explicit knowledge management?

Or which knowledge management bucket contains more? I would say tacit knowledge management.

I would say that for software testing, tacit knowledge management is far more influential than explicit knowledge management, because every project is different and the individual tester's ability is the key factor in the success of the testing work. I'm sure there are cases where explicit knowledge can help, but I think it is far less influential compared to tacit knowledge.

Going back to the arguments about "hit by a car" and "taking over the work": how much does my documentation help the next person's testing work? The next person's testing is mostly defined by his/her own ability and view of testing.

It's kind of hard to conclude anything at this moment. I need to chew on this thought a little further. Let me think about it and I'll come back to it.