Monday, December 26, 2011

Being an excellent tester


Today, I'd like to talk about how important it is to be an excellent tester.

I believe software testing is the most stereotyped, misunderstood, and misjudged discipline in the software industry. I've talked to (or argued with) many different people (PMs, devs, testers, managers, and executives) about software testing, and I found that people hold many different opinions about it. Of course, people have their own perspectives on software, but I think there is more to it than perspective.

We need to help them understand that software testing is not simple. Good domain or product knowledge does not guarantee better testing results; it is generally a good thing, but it can easily put testers in a box of assumptions, and fresh eyes with different perspectives can lead to better results on some projects. Test automation does not guarantee better testing either: the ROI of test automation is sometimes very ambiguous, and testers often get confused about software development coding patterns and practices when they write test code. There are overlapping coding and design practices between test automation and production code, but there are also plenty of differences (different purposes, the fact that automation code is never used by our users, and so on). There are dozens of different yet equally valuable testing strategies; not only can you not apply all of them in every project, but choosing the right one is hard. Metrics around software testing are very complex as well: the main factors in a project (resources, timeline, budget, business goals, features, and so on) keep changing, so testing metrics seem to go up and down randomly. Testing process does not guarantee better testing results either. Poor testing most likely remains poor whether it is done in an Agile process or a waterfall process, and testers need to clearly understand the value each process provides.

So basically what it boils down to is YOU. Testing results are determined by your testing ability, your thinking process, your strategy, your execution, your dedication and your learning. 

Let’s step a little further. Do you think a tester can accelerate software development process?  (I stole a story from one of GTAC speakers here.) Do you remember the time we had 5.25 inch floppy disk? And 3.5 inch disk? I don’t think there were much of technical evolution moving from 5.25 inch to 3.5 inch disk. But there were much more on protection added by having thick plastic covers. I think excellent testers can help project team deliver 3.5 inch disk product when they come up with 5.25 inch disk product. Software testing should not be passive or submissive work. Can you challenge your architect? Can you challenge development team? I also believe software testing should be proactive. You can save 3-4 hot fixes. Software testing is fun and rewarding if you believe software testing is the most cognitive and intellectual activities you’ve ever done in your life.

Happy Holidays

Saturday, December 3, 2011

Test automation design (Chain of verifiers)

Today, I'd like to write about a test automation design I call the "Chain of Verifiers." It would be nice to get some feedback on this design. I've already received good feedback on it from one test architect, but that does not mean it is good. Let me know if you have any comments.

The motivation for the pattern is to come up with a simple, robust, and yet flexible and scalable test automation design, especially in an SOA test automation context. The basic idea of the Chain of Verifiers pattern is to separate one test case execution into three logical pieces: "test operation," "gathering verification source," and "test verification."

Test Operation
The test operation is the activity (or activities) required to execute each test case (i.e. creating a request, sending the request, getting the response).
All the logic to construct the request, send it, parse the response, and so on belongs to this part.

Gathering Verification Source
The gathering verification source part is responsible for gathering all the data and information that will be used in verification.
This part should also take care of parsing data: if logs are generated during the operation, they can be parsed into objects here, so that verifiers do not have to parse the log or convert it to the class type they need.
Expected results are not strictly required for this part, although they can be included when some input used to create the request serves as an expected value. The main focus, however, is gathering the data and information that will be used in verification.
Create one container (i.e. VerifySource) that holds all the data gathered from the operation (request/response pairs, inputs used, expected results in some cases, parsed logs). A minimal sketch of such a container is shown below.
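Here is a rough sketch of what that container might look like. The class and field names (VerifySource, RequestResponsePair, and so on) are hypothetical, not part of any specific framework; the point is simply that everything a verifier could need ends up in one object.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical container holding everything gathered during the test operation.
public class VerifySource
{
    private final List<RequestResponsePair> exchanges = new ArrayList<>();   // request/response pairs
    private final Map<String, String> inputsUsed = new HashMap<>();          // inputs used to build requests
    private final Map<String, String> expectedResults = new HashMap<>();     // expected values, when known
    private final List<String> parsedLogs = new ArrayList<>();               // log lines already parsed

    public void addExchange(RequestResponsePair pair)   { exchanges.add(pair); }
    public void addInput(String name, String value)     { inputsUsed.put(name, value); }
    public void addExpected(String name, String value)  { expectedResults.put(name, value); }
    public void addParsedLog(String entry)              { parsedLogs.add(entry); }

    public List<RequestResponsePair> getExchanges()     { return exchanges; }
    public Map<String, String> getInputsUsed()          { return inputsUsed; }
    public Map<String, String> getExpectedResults()     { return expectedResults; }
    public List<String> getParsedLogs()                 { return parsedLogs; }
}

// Simple value holder; a real project would use its own request/response types.
class RequestResponsePair
{
    public final String request;
    public final String response;

    public RequestResponsePair(String request, String response)
    {
        this.request = request;
        this.response = response;
    }
}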

Executing Test Verification
Executing test verification is the part that decides which verifications need to run (based on each test case's purpose) and actually verifies the results.
Selecting the verifiers and executing them are decoupled (more details later).
VerifierFactory: this is responsible for gathering verifiers and creating a chain for a certain aspect of verification. (It is refined based on each test case's needs.)
Verifiers: each verifier is responsible for verifying one logical part that a test case needs.

Overview


More Details of Executing Test Verification
  • VerifierFactory holds the logic for gathering the set of verifiers that each test case intends to run, which means the VerifierFactory logic needs to be iterated on throughout the test automation development process.
  • VerifierFactory can return two kinds of verification process (similar to the Decorator pattern), as sketched below:
    • Sequential verifier chain: a failure in a previous verifier fails the test immediately (i.e. cases where the following verifiers are meaningless to perform, such as the response containing an unexpected error)
    • Distinctive verifier set: execute all the verifiers in the chain and return the "AND" of each verifier's result. This is used when a previous verifier's failure does not necessarily cause the following verifiers to fail.
  • Sequential and distinctive sets of verifiers can both be used in the same test code
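Here is a minimal sketch of the verifier side, under the same assumptions as the container above (all names are hypothetical, not a definitive implementation). Each Verifier checks one logical aspect against the VerifySource; the factory composes verifiers either into a sequential chain that stops at the first failure or into a distinctive set that runs everything and ANDs the results, while executing the composed chain is left to the test code.

import java.util.List;

// One verifier checks one logical aspect of the gathered data and reports pass/fail.
interface Verifier
{
    boolean verify(VerifySource source);
}

// The factory composes verifiers into a single chained Verifier; the test code decides when to run it.
class VerifierFactory
{
    // Sequential chain: the first failure fails the whole verification immediately.
    public static Verifier sequential(List<Verifier> verifiers)
    {
        return source -> {
            for (Verifier v : verifiers)
            {
                if (!v.verify(source))
                {
                    return false;
                }
            }
            return true;
        };
    }

    // Distinctive set: run every verifier and return the "AND" of all the results.
    public static Verifier distinctive(List<Verifier> verifiers)
    {
        return source -> {
            boolean allPassed = true;
            for (Verifier v : verifiers)
            {
                allPassed &= v.verify(source);
            }
            return allPassed;
        };
    }
}

A test case would then ask the factory for the chain that matches its purpose and call verify(source) once; adding a new check means writing one more small Verifier and registering it in the factory.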

Benefits of the Chain of Verifiers pattern
  • It follows the Open-Closed principle (open for extension, closed for modification).
  • Each verifier can be implemented and run independently.
  • Independent implementation effort can easily be distributed to different testers; even a new person joining the team can write verifiers for a given part without fully understanding the entire automation or project.
  • Each verifier can be used for hybrid test execution (manual execution + automated verification).
  • It's easy to add, remove, and change verifiers in the factory as things change.
  • If the detail of one verification changes, you only need to change that Verifier (less side effect from the code change).
  • If the selection of verifications changes, you only need to change the VerifierFactory.
  • It's easy to test one verifier (e.g. a newly added one) by itself by changing the VerifierFactory.

Friday, December 2, 2011

Software Testing knowledge management

Today, I'd like to discuss software testing from an interesting perspective.

That perspective is knowledge management. I should say this idea came from one of James Bach's lectures. He did not explain it in depth, but I did my own research and came up with this. One paper I found interesting was this (tacit knowledge vs. explicit knowledge).

I'm not going to explain the details of tacit knowledge and explicit knowledge here, but it's a pretty short paper, so you can read it all within 10-15 minutes.

Here are some common arguments that I hear from other testers.
"You should document all your work (test plan, detailed test case steps, and etc.) because if you hit by a car one day, we loose all the work you've done!"
"You should document all your work because next person who will take over can easily start his/her work"

If you read the paper I linked, you can see that the comments above relate to explicit knowledge management, though I'm not sure they are good examples of it.

The paper clearly lays out the pros and cons of each kind of knowledge management and concludes that we need to find the best possible hybrid (tacit + explicit) knowledge management. I agree with that.

So, let's break down some software testing work and analyze it from the perspective of tacit versus explicit knowledge. Let's set aside process (waterfall, V-model, W-model, or Agile) for this.

  • Understand project requirements
  • Review and test spec (generated by PM)
  • Review and test dev design (generated by Dev)
  • Come up with test strategy
  • Create test plan
  • Generate test cases
  • Manual test execution
  • Design test automation
  • Test automation implementation and execution
  • Report bugs
  • Debugging, investigation and trouble shooting
  • Time management (deadline)
  • Learn and use supporting tools (testing related)

Think about the list for a moment. Which ones would you put under tacit knowledge management, and which under explicit knowledge management?

Or which knowledge management bucket contains more? I would say tacit knowledge management.

I would say that for software testing, tacit knowledge management is far more influential than explicit knowledge management, because every project is different and an individual tester's ability is the key factor in the success of the testing work. I'm sure there are some cases where explicit knowledge helps, but I think it is far less influential than tacit knowledge.

Going back to the "hit by a car" and "taking over the work" arguments: how much does my documentation help the next person's testing work? The next person's testing work is mostly defined by his/her own ability and view of testing.

It's kind of hard to conclude something at this moment. I need to chew on this thought a little further. Let me think about it and I'll come back to this thought.

Thursday, November 3, 2011

Code coverage metric is NOT an indicator of quality of testing

Today, I'd like to argue with the people who value code coverage as a good indicator of the quality of testing.

In my opinion, code coverage has nothing to do with the quality of testing. You cannot tie this metric to testing quality. I'll address the benefits of code coverage at the end of this post.

1. You are saying, "this man is tall, therefore this man must be good at playing basketball."
If you are saying "the testing was done well because the code coverage percentage is high," then you're saying the same thing as "this man is tall, therefore he is good at basketball." The code coverage percentage is an illusion. What illusion am I talking about? You are assuming that the dev code you're testing is perfect. Dev code is never perfect. You can have a high percentage of code coverage and the application can still be really bad. Let's say your testing achieves 95% code coverage. If some simple user input makes the application crash, is that a good testing result? I don't think so. Testing should be judged by user, client, and customer satisfaction, not by a code coverage percentage.

2. Bugs and problems come from mistakes or wrong logic in the code. How the he?? does a code coverage metric find that?
This goes along with my first point. Testing is a process of examining the code against its purpose and the business needs. That intellectual activity is too large and too valuable to compare against code coverage. My critical thinking, creative thinking, diligence, effort, cognitive decision making, experience, and product knowledge add to the quality of the testing; the code coverage percentage does not.

3. A code coverage tool can never understand the business value of the application.
I don't believe 100% in the 80/20 rule, but most of the valuable business cases live in a small portion of the application (call it 20%). What does this mean? It means your testing should be focused on the important, critical, money-making features of the application. We humans (thinkers and testers) know what those are, but a code coverage tool does not. We make conscious decisions not to test, or to test lightly, the minor, less critical features. We have priorities; code coverage tools do not.

4. You can easily increase the code coverage percentage by accident or with poor testing.
Let's say the code you're testing has very complicated decision-making logic: lots of if-statements and switch statements (basically bad coding practice, but let's assume we have them). How can you get high code coverage with poor testing? If each if-statement calls some method to get a true or false value, all it takes is one parameter or input that falls all the way down to the last conditional (if, else if, ..., else). You can do it by accident or through poor testing, and you have now executed all the methods used in the preceding if and else-if conditions. You might have one test case, but it accidentally covers all those code paths. Wow, one test case and great code coverage. You're good to go; why not ship it without testing?
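A tiny sketch of what I mean (the class and method names are made up for illustration): a single "falls through everything" input executes every condition-check method, so coverage looks healthy even though none of the interesting branch outcomes were actually tested.

// Hypothetical example: one boring input executes every condition-check method.
public class DiscountCalculator
{
    public double discountFor(Customer c)
    {
        if (isEmployee(c))        return 0.30;
        else if (isLoyal(c))      return 0.20;
        else if (hasCoupon(c))    return 0.10;
        else                      return 0.0;
    }

    private boolean isEmployee(Customer c) { return c.employeeId != null; }
    private boolean isLoyal(Customer c)    { return c.ordersPlaced > 100; }
    private boolean hasCoupon(Customer c)  { return c.couponCode != null; }

    static class Customer
    {
        String employeeId;
        int ordersPlaced;
        String couponCode;
    }

    // A single run with an ordinary customer executes all three helper methods
    // (each returns false), so most lines show as covered, while the employee,
    // loyal-customer, and coupon behaviors were never actually verified.
    public static void main(String[] args)
    {
        Customer ordinary = new Customer(); // no employee id, no coupon, 0 orders
        System.out.println(new DiscountCalculator().discountFor(ordinary)); // prints 0.0
    }
}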

5. What about integration testing?
If you are in the middle of the stack, as in the SOA world, you need to examine how your application behaves in the entire stack. Does the code coverage percentage tell you that? Integration testing is one of the most important parts of software testing. Components can work fine individually and still easily break when they have to work together. Can you really say a code coverage percentage captures integration testing requirements? Seriously?

6. A code coverage percentage goal makes you waste your time.
Some test teams have a goal of code coverage greater than 90% as part of their "EXIT CRITERIA." I don't know exactly what exit criteria mean here, but this code coverage percentage makes me sick. You've done your best work and are ready to say you're done testing: you carefully considered time and budget and got to the point where you're confident about the application you're testing. Now the coverage tool tells you that you covered 75%. What do you do? Well, the "exit criteria" say I have to reach 90% coverage, and if I don't, it will affect my reputation and review score. So I sit down at my desk and write meaningless automation just to get more coverage. Hold on, though; it's not all bad. This point leads to an interesting value of code coverage. Let me mention the good part.


Benefits of code coverage.
I find code coverage valuable when I can understand which areas of the code were not covered and why; again, not the percentage. I might have missed something important. Oh good, this nice tool reports the classes and code that I did not cover. Let's sit down with the dev and discuss it. "Hmm, I did not know that. This code needs to be verified, since it is important." OK, now I see the value of the code coverage tool. The percentage is meaningless. Finding the places I had not thought of: THIS IS THE VALUE OF CODE COVERAGE!





Monday, October 3, 2011

Mind Map for test case generation

Today, I'd like to write about mind maps.

The mind map is an interesting concept that I came across by accident. I attended CAST 2011 and shared a table with Henrik Emilsson during the evening reception. There were four guys at the table and we, of course, talked about testing. Henrik showed me a mind map tool called XMind on his Mac. At first, I thought it was mildly interesting. The next day, I believe James Bach used XMind to present his keynote.

When I came back to work, I used it for my new project. At first it was kind of cool, but I did not realize its real value right away, so I started to learn more about mind maps.

I'm not going to explain what a mind map is here. Here is the Wikipedia link for mind maps if you want the details. Link

I find two values that are very interesting for the software testing world.
First, it helps testers come up with test cases easily, effectively, and extensively. Here are some examples that show how a mind map helps me generate test cases.

Testing a stored procedure for writes
1. Start with some parameters

2. I realized that what I was doing was parameter validation, so I changed it.

3. Then I realized that I only had parameter validation test cases

This is a pretty simplified version of it, but you get the idea.
Start with the high-level feature you're testing. Create sub-nodes as things come to your mind. Reorganize your thought process. Expand and refine your testing ideas.

I'll write about the other benefit in the next post.

Monday, August 22, 2011

CAST 2011 - context driven testing (Part 1)

OK.. this is part 1 of my CAST 2011 context-driven testing posts.

Now I'm going to write about the CAST 2011 main theme, context-driven testing.

For me, context-driven testing is a somewhat new term. I guess I have not been in the software industry long enough, or have not done much research on software testing. Anyway, this is a term that some people (James Bach, Cem Kaner, and others) who are really serious about software testing came up with to describe a fundamentally sound approach to software testing. I'm still trying to fully understand their thinking on this term (I don't want to misrepresent their deep thoughts to my blog readers), but here is my understanding so far.

Basically, your testing (testing strategy, process, execution, assessment, and so on) has to be based on the context of the project you're currently working on. So if I ask them, "what is the best way to test login functionality?", they will say, "Well, it depends. Give me more context."

Hmm.. This is interesting. Login functionality is pretty much the same in any software: you're asked to type a username and password, and those credentials are stored somewhere in some form. So why does the context matter so much if the functionality is the same anyway?

Let me switch gears here (you will see why later). One of the main ideas of context-driven testing is pushing back against "best practices." More specifically, context-driven testing is against adopting best practices without considering the context (I personally think a best practice has its own informative value). Let me ask you this. Would you do the same testing for a web application and a desktop application? Would you do the same testing for a 6-month project and a 1-month project? Would you do the same testing for V1.0 and V1.1? Would you do the same testing for features whose business values are different? Would you do the same testing if the data storage mechanism changed from a relational DB to a distributed file system? Would you do the same testing for search (Google or Bing) on type-ahead and on search result order?

I can go on and on; I hope you get the idea. There are so many variables to consider when you're doing your testing. Would a best practice fit all the variables of your testing? I doubt it. I still think best practices have value. If you search for "SOA service testing best practice," you will find many articles online. Their authors spend quite a bit of time analyzing several important parts of SOA services. Service-oriented architecture has its own benefits and drawbacks, and it is really useful to understand why those authors wrote their best practices. So the value is there, just like any other article explaining what SOA is, SOA security issues, or SOA technologies. What matters is this: you consider your testing situation, environment, tools, business expectations, business value, testing process, testing resources, and testing budget, and come up with your own testing using the information out there.

Now, coming back to login functionality. Would Gmail login testing and Best Buy login testing be different? Yes, because the business expectations are different. Would Unix login and web application login be different? Yes, because of the environment the application sits in. Would testing for login V1.1 and V1.2 be different? Yes; V1.2 might have different features or improvements, and many cases are already covered by V1.1. Would testing a login backed by a SQL DB and one backed by Hadoop be different? Yes, because the underlying data management is different. Would testing password strength and multi-step login for a credit score checking application and for my father's blog be different? Yes, because the security levels required for those two are different.

I'll write more on context-driven testing. The second post will be about why test-case, script-based testing is not context-driven testing.

Wednesday, August 17, 2011

CAST 2011 - personal lesson for testers

I still have this exciting feeling even if CAST 2011 is over. What a conference!

My mindset going into CAST 2011 was "you don't know what you don't know." I wanted to talk to as many test engineers as possible and learn from them. I also wanted to absorb what the speakers were trying to convey in the sessions.

If I have to summarize the conference in one word, it is PASSION. I could feel the passion for software testing from everyone I talked to and everyone who spoke in the sessions. Everyone believed that software testing is a unique craft, and everyone was proud of that. I was refreshed by the flood of people who just enjoy talking and arguing about software testing. I had thought of myself as a quite passionate software tester, but I was nothing compared to them.

Here are some personal lessons which I took from CAST 2011.

1. Be a critical thinker. It is a crucial part of being a good software test engineer. In any phase of the software development life cycle, a tester should ask herself what she is doing in terms of testing. Testers should not follow a process without thinking about it. Why am I executing these test cases? Why is this testing process the most efficient and reliable? Why does this testing process take so long? Do not just put up with the situation or process; try to understand why testing has to be done this way or that way. This "being a critical thinker" reminds me of the early years of my software testing career. I did what I was told. I did not speak up even when I heard something that did not make sense to me. I preferred compromise to discussing or arguing. I never challenged senior testers about testing process or strategy. Now I hate to see people just follow the process or do the work without thinking about why and how.

2. Credibility. Credibility is everything for a tester. It is something we testers earn from the project team. I think James Bach mentioned that when a tester is credible, people do not care about the metrics. I have heard that a lot of companies still check metrics like number of test cases or number of bugs found to measure the progress of testing. If a tester is considered very credible for his/her testing work, would metrics like number of bugs or number of test cases become less important? If I say, "I'm confident about this application (whatever I'm testing)," would the people around me (the project team) take that seriously? Would other project team members want me on the project because my testing work is credible? I do have personal opinions about our devs and PMs, and they surely have opinions about the testers in our group. Yes, credibility is everything!

3. People over process. This lesson goes along with the critical thinking part as well. Interestingly enough, people do write test plans, document test cases, enter each test case into test case management tools, and so on. Even in an Agile process, people come to daily meetings, write story cards, move sticky notes, go to retrospective meetings, and so on. How many people ask why they're doing what they are doing? It's quite a challenge to introduce new ideas or improvements to the current process if your team has been finishing projects quite successfully. I've worked on multiple projects, sometimes several in parallel, and I believe I never asked myself why I was following the process. It's not a question of whether the process is right or wrong; it's the question, "why are we using this process for our testing?" Even for test automation: why have I always believed that a high percentage of automation is the way to go? People should control the process, not the other way around.

4. Self-learning and improving. What I felt at CAST 2011 is that self-learning is not about job hunting or holding onto a job securely. It's a true eagerness to learn in order to become a better test engineer. Keeping up with technology and skills might help me be a good tester, but in my mind, self-learning should come from a pure desire to be a better test engineer. I noticed James Bach, Michael Bolton, and others coming up with some odd buzzwords for their theories and strategies on software testing. What really touched me was not the buzzwords but their eagerness to learn and to improve their thinking and practice. I totally got this: there is no limit to becoming a better tester.

5. Care about your project. Care about your application. I don't think I talked about this with anybody during the conference; it just hit me. Caring is a sort of emotional attachment. I call the application I test my baby. Even if I did not implement the application, I tested it. This emotional attachment helps produce better testing: small, strange application behaviors catch your attention. Why? Because you care about it. This is just my own reflection from the conference restroom.


OK.. I'll post the context-driven testing lessons in the next blog.

Thursday, August 4, 2011

Context driven testing + Risk based testing

Today, I'd like to talk about my favorite testing strategies.

If you search for "testing strategy" online, you will find a lot of articles, and you'll get confused by all the pros and cons of each strategy, or exhausted by how many different kinds there are.

The testing strategies I find most useful are context-driven testing and risk-based testing. These two are somewhat conflicting strategies. Context-driven testing says there is no such thing as a best practice; testing is guided by what kind of application or feature you're testing. On the other hand, risk-based testing says your testing should be driven by one important factor, RISK, which implicitly says the best practice is to prioritize by risk.

Interestingly, I agree with both ideas about testing. My interpretation of the two strategies is "how" and "what." Context-driven testing helps me come up with strategies for HOW to test. Risk-based testing helps me come up with strategies for WHAT to test.

So let me explain both strategies first. (Note: these are my definitions, so they might differ from other people's perspectives.)

1. Context Driven Testing.
This strategy is all about context. This is the "HOW" part of my interpretation. As you might guess, different applications should be tested differently because each application has its own context. Strategies for testing a desktop application, a web application, or services (in SOA) should be different because each has its own context. How the application is used, what data source is used, how the components interact with each other, and what is available for testing are all different. So your testing strategy should be based on what is available, what's important, which areas tend to break, and so on. However, this is just application-type specific.

Even within the same application, there are areas or features that are quite different from each other. For example, in web application testing, testing login functionality and testing an AJAX feature are different. UI testing and component-level testing should be different. Again, this is all about context, even within the same kind of application. So it really makes sense to come up with test strategies based on the context of what you're testing. I don't think there is any single strategy that covers every context a tester faces. Testing DB interaction, component interaction, usability, performance, localization, a complex algorithm, or a large data set: each of these should be done differently based on a context that tells you what's important and what's available for testing.


2. Risk based testing
This strategy is all about priority. This is the "WHAT" part of my interpretation. In any kind of testing, we normally come up with test cases. Whether you write them down in your test case management system or put them on sticky notes on your Kanban or Scrum board, there is a list of things to be tested. Risk-based testing gives you the priority of each test case. Here are two ways to find that priority. One is from the business perspective: when the PM comes up with a spec that explains all the scenarios and use cases, you can imagine each scenario or use case failing and consider the business impact of each failure. This gives you a good sense of the risk behind each test case. The other is from the dev perspective: discuss with the developers how the application is designed and written. Have them draw the circles and arrows that explain the architecture of the application, then ask what happens if this arrow breaks, what happens if the DB connection times out, and so forth. You can discover all sorts of likely faults in the system by having a good discussion with the devs. Based on that, you can prioritize your test cases.


Now, if you use both testing strategies, you know WHAT test cases to run, based on priority, and HOW to execute and validate each one. How cool is that?

Sunday, July 31, 2011

My advice for new testers

Today, I'd like to write about some advice I can offer new testers.

For a brand new tester who just graduated from college or just started his/her career as a tester, what kind of advice will help him/her be successful in this field?

I can think of three things to start with.

1. Be a thinker. 
This is pretty obvious advice, but it is a really important requirement for being a successful test engineer. If a tester does not think and only does what he/she is told, he/she is useless in my mind. Can a tester question requirements and specs? Can a tester question the application architecture and development design? Can a tester seek to improve the testing process? Can a tester think like the customers/users? Can a tester think about the business impact of application failures? Can a tester suggest a better approach to solving problems to devs and PMs?

2. Credibility 
In my opinion, credibility is everything for a tester. It is something we testers earn from the people around us on the team (devs, PMs, testers, managers, and so on). It can be pretty subjective at the beginning, but as the team delivers project after project, people notice a tester's credibility. I believe a tester's credibility affects not only the success of the project but also the tester's career path. If a tester cries about every single bug or issue he/she finds, the project team stops listening to him/her seriously. We need to make sure to provide reasoning for the bugs and issues. Is it a ship stopper? Why and how the issue impacts the operation of the application and the business should be clearly prepared and addressed when a tester writes a bug or tells the team about an issue. A credible tester finishes testing on time. People have confidence in delivering a good project when a credible tester is on the project team.

3. Being as technical as a dev
This may apply mostly to SDETs (Software Development Engineers in Test). I believe the technical skill of an SDE I and an SDET I should be the same, and a Sr. SDE and a Sr. SDET should have the same technical skills. The only difference between an SDE and an SDET is the area they focus on. The SDE focuses on application development, where design, architecture, optimization of the code, and performance are important. The SDET focuses on application testing, where test framework design, maintainability of the code, robustness, and reliability are important. Knowledge of programming languages, coding skills, designing applications, unit testing, and executing DB queries should not be a problem for testers.
 

Monday, July 4, 2011

Let's not underestimate the power of automation

Today, I'd like to write a little about the use of automation, or more specifically, the use of programming.

You will find a lot of articles about software testing strategies, and you can easily find the sentence "exhaustive testing is impossible," along with write-ups about equivalence partitioning and related concepts.

Sometimes, statements like this prevent me from even thinking about executing all the possible inputs in my automation.

In one of my tests, I can get all the possible inputs from the database: about 200,000 rows. Executing one test takes about 2 seconds, including accessing the DB, which means I can run all of them in roughly 400,000 seconds, or about 4.6 days. And if I use several machines, I can run them all within a day.

I had about 3 weeks to do my testing. Of course, I needed to spend time on other test cases, but at first I did not even consider running them all. I'm not saying that we need to try all possible inputs in every test, or that exercising every input always gives better results. I'm saying we should not stop ourselves from doing it just because the data set is relatively large.

You can kick off your test and go home and check the result the next day.
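Here is a rough sketch of what that can look like, just to make the idea concrete. The row type, the test method, and the thread count are assumptions, not from any specific project: pull the inputs once, fan the work out across a thread pool, and collect the failures to look at in the morning. Spreading the work across several machines needs a bit more infrastructure, but the shape is the same.

import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RunAllInputsOvernight
{
    public static void main(String[] args) throws InterruptedException
    {
        List<String> allInputs = loadInputsFromDb();             // ~200,000 rows, loaded once
        Queue<String> failures = new ConcurrentLinkedQueue<>();

        ExecutorService pool = Executors.newFixedThreadPool(8);  // 8 workers on one box
        for (String input : allInputs)
        {
            pool.submit(() -> {
                if (!runOneTest(input))
                {
                    failures.add(input);                          // record failures for the morning
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.DAYS);

        System.out.println("Failed inputs: " + failures.size());
    }

    // Placeholder: a real run would query the database for the input rows.
    private static List<String> loadInputsFromDb() { return List.of("input-1", "input-2"); }

    // Placeholder: a real run would call the system under test and verify the result.
    private static boolean runOneTest(String input) { return !input.isEmpty(); }
}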

The interesting thing about programming is that automation really helps our testing. A machine cannot do the testing, but it certainly can help us do it. How much do you take advantage of programming in your testing, besides running your tests? You probably have multiple machines you can use for your testing. Even your desktop machine can do something for you while you are at home enjoying family time. Are you under a tight testing schedule? Do you want to lower your testing cost?

Actually you can get a lot of things done even if you're not sitting in front of the computer. Make good use of programming in your testing.

Saturday, June 11, 2011

Flexible validation mechanism in test automation using decorator pattern

Today, I'd like to talk about having a flexible way to organize validation logic in your test automation.

Are you familiar with the design pattern called "Decorator"? I don't want to spend a lot of time explaining what it is in this post, so you can search for the Decorator pattern on the web (there are a lot of write-ups).

Basically, the idea is that each object processes the part it is responsible for and then triggers the next object in line to process its own logic. (You really do need to understand the Decorator pattern if you don't know it.)

Now, we can apply this pattern to validation logic.
Normally, your automation has several different kinds of validations that together decide whether a test passed or not. As the automation gets more complicated and scales up, you do not want one validation class that handles all the validation/verification for every test case. New kinds of validation/verification logic might need steps that the previous validation logic did not require. So how do you flexibly add new verification logic to your automation? There are several ways to handle this, but one solution is the Decorator pattern.

Let me go through the concept and then provide an example.

  1. Define a base abstract class/interface (i.e. Validator) with a constructor that takes a Validator object and a method called validate().
  2. Create derived classes called Validator1, Validator2, ... that inherit from/implement the base class/interface.
  3. In the Validator1 and Validator2 classes, implement a private method that processes their own logic, and have their validate() method call it.
  4. After #3, call the validate() method of the Validator instance variable (if one was passed in).
The simple code would look like this.

public abstract class Validator
{
    protected Validator m_validator;

    public Validator(Validator validator)
    {
        m_validator = validator;
    }

    public abstract void validate();
}


public class Validator1 extends Validator
{
    public Validator1(Validator validator)
    {
        super(validator);
    }

    public void validate()
    {
        validator1logic();
        if (m_validator != null)   // trigger the next validator in the chain, if any
        {
            m_validator.validate();
        }
    }

    private void validator1logic()
    {
        // execute this validator's own logic here
    }
}

You can see the idea here. Each derived Validator class runs its own logic and then calls validate() on whatever Validator object was passed in, stopping when it reaches the end of the chain.

Now you have several derived Validator classes and you can mix and match any combination as you wish.

Here is the sample code

// 3 -> 2 -> 1 in order
Validator chain321 = new Validator3( new Validator2( new Validator1(null)));
chain321.validate();

// just 2 and 1
Validator chain21 = new Validator2( new Validator1(null));
chain21.validate();

// 1 -> 2 -> 3 in order
Validator chain123 = new Validator1( new Validator2( new Validator3(null)));
chain123.validate();

Saturday, June 4, 2011

Assumptions in software testing

"Today, I like to talk about assumptions in software testing.

In daily life, we make a lot of assumptions, maybe because we're so used to a situation, or because we've never seen the other side of it.

In software testing, we should be very careful about these assumptions. PMs, devs, and business owners make statements with a lot of assumptions when they write or speak about the application. Even if the spec does not mention any assumptions, we testers need to figure out what assumptions are being made in the project team and understand their impact.

Some assumptions are safe, but sometimes an assumption costs the company a lot of money.

I was lucky enough to have a Skype coaching session with James Bach, and I learned that we testers need to be very careful about assumptions. Since I got permission from James Bach to share his teaching, I'll use his example to show the impact of assumptions.

" Let's say you are carrying a calculator. You drop it. Perhaps it is damaged. What might you do to test it?"

Think about this for a moment. 
Can you question the statement above?

First, I guess we visualize the situation; at least, I did. Wherever you are, you drop the calculator. The situation seems pretty normal, and our focus seems to be testing the calculator to make sure it is working correctly.

What he was trying to teach here first, I think, is do not jump into testing without understanding the situation clearly.

Can you ask what kind of calculator it is?
How did I drop it? Where did it drop? Where am I? Who am I?

You can come up with more questions to understand the situation. Right?

Then your test cases will be narrowed down, and your testing will be more focused on the actual situation.
Would that save the cost of testing (time, energy, and budget)?

So now, let's talk about assumptions. Are assumptions all that bad in testing, then?
Here is what James Bach said.


Have you ever seen a building that was being built? Builders use scaffolding to put it up. The scaffolding is not the building; it's a tool used to create the building, and at the end they take the scaffolding down.
Assumptions in testing are like scaffolding. At least, a LOT of assumptions are like scaffolding.
Some assumptions are like foundation. Scaffolding is temporary; foundation is permanent.

The Leaning Tower of Pisa is leaning because the ground is not as solid as they thought it was... that's like a bad assumption. We use assumptions in testing, and that's a good thing, but you must distinguish between safe assumptions and dangerous assumptions. Assumptions are to testers as bees are to a beekeeper... if the beekeeper never wants to be stung, he'd better get rid of all the bees,
but then he'll have no honey. So he keeps the bees, but he's CAREFUL with them.
We make assumptions, but if an assumption is dangerous
we will either:
1. not make it
2. make it but declare it
3. make it but not for long

Now, your assignment is to make a list
of what factors make an assumption dangerous

The End

Sunday, May 29, 2011

Applying Risk Analysis in test planning

Today, I'd like to talk about applying risk analysis in software testing.

By the way, "risk analysis" is quite a big term. It is not specific to the software industry, and depending on what you're looking for, you can get very different results.

So, let me define the purpose of conducting risk analysis in software testing. My two-cent definition would be "the activity of understanding the impact of the application malfunctioning." The severity of the impact of each malfunction scenario then determines the priority and intensity of the testing.

One thing I'd like to call out is that application malfunctioning is not only about the use case failures defined in the spec. There are always implicit requirements that are not written in the spec.

But normally, I start with the use cases defined in the spec.
Go through each use case and find out the impact of its malfunctioning. For example, say you are testing a login page. There are several use cases for it.

- Login with activated user with valid user name and password
- Login with invalid username
- Login with invalid password
- Login with a non-activated username and password
- Forgot password
- Register link

Now let's define risk and impact of each malfunction
- Failed to login with activated user name and password (high impact)
- Login success with invalid user name(high impact)
- Login success with invalid password (high impact)
- Login success with non-activated username and password (medium impact)
- Forgot password not working (medium impact)
- Register link is broken (high impact)

This is just an example. You could argue for different impact levels on each malfunction, but you get the idea of the approach. Here is another post on checking business impact and fault likelihood. Let's go into more detail.
- Login failure message not specific enough (medium impact)
- Already logged user going to login page and still require login (medium impact)
- Login process taking too long (medium)
- Logout link does not work (high)
- Login username and password text field is too small (low)
- Password text field showing the password not dots (medium)
- on and on.....

Now you have three buckets of test cases (high, medium, low), and you decide what to test first and how intensively to test each bucket.

Sunday, May 8, 2011

Skills required for a test engineer

Today, I'd like to talk about the skills needed to be a test engineer.

1. Inference


This is the most important skill you need as a test engineer. It makes software testing more like art than science. It requires creativity, intuition, experience, analytical reasoning, and more. What can you infer from the requirements or the spec? I would say the spec contains the explicit requirements for the project, but there are always implicit requirements as well. For a simple example, "how would you test a soda pop vending machine?" Besides putting in money and getting the right soda pop, there are a lot of things to consider: how to check the temperature of the soda, how to notify the office when the vending machine loses power, how to notify the office when a soda is out of stock, how to make the vending machine energy efficient, and on and on.


2. Prioritization

This is the second most important skill you need. After you come up with all the possible test cases for a given project, you should be able to put each one into a priority bucket. Risk analysis is a good tool for this: based on the risk (what the impact of this test case failing would be, and how big), you can categorize all the test cases into priority buckets. Since we all have limited time and budget, we need to make conscious decisions about what not to test, based on risk.


3. Troubleshooting

This is another core skill for a test engineer. A test engineer should be very good at finding the root cause of a problem. Can you eliminate the noise around the problem? Can you find the pattern behind a certain condition? The clock is ticking: when you have a production issue, how good your troubleshooting skill is directly affects the company's losses.

4. Simple and robust coding skill

Nowadays, software is very complex and large-scale, and we need to use programs to test programs. There are a lot of tools that help with testing, but fundamentally there is no other test team that tests your testing application. You need to be aware of that fact when you write automation code. Besides unit testing it, you need to be able to write simple and robust code: fewer if-statements, fewer switch statements, and no deeply nested if-statements. All of this helps you avoid automation bugs. A small sketch of what I mean follows below.
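Here is a small, made-up example of the kind of simplification I have in mind: the same check written with nested if-statements and then rewritten with early returns, which is easier to read and harder to get wrong in automation code.

public class ResponseChecks
{
    // Harder to follow: nested conditions, easy to misplace an else.
    static boolean isValidNested(String body, int statusCode)
    {
        if (statusCode == 200)
        {
            if (body != null)
            {
                if (!body.isEmpty())
                {
                    return true;
                }
            }
        }
        return false;
    }

    // Simpler and more robust: guard clauses with early returns.
    static boolean isValidFlat(String body, int statusCode)
    {
        if (statusCode != 200) return false;
        if (body == null || body.isEmpty()) return false;
        return true;
    }
}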

5. Big picture in mind

If you are a test engineer, you should have a better understanding of the system than the developers. Developers are normally focused on what they are developing. You need to have an end-to-end mindset and a view of the system's overall operation. Help developers understand how the feature they're developing fits into the existing application.


6. Multi-tasking, or rapidly switching from one task to another

Multi-tasking, or switching from one task to another, is another skill you need. People notify testers about many different kinds of problems they're having: lab environment issues, live production issues, project testing, and so on. You normally handle several projects at a time, and you should be ready to handle production issues, support issues, and the like. This can be very frustrating if you are not used to it.

Monday, April 11, 2011

Having business layer in automation

Today I'd like to write about having a business logic layer in test automation.

Nowadays, test automation is not something new in software testing. Whether you are testing a desktop application, a web application, or an SOA application, test automation plays an important role.

I've written quite a bit of automation code throughout my software testing career. Of course, there are multiple ways to write automation, and it depends highly on the context of the application.

What I've seen so far in test automation is pretty straightforward: you design your automation around two separate purposes, test execution code and helpers.

Test execution code represents the steps you need to go through to execute one test case. Normally, test execution code is a method that starts with testBlahBlah...(). Helpers are more generic classes or functions that support the test execution code, things like utility classes: string helpers, DB handlers, file readers, and so on.

I think this separation works out pretty well, and I don't see much of a problem with it. Helper classes provide good flexibility to the test execution code; basically, you can mix and match helper classes based on the needs of each test case.

Then I thought of adding a business logic layer between the test execution code and the helper code in the automation.

If you look closely at one piece of test execution code, you can easily tell that it contains three major parts. The first is setting up some situation and cleaning up or returning to the initial state for your test. The second is executing what you're testing (i.e. clicking the submit button on your web page, sending a request to the service, calling a function, etc.). The last is some sort of verification.

So I normally have these three business logic classes in my automation, which means the test execution code interacts with these three business logic classes to do its work.


So the automation flow is "Test Execution code" -> "Business logic layer class" -> "Helper classes".

The benefit you get from this design is pretty interesting. First, the test execution code can interact with the business logic layer without knowing about the helper classes, which means the test execution code can call the business logic layer in a sort of natural language. For example, say we want to test that what a user types on a discussion board gets to the database correctly. The test execution code would look something like this.

public void testUserInputOnDiscussionBoard()
{
    MyBusinessLogicClassForDiscussionBoard blc = new MyBusinessLogicClassForDiscussionBoard();
    blc.login();
    blc.goToDiscussionPage();
    blc.typeRandomStringAndSubmit();

    MyVerifyingClass vc = new MyVerifyingClass();
    vc.verifyingDiscussionBoardInput(blc.getUserId(), blc.getRandomString());

    blc.logOut();
}

This is a pretty simplified version, but my point is that the test execution code should not worry about how to log in to the page, how to put text on the page, or even how to check whether the DB was updated or not.

Second, I can keep the helper classes more generic. Since the business logic class knows which DB to use, which credentials to use, and which helper classes to use, the helper classes do not need any business logic beyond doing their own thing. A DB helper only needs to expose the server name and DB name and execute queries, and you do not have to create feature-specific child classes of a generic helper class. A small sketch of this split follows below.
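Here is a minimal sketch of that split, with entirely hypothetical names, just to show where the knowledge lives: the helper only knows how to talk to whatever server and database it is given, while the business logic class owns the environment details and the feature-specific query.

// Generic helper: knows nothing about any feature, only how to run a query
// against the server and database it was constructed with.
class DbHelper
{
    private final String server;
    private final String database;

    DbHelper(String server, String database)
    {
        this.server = server;
        this.database = database;
    }

    String executeScalar(String sql)
    {
        // A real helper would open a connection and run the query;
        // this stub just shows what the helper is responsible for.
        return "<result of [" + sql + "] on " + server + "/" + database + ">";
    }
}

// Business logic layer: owns the environment details and the feature-specific query,
// so neither the test code nor the helper has to know them.
class DiscussionBoardLogic
{
    private final DbHelper db = new DbHelper("forum-db.test.local", "ForumDb");

    String lastPostBy(String userId)
    {
        return db.executeScalar(
            "SELECT body FROM Posts WHERE user_id = '" + userId + "' ORDER BY created_at DESC LIMIT 1");
    }
}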

Third, modifying a test case is much easier. Since the business logic lives in one logical place, you can easily find the places to change.

Tuesday, February 22, 2011

Risk Analysis for Software testing

Today I'd like to write about risk analysis.

I think risk analysis is a quite complicated and sophisticated process in any kind of project. We test engineers have been exposed to this term project after project, and I think we already know what it is and what the benefits are.

I would define risk analysis as the process of identifying the risks in a project and prioritizing them by severity. If you work through the work items in this priority order, your project becomes less and less risky, and even if you do not completely finish the work, your project is less likely to fail. No surprise there.

Now, the really important questions are "how do you identify all the risks involved with the project?" and "how do you decide which one is riskier than the others?"

Here is my practice of risk analysis.

First, I try to sort all the risks involved with the project I'm working on into three buckets.

1. Business and whole-stack perspective risks
2. Development process risks
3. Testing process risks

For #1, a good source for risk assessment is your PM. Ask her what she thinks is the most important and risky part of the feature, and ask what the impact is when each feature or use case does not work. Then you will find the business perspective on risk.

For #2, a good source for risk assessment is your developer. Ask her to draw how the components and classes interact with each other. Ask which component in the application has complicated logic, or what the dependencies are, to find the points of failure. Or just ask her, "if you were a tester, what would you focus on testing?" You will get good feedback from your dev.

For #3, you are a good source for risk assessment. You know your framework, and you understand the risks from the business and development perspectives. You need to be able to come up with the testing process risks yourself.

Now you combine all of your risk analysis data, prioritize it, and set an intensity level for each test case.

You can find many different articles about risk analysis. I found these articles very useful.

Heuristic Risk based testing by James Bach
sqa tester.com article

Wednesday, January 12, 2011

Challenges in SOA test automation

Today, I'd like to talk about testing a service in the SOA world.

I don't think I need to explain what SOA is or what the benefits of service-oriented architecture are; you can find good articles and papers online.

So let's talk about service testing. I think there are several different kinds of services in the SOA world. One of them is the bottom-layer service, which is normally a wrapper for a database. Another is the middle-layer service, which consumes other bottom-layer services.

In my opinion, the usual motivation for bottom-layer services is that many different parts of the system access one giant database, and managing that DB is getting out of control (or is already out of control). The solution that the architects and business people sit down and come up with is to create a wrapper service for that DB and do all kinds of optimization and throttling there.

So when we develop this wrapper (the bottom-layer service) for the DB, the business logic is normally already implemented and used elsewhere in the system. It sounds simple to move this business logic into one place, but the reality is not as simple as it sounds.

First, going through the legacy code and figuring out the business logic is really complicated work. What is available in the legacy code might not be available in the wrapper service, there are lots of corner cases handled under different kinds of conditions, and so forth.

Second, consolidating these different pieces of business logic into common functionality in the service is another challenge. You cannot simply copy the business logic of the legacy code into the wrapper service. Sometimes you need to come up with new logic that handles the DB more efficiently, or new logic that returns better sets of data.

Third, the testing is another challenge. If the new service has new logic that handles the DB more efficiently and accurately than the existing legacy code, then the output of the service might not be the same as what the current system produces.

Do you see the challenge here? Now you don't have anything to assert against. The existing system's inputs and outputs are not the same as what the new wrapper service returns, and your new service might have logic flaws. So you need to make sure that the new service goes through the logic as designed and returns correct results, and you also need to test the integrity of the results. If you have billions of rows in your DB, it's just impossible to handle every input and output in your test cases.

I'll try to find the best approach to this kind of testing challenge. What do you think? I'll post my own thoughts in the next blog.