Today, I'd like to write about some advice for new testers.
For a brand-new tester who has just graduated from college or just started his/her career in testing, what kind of advice will help him/her be successful in this field?
I can think of three things to start with.
1. Be a thinker.
This is pretty obvious advice, but it is a really important requirement for being a successful test engineer. A tester who does not think and simply does what he/she is told is, in my mind, useless. Can a tester question requirements and specs? Can a tester question the application architecture and development design? Can a tester seek to improve the testing process? Can a tester think like a customer/user? Can a tester think about the business impact of application failures? Can a tester suggest a better approach to solving problems to devs and PMs?
2. Credibility
In my opinion, credibility is everything for a tester. It is something we, testers, earn from the people around us on the team (devs, PMs, other testers, managers, etc.). It can be pretty subjective at the beginning, but as the team delivers project after project, people notice a tester's credibility. I believe a tester's credibility affects not only the success of the project but also the career path of the tester. If a tester cries about every single bug or issue he/she finds, the project team stops taking him/her seriously. We need to make sure to provide reasoning for the bugs and issues we report. Is it a ship stopper? Why and how the issue impacts the operation of the application and the business should be clearly prepared and addressed when a tester writes a bug or tells the team about the issue. A credible tester finishes testing on time. People have confidence in delivering a good project when a credible tester is on the team.
3. Be as technical as a dev
This may be most applicable to SDETs (Software Development Engineers in Test). I believe the technical skills at the SDE I level and the SDET I level should be the same, and a Sr. SDE and a Sr. SDET should have the same technical skills. The only difference between an SDE and an SDET is the area they focus on. An SDE focuses on application development, where design, architecture, code optimization, and performance are important. An SDET focuses on application testing, where test framework design, maintainability of the code, robustness, and reliability are important. Knowledge of programming languages, coding skills, application design, unit testing, and executing DB queries should not be a problem for testers.
Software testing, just like other disciplines, requires deep understanding, practice, and learning to become an expert. Let's see how far this journey goes.
Sunday, July 31, 2011
Monday, July 4, 2011
Let's not underestimate the power of automation
Today, I'd like to write a little bit about the use of automation, or more specifically, the use of programming.
You will find a lot of articles about software testing strategies, and you can easily find the sentence "Exhaustive testing is impossible." People write about equivalence partitioning and related concepts.
Sometimes, these statements prevent me from even thinking about executing all the possible inputs in my automation.
For one of my tests, I can get all the possible inputs from the database: about 200,000 rows. Executing one test takes about 2 seconds, including accessing the DB, which means I can run them all in about 400,000 seconds, or roughly 4.6 days. And if I use several machines, I can run them all within a day.
I had about 3 weeks to do my testing. Of course, I needed to spend time on other test cases, but I had not even considered doing it all. I'm not saying that we need to try all the possible inputs in your testing, or that exercising all the test inputs always gives better results. I'm saying we should not stop ourselves from running everything just because the data set is relatively large.
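Just to make the idea tangible, here is a minimal sketch in Java of fanning all the inputs out over a thread pool on one machine and letting it run overnight. The loadAllInputsFromDb() and runSingleTest() hooks are hypothetical placeholders; you would replace them with your own DB query and your own check.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RunAllInputs
{
    // Placeholder: replace with the real query that pulls all input rows from the DB.
    static List<String> loadAllInputsFromDb()
    {
        return Arrays.asList("input1", "input2", "input3");
    }

    // Placeholder: replace with the real check for a single input row.
    static boolean runSingleTest(String input)
    {
        return input != null && !input.isEmpty();
    }

    public static void main(String[] args) throws InterruptedException
    {
        List<String> inputs = loadAllInputsFromDb();
        ExecutorService pool = Executors.newFixedThreadPool(8); // e.g., one worker per core

        for (String input : inputs)
        {
            pool.submit(() -> {
                if (!runSingleTest(input))
                {
                    System.out.println("FAILED for input: " + input);
                }
            });
        }

        pool.shutdown();                         // stop accepting new work
        pool.awaitTermination(2, TimeUnit.DAYS); // let it grind away overnight (or longer)
    }
}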
You can kick off your test and go home and check the result the next day.
An interesting thing about programming is that automation really helps our testing. A machine cannot do the testing for us, but it certainly can help us do it. How much do you take advantage of programming in your testing besides running your tests? You probably have multiple machines you can use for your testing. Even your desktop machine can do something for you while you are at home enjoying family time. Are you under a tight testing schedule? Do you want to lower your testing cost?
Actually, you can get a lot of things done even when you're not sitting in front of the computer. Make good use of programming in your testing.
Saturday, June 11, 2011
Flexible validation mechanism in test automation using decorator pattern
Today, I'd like to talk about a flexible way to organize validation logic in your test automation.
Are you familiar with the design pattern called "Decorator"? I don't want to spend a lot of time explaining it in this post, so you can search for the Decorator pattern on the web (there are plenty of explanations).
Basically, it is a mechanism in which each object processes what it is responsible for and then triggers the next object in line to process its own logic. (You really should understand the Decorator pattern if you don't know it yet.)
Now, we can apply this pattern to validation logic.
Normally, your automation has several different kinds of validations to conclude whether a test passed or not. As the automation gets more complicated and scales up, you would not want one validation class that handles all the validation/verification for every test case. A new kind of validation/verification logic might need some processing that previous validation logic did not require. So how do you flexibly add new verification logic into your automation? There are several ways to handle this, but one solution is to use the Decorator pattern.
Let me go through the concept, and then I will provide an example.
- Define a base abstract class/interface (e.g., Validator) with a constructor that takes a Validator object and an abstract method called validate().
- Create derived classes called Validator1, Validator2, ... that inherit/implement the base class/interface.
- In each derived class (Validator1, Validator2, ...), implement a private method that processes its own logic, and have its validate() method call that.
- Finally, have validate() also call the validate() method of the Validator instance variable that was passed in through the constructor.
The simple code would look like this.
public abstract class Validator
{
    // The next validator in the chain; null means this is the last one.
    protected Validator m_validator;

    public Validator(Validator validator)
    {
        m_validator = validator;
    }

    public abstract void validate();
}
public class Validator1 extends Validator
{
    public Validator1(Validator validator)
    {
        super(validator);
    }

    @Override
    public void validate()
    {
        // Run this validator's own logic first...
        validator1Logic();
        // ...then trigger the next validator in the chain, if there is one.
        if (m_validator != null)
        {
            m_validator.validate();
        }
    }

    private void validator1Logic()
    {
        // Execute this validator's own verification logic here.
    }
}
You can see the idea here. Each derived Validator class has its own logic and then calls the validate() method of whatever Validator object was passed in.
Now you have several derived Validator classes, and you can mix and match them in any combination you wish.
Here is the sample code:
// Runs in the order 3 -> 2 -> 1
Validator myValidator = new Validator3(new Validator2(new Validator1(null)));
myValidator.validate();

// Just 2 and 1
Validator justTwoAndOne = new Validator2(new Validator1(null));
justTwoAndOne.validate();

// Runs in the order 1 -> 2 -> 3
Validator reversedOrder = new Validator1(new Validator2(new Validator3(null)));
reversedOrder.validate();
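To make it a bit more concrete, here is one hypothetical derived validator that follows the same shape as Validator1 above. The class name and the condition it checks (a non-empty response string) are my own illustration, not part of the original example.

public class ResponseNotEmptyValidator extends Validator
{
    private final String m_response;

    public ResponseNotEmptyValidator(String response, Validator next)
    {
        super(next);
        m_response = response;
    }

    @Override
    public void validate()
    {
        // This validator's own check.
        if (m_response == null || m_response.isEmpty())
        {
            throw new AssertionError("Response was empty");
        }
        // Hand off to the next validator in the chain, if any.
        if (m_validator != null)
        {
            m_validator.validate();
        }
    }
}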
Saturday, June 4, 2011
Assumptions in software testing
"Today, I like to talk about assumptions in software testing.
In a daily life, we do make lot of assumptions. Maybe because we're so used to it, or because we've never seen other side of the situation.
In software testing, we should be very careful about these assumptions. PMs, Devs and business owners are making statements with a lot of assumptions when they write or speak about the application. Even if the spec did not mentions about assumption, we, testers, need to figure out what assumptions are made in the project team and understand the impact of that.
Some assumptions are safe, but sometimes assumptions cost you a lot of money to the company.
I was lucky enough to have a Skype coaching with James Bach. And I learned that we, testers, need to be very careful about assumptions. Since I got permission from James Bach to share his teaching, I'll use his example to show the impacts of assumption.
" Let's say you are carrying a calculator. You drop it. Perhaps it is damaged. What might you do to test it?"
Think about this for a moment.
Can you question the statement above?
First, I guess we visualize the situation; at least, I did. Wherever you are, you drop the calculator. The situation seems pretty normal. Our focus seems to be testing the calculator to make sure it is working correctly.
What he was trying to teach here first, I think, is not to jump into testing without understanding the situation clearly.
Can you ask what kind of calculator it is?
How did I drop it? Where did it drop? Where am I? Who am I?
You can come up with more questions to understand the situation. Right?
Then your test cases will be narrowed down, and your testing will be more focused on the correct situation.
Would that save the cost of testing (time, energy, and budget)?
So now, let's talk about assumptions. Are assumptions all that bad in testing, then?
Here is what James Bach said:
Have you ever seen a building that was being built? Builders use scaffolding to put it up. The scaffolding is not the building; it's a tool used to create the building, and at the end they take the scaffolding down.
Assumptions in testing are like scaffolding. At least, a LOT of assumptions are like scaffolding.
Some assumptions are like foundation. Scaffolding is temporary. Foundation is permanent.
The Leaning Tower of Pisa is leaning because the ground is not as solid as they thought it was... that's like a bad assumption. We use assumptions in testing, and that's a good thing, but you must distinguish between safe assumptions and dangerous assumptions. Assumptions are to testers as bees are to a beekeeper... if the beekeeper never wants to be stung, he'd better get rid of all the bees,
but then he'll have no honey. So he keeps the bees, but he's CAREFUL with them.
We make assumptions. But if an assumption is dangerous,
we will either:
1. not make it
2. make it but declare it
3. make it but not for long
Now, your assignment is to make a list of the factors that make an assumption dangerous.
The End
Sunday, May 29, 2011
Applying Risk Analysis in test planning
Today, I'd like to talk about applying risk analysis in software testing.
By the way, "risk analysis" is quite a big term. It is not specific to the software industry, and depending on what you're looking for, you can get very different results.
So, let me define the purpose of conducting risk analysis in software testing. My two-cents definition would be "the activity of understanding the impact of the application malfunctioning." The severity of the impact of each malfunction scenario will then determine the priority and intensity of the testing.
One thing I'd like to call out is that application malfunctioning is not only about failures of the use cases defined in the spec. There are always implicit requirements that are not written in the spec.
But normally, I start with the use cases defined in the spec.
Go through each use case and find out the impact of a malfunction. For example, say you are testing a login page. There are several use cases for this:
- Login with activated user with valid user name and password
- Login with invalid username
- Login with invalid password
- Login with a not-yet-activated user name and password
- Forgot password
- Register link
Now let's define the risk and impact of each malfunction:
- Failed to login with activated user name and password (high impact)
- Login success with an invalid user name (high impact)
- Login success with invalid password (high impact)
- Login success with non-activated username and password (medium impact)
- Forgot password not working (medium impact)
- Register link is broken (high impact)
This is just an example. You could argue for a different level of impact for each malfunction, but you get the idea of the approach. Here is another post on checking business impact and fault likelihood. Let's go into more detail:
- Login failure message not specific enough (medium impact)
- An already logged-in user going to the login page and still being required to log in (medium impact)
- Login process taking too long (medium impact)
- Logout link does not work (high impact)
- Login username and password text fields are too small (low impact)
- Password text field showing the password instead of dots (medium impact)
- and so on...
Now you have three different buckets of test cases (high, medium, and low), and you can decide what to test first and how intensively to test those cases.
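As a small illustration of the bucketing idea (the class, enum, and test case names below are my own, not from any particular project), something like this keeps the high/medium/low grouping explicit and drives the order of execution:

import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class RiskBuckets
{
    enum Impact { HIGH, MEDIUM, LOW }

    // A tiny illustrative holder for a test case and its assessed impact.
    static class TestCase
    {
        final String name;
        final Impact impact;

        TestCase(String name, Impact impact)
        {
            this.name = name;
            this.impact = impact;
        }
    }

    public static void main(String[] args)
    {
        List<TestCase> testCases = new ArrayList<>();
        testCases.add(new TestCase("Login fails with valid, activated credentials", Impact.HIGH));
        testCases.add(new TestCase("Forgot-password flow not working", Impact.MEDIUM));
        testCases.add(new TestCase("Username/password text fields too small", Impact.LOW));

        // Group the test cases into high/medium/low buckets.
        Map<Impact, List<TestCase>> buckets = new EnumMap<>(Impact.class);
        for (TestCase tc : testCases)
        {
            buckets.computeIfAbsent(tc.impact, k -> new ArrayList<>()).add(tc);
        }

        // Plan (or execute) the buckets in priority order: HIGH first, LOW last.
        for (Impact impact : Impact.values())
        {
            for (TestCase tc : buckets.getOrDefault(impact, new ArrayList<>()))
            {
                System.out.println(impact + ": " + tc.name);
            }
        }
    }
}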