Monday, August 22, 2011
CAST 2011 - context driven testing, part 1
OK, this is part 1 of my writing on the main theme of CAST 2011: context-driven testing.
For me, context-driven testing is a fairly new term. I guess I have not been in the software industry long enough, or have not done much research on software testing. Anyway, it is a term that some people who are really serious about software testing (James Bach, Cem Kaner, and others) came up with to describe a fundamentally sound approach to testing. I'm still trying to fully understand their thinking (I don't want to misrepresent their ideas to my blog readers), but here is my understanding so far.
Basically, your testing (test strategy, process, execution, assessment, and so on) has to be based on the context of the project you're working on. So if I asked them, "What is the best way to test login functionality?", they would say, "Well, it depends. Give me more context."
Hmm, this is interesting. Login functionality is pretty much the same in any software: you're asked to type a username and password, and those credentials are stored somewhere in some form. So why does context matter so much if the functionality is the same anyway?
Let me switch gears here (you will see why later). One of the main ideas of context-driven testing is pushing back against "best practices." More specifically, context-driven testing is against adopting best practices without considering the context (I personally think a best practice has informative value of its own). Let me ask you this. Would you do the same testing for a web application and a desktop application? For a six-month project and a one-month project? For V1.0 and V1.1? For two features with different business value? If the data storage mechanism changed from a relational DB to a distributed file system? For type-ahead and for search-result ordering in a search engine like Google or Bing?
I could go on and on, but I hope you get the idea. There are so many variables to consider when you test. Would one best practice fit all of those variables? I doubt it. Still, I think best practices have value. If you search for "SOA service testing best practice", you will find many articles online. Their authors spent quite a bit of time analyzing the important parts of SOA services, and service-oriented architecture has its own benefits and drawbacks. It is genuinely useful to understand why those authors wrote what they wrote, and that value remains, just like any other article explaining what SOA is, SOA security issues, or SOA technologies. What matters is this: consider your testing situation, environment, tools, business expectations, business value, testing process, testing resources, and testing budget, and come up with your own testing using the information out there.
Now, coming back to login functionality. Would testing Gmail login and Best Buy login be different? Yes, because the business expectations are different. Would Unix login and web application login be different? Yes, because the environment the application sits on is different. Would testing login V1.1 and V1.2 be different? Yes; V1.2 might have new features or improvements, and much of the rest may already be covered by V1.1. Would testing a login backed by a SQL DB differ from one backed by Hadoop? Yes, because the underlying data management is different. Would testing password strength and multi-step login be different for a credit-score-checking application and my father's blog? Yes, because the security levels required for the two are different.
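To make this concrete, here is a minimal sketch in Python with pytest. Everything in it is my own hypothetical illustration (the POLICIES table, the password_acceptable helper, and the two contexts are all made up), showing how the "same" password check carries different expectations in different contexts:

import re
import pytest

POLICIES = {
    # A personal blog: a short, simple password is acceptable.
    "blog": {"min_length": 6, "require_symbol": False},
    # A credit-score app: stricter rules are a business requirement.
    "credit_score": {"min_length": 12, "require_symbol": True},
}

def password_acceptable(password: str, policy: dict) -> bool:
    """Check a password against a context-specific policy."""
    if len(password) < policy["min_length"]:
        return False
    if policy["require_symbol"] and not re.search(r"[^A-Za-z0-9]", password):
        return False
    return True

@pytest.mark.parametrize("context,password,expected", [
    ("blog", "hunter2x", True),           # fine for a blog...
    ("credit_score", "hunter2x", False),  # ...rejected where the stakes are higher
    ("credit_score", "correct-horse-battery", True),
])
def test_password_policy_depends_on_context(context, password, expected):
    assert password_acceptable(password, POLICIES[context]) == expected

The test logic is identical; only the context table changes, and with it the expected results.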
I'll write more on context-driven testing. The second post will be about why test-case-based, scripted testing is not context-driven testing.
Software testing, just like any other discipline, requires deep understanding, practice, and learning to become an expert. Let's see how far this journey goes.
Wednesday, August 17, 2011
CAST 2011 - personal lessons for testers
I still have this excited feeling even though CAST 2011 is over. What a conference!
My sort of preparation for CAST 2011 was "You don't know what you don't know." I wanted to talk to as many test engineers as possible and learn from them. I also wanted to absorb what the speakers were trying to convey in their sessions.
If I had to summarize the conference in one word, it would be PASSION. I could feel the passion for software testing from everyone I talked to and everyone who spoke during the sessions. Everyone believed that software testing is a unique craft, and everyone was proud of it. I was refreshed by being surrounded by people who simply enjoy talking and arguing about software testing. I thought of myself as a pretty passionate software tester, but I was nothing compared to them.
Here are some personal lessons I took away from CAST 2011.
1. Be a critical thinker. This is a crucial part of being a good software test engineer. In any phase of the software development life cycle, a tester should ask herself what she is doing in terms of testing. Testers should not follow a process without thinking about it. Why am I executing these test cases? Why is this testing process the most efficient and reliable? Why does this testing process take so long? Do not just put up with the situation or the process; try to understand why testing has to be done this way or that way. "Being a critical thinker" reminds me of the early years of my software testing career. I did what I was told. I did not speak up even when something did not make sense to me. I preferred compromise to discussion or argument, and I never challenged senior testers about testing process or strategy. Now I hate to see people follow a process, or just work, without thinking about why and how.
2. Credibility. Credibility is everything for a tester. It is something we testers earn from the project team. I think James Bach mentioned that when a tester is credible, people do not care about the metrics. I heard that a lot of companies still check metrics like the number of test cases or the number of bugs found to measure the progress of testing. If a tester is considered very credible, would metrics like bug counts or test case counts become less important? If I say, "I'm confident about this application" (whatever I'm testing), would the project team around me take that seriously? Would other project team members want me on the project because my testing work is credible? I have personal opinions about our devs and PMs, and they no doubt have opinions about the testers in our group. Yes, credibility is everything!
3. People over process. This lesson goes together with critical thinking. Interestingly enough, people write test plans, document test cases, enter each test case into a test case management tool, and so on. Even in an agile process, people come to daily meetings, write story cards, move sticky notes, and attend retrospectives. How many of them ask why they're doing what they are doing? Introducing new ideas or improvements to the current process is quite a challenge when your team has been finishing projects successfully. I've worked on multiple projects, sometimes several in parallel, and I believe I never asked myself why I was following the process. It's not a question of whether the process is right or wrong; the question is, "Why do we have this process for our testing?" Even for test automation: why have I always believed that a high automation percentage is the way to go? People should control the process, not the other way around.
4. Self-learning and improving. What I felt at CAST 2011 was that self-learning is not about job hunting or job security. It is a genuine eagerness to learn in order to be a better test engineer. Keeping up with technologies and skills might help me be a good tester, but in my mind, self-learning should come from a pure desire to be a better test engineer. I noticed James Bach, Michael Bolton, and others coming up with odd buzzwords for their theories and strategies on software testing. What really touched me was not the buzzwords but their eagerness to learn and to improve their thinking and practice. I totally got this. There is no limit to becoming a better tester.
5. Care about your project. Care about your application. I don't think I talked about this with anybody during the conference; it just hit me. Caring is a kind of emotional attachment. I call an application I have tested my baby. Even if I did not implement the application, I tested it. This emotional attachment helps you test better: small, strange application behaviors catch your attention. Why? Because you care. This one is just my own reflection from the conference restroom.
OK, I'll post the context-driven testing lessons in the next blog post.
Thursday, August 4, 2011
Context-driven testing + risk-based testing
Today, I'd like to talk about my favorite testing strategies.
If you search for "testing strategy" online, you will find a lot of articles. You'll get confused by the pros and cons of each strategy, or simply exhausted by how many different kinds there are.
The two testing strategies I find most useful are context-driven testing and risk-based testing, and they are somewhat in conflict. Context-driven testing says there is no such thing as a best practice; testing is guided by what kind of application or feature you're testing. Risk-based testing, on the other hand, says your testing should be driven by one important factor, RISK, which implicitly says the best practice is to prioritize by risk.
Interestingly, I agree with both ideas. My interpretation of the two strategies is "how" and "what": context-driven testing helps me come up with strategies for HOW to test, and risk-based testing helps me decide WHAT to test.
Let me explain both strategies first. (Note: these are my own definitions, so they might differ from other people's.)
1. Context-driven testing
This strategy is all about context; it is the "HOW" part of my interpretation. As you might guess, different applications should be tested differently because each has its own context. Strategies for testing a desktop application, a web application, or services (in an SOA) should be different. How the application is used, what data source it uses, how the components interact, and what is available for testing are all different. So your testing strategy should be based on what is available, what's important, which areas tend to break, and so on. But that is only the application-type-specific part.
Even within the same application, there are areas or features that are quite different from each other. Take web application testing: testing login functionality and testing an AJAX feature are different, and UI testing and component-level testing should be different too. Again, it is all about context, even within the same kind of application. So it really makes sense to build your test strategy from the context of what you're testing. I don't think any single strategy covers every context a tester faces. Testing DB interaction, component interaction, usability, performance, localization, a complex algorithm, or a large data set: each should be done differently based on the context, which tells you what's important and what's available for testing.
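As a small illustration of the "HOW" changing within one application, here is a hypothetical Python sketch (submit_login and FakeSearchBox are stand-ins I invented, not real APIs). A synchronous login check can be asserted directly, while an asynchronous type-ahead result forces a wait-and-retry style of validation:

import time

def submit_login(username, password):
    # Stand-in for a real login call; returns a definite answer immediately.
    return username == "alice" and password == "s3cret!"

class FakeSearchBox:
    """Stand-in for an AJAX type-ahead widget whose results arrive later."""
    def __init__(self):
        self._ready_at = time.time() + 0.2
    @property
    def suggestions(self):
        return ["apple", "apricot"] if time.time() >= self._ready_at else []

def test_login_is_synchronous():
    # The result is available at once, so the test asserts directly.
    assert submit_login("alice", "s3cret!")

def test_typeahead_needs_polling():
    # The AJAX result arrives later, so the test must poll and retry:
    # a different validation style for the same application.
    box = FakeSearchBox()
    deadline = time.time() + 2.0
    while time.time() < deadline and not box.suggestions:
        time.sleep(0.05)
    assert "apple" in box.suggestions

Same application, two features, two quite different ways of validating.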
2. Risk-based testing
This strategy is all about priority; it is the "WHAT" part of my interpretation. In any kind of testing, we normally come up with test cases. Whether you write them in your test case management system or put them on sticky notes on your Kanban or Scrum board, there is a list of things to be tested. Risk-based testing gives each test case a priority, and there are two angles from which to find it. One is the business perspective. When the PM produces a spec that explains all the scenarios and use cases, you can imagine each scenario or use case failing and assess the business impact of that failure; this gives you a good sense of the risk behind each test case. The other is the dev perspective. Discuss with the developers how the application is designed and written: have them draw the circles and arrows of the application's architecture, then ask, "What happens if this arrow is broken? What happens if the DB connection times out?" and so forth. A good discussion with the devs surfaces all sorts of likely faults in the system, and based on that you can prioritize your test cases.
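As a rough sketch of this idea (my own toy example, not a formal risk model; the cases and scores are invented), you could fold both perspectives into one number, risk = business impact x likelihood of failure, and sort the test list by it:

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    impact: int      # business impact if this fails (1 = low, 5 = high), from the PM discussion
    likelihood: int  # how likely this area is to break (1 = low, 5 = high), from the dev discussion

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood

cases = [
    TestCase("checkout with expired credit card", impact=5, likelihood=3),
    TestCase("DB connection timeout during login", impact=4, likelihood=4),
    TestCase("tooltip text on the help page", impact=1, likelihood=2),
]

# Highest-risk cases go to the top of the execution order.
for tc in sorted(cases, key=lambda c: c.risk, reverse=True):
    print(f"risk={tc.risk:>2}  {tc.name}")

In a real project the scores would come out of those PM and dev conversations, not out of thin air, but the ordering idea stays the same.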
Now, if you use both strategies, you know WHAT to test based on priority and HOW to execute and validate each test case. How cool is that?