Monday, August 22, 2011

CAST 2011 - context driven testing (Part 1)

OK... this is part 1 of my notes on context-driven testing from CAST 2011.

Now I'm going to write about CAST 2011's main theme, context-driven testing.

For me, context-driven testing is sort of a new term. I guess I have not been in the software industry long enough, or have not done much research on software testing. Anyway, this is a term that some people (James Bach, Cem Kaner, and others), who are really serious about software testing, came up with to describe a fundamentally sound approach to software testing. I'm still trying to fully understand their thinking behind the term (I don't want to misrepresent their deep thoughts to my blog readers). But here is my understanding so far.

Basically, your testing (testing strategy, process, execution, assessment, etc.) has to be based on the context of the project you're currently working on. So if I ask them, "What is the best way to test login functionality?", they will say, "Well, it depends. Give me more context."

Hmm... This is interesting. Login functionality is pretty much the same in any software. You're asked to type a username and password, and those credentials are stored somewhere in some form. So why does the context matter so much if the functionality is the same anyway?

Let me just switch gears here (you'll see why later). One of the main ideas of context-driven testing is pushing back against "best practices." More specifically, context-driven testing is against adopting best practices without considering the context (I personally think a best practice has its own informative value). Let me ask you this. Would you do the same testing for a web application and a desktop application? Would you do the same testing for a 6-month project and a 1-month project? Would you do the same testing for V1.0 and V1.1? Would you do the same testing for features whose business values are different? Would you do the same testing if the data storage mechanism changed from a relational DB to a distributed file system? Would you do the same testing for search (Google or Bing) type-ahead and for search result ordering?

I can go on and on; I hope you get the idea. There are so many variables to consider when you're testing. Would one best practice fit all of your variables? I doubt it. Still, I think best practices have value. If you search for "SOA service testing best practices", you will find many articles online. Their authors spent quite a lot of time analyzing several important parts of SOA services. Service Oriented Architecture has its own benefits and drawbacks, and it is really useful to understand why those authors wrote their best practices. So the value stays there, just like in any other article explaining what SOA is, SOA security issues, or SOA technologies. What matters is this: you consider your testing situation, environment, tools, business expectations, business value, testing process, testing resources, and testing budget, and you come up with your own testing using the information out there.

Now, coming back to login functionality. Would testing Gmail login and Best Buy login be different? Yes, because the business expectations are different. Would Unix login and web application login be different? Yes, because the environment the application sits on is different. Would testing login in V1.1 and V1.2 be different? Yes, V1.2 might have new features or improvements, and many of the existing ones are already covered by V1.1 testing. Would testing login backed by a SQL DB and by Hadoop be different? Yes, because the underlying data management is different. Would testing password strength and multi-step login for a credit-score-checking application and for my father's blog be different? Yes, because the security levels required for those two are different.
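To make that concrete, here is a minimal, hypothetical sketch in Python. It is my own illustration, not anything from the conference or from the context-driven testing authors: the LoginContext fields and the pick_login_checks function are made-up names. The point is just that the same login feature produces a different list of test ideas once you plug in the context.

```python
# A hypothetical sketch: the same "login" feature yields different test ideas
# once you feed in the project context. Field names are made-up examples.
from dataclasses import dataclass

@dataclass
class LoginContext:
    platform: str          # e.g. "web", "desktop", "unix"
    security_level: str    # e.g. "low", "high"
    data_backend: str      # e.g. "sql", "hadoop"
    is_new_version: bool   # True if login changed since the last release

def pick_login_checks(ctx: LoginContext) -> list[str]:
    """Choose login test ideas from context instead of a fixed 'best practice' list."""
    checks = ["valid credentials log the user in",
              "invalid credentials are rejected"]
    if ctx.platform == "web":
        checks.append("session cookie handling across browsers")
    if ctx.security_level == "high":
        checks += ["password strength rules enforced",
                   "lockout after repeated failed attempts"]
    if ctx.data_backend == "hadoop":
        checks.append("credential lookup under distributed-storage latency")
    if ctx.is_new_version:
        checks.append("regression on login paths that changed since the previous version")
    return checks

if __name__ == "__main__":
    credit_app = LoginContext("web", "high", "sql", is_new_version=True)
    personal_blog = LoginContext("web", "low", "sql", is_new_version=False)
    print(pick_login_checks(credit_app))     # longer, security-heavy list
    print(pick_login_checks(personal_blog))  # shorter list for a low-risk site
```

Two logins, two very different checklists, simply because the contexts differ.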

I'll write more on context-driven testing. The second post will be about why test-case-based, scripted testing is not context-driven testing.
