Tuesday, December 25, 2012

Why do you like software testing?

Do you like software testing? Why do you like software testing?

This is my first interview question. I've heard various answers to it. Of course, the answer will not greatly affect the hiring decision, but it's been hard to come across candidates who really impressed me with their answer.

You, test engineers: why do you like software testing?
It is the same as asking a musician why he/she likes music, or asking teachers and professors why they like teaching.

One of the greatest values of America, at least I believe, is that the culture here encourages people to do what they love to do. You've seen movies where someone becomes successful by chasing their dream: athlete, musician, writer, chef, teacher, and so on.

Is there any kid who wants to be the best test engineer? Not even a kid. Is there any college student majoring in Computer Science who wants to be the best software test engineer? Even I did not know much about software testing. I started as a junior developer, and if I had not met the software test architect who got me into software testing, I might not be in the software testing field.

One of the most famous test engineers, James Whittaker (I should say he used to be a test engineer), preaches that software testing is a dying profession. He is proud of running his org without test engineers (calling SDETs developers). He has his own reasons, and many of his points make sense.

Anyway, let me explain my reasons for liking testing.

I like software testing because it is NOT straightforward. It's not like you do 100 things for a small application and 10,000 things for a bigger application and you're done with it. And it's not like you're good if you have a certain number of unit tests, functional tests, integration tests, and system tests. It's not about the quantity; it is actually about the quality. There are so many things to consider when we're testing something: the different aspects of the problems the application solves, the technology being used, business expectations, user behavior, priorities, threats, performance, deadlines, the project team, the process, and so on. Finding the best possible testing strategy and execution within this environment is great fun. And every feature, every story (in agile), presents different testing problems.

I like software testing because of its exploratory nature. I like the thinking process of how it (a new feature or project) might not work in the system, and the problems that come as a consequence. I love applying Systems Thinking to this. Many people say going over test cases is boring. But coming up with test cases (written or not), or trying something out of suspicion about the application, is great fun. "I try this because I've seen similar things before. I try this because, from my understanding, this new thing might cause problems in certain features that already exist." Understanding the interactions in the system and exploring the possibilities of unexpected interaction issues is really the beauty of testing. This process starts from spec reviews (sprint planning in agile), and as I get to know more and more, the exploration goes in many interesting directions.

I like software testing because I have many different options for executing tests, and there are tons of tools out there to choose from. I would say this is more about the fun of test execution, not about writing automation code. Coming up with the best test execution strategy is great fun. What kind of tool is useful for data-driven testing? How about model-based testing? What's the best test automation strategy for an agile process? Experience with several different test frameworks and tools. Quick experiments with new tools. What if we just choose to test this feature manually? How can we represent our test results and communicate them to the team? This requires endless learning. Learn to design a good test framework, learn new languages and new tools, learn how to represent test results, learn how to optimize test execution time, learn, learn, and learn.

How about you? Why do you like software testing?






Thursday, October 4, 2012

Systems thinking and software testing

Today I'd like to write a little bit about Systems Thinking and how it relates to software testing.

I got the term Systems Thinking from James Bach. I saw one of his YouTube videos in which he mentioned reading Introduction to Systems Thinking. So I Googled it and searched YouTube for it.




Monday, September 24, 2012

Systems thinking (Software testing 101)

More to come, but for now I want to share the video that I've watched more than 20 times.

Dr. Ackoff's Systems thinking presentation at TED


Monday, May 28, 2012

A good test automation framework

Today, I'd like to say a little about test automation frameworks.

If you look at any job posting for a software test engineer now (company career sites, monster.com, etc.), you will see that almost all of them mention some level of experience with test automation frameworks as a requirement. And I do use a test framework for my testing; it is a very useful tool that resolves several limitations of manual testing.

So I thought it was worthwhile to put my thoughts on what a good test automation framework is in my blog. First, let me scope the area of discussion, since test automation frameworks are a pretty big topic. I'm going to set aside the discussion around the pros and cons of test automation in software testing, since that's not the main point of this article. And I'm talking about your work-related test automation framework, not a generic language-specific test framework. For example, my test automation framework is built on top of JUnit or TestNG. JUnit and TestNG are also test automation frameworks, but they do not contain the specific business logic of my testing.

OK, let's start. What is a test automation framework? Wikipedia says, "A test automation framework is a set of assumptions, concepts and tools that provide support for automated software testing." My simple definition is: a test automation framework is a program that helps testers execute and validate test cases (test scenarios). So it is a program that helps software testing. "Help" means saving time, making the testing process more efficient, gathering useful data easily after test execution, and so on, compared to manual human test execution. But it is a program, which means someone has to design, implement, and maintain it. This brings up a very important ground rule of test automation: the benefit of using test automation should always be greater than the effort of designing, implementing, and maintaining it. Otherwise there is no point in using test automation, besides looking cool. :)

I believe a good test automation framework maximizes the benefits of test automation and minimizes the effort of implementing and maintaining it. So let me address the characteristics of a good test automation framework.
  1. Have a clear strategy, and get everyone on the same page. This is the most important and most effective way to maximize the benefit of using a test automation framework. There are many different ways to design and implement test automation frameworks, and it is REALLY hard to come up with one solution that will satisfy every test engineer. This means that when a test architect or experienced SDET comes up with a framework, he/she should explain the strategy of the test automation and help everyone on the team understand how it works and how to use it. Once everyone is on the same page (knows how it works and how to use it), implementing test cases (scenarios) is much faster and maintaining the framework is much easier. By following a shared rule or instruction, test writers are less distracted, and they can understand each other's code very easily. And when a situation arises where the current approach does not fit, it is much easier to notice it and notify the test automation designer (architect or SDET) to resolve the issue with a proper solution.
  2. Favor object composition over class hierarchy. This is one piece of advice from the Gang of Four design patterns book. A common mistake with test frameworks is that the framework base class provides too much functionality and becomes a giant class (a few thousand lines of code). This approach works fine at the beginning, but it does not scale well: the code base becomes very complex as the number of tests grows to handle various test conditions, error scenarios, and exceptional cases. One key characteristic of a good test framework is that it gives flexibility to test writers. Test writers should be able to choose the classes or modules they need to execute one test case. Just mix and match for their tests. (See the sketch after this list.)
  3. Separate test execution from validation. This is about validation code being independent from test execution. It also means the validation code by itself does not contain any context about test cases. What this allows is that you can package one logical chunk of validation into one class. It takes some parameters as the source of validation; you call its validate() method, and it simply returns true or false. No decision-making process here. For example: "if the error-expecting flag is set, do this; in some other case, do that." NO. JUST DO THE VALIDATION. What does this allow test writers? You gather the sources of validation during test execution, then pick and construct the validation instances you need from a pool of validation classes. DO NOT let the validation code become a workflow. LET IT BE a collection of appropriate validations for each test case. This promotes great reusability and less complex code. (Again, see the sketch after this list.)
  4. Build the test framework to support the way test execution steps are defined. The ideal framework, in my opinion, transforms defined (written or thought-out) test execution steps into test automation code with a 1:1 mapping. This seems pretty easy to do, but it requires thoughtful design to expose natural-language-like method names while encapsulating well-structured implementation details. One way to think about this is that you're providing a framework for test writers who are not as technical as you. This promotes great readability of the code as well. (The sketch after this list shows method names written this way.)
  5. Be cautious about refactoring code. Refactoring is useful work. It's a good process that turns spaghetti code into well-structured code. But oftentimes, test automation writers get confused by this concept and don't see the difference between test automation code and development code. If you look closely at test automation code, there are two big parts: one is the test execution steps, and the other is the test automation framework that supports the test execution step code. Refactoring should be applied largely to the latter. Here is a common problem I've seen. At first, the test cases a tester is implementing are very similar apart from some inputs, config, or environment, so he refactors the common piece into one method. Looks good. In the next iteration or release, he adds more test cases. He notices a little variation in the execution steps or inputs, so he adds some conditionals (if/switch statements) to the common method. As more iterations or releases come by, the common method becomes more and more complex. Complexity is not the only problem. His thought process for executing test steps is now restricted by the existing code base, and he is trying to fit his thinking onto that code path. It is also very hard for other testers to read and follow the logic. Refactoring test execution step code can be very dangerous: it restricts testers' thought processes, makes the test automation code less flexible, and hurts readability so badly that other test writers struggle to take it over. I admit this is a hard process, because sometimes it is hard to draw a line between test step code and supporting code. The key is to keep the test execution steps flexible and the supporting code well structured.
  6. Pinpointing the failure is more important than handling errors gracefully. Do not be scared of a null pointer exception. Automation code is not production code. I don't think this needs a lot of explanation.
  7. Minimize bootstrap code. Setup methods like @BeforeClass in JUnit, or any prep code that runs before the test run, are useful. But they also prevent your framework from being flexible, and they make the framework very heavy. Sometimes you just want to run a simple validation that was used in the test automation for another purpose. Well-modularized code does not need much bootstrap code. You should be able to run your test from a main method if your automation framework is well modularized, as the sketch below does.
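To make items 2, 3, 4, and 7 concrete, here is a minimal sketch in Java. Every name in it (Validator, StatusCodeValidator, OrderClient, placeOrder) is hypothetical, invented purely for illustration; this is a sketch of the idea on top of plain Java, not code from a real framework.

    // Minimal sketch -- all names are hypothetical, for illustration only.

    // (Item 3) A validator holds no test-case context; it just checks its input.
    interface Validator<T> {
        boolean validate(T actual);
    }

    class StatusCodeValidator implements Validator<Integer> {
        private final int expected;
        StatusCodeValidator(int expected) { this.expected = expected; }
        public boolean validate(Integer actual) { return actual == expected; }
    }

    // (Item 2) A small, focused module that a test writer can mix and match,
    // instead of inheriting everything from one giant base class.
    class OrderClient {
        // (Item 4) The method name maps 1:1 to the written step "place an order".
        int placeOrder(String item) {
            // Stub: a real client would call the system under test here
            // and return the actual status code.
            return 200;
        }
    }

    public class PlaceOrderTest {
        public static void main(String[] args) {
            // (Item 7) Minimal bootstrap: a well-modularized test runs from main.
            OrderClient client = new OrderClient();
            int status = client.placeOrder("book");

            // Pick the validations this test case needs from the pool.
            Validator<Integer> expectOk = new StatusCodeValidator(200);
            System.out.println(expectOk.validate(status) ? "PASS" : "FAIL");
        }
    }

The point of the composition is that the next test case can reuse StatusCodeValidator with a different client, or reuse OrderClient with a different validator, without either one knowing about the other.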
If you want more fundamentals around test automation, check this one: Thoughts around test automation framework
Here is part 2, with a more detailed explanation.



Wednesday, May 23, 2012

How to innovate in software testing team


  1. Eagerness to be better. The story of the old couple who run a hot-dog stand in the marketplace.
  2. Looking for solutions or ideas against the ultimate goal, rather than improvements to the current process. The broccoli and cupcake story.
  3. Steal like an artist. Borrow approaches from other fields. Car salesmen. Writers.

Friday, March 23, 2012

Why do developers create bugs?


Why do developers create bugs? I think this is an interesting question to ask ourselves (as testers). Of course, software bugs come from everywhere, and it is impossible to list all the causes of software bugs. However, if we can identify the root causes of the majority of important bugs, we can proactively conduct our testing from the beginning to the end of the SDLC. And our testing will be much more efficient and effective.

The common answer I get is something like "Developers are humans, and humans make mistakes." I agree that there are bugs created by developers' coding mistakes: "Oh, I put that if-statement outside of the for-loop. It should have been inside the for-loop." "Oh, my conditional should have been 'greater than,' not 'greater than or equal to,'" and so on. However, I don't think this kind of coding mistake makes up the majority of the bugs we see in a project. I don't have accurate statistics on the percentage of coding-mistake bugs, but from my experience it's about 10%. (I guess the number 10% by itself does not mean much; my point is that coding-mistake bugs are a very small percentage of the total.)
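To make the second slip concrete, here is a hypothetical example (the discount rule, class, and method are all invented for illustration):

    // Hypothetical bug: the discount should apply to orders OVER $100,
    // but the developer typed >= instead of >.
    public class DiscountBug {
        static boolean qualifiesForDiscount(int orderTotal) {
            return orderTotal >= 100; // should be: orderTotal > 100
        }

        public static void main(String[] args) {
            // Prints true, but the spec says an order of exactly $100
            // should NOT get the discount.
            System.out.println(qualifiesForDiscount(100));
        }
    }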

Before we discuss the other 90% of bugs, I'd like to mention why only 10% of bugs are coding-mistake bugs. This percentage would have been higher if we were talking about software bugs 20 (or 30) years ago: the time when we used low-level programming languages like C or COBOL to develop applications, when fewer lines of code for the same logic were preferred because of the high cost of memory, and when unit testing was impossible or very hard to do. Now, it's a different story. High-level languages are commonly used in most companies. Memory is dirt cheap. When was the last time you filed a bug on a memory leak or a deadlock? Highly developed and sophisticated IDEs help developers reduce coding errors. And many software companies recommend or require developers to write unit tests.

Now, let's talk about the majority of the bugs. In my opinion, 90% (the majority) of bugs come from the nature of the software development process. What do I mean by that? If we abstract or simplify the concept, we get something like this: software development is the human activity of transforming ideas into a software product or a feature. This transformation is a complicated and multidimensional process, considering the complexity and scale of current software products and their business value. It is not simply getting the ideas to work in code; it includes making the code good enough to deliver to customers. Let's see what makes developers create bugs.

1. Assumptions
In my opinion, this is one of the biggest reasons why developers create bugs. And it is a very hard one to catch, because humans always think and speak with some assumptions. The calculator testing question is a good example. ("You're carrying a calculator. You dropped it. It might be damaged. What would you do to test it?") Many people (even testers) answer with the assumption that the calculator is a regular calculator (in their mind) and that it fell from a hand or a pocket onto some hard surface. (I did not mention any of this in my calculator question.) These kinds of assumptions are made when business people talk about software, when project managers write specs, and when developers write code or discuss technical details. And some of the assumptions we make are dangerous ones. If we do not recognize the dangerous assumptions we make, they will bite the entire project team badly later.

2. Implicit requirement
Implicit requirements are requirements that do not appear in the requirements document or spec but are nonetheless important. Here is an example requirement: this medical device should operate properly between 100 and 150 AV. Let's say the developers make it work in that range. Good. Are we done? I would say no. There are implicit requirements we need to check. What happens if the voltage goes out of the normal range, dropping to 99, 98, 97 AV from 100, or rising to 151, 152, 153 AV from 150? Will this medical device fail in a way that harms the patient, or will it fail gracefully? Do we need an alarm when the device receives voltage near 105 or 145 AV? Implicit requirements are not outside the requirements. Lacking business perspective and focusing merely on technical details often cause developers to be blind to implicit requirements.
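Here is a sketch of how those implicit boundary checks might look as tests. I'm assuming JUnit 4 and inventing a MedicalDevice stub just to make the sketch self-contained; a real device API would of course look different:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical stand-in for the real device, just to make the tests runnable.
    class MedicalDevice {
        private boolean safeMode, alarming;
        void supplyVoltage(int av) {
            if (av < 100 || av > 150) safeMode = true;        // out of spec: fail safe
            else if (av <= 105 || av >= 145) alarming = true; // near the edge: warn
        }
        boolean isInSafeMode() { return safeMode; }
        boolean isAlarming()   { return alarming; }
    }

    public class VoltageBoundaryTest {
        @Test
        public void failsSafeJustBelowRange() {
            MedicalDevice device = new MedicalDevice();
            device.supplyVoltage(99); // one step below the explicit minimum
            assertTrue("must fail gracefully, not harm the patient",
                       device.isInSafeMode());
        }

        @Test
        public void alarmsNearUpperBoundary() {
            MedicalDevice device = new MedicalDevice();
            device.supplyVoltage(145); // approaching the explicit maximum
            assertTrue(device.isAlarming());
        }
    }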

3. Poor design
What I mean by poor design is not high-level architectural design. Generally speaking, software architects put a lot of effort into coming up with a good design, and normally the architectural design is pretty sound (reliability, modularity, extensibility, etc.). What I'm talking about here is the design that individual developers come up with for their own features. This poor design typically makes existing features very fragile as they add more code. A good way to uncover poor design is to ask devs to explain, using diagrams, how each class or component interacts with the others to complete a workflow. You can easily catch their design flaws (if there are any), and it is a good way to do white-box testing without looking at the code.

4. Working on someone else’s code
This one has to do with emotion. Adding a new feature or fixing bugs in existing code that was written by someone else is not a very fun business. A different coding style, a different thought process, not enough comments, different class/method naming, and long debugging times make the dev feel the existing code is crappy. If it is old legacy code, it gets even worse. When this negative emotion overwhelms the dev, mistakes happen. It's good information for testers to know whether the dev is working on his/her own code or someone else's.

There are more, but I will stop here. The point I'm trying to make is that software testing should be proactive, not reactive. Verification of use cases is not the majority of testing activity (it targets only the ~10% of coding-mistake bugs), and verification only happens after dev hands the code over to the test team. Let's understand how much the software development process has evolved. Let's understand what kind of software testing problems we're dealing with NOW. Let's not be stuck in an old testing mindset. Testing should evolve as the development process evolves.