Monday, January 1, 2018

No more Development testing or QA testing. Just testing

Well.. It is 2018. It's been a while since I wrote something here. I'm still working as an SDET. There has been a lot of progress in software development and software testing. Let me go over the changes I've noticed over the years since my last post (around 2014).

Even though many companies adopt different development processes and strategies, I still believe testing itself is much needed in the development process (code, test, and deliver). I still see the value of unit tests, integration tests, e2e tests, load/stress tests, etc. Agile development has become the norm, and on top of that many teams want Continuous Integration/Deployment in their development strategy. And there are many success stories about not having a QA organization as part of the software development process.

I think the important question to ask is NOT who should do which kind of testing, but rather which testing mechanism is placed in which part of the development process. This seems obvious, but I have not seen many cases where a development team leader goes through the entire development process with the team and decides what kinds of tests are required where.

Here is what I think the team should do.

First, the team (engineering leadership) should decide what kind of development process or strategy they are going to use, including tests. This covers everything from the code being developed to the code being deployed to production. All the necessary testing should be identified and agreed upon.

Second, the team needs to confirm that all the test infrastructure is available for them to write and execute tests. Check whether there is a way for developers to test all their new features and bug fixes confidently. Check whether proper regression suites are defined and executed properly.

Third, the team should follow the process and do everything defined for the development and testing efforts. At the end of each cycle (whichever cadence the team decides), evaluate and make improvements if necessary.

Now, who gets to do what is up to the team. If a QA team exists, you can allocate the work that the QA team can do. But the main point is that the development process has to come first, and then the dev team and QA team can take on the necessary work to complete it.

Testing is important. All the necessary testing needs to be done before going into production. It is just testing, not dev-team testing or QA-team testing.

What about finding bugs? What about regressions caused by bug fixes or new features? I'll cover that in the next blog post. But the development process and the testing associated with it need to be defined and applied first.


Saturday, May 10, 2014

Thoughts around test automation framework

Today, I'd like to share my thoughts around test automation.

Test automation is something that I'm passionate about. I want to keep learning and getting better at designing, implementing, and maintaining test automation. Throughout my career, I've seen good, successful test automation, and yes, I've seen some bad examples as well. And yes, I've made lots of mistakes and bad decisions. I also have some success stories. Here are some things I consider worth noting.

The goal of test automation is the value added to the project team, not test automation itself. At the beginning of my career as an SDET, I was very passionate about writing a good automation framework. I read many books about coding and watched lots of YouTube videos about coding practices and design. I was just crazy about being good at writing code. I was obsessed with design patterns. I felt like I knew what good test automation should be. I challenged senior SDETs on the existing automation framework and loved having serious design discussions with test architects at the company.

As I gained more experience, I started to realize that it's not just about writing well-designed, maintainable, scalable, beautiful code. It's actually about understanding the role of test automation in a project or a company and providing the maximum value from it. I started to consider various things when designing a test automation framework: project timeline, short-term versus long-term solutions, the coding skills of other SDETs, area of focus, what the dev team needs, the context of the application or system, testability, lab test infrastructure, and so on. Sometimes I had to build automation from scratch in a couple of weeks, covering just the priority-1 test cases. Sometimes I had to modify an existing test automation framework to make it easy for inexperienced SDETs, or even manual testers, to use. That work was indeed the right choice for those situations. It's possible to write an easy-to-use automation framework. It's possible to build test automation as a short-term solution and transform it into a long-term one without major design changes. The true masters of test automation understand how to write well-designed, maintainable, and scalable code. But that's just the foundation. Their adaptability and execution can bring maximum value to the company in any given situation.

Now I'm getting some feedback on my test automation framework from other SDETs. "Jae-Jin, I believe we should never use hard-coded values in our automation," or "Jae-Jin, why don't you refactor that repeated code?" "We should use XML for all inputs." Well, I can imagine how those Sr. SDETs felt when I challenged them. My response is, "Well, let's discuss that..." Fun.. Fun.. Fun

Don’t forget the “framework” part of “test automation framework.” So what is a framework? To me, a framework is an agreement: an agreement on a certain development style or convention the team will use to implement the software. Of course, this agreement is usually introduced by architects or more experienced engineers. Then what are the benefits of having a framework? Obviously, the engineers can be on the same page when it comes to implementing a feature. It helps communication among engineers, for example in code reviews. And it’s hard for newcomers to make mistakes, since the framework defines what code goes where. The most important outcome of using a framework, to me, is that as more features (for dev) or test cases (for test) come in, the volume of the code will increase, but the complexity of the code will remain the same. This is the beauty of using a framework.

A good example of a development framework is MVC. When developers create a new feature, they follow the MVC convention and put code in the right place: the Model for data representation, the View for the presentation layer, and the Controller for orchestration and the actual business logic. So when you design a test automation framework, you should have some sort of agreement that everyone understands. If your test automation doesn’t have this “framework” nature, you might not have a test automation framework. You just have test scripts. Take a look at your test automation. Is it a true framework?

Writing a test automation framework requires discipline. Why? It's a bit different from writing production code. Think about it: for production code, there are dedicated testers and whole test teams testing that code. But there is no other test team testing the test automation framework code. We're writing code to test other code, which means the test automation has to be even more correct. How can we achieve this without someone testing the test automation code? This is a big challenge for SDETs out there.
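One cheap way to cope with "nobody tests the test code" is to give non-trivial test helpers their own quick self-checks. A hypothetical sketch (the helper and its log format are made up for illustration):

```python
import re


def extract_order_ids(log_text):
    """Test helper (hypothetical): pull order IDs out of a service log."""
    return re.findall(r"order_id=(\d+)", log_text)


# A self-check for the helper itself -- run it alongside the real suite,
# so a broken helper fails loudly instead of silently passing tests.
def test_extract_order_ids():
    sample = "ts=1 order_id=42 status=ok\nts=2 order_id=7 status=fail"
    assert extract_order_ids(sample) == ["42", "7"]
```

A few minutes spent on self-checks like this buys real confidence that a green test run means what we think it means.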

OK, then what's the advantage? Generally speaking, test automation code does not face expectations as high as production code's in performance, algorithmic efficiency (big O), or memory utilization. Test automation is not always required to handle exceptions or errors gracefully. And normally each test case has a specific intent and a specific expected outcome, so the test execution code can sometimes ignore things that are out of scope for that particular test case. We should take advantage of these facts when writing a test automation framework.

Here are my disciplines. I discipline myself not to be fancy with my test automation code. I know I passed all those crazy interview questions to join the company. I know I am capable of writing complicated code with very efficient algorithms. But when I write test automation code, I discipline myself not to be fancy and to keep it simple. For example, let's say I can implement an n-squared solution and an n-log-n solution for a given problem. If the n-squared solution is more straightforward to implement and easier to understand, I will go for the n-squared solution. Yes, this is really hard for me too. But it is important that I write less error-prone code.
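As a small illustration of that trade-off (my own made-up example, not from any real suite): suppose a test needs to check a result set for duplicates. The simple O(n^2) version is the one I would pick for test code.

```python
def has_duplicates_simple(items):
    """O(n^2), but the intent is obvious at a glance -- which matters
    more in test code than raw speed."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


# The O(n log n) alternative (sort a copy, then compare neighbors) is
# faster on paper, but for the handful of records a test case handles,
# clarity wins and there is less room for an off-by-one mistake.
```

When n is a few dozen test records, the "slow" version is instant anyway, and anyone reviewing the test can verify it is correct in seconds.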

I discipline myself to minimize the use of conditionals (if, switch statements). Again, this reduces the complexity of the code. I've seen a crazy test automation method that took 7-8 booleans, enums, and objects as parameters. Oh man, the complexity of that code got worse and worse. I would rather have two separate methods than one method with boolean parameters. It looks fine at the beginning, but as time goes by, boolean parameters become enum parameters, and one parameter becomes 2-3 additional parameters. There are some cases where it is necessary, but in most cases I would rather choose redundancy over complexity. And if you try to write code without conditionals, it naturally becomes more object-oriented.
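A hypothetical before-and-after sketch of that discipline (the user-creation helper is invented for illustration):

```python
# Before: one method whose behavior forks on flags. Every new scenario
# adds another parameter and another branch inside.
def create_user(name, is_admin=False, skip_email=False, locale=None):
    ...  # grows a new if-branch per flag as scenarios accumulate


# After: small, single-purpose methods. Slightly redundant, but each
# one reads top-to-bottom with no branching to trace.
def create_basic_user(name):
    return {"name": name, "role": "user", "email_sent": True}


def create_admin_without_email(name):
    return {"name": name, "role": "admin", "email_sent": False}
```

The redundant version costs a few extra lines today; the flag version costs a combinatorial explosion of paths to reason about tomorrow.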

I discipline myself to be open-minded. I let the test cases drive the automation structure, not my preference. I discipline myself not to be obsessed with my own design. When new test cases come in and don't fit the current automation design, I will not force them to fit. I will change the automation structure to accommodate the new test cases. In other words, I will not let the complexity of the code grow because of new test cases; I will change the automation structure or design to keep the complexity at about the same level.

Well, I'm getting sleepy.. happy testing!  

Saturday, April 19, 2014

5 minute software testing tips

Today, for the first time, I created a video about software testing on YouTube!
I don't know how this will work out, but hey.. why not.

5-minute software testing tips.. :)
Will post more and more~~

Thursday, April 10, 2014

Dealing with emotion, people, and yourself in software testing

Today, I'd like to write about things around the workplace that are somewhat related to testing.

We love testing! Don't you love doing testing all day long? I do. I really do. Well, but since it is a job, we all have crappy days and good days. Have you experienced these? A developer resolves a bug as by-design even though you explained all the details of the issue? Someone keeps bouncing a bug back to you even though it's not your issue? Someone blames you for something via email and cc's his manager, your manager, and some other important people? You made a huge mistake, but it got overlooked because of some other big issue? You are just totally lost and simply take all the blame? You find a huge bug by accident? You've been maintaining your test automation or test cases really well, but the one time your manager looks at your work, everything goes wrong? You troubleshoot an issue in a few seconds and find the root cause? Someone you believe is not a very good developer/PM saves you from disaster? You give or take some condescending comments? You'd be so happy if that guy didn't work here? You'd be so happy if we didn't need to do this? You get so angry about peer performance reviews?

I think we're all human beings. Don't you think? It's not only you. It's everybody.. Right?
Here are things I think you might consider when you're dealing with people and emotion.

Do not criticize anyone publicly
Nobody likes criticism. Even if you're 100% absolutely right, do not criticize anyone in public. The other person will never appreciate your criticism. He will become very defensive. He'll try to find justification for himself and for others to believe. And one day he will criticize you in public in return. I had several incidents around this. I had someone criticize me and cc my manager or a group alias. Oh man, it was hard to take. I came up with 100 reasons why it happened to justify the issue to myself. But I have also criticized people over email. I thought I was cool. I emailed my awesome criticism back to a senior engineer's original email and, yes, hit REPLY ALL. I thought I had confidence in myself and in my opinions about things, like my philosophy of testing. As a result, I got a very bad review that year. I could sense people did not like it when I raised my voice to debate. I'm not saying you need to be nice to other people to get a good review. I'm saying that being a confident, strongly opinionated person and criticizing other people publicly are two different things. You can be confident and have strong opinions about everything without hurting others' feelings. When you receive the dumbest email or an unreasonable claim or blame from others, think about how you can point out the issue without hurting their feelings.

Leave your ego under your desk when you argue over a bug
I still get surprised by how people interpret and react to bugs. Some fight over the priority. Some doubt the severity of the bug. I've fought over bugs many, many times. I have played the "user perspective" card or the "terrible potential risk" card to convince people. And yeah... sometimes I argued just for my EGO. I respectfully(?) responded to "You tester, stop breaking things!" with "I don't break things. I find things YOU broke!" But bugs are not our babies. A bug is a statement that indicates an issue, which may or may not have a serious impact on the product or customers. It can definitely be interpreted differently by different people. Finding a bug is our job, but how to react to the bug is the project team's responsibility. The project management/business perspective has to be understood, and the development impact and effort should be understood as well. When I talk about a bug, I always try to remember what James Bach said in his talk: "Hey developer, don't think of bugs as your mistakes. We testers do not fight for a bug to criticize your work. We want you to shine. It's like your mother saying 'you have mustard on your mouth' when you leave the house for a date. We care about you. We want you to wipe the mustard off your mouth and shine." If you leave your ego under the desk, discuss the facts around the issue, and try to understand other people's perspectives, it all gets much smoother and easier.

Think big. Think big.
When I was a junior SDET, I was really scared of making mistakes. I got offended when people kept pointing out my mistakes. I made some automation framework changes to make it much more effective. Overall, it was a great change, but it was not perfect. There is always someone who points out flaws and criticizes your work. I was a bit discouraged from trying new initiatives or changes. Sometimes I got to the point of "why bother." But as I gained more experience, I started to ignore that criticism. I've given many more presentations even though there are always people who find flaws in my ideas. I've brought so many successful new initiatives to my work even though there were people in doubt. Dr. Russell Ackoff, my favorite modern theorist in systems thinking, talked about errors of commission and errors of omission. An error of commission is a mistake that consists of doing something wrong. An error of omission is a mistake that consists of not doing something you should have done. At work, errors of commission are visible, but errors of omission are not visible at all. Yet I think we should be more concerned about errors of omission. We should think big. Do not worry too much about the small mistakes you make while doing great work. And don't worry about people's doubts about your idea if you believe it will work. We live in this advanced, modern world not because of the people who doubted, but because of the people who challenged the status quo.


Tuesday, January 14, 2014

How to think while we're testing

Today, I'd like to write about something I read recently and how it might impact our testing work.

First, I want to talk a little bit about how our brain works. Well.. yes, I'm no expert in neuroscience, and my knowledge is limited to a couple of books I've read and some Googling. For reference, what I'm about to explain comes from the book "Thinking, Fast and Slow" by Daniel Kahneman.

I wrote a blog post about assumptions and how they affect testing (link), using James Bach's calculator-testing example. I want to go a little deeper than explaining how assumptions affect testing. I want to understand why we make assumptions.

According to "Thinking, Fast and Slow", humans have two different types of thinking systems. (BTW, I'm not saying the contents of this book are 100% correct, but it is based on credible research, and I don't think he won the Nobel Prize by providing BS.) One is called "System 1" and the other "System 2". System 1 is the so-called heuristic brain, and System 2 is the so-called rational brain. System 1 is quick, responsive, and sometimes fallible. System 2 is slow and lazy, and requires energy and focus.

Here are some examples. System 1 is used when you're doing things like calculating 2+2, driving home from work (a route you've driven many times before), brushing your teeth, reading a facial expression, or instantiating an array or string in your most comfortable programming language. Basically, you've known and done these things so much that they don't require you to think.

And System 2 is used when you're doing things like calculating 27*69, driving on an unfamiliar road (without GPS), solving puzzles, writing letters, or designing the architecture of a network system. You can imagine that these activities require your attention, drawing on basic knowledge and connecting several dots of what you know (active interaction among neurons).

And the usage of System 1 and System 2 is very optimized. When you see 27*69, your System 1 recognizes that it cannot handle this and asks System 2 to take over. And after repetitive use of a task by System 2, System 1 can take over some of it. For example, learning chess requires System 2 at the beginning, but once you are really good at chess, many of the moves you make come from System 1.

So why am I talking about this?
The calculator problem (from this blog) is basically a check for the consequences of a fallible System 1. If System 1 processes the problem and reaches a conclusion, the problem will not move to System 2 unless you force it to. And we, as test engineers, need to practice pushing our thoughts to System 2 while we're working. Being critical all the time is very exhausting. It requires discipline and lots and lots of energy.

Sometimes, being on the same project for too long can prevent you from finding important bugs. Your System 1 doesn't give System 2 a chance to process your thoughts, without you even noticing. You are a domain expert. You've seen lots of bugs along the way. But it can be dangerous.

So how do we force ourselves to use System 2?
I have a couple of suggestions.

1. Call out your assumptions. 
It's an interesting checkpoint. First, let your brain process the flow of information about the application you're about to test. You will think about use cases, testing strategies, test cases, test execution, reporting, etc. Then go over your documents or thought process again with a critical mind. Try to find any assumptions you made. You're already using System 2.

2. Focus on how rather than what.
Train your System 1 with "I need to think about how to test this first, not what to test." Yes, it is important that you execute various categories of testing. Functional testing, localization testing, usability testing, load testing, performance testing, test automation, BVT/smoke tests, etc. are all important, and they seem very structured. However, they restrict your thought process to those categories. Think about what's important, what's critical, and what's necessary, and how to address those with your testing. That makes good use of System 2. It will help you test more effectively. And it will be just too much fun.

3. Be a critical thinker.
Here are three questions:
"Huh?", "Really?", "So..."