Monday 2 November 2009

Testing Should Be Elegant

When you think of elegant, what is the first thing to pop into your mind? Is it a ballerina during "The Nutcracker"? That's one of the first things that pops into mine.


When you think about elegant technology, what is the first thing that appears in your mind? My first thought is an Apple iPod. Actually, a lot of Apple products are very good examples of elegance. Other examples of elegance in technology are Amazon's "1-Click Buy" and Google Wave.

In all of these examples the elegance has given them a unique selling point and made them leaders in their respective fields. The other thing they have in common is that a lot of work is done in the background. For example, Amazon's "1-Click Buy" needs to work out that if I buy 5 things in 5 minutes they should go in the same box. As a customer, do I care how it does it? No. As Amazon's COO, do I care? Yes! It saves money on postage that the customer will probably refuse to pay because they have just spent some money.

Now what does this have to do with testing? Well, in testing, what would you consider elegant? I will make it easier by splitting it into test automation and manual testing. In test automation, what framework would you consider elegant? For me, nothing comes to mind. There are some BDD frameworks that are close but, to be honest, they aren't elegant. In manual testing, what technique would you say is elegant? Exploratory testing...no! Scripted testing...no!

Why is software testing not elegant then? Well, the thing that creates the elegance in the items I listed at the beginning of this post is the removal of decisions, especially redundant decisions. The iPhone goes on to silent with the flick of a switch; my Android takes two or three steps, asking me questions as I go along. Every time I get a question I need to make a decision. Nokias are even worse, with changing profiles that make the phone slightly less silent each time.

Now ask a developer to test their own code and they look for the elegant approach, because testing is out of their comfort zone. Developers write tests so they can produce high-quality code, but first they need to decide which framework will fit best. Decision number 1. Then, when they start working with it, they realise there is something it can't do against their code, so they need to swap it out for a new framework. That means updating a number of tests, which can take a lot of work. Decisions 2 and 3: 2 being the need to change the framework and 3 being the new framework that they chose.

If you are a UI test automator I am sure that you have had a play with Selenium. Selenium is a perfect example of something that could be perfectly elegant but isn't. Have you ever tried to type something into an input box with Selenium and failed? Have you ever tried to press a button with Selenium and failed? Why did you fail to get it right? It was the decisions that you were forced to make at the beginning, and you got one wrong. But because you made the decision, you are less likely to blame Selenium than yourself.
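
To make that concrete, here is a minimal sketch using the Selenium RC Python client, assuming a Selenium server on localhost:4444; the page and locators are hypothetical. The first locator decision is the brittle one, the second survives layout changes:

    # A minimal sketch with the Selenium RC Python client; the page and
    # locators here are hypothetical.
    from selenium import selenium

    sel = selenium("localhost", 4444, "*firefox", "http://example.com")
    sel.start()
    sel.open("/search")

    # Locator decision 1: a positional XPath. It works today, but breaks
    # the moment someone rearranges the page.
    sel.type("//table/tbody/tr[2]/td[3]/input", "elegant testing")

    # Locator decision 2: an id. It survives layout changes, assuming the
    # developers gave the element an id in the first place.
    sel.type("id=q", "elegant testing")

    sel.click("id=search-button")
    sel.wait_for_page_to_load("30000")
    sel.stop()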

Then you get into the situation where a developer, either yourself or a colleague, changes the layout of something that didn't have an ID, and suddenly your Continuous Integration server goes red. The XPath is different, so you need to update it in each of the broken tests. Elegant? Far from it!!!

So what can we do about this? Well, many developers have started using Domain Specific Languages (DSLs) to write their tests and let all the hard work be abstracted away. This lets developers write tests that are readable, and it leads towards testing being elegant. The formation of DSLs is a start, but we need to do more.
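
To illustrate, here is a tiny page-object style DSL in Python wrapped around the Selenium session from the sketch above; all the names are made up, but the messy locators now live in one place and the test reads like the behaviour it checks:

    # A sketch of the DSL idea; the page, locators and methods are
    # hypothetical, but the hard work is hidden behind readable calls.
    class SearchPage(object):
        def __init__(self, sel):
            self.sel = sel  # a started Selenium RC session

        def search_for(self, term):
            # The locator decisions are made once, here, not in every test.
            self.sel.type("id=q", term)
            self.sel.click("id=search-button")
            self.sel.wait_for_page_to_load("30000")
            return ResultsPage(self.sel)

    class ResultsPage(object):
        def __init__(self, sel):
            self.sel = sel

        def has_result(self, text):
            return self.sel.is_text_present(text)

    # The test itself now reads like a sentence:
    #   results = SearchPage(sel).search_for("elegant testing")
    #   assert results.has_result("elegant testing")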

In manual testing we also need more elegance. Take a form with 6 input fields, where you can fill in anywhere from 1 to all 6 of them. Filling in all 6 fields in every possible order alone gives 6! = 720 different permutations, and that assumes each field only takes one type of input and doesn't send you down a number of different code paths, because those would add to the number of decisions a tester has to make. Testing will never remove decision making, but cutting decisions down will make testers a lot more productive.

There are techniques to help you cut the decisions down, like equivalence partitioning, but there are still lots of decisions to be made. If partitioning suggests we only run 10% of them, because those are the ones that matter, we still have 72 decisions to make. If each of those takes 10 seconds to execute, you suddenly have one form that can take 12 minutes to test. Elegant? Far from it. This is possibly the other reason why management think that test automation is the be-all and end-all. Test automation is slightly more elegant, but to be honest it isn't as elegant as it should be!
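
If you want to check the arithmetic, a few lines of Python will do it; the 63 figure below is what you get when the order of filling the fields doesn't matter:

    # A quick back-of-the-envelope check of the numbers above.
    from itertools import combinations, permutations

    fields = ["f1", "f2", "f3", "f4", "f5", "f6"]

    # Filling in all six fields in every possible order: 6! = 720
    print(len(list(permutations(fields))))  # 720

    # If order is irrelevant, the ways to fill 1 to 6 fields: 2^6 - 1 = 63
    print(sum(len(list(combinations(fields, k))) for k in range(1, 7)))  # 63

    # 10% of 720 runs at 10 seconds each:
    print(720 * 0.10 * 10 / 60, "minutes")  # 12.0 minutes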

What can we as testers do about this? Unfortunately I can't answer that, but we as a community need to do something to make it all a lot more elegant! So when you start developing your next load of test cases, try to think about how you can make the process of testing elegant.

Thursday 22 October 2009

GTAC Day 2

Day 2 of GTAC started out with something that always fascinates me in the computing world, something bigger than test automation or the user experience that users are given. The day started with a talk by Dr Alberto Di Meglio about grid computing.


Opening talk for Day 2 - Dr Alberto Di Meglio

Alberto works at CERN on the ETICS project, part of an effort to get grid computing working so that scientists can use it to run complex calculations. The grid will also be used by many other scientists around the world through the E-sciencE grid. Alberto started by explaining that with new instruments like the LHC we need something that can process a large amount of data, and fairly quickly: the LHC can produce in the region of 15 petabytes of data a year.

All of this work needs to tie in with the emerging standards for grid computing. The next challenge they saw was finding a way of doing quality assurance on the system. ETICS is a continuous build, test and QA verification system, designed so that developers can work in many different languages while accessing a standard API to get builds and tests done. ETICS is developed against a QA model called A-QCM, the Automated Quality Control Model. They also implement a number of ISO standards and have submitted their own way of working to potentially become an ISO standard itself.

Tests are done in a multi-node environment, built dynamically for what is needed, when it is needed. This is done by creating VM images on the fly, because maintaining a fixed set of VM images would be quite prohibitive. He also mentioned that there has been a major shift in culture on the ETICS project, because the scientists who develop now do so thinking about the quality of their code.

A very enjoyable talk about the issues of grids.

Testing Applications on mobile devices - Doron Reuveni

Doron, the CEO of uTest, started by explaining what his company does and how it goes about it. He told the story from England where an entire village tried to guess the weight of a bull: not one person got the correct value, but the average of all the entries came to within 2 grams of the bull's weight. This is called the Wisdom of the Crowd.

He then explained the differences between crowdsourcing and in-house or outsourced testers, saying that crowdsourcing fits somewhere in the middle. He also described the differences between scripted manual testing and exploratory testing, and said that a lot of the uTest testers fall into the exploratory testing realm. These testers are really good at finding the edge cases that no scripted tester would find, and they are a lot more creative.

Doron said that all of this is really useful when testing mobile apps, because the smartphone that wins the market will be the one with the best apps and the best route to market for those apps. The crowd is really good for mobile apps because it covers far more providers, phone types and data types than an in-house team ever could.

It was a good insight into crowdsourcing.

JSTestDriver - Jeremie Lenfant-Engelman

Jeremie's talk covered the new JavaScript testing framework that he and other Googlers have created. It allows people to develop JavaScript in a Test Driven Development style. The framework was designed so that tests run really quickly, in the same way JUnit tests do, and can be run in the same way, removing the need to switch from the IDE to a browser and press refresh.

Developers like to see that their code is doing a lot of work and doing it quickly. JSTestDriver achieves this by having a server that captures browsers; the browsers then poll the server very regularly to see if there are tests to run. When there are, they run them and report the results. This means we can either run tests really quickly from the IDE or hook the whole thing into a continuous integration server.

It also has a very good code coverage model in development, so we can see what code has been executed. Overall it is a very good framework; since it came out I have been playing with it and have even submitted a number of patches. This is definitely something to watch in the future.

Selenium: to 2.0 and beyond - Simon Stewart and Jason Huggins

This was probably the most anticipated talk of the entire conference. It was quite funny to see the conference room fill up just before the start of the talk. Jason and Simon talked about the benefits of Selenium and the benefits of WebDriver, each great for the work you want to do when testing in browsers, and then discussed the issues with each approach as well.

So the solution? Merge the two frameworks and make one "Uber" framework. Simon and Jason have been doing a lot of work to get the two development branches merged, resulting in the code being moved over to http://selenium.googlecode.com; the next step is to merge the bug tracking too. They are going to create a hybrid system that allows developers to write WebDriver tests and run them in Selenium, and vice versa.

Looking ahead to Selenium 2.x, there is going to be a lot more work on native browser support for WebDriver, and work to split things up so that we can move away from the "Uber-jar" that is going to be released shortly. There was also a talk by a guy from Opera, whose name I can't remember, who said that they use WebDriver to help them do all of their rendering tests for new versions of Opera, including mobile versions.

This was a very entertaining talk.

Score one for Quality - Joshua Williams and Ross Smith

This was one of the talks that I had been looking forward to the entire time at GTAC. Joshua and Ross, who both work on Windows 7, talked about how they introduced a number of games to do quality assurance. They started by showing that some people are Actively Disengaged from their jobs, meaning they don't always enjoy what they are doing or how they are doing it. They then started introducing games into people's work, but first they needed to work out what would excite people. They went through a number of games that testers and developers would play until they found the right mix.

During their talk they even got the audience involved with a competition for the "Best Bug Story", shared with everyone. They showed with this that people are naturally competitive and want to compete all the time. They also mentioned that new developers and testers are coming from the gaming generation, so they love playing games.

They said that we need to do a lot more to get people engaged with their work, so that they start feeling like they are contributing to it.

A very good talk all in all from the Microsoft guys!

Automated Performance Test Data Collection and Reporting - David Burns and David Henderson

For this one, unfortunately, you will have to wait for a separate blog post.

Wednesday 21 October 2009

GTAC Day 1

Today was the first day of the Google Test Automation Conference. There have been some really good talks, and below are my thoughts on them.


Keynote by Prof. Niklaus Wirth

Prof. Niklaus Wirth opened the first day of GTAC. Prof. Wirth is a winner of the Turing Award and currently works at ETH Zurich. He walked through the years of computer science, showing how things have improved, and explained that in those early years you couldn't just pull up a debugger and find the issues.

Prof. Wirth also used the famous Dijkstra quote that "testing shows the presence of bugs, not the absence of them", so we can't just assume that we are making high-quality code. He also complained that universities are not teaching programming to students. He likened computer programming to playing the piano: it's easy to learn to play with 2 fingers, but to play with all 10 is extremely difficult. In the same way, it's easy to create programs but hard to make really good applications.

Precondition Satisfaction by Smart Object Selection in Random Testing - Yi Wei and Serge Gebhardt

Yi and Serge gave a really good talk about using randomly created objects to test code, covering how they generate random objects and how those objects are used within testing.

They talked about how they generate lots of objects very quickly, so they needed to work out exactly which ones they needed; they did this by working out which objects meet the routine's pre- and postconditions.
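
I don't know the internals of their tool, but the core idea, as I understood it, can be sketched in a few lines of Python; everything here is hypothetical:

    # A rough sketch of precondition-aware random testing; the account
    # object and its contract are invented for illustration.
    import random

    def random_account():
        return {"balance": random.randint(-100, 100)}

    def precondition(account, amount):
        return amount > 0 and account["balance"] >= amount

    def withdraw(account, amount):
        account["balance"] -= amount
        return account

    def postcondition(account):
        return account["balance"] >= 0

    for _ in range(10000):
        account, amount = random_account(), random.randint(1, 50)
        if not precondition(account, amount):
            continue  # smart selection: only objects that meet the contract
        assert postcondition(withdraw(account, amount)), account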

They then let the objects run in tests for many hours. They found over 500 bugs in 2 open source libraries by applying this random object testing, ranging from minor bugs to bugs within a lexer, which can be quite scary.

Fighting Layout Bugs - Michael Tamm

Michael Tamm gave a very good presentation on how to fight layout bugs. Michael started by talking about the basic things we can do and how we should be applying them on our continuous integration servers. This can be done by collecting the page structure, parsing it and passing it to the W3C validators, for both HTML and CSS. Unfortunately the CSS validator can be too strict and doesn't like browser-specific styles.
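
As a sketch of how the first technique could sit in a CI job, and assuming the public W3C validator and its X-W3C-Validator-Status response header, something like this would fail the build when a page stops validating:

    # A minimal sketch, assuming the public W3C validator and its
    # X-W3C-Validator-Status response header.
    import urllib.parse
    import urllib.request

    def page_is_valid_html(page_source):
        data = urllib.parse.urlencode({"fragment": page_source}).encode("utf-8")
        with urllib.request.urlopen("http://validator.w3.org/check", data) as resp:
            return resp.headers.get("X-W3C-Validator-Status") == "Valid"

    if __name__ == "__main__":
        # "page.html" stands in for whatever page source your tests collect.
        assert page_is_valid_html(open("page.html").read()), "page no longer validates"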

Michael then moved on to an open source project that he has created, which can be found at http://code.google.com/p/fighting-layout-bugs. It works by firing up a WebDriver instance, taking a number of screenshots and doing bitwise operations on them to work out where things are overlapping. It was a very good presentation and I will be watching this project very carefully.

Even Better than the real thing - Lessons learned from a GWT application - Nicolas Wettstein

Nicolas gave a very good talk about the Google Web Toolkit. He talked through the different issues that can arise when developing software, showing some really bad code examples and how they can be cleaned up.

Nicolas also argued that there should be a very good design structure, such as MVC/MVP, so that people can create very good unit tests, because a developer can then fake out the right items. My favourite quote from this talk is "Developers are responsible for the quality of the code".

Automatic Workarounds for web applications - Alessandra Gorla and Mauro Pezze

This was a very interesting talk about a system that sits between the browser and the backend server. They have developed a proxy where, when a user encounters a bug, you can tell the proxy and it will go off and find a workaround from a repository.

They used bugs from the Google Maps API bug tracker, analysed the issues and, where there were potential workarounds, applied them to the page. Unfortunately this is very academic work and does not address the psychological needs of users; it also doesn't differentiate between the different bugs that could be occurring.

The talk was very good, but academic, so I don't think it will ever become a real-world application.

Achieving Web test automation with a mixed skills team - Mark Micallef

Mark gave a very good presentation on how to get test analysts and test engineers working well together. Mark showed how you start with a team with lots of tasks, all of them manual, and how, with the right tools, you can get the team automating the basics.

Mark said that he got the teams using Ruby and Cucumber to start doing basic ATDD and BDD, with the test analysts writing the scenarios and the test engineers filling in the technical aspects. Mark did this by seeing what motivated the team and trying to work with their strengths.

Mark said that this is just the start of the work and the BBC still has a lot to do! It was a very good presentation and makes me wonder what motivates the testers out there.


And finally...

The lightning talks were really good and everyone was very entertained. When they are out on YouTube I recommend everyone have a watch of them.

Thursday 11 June 2009

Great Expectations

I recently returned from a well-deserved holiday with my wife. We went to an area with mountains, mines and sheep (lots of sheep), and while we were there we wanted to get something nice to remember our holiday by. The area where we stayed is extremely famous for its mining, so we thought that a trinket would be an easy find. We were sadly mistaken. We had set our expectations and they were not met.

This is something that we all do every day. We have certain expectations of others, and others have expectations of us. Working in testing, expectations are always high for the quality of work we let through, but these expectations come from an age when testers were the gatekeepers, the days when my profession was made up of people trying to get into IT because it was the next big thing.

The main people to have unrealistic expectations are our end-users. Or are they just normal expectations? These expectations can mean the difference between a "feature" and a "bug". End-users can spot blatant things that are wrong, like my wanting something gorgeous from the mine, not just a coaster for my tea.

Using this analogy in software, we would say that's a bug. But when an HTML editor doesn't correct really bad HTML for you, yet corrects mildly bad HTML, is it a bug or just the way it is? I would tend to lean towards "just the way it is", as long as the editor didn't crash the browser or computer. That reminds me of a professor at university who had the saying "The rubbish you put into a computer is the rubbish you get out". So trying to produce the rubbish the end-user will create, before they do, helps create good software.

Out-thinking the end-user is something that a tester does naturally...but who is the end-user? Are they customers? There are both internal and external customers, and the tester needs to keep both sets happy. External customers are rarely spoken to by the tester; we tend to speak to the support teams and the Product Owner, making sure that they have reasonable expectations, because they are very important internal customers of the test team. Spotting the customers of the tester is easy, but whose customer is the test team?

I like to think that the test team are customers of the development team, but in the same breath the development team are customers of the test team. When the developer is the customer of the tester, they expect to receive a decent bug report with the error details and a way to recreate the issue. When the tester is the customer of the developer, the tester expects a certain amount of testing to have been completed before the code gets to them: unit tests and test harnesses to make sure that the code works. If you work in an Agile environment then you should expect that if a developer breaks the build, they will fix it within a certain time. With Continuous Integration servers, when the build goes red I expect someone to look into the error fairly quickly and then resolve the issue. Maybe I have unrealistic expectations, but I like to keep the bar high for the quality of the product.

The approach of treating everyone around you as a customer will hopefully mean that you can set their expectations properly. This will lead to less disappointment and help with the Total Quality Management in the organisation.

Thursday 23 April 2009

Designing and Testing with Behaviours rather than requirements

A few months ago I decided that I wanted to learn something new, so I decided to have a go with Google App Engine, learning Python and jQuery along the way. I decided to start working on something that would be useful, at least for me, and go from there. I also thought that I would give Behaviour Driven Development a try.

I decided that I would work on a project management application. So the first thing I needed was a way to store my backlog and visually see where items were on the stack.

Since I have a stack of bits of paper and want to do this by describing the behaviours for each of my requirements, I needed a way to describe them that I could easily translate into a test. I went looking through the different BDD frameworks (NBehave, Behaviour, JBehave, RBehave, StoryQ, and many many more!) for the common phrasing, and came back with a phrase that I think is brilliant. It flows off the tongue so easily that getting product owners involved in writing them can make an agile project work like clockwork.

The phrase is below. NOTE: the bold items make up the main phrase and the italics make it more readable.

As a product owner
I want functionality
so that something is done
with Scenarios

Scenario 1:
Given
start point
When do something
Then expectations will be met
.
.
.
Scenario n:
.
.
.

This can easily be coded in different languages, and in some of them the natural language above is almost all that is needed, with just a little tweaking to get the function calls sorted. If you are a Selenium nut like me, you could even drive it with something like FitNesse. Gojko Adzic has explained it really well on his blog, so I suggest you have a look there.
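
To show how little tweaking is needed, here is a minimal sketch of the phrase mapped onto plain Python unittest; the story and scenario are hypothetical ones from my backlog application:

    # A sketch of the phrase as a unittest; the story and scenario are
    # hypothetical.
    import unittest

    class BacklogStory(unittest.TestCase):
        """As a product owner
        I want to see my backlog as a stack
        so that I always know what to work on next."""

        def test_scenario_1_newest_item_sits_on_top(self):
            # Given an existing backlog with one item
            backlog = ["write blog post"]
            # When I add a new item
            backlog.insert(0, "try BDD")
            # Then expectations will be met: it sits at the top of the stack
            self.assertEqual(backlog[0], "try BDD")

    if __name__ == "__main__":
        unittest.main()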

The other plus point for BDD in Agile environments is the ability to change a BDD test very quickly when the product owner changes their mind. And since product owners change their minds every time they have a cup of tea, and in the UK that can be very often, you need to be able to keep your tests up to date.

Thursday 26 March 2009

Testing through the Credit Crunch - Part 6 - Innovation

This is the last post in the series, and I thought that I would finish it off by discussing the innovative culture that is needed in today's climate. If you work in an Agile environment trying to create the next big thing, you know that not all the components you need have been created. And if they haven't been created, you have very little to zero chance of finding a testing tool to automate your application.

You can automate the units of the code, obviously, but there are parts of the application that you can't test with a unit/integration testing framework, or your continuous integration server is taking too long to build and test. I know we have all seen that last issue.

So what do we do then? We need to think outside the box to get a solution. A lot of testing frameworks have grown from the need for something to fit a certain problem.

The image on the left shows the increase in testing projects on sourceforge.net over 9 years. As you can see, there has been nice growth in testing frameworks. A lot of these have grown out of the need to test something for which there were no tools out there. One example of this is Selenium.

Selenium started out as an in-house testing framework at ThoughtWorks. Now it's one of the world's best functional testing tools. The next tool to talk about is a Selenium derivative: Selenium Grid was born out of the need to make tests run quicker. The build was taking forever, so Philippe Hanrigou and a few other ThoughtWorkers extended Selenium RC to run tests in parallel without having to worry about the infrastructure behind it.

I am currently building a system with a developer to pull usability stats from our web application, using 3 free tools that anyone out there can find, making a nice hybrid test tool. I will document this in a future blog post with my colleague. This too was born out of the need to do something that no other tool seems to do at the moment.

The skill needed by testers across the board is the ability to tackle projects by thinking laterally. Tools are not always going to be available for everything, and record-and-replay will not solve all of the world's issues. There are a number of harder problems that need to be solved, like the Testers' Heads-Up Display that I discussed in my last post.

All of these can be solved with a little innovation from the testers out there. I guess the question is: what can you offer the testing world?


Wednesday 11 February 2009

Testing through the Credit Crunch - Part 5 - Visualization

Visualization of testing is one of the new and exciting areas of testing, born out of the need to see the results and coverage of testing quickly. Remember the old days of writing test strategy documents at the beginning of a project, then rewriting them a week later, and a week after that, until you get to the end of the project and the document looks nothing like it did at the start? Oh, and did I mention that the document never gets completed?!

Now you have stakeholders just assuming you did the right thing, documents that will just about pass an ISO audit, and a slight resentment towards your project manager for allowing scope creep! Then along came strategy documents as state diagrams, like the Graphical Test Strategy I discussed before, which allow you to get your document done in a day and sent out to stakeholders. All of a sudden you are a hero to your "customers" as they become more involved in the quality of the end product.

You have drawn a couple of pictures to show people what you are going to do, but now let's have a look at drawing pictures to actually do the testing. If you carry on breaking down the state diagram you are left with a beautiful test case diagram. Translating that into an automated test can be done with a tool called CubicTest, an Eclipse plugin that lets you create abstractions for test cases and then generates Selenium or Watir tests from a state diagram. It's a good tool if you are constantly using Eclipse, but I never use Eclipse.

Visualization also offers a lot of value for code metrics. Unit tests are only of value if they test different scenarios, and if you are testing different scenarios you will inherently get good code coverage. Code coverage is a good way to see that you are testing properly; well, that you are testing in the right areas. My tool of choice is NCover, but that's because I work in a .NET shop.

All of this can be fed into a CI server and then reported on easily. I use CruiseControl.NET for my CI and I love that it shows all the passes, ignores and fails of tests from the web front end. I can check up on the developers without having to get their code out of the source repository. The manual testing guys make their scripts update a database, and thanks to simple charting APIs (Google Charts) I can see how projects are going from one page. So without too much cost and fuss we can see how testing is going, and the stakeholders can feel attached to the QA of the product from development to delivery! As always, the tools I have talked about have open source alternatives.
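
For example, a pass/ignore/fail pie is just a URL with the Google Chart Image API; a sketch, with the counts standing in for what your results database would supply:

    # A sketch of building a results chart with the Google Chart Image API;
    # the counts are invented and would come from your results database.
    import urllib.parse

    def results_chart_url(passed, ignored, failed):
        params = urllib.parse.urlencode({
            "cht": "p",                                       # pie chart
            "chs": "300x150",                                 # size in pixels
            "chd": "t:%d,%d,%d" % (passed, ignored, failed),  # data series
            "chl": "Pass|Ignore|Fail",                        # slice labels
        })
        return "http://chart.apis.google.com/chart?" + params

    print(results_chart_url(120, 4, 2))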

Unfortunately none of these tools makes the lives of testers easier. They work for showing that we are doing well, or not as the case may be, but how do we know that we have a decent amount of code coverage on core libraries? The Microsoft Xbox team analyse their logs and apply heat maps so that they can see where their testers have been. But what if you took your normal code coverage report and put that through a heat map with a weighting on each function? The weighting relates to how bad a bug would be if found in that area; combine it with how well the area is tested, from the code coverage and the number of tests, and we can actually get a feel for what we are developing and testing.
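
A rough sketch of that weighting idea, with made-up numbers:

    # Score each function by how bad a bug there would be, multiplied by
    # how poorly it is covered. All the numbers are invented.
    coverage = {
        # function name: (risk weight 1-10, coverage fraction from the report)
        "process_payment": (10, 0.40),
        "render_header":   (2,  0.90),
        "export_report":   (6,  0.10),
    }

    def heat(risk, covered):
        return risk * (1.0 - covered)  # hot = risky and poorly tested

    for name, (risk, covered) in sorted(coverage.items(),
                                        key=lambda item: -heat(*item[1])):
        print("%-16s heat=%.1f" % (name, heat(risk, covered)))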

Finally, the thing that is desperately needed is a Testers' Heads-Up Display (T.H.U.D.). James Whittaker discussed this in his series on the Future of Testing and in his keynote at GTAC 2008. It takes the idea from games, where the H.U.D. allows gamers to be really good at the game. The T.H.U.D. would give you little notes while you work, saying that there are potential bugs in this area and telling you about previous bugs there. My version of the T.H.U.D. would give you the information and then record where you have been, so that you get a heat map over your application when you want to check whether you are finished in an area.

The faster the iterations get, the more testers will need something like this. The average ratio of developers to testers seems to sit around 3:1, and it takes just one developer not unit testing their code to slow a tester down, since they then have to look for the small bugs as well as the hard-to-find ones that we testers look forward to finding.

This leads me onto my final post in the series: Innovation

Monday 26 January 2009

Sorry about the spam


I would like to apologize for the RSS spam that everyone has been getting lately. I have moved my FeedBurner account to Google, and it appears that Google did not test the migration process. I spammed everyone while sorting out my CNAME so that I could use http://feeds.theautomatedtester.co.uk/TheAutomatedTester as my feed URL again, instead of it returning the 404 messages that Google had decided would be better after the migration. I spammed you guys then, and then over the weekend Google decided to spam you with a fix that they added.


Sorry for the annoyance that everyone has suffered. Hopefully everything is now sorted and it won't try to resend everything again!!!!

Sorry again for the spam!!!!

Tuesday 13 January 2009

Testing through the Credit Crunch - Part 4 - Value Open Source

By now you will hopefully have realised that running a test department doesn't have to be expensive. We have seen that by making the entire company responsible for the quality of the product, we take the strain off doing the developers' jobs and free ourselves to find those show-stopper bugs.

In this post I am going to discuss why you should value open source tools to automate finding those showstoppers.

Let's start with the tools we need at the beginning:

Continuous Integration

How many times in the past have you released something and then suddenly had to roll back the release because not everything worked together? Continuous integration servers get the code out of the source repository, build it and run the tests on every change. I am a big fan of Subversion as a source repository, which just happens to be open source!

The top CI server applications out there are CruiseControl, CruiseControl.NET, Hudson and TeamCity. Three of those four are open source. The one that is not, TeamCity, requires you to pay for a licence if you want all the bells and whistles, but you should be able to manage on the free version if that's the one you decide on.

One of the most important features of all these tools is the ability to automatically run the unit tests that have been created by the developers.

Unit Testing

The de facto unit testing tools are NUnit, JUnit and TestNG for .NET and Java. There are loads more out there, like unittest for Python, that are used all the time. These tools are great for integrating into continuous integration servers.

In the 2nd post of the series I mentioned that having unit tests gives us a safety net to work against. The CI server will run every developer's unit tests. This allows developers to check in/commit once their own tests pass, and then have the rest of the tests run. It means that if you make a change, you will find out if you have accidentally broken something. You will learn to appreciate this safety net, but make sure that you don't become lazy and use it as your only testing.
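
As a minimal example of the kind of unit test the CI server runs on every commit, here is a sketch with Python's unittest; the function under test is inlined to keep it self-contained:

    # A minimal unittest sketch; in real code the function under test
    # would live in its own production module.
    import unittest

    def basket_total(prices):
        return round(sum(prices), 2)

    class BasketTotalTests(unittest.TestCase):
        def test_total_sums_item_prices(self):
            self.assertEqual(basket_total([7.99, 1.01]), 9.00)

        def test_total_of_empty_basket_is_zero(self):
            self.assertEqual(basket_total([]), 0)

    if __name__ == "__main__":
        unittest.main()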

Functional Testing

Web applications have the largest amount of Automation tools for functional testing. Since I also work in this industry I will be very biased towards the following applications.

  • Selenium
    Selenium is slowly becoming the test tool of choice for Rich Internet Applications. It allows you to write functional tests against AJAX applications, and it is used by the top Internet application companies and Agile consultants.

    It's developed using JavaScript and can work in all browsers on any OS. As you will have noticed, I am a big fan! (There is a short example sketch after this list.)
  • Watir
    Developed in Ruby, it has a very good following. Test scripts need to be written in Ruby and you just import the gems for the browser that you want to test.

    I found that it needed a lot more work than Selenium to test multiple browsers. I know some people would say it's easier to use than Selenium; I guess it's down to the user.
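
Here is the sketch I promised above: a minimal Selenium RC test wrapped in unittest so that a CI server can run it, assuming a Selenium server on localhost:4444 and using example.com as a stand-in site:

    # A minimal Selenium RC test wrapped in unittest; assumes a Selenium
    # server on localhost:4444 and uses example.com as a stand-in site.
    import unittest
    from selenium import selenium

    class HomePageTests(unittest.TestCase):
        def setUp(self):
            self.sel = selenium("localhost", 4444, "*firefox",
                                "http://www.example.com")
            self.sel.start()

        def test_home_page_has_a_title(self):
            self.sel.open("/")
            self.assertTrue(len(self.sel.get_title()) > 0)

        def tearDown(self):
            self.sel.stop()

    if __name__ == "__main__":
        unittest.main()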

Acceptance Testing

More and more companies are starting to use acceptance testing tools to capture their requirements. Acceptance test frameworks can all be integrated into continuous integration servers; they take a plain-text requirement and turn it into a test fixture.
The main tool for acceptance testing, in my opinion, is a manual one commonly known as the customer. Remember that customers are both internal and external to the company! End users are the best acceptance testers, as they are the subject matter experts and spot bugs quite quickly.


In the next post of the series I will discuss Visualization of Testing.