Peter Pan is one of my favourite stories and the title of this post is one of the most iconic phrases to come out of the book.
It's at the beginning of the book, when Wendy and Peter are speaking. Peter explains that when the first baby laughed, its laughter fell and broke into a thousand pieces. This great breaking-up of laughter means that there "ought to be one fairy for every boy and girl". Wendy notices that Peter said "ought to be" and questions him.
"Ought to be? Isn't there?"
"Every time a child says, 'I don't believe in fairies,' there is a fairy somewhere that falls down dead," Peter replies.
The same can almost be said about testers when developers don't check that their code works. I don't mean a tester will die every time, but a small part of their resolve dies each time. And if it's not a tester's resolve that dies, it may be real people that die. This may sound quite extreme, but it's not really.
Since the beginning of the Agile movement there has been a great push to get the quality of software a lot higher. Aside from all the other major benefits of using Scrum and XP, people have noticed the quality of the software being delivered go up. They have done it by employing practices that make the process more effective. Not all of these practices are efficient, but they are extremely effective.
Testers' resolve only dies when they feel that something could easily have been caught by the developers being effective at their job. Being merely efficient can lead to bugs, because we are trying to develop a feature within a set timeframe which is normally slightly shorter than it should be. It's not all doom and gloom, though; there are bugs that testers love to find. Remember the rule that a good test case is one that finds a bug, but also remember that testers are not there to make sure that you are doing your job properly. They are there to make sure that it's fit for purpose!
When developing software always remember that you are killing your testers really slowly when you are not doing your job properly!
"To die will be an awfully big adventure." Peter Pan
This is something that every tester, from load tester to functional tester to security tester, loves to do: make an application fall to its knees because we managed to find some big hole. Making an application die is an adventure in many, many ways, from going through the process of breaking it to working out the cause of the breakage.
I have just watched Peter Pan for the umpteenth time and it got me thinking about my profession. Treat your testers like you would Peter Pan and they will be a lot more effective in finding those elusive bugs!
Saturday, 2 January 2010
Every time a child says, 'I don't believe in fairies,' there is a fairy somewhere that falls down dead.
Monday, 2 November 2009
Testing Should Be Elegant
When you think of elegant, what is the first thing to pop into your mind? Is it a ballerina during "The Nutcracker"? That's one of the first things that pops into mine.
So now get a developer to test their code and they look for the elegant approach, because it's out of their comfort zone. Developers write tests so they can produce high-quality code, but they need to decide which framework will fit best. Decision number 1. Then, when they start working with it, they realise that there is something it can't do against their code, so they need to swap it out for a new framework. This means they need to update a number of tests, which can take a lot of work. Decisions number 2 and 3: 2 being the need to change the framework and 3 being the new framework that they chose.
What can we as testers do about this? Well, unfortunately I can't answer that, but we as a community need to do something to make it all a lot more elegant! So when you start developing your next load of test cases, try to think about how you can make the process of testing elegant.
Thursday, 22 October 2009
GTAC Day 2
Day 2 of GTAC started out with something that always fascinates me in the computing world, something more than test automation or the user experience that users are given. The day started with a talk given by Dr Alberto Di Meglio about grid computing.
Opening talk for Day 2 - Dr Alberto Di Meglio
Alberto works at CERN on the ETICS project. This is part of a project to get grid computing working so that scientists can use it to do complex calculations. This grid is also going to be used by many other scientists around the world through the E-sciencE grid. Alberto started by explaining that with new technologies, like the LHC, we need something to process a large amount of data, and we need to do it fairly quickly. The LHC can produce in the region of 15 petabytes of data.
All of this work needs to tie in with the emerging standards that are appearing for grid computing. The next challenge they have seen is finding a way of doing quality assurance on the system. ETICS is a continuous build, test and QA verification system. It is designed so that developers can work in many different languages, but they all access a standard API to get builds and tests done. ETICS is developed against a QA model called A-QCM, which stands for Automated Quality Control Model. They also implement a number of ISO standards and have submitted their way of working to potentially become an ISO standard of its own.
Tests are done in a multi-node environment. This environment is built dynamically for what is needed, when it is needed. This is done by creating VM images on the fly, because maintaining a fixed set of VM images can be quite prohibitive. He also mentioned that there has been a major shift in culture on the ETICS project, because the scientists who develop now do so thinking about the quality of their code.
A very enjoyable talk about the issues of grids.
Testing Applications on mobile devices - Doron Reuveni
Doron, the CEO of uTest, started by explaining what his company does and how it goes about doing it. He talked about the story from England where an entire village tried to guess the weight of a bull. Not one person got the correct value, but the average of all the entries worked out the weight of the bull to within 2 grams. This is called the Wisdom of the Crowd.
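The arithmetic behind the story is just averaging. A toy sketch in Python (the guesses below are made-up numbers of mine, not anything from Doron's talk):

```python
from statistics import mean

# Hypothetical guesses from a "crowd" (illustrative numbers only).
guesses = [520, 610, 548, 575, 590, 533, 567, 602, 555, 580]

# No single guess needs to be right; the average can still land close
# to the true value, which is the whole point of the anecdote.
crowd_estimate = mean(guesses)
print(crowd_estimate)  # 568.0
```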
He then explained the differences between crowdsourcing and in-house or outsourced testers, saying that crowdsourcing fits somewhere in the middle between the two. He also described the differences between scripted manual testing and exploratory testing, and said that a lot of the uTest testers fall into the exploratory testing realm. These testers are really good at finding the edge cases that no scripted tester would find, and they are a lot more creative.
Doron said that all of this is really good when testing mobile apps because the smartphone that will win the market will be the one that has the best apps and has the best entry to market for these apps. He said that the crowd is really good with mobile apps because they can use a lot more providers, phone types and data types than an in house team could.
It was a good insight into crowdsourcing.
JSTestDriver - Jeremie Lenfant-Engelman
Jeremie's talk discussed the new JavaScript testing framework that he and other Googlers have created. The new framework allows people to develop JavaScript in a Test Driven Development environment. It was designed with the need to make tests run as quickly as JUnit tests do, and to be run in the same way. It needed to remove the need to move from an IDE to a browser and press refresh.
Developers like to see that their code is doing a lot of work, and that it is doing this work quickly. JSTestDriver does this by having a server that captures browsers; the browsers then poll the server very regularly to see if there are tests to run. When there are, they run them and report the results. This means that we can either run tests really quickly from where we are working or add this to a continuous integration server.
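The capture-and-poll idea is worth pausing on. This is not JsTestDriver's actual code (which is Java and JavaScript); it is just a minimal Python sketch of the model, with class and method names that are entirely my own:

```python
from collections import deque

class TestServer:
    """Holds submitted tests for captured browsers to pick up (illustrative only)."""
    def __init__(self):
        self.queue = deque()
        self.results = []

    def submit(self, test_name):
        # The IDE or CI server pushes work here.
        self.queue.append(test_name)

    def poll(self):
        # A captured browser asks: is there anything for me to run?
        return self.queue.popleft() if self.queue else None

    def report(self, test_name, passed):
        self.results.append((test_name, passed))

class CapturedBrowser:
    def __init__(self, server):
        self.server = server

    def tick(self):
        # One poll cycle: fetch a test, "run" it, report the result back.
        test = self.server.poll()
        if test is not None:
            self.server.report(test, passed=True)

server = TestServer()
server.submit("testLogin")
server.submit("testLogout")
browser = CapturedBrowser(server)
while server.queue:
    browser.tick()
print(server.results)  # [('testLogin', True), ('testLogout', True)]
```

The nice property of this model is that the browser never needs a refresh button pressed: it keeps polling, so new tests run the moment they are submitted.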
It also has a very good code coverage model that is being developed so that we can see what code has been executed. Overall it is a very good framework and since it came out I have been playing with it and have even submitted a number of patches. This is definitely something that we need to watch in the future.
Selenium: to 2.0 and beyond - Simon Stewart and Jason Huggins
This was probably the most anticipated talk of the entire conference. It was quite funny to see the conference room fill up just before the start of the talk. Jason and Simon started by talking about the benefits of Selenium and the benefits of WebDriver, and how each is great for the work you want to do when testing in browsers. They then discussed the different issues with each approach as well.
So the solution? Merge the two frameworks and make one "uber" framework. Simon and Jason have been doing a lot of work to get the two development branches merged, resulting in the code now being moved over to http://selenium.googlecode.com, and the next step is to start merging the bug tracking. They are going to be creating a hybrid system that allows developers to write WebDriver tests and run them in Selenium and vice versa.
Looking at Selenium 2.x, there is going to be a lot more work on native browser support for WebDriver, and work to get it doing a number of things so that we can move away from the "uber-jar" that is going to be released shortly. There was also a talk by a guy from Opera, whose name I can't remember, who said that they use WebDriver to help them do all of their rendering tests for new versions of Opera, including mobile versions.
This was a very entertaining talk.
Score one for Quality - Joshua Williams and Ross Smith
This was one of the talks that I was looking forward to for the entire time at GTAC. Joshua and Ross, who both work on Windows 7, talked about how they introduced a number of games to do quality assurance. They started out by showing that some people are actively disengaged from their job, meaning they don't always enjoy what they are doing and how they are doing it. They then started introducing games into the work people did, but first they needed to work out a few things, like what would excite people. They went through a number of games that testers and developers would play until they found the right mix.
During their talk they even got the group involved by running a competition for who had the "Best Bug Story" and sharing it with everyone. They showed with this that people are competitive and want to compete all the time; it's a natural thing. They also mentioned that new developers and testers are coming from the gaming generation, so they love playing games all the time.
They said that we need to do a lot more to get people engaged with their work, and people need to start feeling like they are contributing to it.
A very good talk all in all from the Microsoft guys!
Automated Performance Test Data Collection and Reporting - David Burns and David Henderson
For this one, unfortunately, you will have to wait for a separate blog post.
Wednesday, 21 October 2009
GTAC Day 1
Today was the first day of Google Test Automation Conference. There have been some really good talks and below I am going to give my thoughts on the talks.
Yi and Serge gave a really good talk about creating random objects for use in testing. They talked about how they go about generating random objects and how those objects are used within testing.
Fighting Layout Bugs - Michael Tamm
Thursday, 11 June 2009
Great Expectations
Thursday, 23 April 2009
Designing and Testing with Behaviours rather than requirements
A few months ago I decided that I wanted to learn something new. I decided that I would have a go on the Google App Engine, learning Python and jQuery. I decided that I would start working on something that would be useful, at least for me, and go from there. I also thought that I would give Behaviour Driven Development a try.
I decided that I would work on a project management application. So the first thing I needed was a way to store my backlog and visually see where items were on the stack.
Since I have a stack of bits of paper, and want to do this by describing the behaviours for each of my requirements, I need a way to describe them that I can easily translate into a test. I went looking through all the different BDD frameworks (NBehave, Behaviour, JBehave, RBehave, StoryQ, and many, many more!) for the common phrases, and came back with a phrase that I think is brilliant. It flows off the tongue so easily that getting product owners involved in writing them can make an agile project work like clockwork.
The phrase is below. NOTE: The bold items make the main phrase up and the italics make it more readable.
As a product owner
I want functionality
so that something is done
with Scenarios
Scenario 1:
Given start point
When do something
Then expectations will be met
.
.
.
Scenario n:
.
.
.
This can easily be coded into different languages, and in some of them the natural language above is almost all that is needed, with just a little tweaking to get the function calls sorted. If you are a Selenium nut like me, you could even drive it with something like FitNesse. Gojko Adzic has explained it really well on his blog, so I suggest you have a look there.
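To make that concrete, here is a tiny Python sketch of my own (not any of the frameworks listed above) showing how the Given/When/Then phrase can map almost directly onto test code. The backlog example and all the names are mine, purely for illustration:

```python
def run_scenario(given, when, then):
    """Toy scenario runner: build the start point, do something, check expectations."""
    state = given()
    outcome = when(state)
    assert then(outcome), "Scenario expectations were not met"
    return outcome

# As a product owner I want a backlog stack
# so that I can always see the most recent item,
# with Scenarios
# Scenario 1:
#   Given a backlog with three items
#   When I look at the top of the stack
#   Then expectations will be met (I see the most recently added item)
result = run_scenario(
    given=lambda: ["write docs", "fix login bug", "add search"],
    when=lambda backlog: backlog[-1],
    then=lambda top: top == "add search",
)
print(result)  # add search
```

The point is that each clause of the phrase becomes one small, swappable function, so when the product owner changes their mind you only change the clause that moved.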
The other plus point for BDD in Agile environments is its ability to change the BDD test very quickly if the product owner changes their mind. And since product owners change their minds every time they have a cup of tea, and in the UK it can be very often, you need to be able to keep your tests up to date.
Thursday, 26 March 2009
Testing through the Credit Crunch - Part 6 - Innovation
This is the last in the series and I thought that I would finish it off discussing the innovative culture that is needed in today's climate. If you work in an Agile environment trying to create the next big thing you know not all the components you need have been created. And if they haven't been created you have very little to zero chance that you will find a testing tool to automate your application.
You can automate the units of the code, obviously, but there are parts of the application that you can't test with a unit/integration testing framework, or your continuous integration server is taking too long to build and test. I know we have all seen the latter issue.
So what do we do then? We need to think outside of the box to get a solution. A lot of testing frameworks have grown from the need for something to fit a certain issue.
The image on the left shows the increase in testing projects on sourceforge.net over 9 years. As you can see there has been a nice growth of testing frameworks. A lot of these have been grown from the need to test something and there are no tools out there. One example of this is Selenium.
Selenium started out as an in-house testing framework at ThoughtWorks. Now it's one of the world's best functional testing tools. The next tool to talk about is a Selenium derivative: Selenium Grid was born out of the need to make tests run quicker. Builds were taking forever, so Philippe Hanrigou and a few other ThoughtWorkers extended Selenium RC to run tests in parallel without having to worry about the infrastructure behind it.
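Selenium Grid itself is not Python, but the core idea of fanning tests out in parallel can be sketched in a few lines. This is just the parallelisation idea with stand-in "tests", not Grid's actual mechanism, and the test names are invented:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_test(name):
    # Stand-in for a slow browser test.
    time.sleep(0.1)
    return (name, "passed")

tests = ["testSearch", "testCheckout", "testSignup", "testProfile"]

# Serially these four would take ~0.4s; with four workers they
# finish in roughly the time of the slowest single test.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_test, tests))

print(results)
```

That "wall-clock time of the slowest test, not the sum of all of them" property is exactly why Grid made builds bearable again.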
I am currently building a system with a developer to pull usability stats from our web application, using 3 free tools that anyone out there can find, and making a nice hybrid test tool. I will document this in a future blog post with my colleague. This has been born out of the need to do something that no other tool seems to do at the moment.
The skill that is needed by testers across the board is the ability to tackle projects by thinking laterally. Tools are not always going to be available for everything, and record-and-replay will not solve all of the world's issues. There are a number of harder problems that need to be solved, like the Testers-Heads-Up-Display that I discussed in my last post.
All of these can be solved with a little innovation by the testers out there. I guess the question is: what can you offer to the testing world?