Saturday, 6 March 2010

Contracts have been signed

Today I officially received my contract to write a book on Selenium. I am going to be writing a beginner's guide showing people how to start test automation using Selenium IDE, then move on to Selenium RC and Selenium Grid. There will also be chapters on Selenium 2.

The outline I went for is aimed at people who want to learn Selenium so they can maintain existing tests, but also start writing tests with Selenium 2 and migrate their Selenium 1 tests over to it.

So look out for it later this year!

Saturday, 20 February 2010

Process.Start() issues with Mono on Mac OS X: My Story

Over the last month or so I have been spending my time writing .NET code for the new Selenium 2 bindings, mostly working on the Google Chrome Driver. Below is the story of one Selenium bug, one Mono bug, and my exploration of how processes work in .NET on a Mac.


I was quite confident about the .NET bindings since it was extremely easy to get them working on my laptop. I admit I was nervous about how they would work on non-Windows platforms, since it was my first time writing .NET code on a platform that isn't Windows. I shouldn't have been. It was fairly easy to install Mono and MonoDevelop on my Ubuntu VM and run the NUnit tests that we have. Google Chrome started and we had tests passing. Brilliant, because there were very few code changes; I just updated the code that works out where to find the Google Chrome application.

I was confident until I started working on Mac OS X. I have very little experience on a Mac. Other than using it to run Selenium Grid or using it with a browser, I know very little about how the OS works.

On the Mac I installed Mono and MonoDevelop and fired them up. I updated the code so that if the .NET code was running on a Mac it knew where to find the Google Chrome application and would then start the process using the standard Process.Start() method from the System.Diagnostics namespace. Feeling extremely confident, I fired up NUnit and clicked Run. The tests all failed. Why? The browser wasn't loading, and Selenium without a browser, excluding when it is used with HTMLUnit, isn't going to work.

I started debugging my code to see why the browser wasn't loading. The first issue I found was that the Selenium 2 .NET code wasn't recognising the Mac OS platform. We have logic that checks which platform you are on, and we hadn't put in a conditional for Mac OS. I updated my code, ran the tests again, and they went red again. Why? No browser again.

Debugging again, I came across a rather large and very annoying bug in Mono. Mono, when running on Mac OS, reports that it is running on Unix/Linux. This isn't too ridiculous, since Mac OS is based on BSD, but it means that whenever the code needs to launch the browser it needs up to three hits to the disk to check whether the browser is where I think it is. It also means the fix I added to Selenium earlier doesn't really matter. Anyway, the code now has the ability to launch the browser... or does it? It doesn't, as my tests were still failing.
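To give an idea of the workaround, here is a minimal sketch of that kind of disk probing; the class name and the exact directories are my own illustration rather than the actual Selenium code, but the three Directory.Exists() calls are the three hits to the disk I mentioned.

using System;
using System.IO;

// Rough guess at telling Mac OS X apart from Linux when Mono only reports Unix.
static class PlatformGuess
{
    public static bool IsProbablyMacOSX()
    {
        // Mono returns PlatformID.Unix on Mac OS X, so the only clue left is the disk:
        // these directories exist on a Mac but not on a typical Linux distribution.
        return Environment.OSVersion.Platform == PlatformID.Unix
            && Directory.Exists("/Applications")
            && Directory.Exists("/System")
            && Directory.Exists("/Users");
    }
}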

I started debugging again. The error I was getting was that I was trying to open the application, and open wouldn't accept the arguments I was passing in because they are not valid for open. Since I have never developed on a Mac I had to do some research, and found out that the defaults for Process.Start() want to open the file rather than execute it. This is perfect if you want to open text files, but not applications. So I looked at creating a bash script to start up Google Chrome. One obvious downside, if it did start up, was that I wouldn't have an easy way to kill it. I was starting to think I was going to be doing a lot of yak shaving to get this to work.

I next had a look at using Monobjc. It is a library that allows you to write Cocoa code in .NET, using a bridge to get your code working, and it is what one of the WatiN guys used to get WatiN working on a Mac. After pulling my hair out for a while I gave up and asked a question on StackOverflow.

The fifth answer I was given, and the one I accepted, was that I had forgotten to set a property called UseShellExecute. My code for starting the process went from

chromeProcess = Process.Start(GetChromeFile(), String.Concat(Arguments, serverUrl));

to

chromeProcess = Process.Start(new ProcessStartInfo(GetChromeFile(), String.Concat(Arguments, serverUrl)) { UseShellExecute = false });

The subtle change of explicitly creating a new instance of ProcessStartInfo and setting { UseShellExecute = false } meant that my code went from opening an application to executing an application. According to MSDN, the property controls whether to use the operating system shell to start the process; on a Mac, setting it to false means Process does what I want. I hope this is helpful for anyone else in the future.


Thursday, 4 February 2010

Why should I move to Selenium 2?

As most of you know, I am a really big fan of Selenium for testing web UIs, and as of Tuesday I have joined the development team working on the next release of Selenium. I am helping Jim Evans on the .NET bindings.

Selenium 2 is the merging of Selenium and WebDriver to make what is going to be the best web UI test framework out there.

Selenium 2 will help both WebDriver and Selenium get over their respective weaknesses. So, for those of you who have decent coverage and whose tests have been running for a while, why should you think about moving your tests to Selenium 2?

Well, I will explain why I have started moving mine. Firstly, I love to dogfood my own code, so writing my tests against it was a natural progression. Secondly, it gives me the ability to write really verbose tests without having to try very hard.

I have started moving my Internet Explorer Selenium tests over to the WebDriver API to test the editor component of our site. This is one of the hardest areas to automate with Selenium 1 because it uses contentEditable on a div. I managed to test it by using the component's JavaScript API to inject HTML, manipulate it, and then get the HTML out again and check that it is correct. Running one test in Internet Explorer like this would take up to 90 seconds to complete. I should add the disclaimer that the test had been optimised as much as it could be, with no XPath and so on, so it was running as fast as possible.

Then it was time to start porting the tests over to the new API. After working out the little issues of how to get to certain elements, I was able to port a number of tests over. I ported four tests with a little work and then ran them. Any guesses how long it took to run all four?

360 seconds?
270 seconds?
120 seconds?
60 seconds?
30 seconds?

The answer is 60 seconds. Well, it was 57 seconds, but I needed to pick from the list I gave. Four tests in two thirds of the time it took to run the original test checking the exact same thing. The new tests have very little JavaScript poking in them because I use native keystrokes to type into the div, which I could not do with Selenium 1. This means the tests do not have to work through the JavaScript scope chain to do basic things, and that makes them lightning fast.
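To show roughly what the ported tests look like, here is a minimal sketch using the .NET bindings and NUnit; the page URL, the editor's element id, and the simple assertion are placeholders for illustration rather than the real editor tests.

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.IE;

[TestFixture]
public class EditorSmokeTest
{
    private IWebDriver driver;

    [SetUp]
    public void CreateDriver()
    {
        driver = new InternetExplorerDriver();
        driver.Navigate().GoToUrl("http://our.site/editor");   // placeholder URL
    }

    [TearDown]
    public void KillDriver()
    {
        driver.Quit();
    }

    [Test]
    public void ShouldTypeIntoTheEditableDivWithNativeKeystrokes()
    {
        IWebElement editor = driver.FindElement(By.Id("editor"));   // placeholder id
        editor.SendKeys("Hello from WebDriver");                    // real keystrokes, no JavaScript injection
        Assert.AreEqual("Hello from WebDriver", editor.Text);
    }
}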

The next step will be to port the WebDriverBackedSelenium code from Java to .NET and then run the original tests again to see how much faster they run. So if you want really fast tests, start thinking about moving to Selenium 2.

p.s. If you haven't filled in my questionnaire on Selenium 2 presentations and demos, please do so here

Monday, 11 January 2010

Selenium 2 .NET Bindings And How We Can Use Them

Selenium 2 is one of the most anticipated updates to a testing framework in a very long time. The next version of Selenium will see the Selenium and WebDriver code bases merged. It will allow developers to switch between Selenium and WebDriver without having to change their tests.

"So what" you might be thinking? Well, Selenium and WebDriver have their strengths and in a number of places they are in opposite places. For Example, Selenium works on every browser that supports javascript where WebDriver requires a driver for each browser.

The merge, destined for use by Selenium Remote Control and WebDriver users alike, will allow developers to write developer-centric tests in their favourite language. So how does it do this at the moment? The Selenium and WebDriver developers have created an uber-jar that houses both the WebDriver and Selenium APIs. Developers create their tests as normal and call the same server for both Selenium and WebDriver.

I am sure that you are still going "So what?".


Ever had a situation where you tried to use the down arrow in your Selenium test, only to start pulling your hair out? If you use the WebDriver-backed Selenium, you will be able to get around the issues that Selenium Remote Control can get stuck on. The most common issue is keypresses: Selenium synthesizes the keypress using JavaScript, whereas WebDriver sends keypresses through at the OS level.
We will also be able to create more intuitive tests, since WebDriver allows people to create objects in their tests for each element on the page. In the .NET bindings an element is obtained like this: IWebElement elementOnPage = driver.FindElement(By.Name("idOnPage")); and is then used, for example to type, with elementOnPage.SendKeys("Text");
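To put that snippet into a complete, if hedged, picture, here is a minimal example with the .NET bindings; the page URL and the element name are placeholders, and any of the supported drivers could stand in for Firefox.

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class WebDriverExample
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();
        driver.Navigate().GoToUrl("http://www.example.com/search");   // placeholder page

        // Grab the element once as an object, then interact with it.
        IWebElement elementOnPage = driver.FindElement(By.Name("idOnPage"));
        elementOnPage.SendKeys("Text");

        driver.Quit();
    }
}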

The .NET bindings have a lot of potential, and Jim Evans and the rest of the Selenium-WebDriver developers are doing a brilliant job. Keep up the work, guys!

So to get some people going with the .NET bindings I have created some tutorials:

Once I have worked around a few Remote WebDriver issues with the .NET bindings, I hope to put up tutorials on how to use them as well.

Saturday, 2 January 2010

Every time a child says, 'I don't believe in fairies,' there is a fairy somewhere that falls down dead.

Peter Pan is one of my favourite stories and the title of this post is one of the most iconic phrases to come out of the book.

It's at the beginning of the book, when Wendy and Peter are speaking. Peter explains that when the first baby laughed, its laughter fell and broke into a thousand pieces. This big break-up of laughter means that there "ought to be one fairy for every boy and girl". Wendy notices that Peter said "ought to be" and questions him.
"Ought to be? Isn't there?"
"Every time a child says, 'I don't believe in fairies,' there is a fairy somewhere that falls down dead." Peter replies.


The same can almost be said about testers when developers don't check that their code works. I don't mean a tester will die every time, but a small part of their resolve dies every time. And if it's not a tester's resolve that dies, it may be real people who die. This may sound quite extreme, but it's not really.

Since the beginning of the Agile movement there has been a great push to get the quality of software a lot higher. Aside from all the other major benefits of using Scrum and XP, people have noticed the quality of the software being delivered go up. They have done it by employing practices that make the process more effective. Not all of these practices are efficient, but they are extremely effective.

A tester's resolve only dies when they feel that something could easily have been caught by the developers being effective at their job. Being efficient can lead to bugs, because we are trying to develop a feature within a set timeframe which is normally slightly shorter than it should be. This is not all doom and gloom; there are bugs that testers love to find. Remember the rule of testing that a good test case is one that finds a bug, but also remember that testers are not there to make sure you are doing your job properly. They are there to make sure the software is fit for purpose!

When developing software, always remember that you are killing your testers really slowly when you are not doing your job properly!

"To die will be an awfully big adventure." Peter Pan 

This is something that every tester, from load tester to functional tester to security tester, loves to do: make an application fall to its knees because we managed to find some big hole. Making an application die is an adventure in many, many ways, from going through the process of breaking it to working out the cause of the breakage.


I have just watched Peter Pan for the umpteenth time and it got me thinking about my profession. Treat your testers like you would Peter Pan and they will be a lot more effective in finding those elusive bugs!

Monday, 2 November 2009

Testing Should Be Elegant

When you think of elegant, what is the first thing to pop into your mind? Is it a ballerina during "The Nutcracker"? That's one of the first things that pops into mine.


When you think about elegant technology, what is the first thing that appears in your mind? My first thought is an Apple iPod. Actually, a lot of Apple products are very good examples of elegance. Other examples of elegance in technology are Amazon's "1-Click Buy" and Google Wave.

In all of these examples the elegance has given them a unique selling point and made them leaders in their respective fields. The other thing they have in common is that a lot of work is done in the background. For example, Amazon's "1-Click Buy" needs to handle the case where I buy five things in five minutes and put them in the same box. As a customer, do I care how it does that? No. As Amazon's COO, do I care? Yes! It saves money on postage that the customer would probably refuse to pay, because they have just spent some money.

Now what does this have to do with testing? Well, in testing, what would you consider elegant? I will make it easier by splitting it into test automation and manual testing. In test automation, what framework would you consider elegant? For me, nothing comes to mind. There are some BDD frameworks that are close, but to be honest they aren't elegant. In manual testing, what technique would you say is elegant? Exploratory testing... no! Scripted testing... no!

Why is software testing not elegant, then? Well, the thing that creates the elegance in the items I listed at the beginning of this post is the removal of decisions, especially redundant decisions. The iPhone goes onto silent with the flick of a switch; my Android takes two or three steps, asking me questions as I go along. Every time I get a question I need to make a decision. Nokias are even worse, with the changing profiles that make something slightly less silent each time.

Now get a developer to test their own code and they look for the elegant approach, because it's out of their comfort zone. Developers write tests so they can produce high-quality code, but first they need to decide which framework will fit best. Decision number 1. Then, when they start working with it, they realise there is something it can't do against their code, so they need to swap it out for a new framework. This means they need to update a number of tests, which can take a lot of work. Decisions number 2 and 3: 2 being the need to change the framework and 3 being the choice of the new framework.

If you are a UI test automator I am sure you have had a play with Selenium. Selenium is a perfect example of something that could be perfectly elegant but isn't. Have you ever tried to type something into an input box with Selenium and failed? Have you ever tried to press a button with Selenium and failed? Why do you fail to get it right? It was the decisions you were forced to make at the beginning, and you got them wrong. But because you made the decision, you are less likely to blame Selenium than yourself.

Then you get into the situation where a developer, either yourself or a colleague, changes the layout of something that didn't have an ID, and you suddenly see your continuous integration server go red. The XPath is different, so you need to update it in each of the broken tests. Elegant? Far from it!

So what can we do about this? Well, most developers have started using Domain Specific Languages (DSLs) to write their tests and let all the hard work be abstracted away. It allows developers to write tests that are readable, and it is leading towards testing being elegant. The formation of DSLs is starting to make testing more elegant, but we need to do more.
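As a rough illustration of that abstraction, here is a minimal page-object style sketch in the .NET bindings; the page, the locator, and the method names are made up for the example. The fragile XPath lives in exactly one place, so a layout change means one fix instead of one fix per broken test.

using OpenQA.Selenium;

public class SearchPage
{
    // Hypothetical locator for an element with no ID; when the markup changes, update it here only.
    private static readonly By SearchBox = By.XPath("//form[@class='search']//input[1]");

    private readonly IWebDriver driver;

    public SearchPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    public SearchPage TypeQuery(string query)
    {
        driver.FindElement(SearchBox).SendKeys(query);
        return this;   // returning the page lets test steps chain and read like sentences
    }
}

A test then reads as new SearchPage(driver).TypeQuery("selenium"); and only the page object needs to change when the page does.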

We also need to make manual testing more elegant. Take a form that has 6 input fields, where you can fill in anywhere from 1 to all 6 of them. There are 720 different combinations, and that is assuming each field only takes one type of input and doesn't send us down a number of different code paths, because that would add to the number of decisions a tester has to make. Testing will never remove decision making, but cutting the decisions down will make testers a lot more productive.

There are techniques to help you cut the decisions down, like equivalence partitioning, but there are still lots of decisions to be made. If partitioning suggests we only do 10% of the combinations, because those are the ones that matter, we still have 72 decisions to make. If each of those takes 10 seconds to execute, that is 720 seconds, so you suddenly have one form that can take 12 minutes to test. Elegant? Far from it. This is possibly another reason why management think that test automation is the be-all and end-all. Test automation is slightly more elegant, but to be honest it isn't as elegant as it should be!

What can we as testers do about this? Well, unfortunately I can't answer that, but we as a community need to do something to make it all a lot more elegant! So when you start developing your next load of test cases, try to think about how you can make the process of testing elegant.

Thursday, 22 October 2009

GTAC Day 2

Day 2 of GTAC started out with something that always fascinates me in the computing world, something beyond test automation or the user experience that users are given. The day started with a talk given by Dr Alberto Di Meglio about grid computing.


Opening talk for Day 2 - Dr Alberto Di Meglio

Alberto works at CERN on the ETICS project. This is part of a project to get grid computing working so that scientists can use it for complex calculations. The grid is also going to be used by many other scientists around the world through the E-sciencE grid. Alberto started by explaining that with new technologies, like the LHC, we need something to process a large amount of data, and we need to do it fairly quickly. The LHC can produce in the region of 15 petabytes of data.

All of this work needs to tie in with the emerging standards for grid computing. The next challenge they have seen is finding a way of doing quality assurance on the system. ETICS is a continuous build, test and QA verification system, designed so that developers can work in many different languages while all accessing a standard API to get builds and tests done. ETICS is developed against a QA model called A-QCM, which stands for Automated Quality Control Model. They also implement a number of ISO standards and have submitted the way they work to potentially become an ISO standard of its own.

Tests are done in a multi-node environment. This environment is built dynamically for what is needed, when it is needed. This is done by creating VM images on the fly, because maintaining a fixed set of VM images can be quite prohibitive. He also mentioned that there has been a major shift in culture with the ETICS project, because the scientists who develop on it now do so thinking about the quality of their code.

A very enjoyable talk about the issues of grids.

Testing Applications on mobile devices - Doron Reuveni

Doron, the CEO of uTest, started by explaining what his company does and how it goes about doing it. He told the story from England where an entire village tried to guess the weight of a bull. Not one person got the correct value, but the average of all the entries worked out the weight of the bull to within two grams. This is called the Wisdom of the Crowd.

He then explained the differences between crowdsourced testers and in-house or outsourced testers, saying that crowdsourcing fits somewhere in the middle between the two. He also described the differences between scripted manual testing and exploratory testing, and said that a lot of the uTest testers fall into the exploratory testing realm. These testers are really good at finding the edge cases that no scripted tester would find, and they are a lot more creative.

Doron said all of this is really valuable when testing mobile apps, because the smartphone that wins the market will be the one with the best apps and the best route to market for those apps. He said the crowd is really good for mobile apps because it can cover a lot more providers, phone types and data types than an in-house team could.

It was a good insight into crowdsourcing.

JSTestDriver - Jeremie Lenfant-Engelman

Jeremie's talk discussed the new JavaScript testing framework that he and other Googlers have created. The framework allows people to develop JavaScript in a test-driven development environment. It was designed so that tests run really quickly, just as JUnit tests do, and can be run in the same way, removing the need to switch from the IDE to a browser and press refresh.

Developers like to see that their code is doing a lot of work and doing it quickly. JSTestDriver achieves this by having a server that captures browsers; the captured browsers then poll the server very regularly to see if there is work for them. When there is, they run it and report the results. This means we can either run tests really quickly ourselves or hook the whole thing into a continuous integration server.

It also has a very good code coverage model that is being developed, so that we can see what code has been executed. Overall it is a very good framework; since it came out I have been playing with it and have even submitted a number of patches. This is definitely something to watch in the future.

Selenium: to 2.0 and beyond - Simon Stewart and Jason Huggins

This was probably the most anticipated talk of the entire conference. It was quite funny to watch the conference room fill up just before the start of the talk. Jason and Simon started by talking about the benefits of Selenium and the benefits of WebDriver, each of which is great for doing the work you want when testing browsers, and then discussed the issues with each as well.

So the solution? Merge the two frameworks and make one "Uber" framework. Simon and Jason have been doing a lot of work to get the two development branches merged, resulting in the code being moved over to http://selenium.googlecode.com, and the next step is to start merging the bug tracking as well. They are going to be creating a hybrid system that allows developers to write WebDriver tests and run them in Selenium, and vice versa.

Looking at Selenium 2.x, there is going to be a lot more work on native browser support in WebDriver, and work to get it doing a number of things so that we can move away from the "Uber-jar" that is going to be released shortly. There was also a talk by a guy from Opera, whose name I can't remember, who said that they use WebDriver to help them do all of their rendering tests for new versions of Opera, including the mobile versions.

This was a very entertaining talk.

Score one for Quality - Joshua Williams and Ross Smith

This was one of the talks I had been looking forward to the entire time at GTAC. Joshua and Ross, who both work on Windows 7, talked about how they introduced a number of games to do quality assurance. They started by showing that some people are actively disengaged from their job, meaning they don't always enjoy what they are doing or how they are doing it. They then started introducing games into the work people did, but first they needed to work out a few things, such as what would excite people. They went through a number of games that testers and developers would play until they found the right mix.

During their talk they even got the audience involved with a competition for who had the "Best Bug Story", sharing the entries with everyone. They showed with this that people are competitive and want to compete all the time; it's a natural thing. They also mentioned that new developers and testers are coming from the gaming generation, so they love playing games all the time.

They said that we need to do a lot more to get people engaged with their work, and that people need to start feeling like they are contributing to it.

A very good talk all in all from the Microsoft guys!

Automated Performance Test Data Collection and Reporting - David Burns and David Henderson

This one, unfortunately, will have to wait for a separate blog post.