Thursday, 18 December 2008

Testing through the Credit Crunch - Part 3 - Virtualization

So far in the series we have learnt how to save money by becoming more Agile and by implementing Test Driven Development. The move to these can take a little while because of the learning curve involved and/or the inability of some people to move out of their comfort zone.

This post is going to discuss virtualising our test environments. Virtualisation is fast becoming the norm in data centres because it allows infrastructure managers to build up disaster recovery machines fairly quickly. They can also utilise the hardware of the host more efficiently, which is a good thing when everyone is trying to become 'greener'.

So if the infrastructure people think it's a good idea, then it must be a good idea for testers. Getting real machines for testing is expensive, not only in terms of the hardware and software, but also the human resources to maintain them. Think about having 10 computers with 10 Windows licences that then need to be maintained. Having a quick look at Dell, it would cost at least £2990 for the 10 basic computers and Windows licences, and just to make it a round number let's say that it will take £100, £10 per machine, to maintain them for a test cycle.

So we now have a cost of at least £3000 to get all the computers and maintain them for one cycle. That doesn't include the electricity to run all of this hardware. Since it's the credit crunch, we don't have that amount of money to waste. The best thing to do is to virtualise the whole lot.

Spend the same amount of money on a low-range server. Now we only have one machine that needs to be maintained, and the maintenance cost goes from £100 down to £20. I say £20 rather than £10 because maintenance on servers can be a little more, since the hardware is slightly more expensive. http://www.vmware.com/go/calculator will give you a better calculation. I did it and, using the calculator's defaults of 500000 machines, it showed a saving of £350000+ over the given time frame.

So what's the best virtualisation software available? This is a very hard question to answer. I am a big fan of free software, so I use Microsoft Virtual Server 2005 and VMware ESXi. I have heard good things about Xen if you are on Linux. The main thing is to choose an application that you feel you can work with quite well. Both MS Virtual Server and VMware ESXi have APIs that allow you to manipulate the machines that are running on them. This means that you can clone machines and create a new machine just by running a script, as sketched below. As with Test Automation you won't see the saving straight away, but it will save time, and hence money, in the long run.
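To make the idea concrete, here is a minimal C# sketch of a test-cycle set-up script. The IVirtualHost interface and the image and machine names are hypothetical stand-ins for whichever virtualisation API you actually use (MS Virtual Server's COM API or VMware's web services); they are not real calls from either product.

// Hypothetical wrapper around your chosen virtualisation API - these names are
// invented for illustration and are not part of MS Virtual Server or VMware ESXi.
public interface IVirtualHost
{
    void CloneMachine(string baseImage, string newMachineName);
    void StartMachine(string machineName);
}

public class TestLabBuilder
{
    // Builds a fresh set of test machines from a known-good base image before a test cycle.
    public void BuildTestCycle(IVirtualHost host, int machineCount)
    {
        for (int i = 1; i <= machineCount; i++)
        {
            string name = "TestClient" + i;
            host.CloneMachine("WinXP-BaseImage", name);
            host.StartMachine(name);
        }
    }
}

Point the same script at a different base image and you have a fresh environment in minutes instead of rebuilding ten physical machines.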

In the next post I will discuss the visualisation of testing.

Sunday, 30 November 2008

Testing through the Credit Crunch - Part 2 - Test Driven Development

In the last post of this series I briefly discussed Technical Debt and how Test Driven Development can help in reducing the debt and the amount of interest accrued over the life of the product.


Test Driven Development is the concept that you write your tests before you write your code. Once your code is finished you refactor, with the safety that your newly created test will prevent any stupid bugs that you could have introduced. It's what the TDD community call Red-Green-Refactor. Now to most people writing tests first sounds stupid: "How do you know what your code is going to do until you have finished coding it?"
My retort tends to be "You know what you are expecting back from the call. Your test needs to be really small and, since an Assert is a test, Assert what you are expecting your method to return! Imagine that your code exists and test against it." Below is an example of a small test; when writing tests we should try to keep them as small as we physically can.

[Test]
public void CreditCrunch_Test()
{
    // The figure we expect our refactoring to bring the debt below.
    int futureTechnicalDebt = 100;
    // TechnicalDebt doesn't exist yet - writing the assert first drives its design.
    Assert.Less(futureTechnicalDebt, TechnicalDebt.Current);
}

So we know that we need tests first to give us a good safety net for writing our code. In Technical Debt terms we have spent a little less to create, run and report on the test, and we prevent anyone else breaking our code. Cost-- and Interest--, which is what we want. Running the test above will fail because there is no code to run it against. Write some code to make it pass if you have the time.
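If you do have the time, a minimal implementation that turns the test green might look something like this; the TechnicalDebt class and the figure it returns are invented purely for the example.

public static class TechnicalDebt
{
    // Hard-coded for the example; in a real project this might be derived from
    // TODO counts, open bugs or code coverage figures.
    public static int Current
    {
        get { return 250; }
    }
}

The test now passes and the Refactor step can begin.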

The other reason why developers are starting to like TDD, other than the removal of the earache that testers would give them about throwaway builds, is that it challenges them to think about the interface into their new object. We have all sat down with the keyboard in front of us, tea/coffee next to us, and found that someone has changed the interface we were developing against! Cue the expletives! $*@%*$£ !@)@)£)&! Luckily for us we have good Test Driven Developers who have discussed the interface, and by proxy we have prevented interest being added to our debt. Interest--! Communication is key in Test Driven Development, especially when it comes to interfaces, because there may be some integration bugs that will creep in without you realising!

Knowing what the interface is going to look like has another major benefit: mocks and stubs! If we know what the interface to an assembly is and how it's going to react, we can write tests for our code, and when it needs to speak to the other assembly, which may not have been created yet or which makes our tests run slowly (like accessing a database or filesystem), we can use a mock or stub to return the results that we want our code to handle. This could be exceptions, dodgy data or good data. Testing those code paths means we can increase our code coverage, therefore increasing the size of our safety net! It will also mean that our tests run a lot quicker, because we don't have to interact with slow drives or have anyone else's tests interfere with our data. Cost-- and Interest--.
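As a rough sketch of what that looks like in C# with NUnit (the interface, classes and figures below are invented for the example, not taken from any particular project):

using NUnit.Framework;

// Hypothetical interface agreed with the team that owns the (possibly unwritten) data assembly.
public interface IAccountRepository
{
    decimal GetBalance(int accountId);
}

// The code under test only talks to the interface, never to a real database.
public class InterestCalculator
{
    private readonly IAccountRepository repository;

    public InterestCalculator(IAccountRepository repository)
    {
        this.repository = repository;
    }

    public decimal MonthlyInterest(int accountId)
    {
        // 1% per month on a positive balance, nothing on zero or overdrawn accounts.
        decimal balance = repository.GetBalance(accountId);
        return balance > 0 ? balance * 0.01m : 0m;
    }
}

// A hand-rolled stub that returns whatever data (good, dodgy or none) the test needs.
public class StubAccountRepository : IAccountRepository
{
    private readonly decimal balance;

    public StubAccountRepository(decimal balance)
    {
        this.balance = balance;
    }

    public decimal GetBalance(int accountId)
    {
        return balance;
    }
}

[TestFixture]
public class InterestCalculatorTests
{
    [Test]
    public void No_Interest_Is_Charged_On_An_Overdrawn_Account()
    {
        var calculator = new InterestCalculator(new StubAccountRepository(-50m));
        Assert.AreEqual(0m, calculator.MonthlyInterest(42));
    }
}

The stub stands in for the assembly that talks to the database, so the test runs in milliseconds and can feed in dodgy data, like an overdrawn balance, on demand.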

So by creating this safety net we have prevented stupid mistakes creeping into our code. It's these stupid mistakes that could cost our team/company a lot of money. Since companies don't have much money at the moment, it could mean your job!

The graph below shows how, by doing what I call the simple things, we have brought down the technical debt that we could potentially owe. NOTE: The values used are just arbitrary values I found on the internet with a little searching.



In the next post in the series I will discuss Virtualization and what it can mean to you as a tester!

Thursday, 20 November 2008

Testing through the Credit Crunch - Part 1 - Development Environments

The credit crunch has gripped the world, and not so long ago Gartner suggested that investment in technology was going to drop, cutting their forecast for the increase in spending from 5.8% down to 2.3%. This may not seem a lot, but when companies spend millions on IT each year it adds up very quickly. Unfortunately, past experience has shown that testing departments are the ones who suffer first.

More employers are predicting a rise in redundancies in the UK, and the other week a politician predicted a run on the British Pound.

This doesn't mean it is all doom and gloom for IT professionals. Over the next few weeks I am going to be writing a series of posts with some best practices that I have found. Let's start this series by discussing the environments in which we build and test software.

Environments

This is a very ambiguous term, so let's split it into the two main things I mean: Development Methodology and Testing Environments.

Development Methodology

When creating products it is starting to become common practice to follow Total Quality Management (TQM). The idea is to make sure that quality is in every process of the organisation. By moving your company to this principle you will give all your stakeholders more confidence in your product. More confidence from your stakeholders will mean that your department gets a little more funding, and since it's the credit crunch that must be good!


A lot of companies actually practise this without realising it. The main thing to take away from TQM is that everyone in the company has a part to play in the quality of the product that is delivered!

Agile or Not!


My first few jobs were at financial institutions and I thought it was normal to do testing at the end. I worked as a WinRunner Automated Tester. I then started working on some Open Source projects in Java and was asked "Can we see your unit tests? Did you use JUnit?".
I answered quite sheepishly "I tested it by checking that it did what it needed. I don't have any JUnit tests."


Obviously my code got rejected because it wasn't neat and didn't have proper unit tests. There were a few other issues that they raised and, to be honest, if I had been doing the code review I would have rejected it too!

Now I work for an ASP/SaaS company and we do a little Agile. I have been able to see the value of being agile while developing software. The main thing that I have learnt is that Technical/Design debt is something that is inherently low in an Agile environment.


Technical debt is the idea that when you release software you go into debt. If you have cut corners, it will affect the level of debt you have, and if you leave those items in your code they will start gaining interest. So if you have any TODOs in there, that is interest on your debt. If you have poorly designed interfaces, you have added interest to your debt. Putting testing at the end of a project will definitely add interest to your debt! Doing tests at the beginning of the development process will bring down the "interest" accrued against your debt.

So to help us get through this credit crunch we need to bring down the debt in our software. The best way to achieve this, in my opinion, is to move your development team to a Test Driven Development way of thinking. As I said above, if you place your tests earlier in the process you will lessen your technical debt. The simple rule when it comes to testing is that the earlier you test, the cheaper it is to solve bugs. So putting automated tests in at the beginning of the development process will go far in bringing down your development costs, not only for that project but also for future ones.


In the next post I will discuss further the value of Test Driven Development.

Friday, 24 October 2008

GTAC 2008 - Day 2

Day 2 of GTAC 2008 has been quite interesting. The first talk that interested me was the one on Context Driven Testing.

This talk, by Pete Schneider of F5, dealt with the context in which tests are created. It dealt with a common issue that testers and developers have and which most of them ignore. It was really interesting to see that there were 11 different applications that testers had created and were maintaining, but with a lot of overlap between them. They found that creating tests in the right context, and with the right owners for the tests, made a dramatic difference in getting their application tested.

The main thing that I took away from it is this: when creating a testing framework, ask yourself the following questions:

  • Who is going to write the framework?
  • Who is going to build and maintain it?
  • How are you going to use it?
  • How long will the tests live?

The next talk that was really interesting was on Automated Model-Based Testing by Atif Memon and Oluwaseun Akinmade of the University of Maryland. It was a really good talk on creating models for test cases, similar to my Graphical Test Planning post from the beginning of the year. The difference between my post and this talk is that here the model is created on the fly by the test code, which then builds a WebDriver test case. There is one drawback that I noticed, and I know I wasn't the only one: this modelling only works well on websites that don't have AJAX. It's a big drawback, and they did mention that they are working to get it sorted.

Its value for desktop applications is really good and I would definitely recommend it. It is able to recreate states for the test and make sure that the application works well and has good coverage.


The next talk was on the value of unit tests, by Christopher Semturs. He was advocating the use of mocks in your unit tests so that you can get better granularity from your tests. This is something that really interests me: I like making my tests fast, because if a test is fast a developer is more likely to run it. He was also advocating the idea of small, medium and large tests. It's an idea from some Google guys where you do unit tests in isolation and then do pair-wise integration tests. This means that you can drop the number of large end-to-end tests dramatically.


GTAC 2008 has been really cool, with a lot of clever people talking about different aspects of testing and describing what is needed from testing in the future. It's definitely been worth the jetlag!

GTAC 2008 - Day 1

Day one of GTAC has been really cool. There have been a number of really good talks. I will put links to the YouTube videos when I know them.

James Whittaker's opening talk was a really good start to the day. One of the main things he talked about was the visualisation of testing for the future. He discussed ways that people can visualise new code differences. This can be very useful in making sure that testers can see what is new and what should be concentrated on. I am always keen on using visual ways to test and to make sure that quality is high.

The next talk that caught my attention was "Advances in Automated Software Testing Technologies". This talk discussed the idea of Autonomous Computing in software testing and the idea that you can auto-generate test cases against APIs. It is an interesting idea, but the way it was put across was that you can only use this on items where the requirements are rock solid. This is not really something that can be applied to Agile developments.

The talk on Groovy was really good. I have started playing with Groovy when trying to automate SOA testing using SoapUI. It's a nice language that can be strongly or weakly typed and is really useful for scripting. Since I haven't played with it a lot, I learnt a lot, such as the fact that you can hook into any language that runs on the JVM.

The next talk that really interested me was the last talk of the day, on the testability of code. It looked at how easy code is to test and how to make tests simpler. This is important because it means that your tests become manageable and supportable. Vishal Chowdhary was advocating SOCK:

  • Simplicity of your code
  • Observe how things work and interact
  • Control of your tests
  • Knowledge of what it should do
He was also saying that we need to make sure that we don't overdesign software, because this can add extra complexity where there doesn't need to be any!

The talks have been very good and have stimulated some ideas that I want to take back to work! I am now off to the Google Seattle office for a tour and the lightning talks.

Thursday, 16 October 2008

.Net Gem - How to Unit Test Internal Methods

I have recently started redoing unit tests for a bit of code at work and came across internal methods and classes that needed to be tested.


This posed a question: how do I access the internal methods and classes to test them properly from an external assembly?

I found this little gem that a lot of the developers where I work had not heard of. That doesn't mean it's not common knowledge, but I thought I would share it a little more.

I have put a scenario below of how it might work.

Being a good TDD developer you decide that you want to write your unit tests for an internal method. You create your .Net Class Library structure so you know what your tests need to call to do your asserts. It may look something like the image below.


So now you want to start writing your tests and you notice that IntelliSense is not bringing up your internal class. So what do you do next?

Open the AssemblyInfo.cs file that is in your Properties folder. Add the line [assembly:InternalsVisibleTo("Unit.Tests")] somewhere near the top.

Now your internal classes and methods are visible to the Unit.Tests assembly, and only to that assembly.

You start writing your unit tests, and IntelliSense should be playing Mr. Nice Guy; we should see something like below.
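Here is a minimal sketch of the whole arrangement in code; the PriceCalculator class is invented for the example, and the test assembly is assumed to be called Unit.Tests as above.

// Properties/AssemblyInfo.cs in the production assembly
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("Unit.Tests")]

// PriceCalculator.cs in the production assembly - an internal class we want to test
internal class PriceCalculator
{
    internal decimal AddVat(decimal net)
    {
        return net * 1.175m;   // 17.5% VAT
    }
}

// PriceCalculatorTests.cs in the Unit.Tests assembly - the internal type now appears in IntelliSense
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void AddVat_Adds_SeventeenAndAHalfPercent()
    {
        Assert.AreEqual(117.5m, new PriceCalculator().AddVat(100m));
    }
}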




I hope that it has been useful!

Sunday, 21 September 2008

Are regression packs still worthwhile?

Recently I was on a forum and noticed the thread "Why do we do Regression testing?". In it the person was complaining about their company's regression pack and asking if it was worth the effort of fixing it.

My answer was a little quick off the mark. I replied saying "Definitely! It's the backbone for each and every build!". But then this got me thinking: how many testers out there DO NOT practice Agile development? How many testers DO NOT work in an environment where automation of tests is second nature?

I do not ask this question in a malicious way; I genuinely would like to know. When I go to SIGiST there are a number of questions in the presentations/workshops from people who have not worked in an Agile environment and who, to be honest, seem scared by it. These are the same people who, when you mention something like "Continuous Integration", get that deer-in-headlights look on their face.

This leads me to my next question: is there a "rich get richer, poor get poorer" divide happening in testing? People at conferences mention that the lines between development and testing are blurring; it's called grey-box testing. But your average manual tester in a large non-software institution (banks, insurance companies, etc.) got their "break" in testing by being seconded to the test team for a project and never wanting to leave their secondment. They may not have been given opportunities to learn how to program, or may not even have heard about concepts like Agile, TDD and all the other new ways testers do their jobs.


To these people, the answer to my original question is still the same: "Yes! Definitely! Otherwise things that used to work may not work any more. I would also suggest trying to automate the regression pack as much as possible, so that you can concentrate on making testing more efficient and on testing those hard-to-test areas. Quality of software is still the main objective of testing."

Thursday, 18 September 2008

Google Test Automation Conference 2008

I have just got back to work from my summer holidays to find a nice little gift waiting for me in my Inbox!

I have been given an invite to the Google Test Automation Conference (GTAC) in Seattle, USA! GTAC, for those who don't know, is an annual conference that Google holds. It's an invite-only conference where people can just let their geekiness flow!

This year its being held in Seattle, USA. Last year it was held in New York and the year before that it was in London.

Below is a list of the presentations at the conference

Scheduled Presentations

* Atom Publishing Protocol, Testing a Server Implementation by David Calavera
* JInjector: a Coverage and End-To-End Testing Framework for J2ME and RIM by Julian Harty, Olivier Gaillard, and Michael Sama
* Advances in Automated Software Testing Technologies by Elfriede Dustin
* Taming the Beast: How to test an AJAX Application by Markus Clermont and John Thomas
* Automated Model-Based Testing of Web Applications by Oluwaseun Akinmade and Prof. Atif M Memon
* Boosting Your Testing Productivity with Groovy by Andres Almiray
* Practicing Testability in the Real World by Vishal Chowdhary
* The New Genomics: Software Development at Petabyte Scale by Matt Wood
* The Value of Small Tests by Christopher Semturs
* Deployment and Test Automation: Extending the Notion of 'Done' to 'System Tested on a Production Deployment' by Marc-Elian Begin

I am going to try my best to update my blog while I am there so people know how it's going. If anyone reading my blog is going to the conference, drop me an email so we can meet up and talk shop!

Thursday, 3 July 2008

Tester 2.0 - Definition of an Agile Tester

I have been to a number of SIGiST Conferences over the last year and have always been impressed by the calibre of the speakers there. There has been a theme lately of talking about Agile Development and Agile Testing, but the thing that always seems to be missing is someone giving their interpretation of Agile Testing. To actually describe what an Agile Tester is! And what they should be doing.

Below I am going to give my opinion of what an Agile Tester is by describing the skill set needed. We are, after all, only as good as our skill set! I am going to rate each skill from 1 to 10, with 10 being needed and 1 being merely desirable.

Creativity (Score: 10)
A good tester needs to be extremely creative when trying to test applications. Developers like to think that they are clever and catch all the bugs before release. Unfortunately this stops being true when a tester starts being creative in the way they test the application and breaks it very quickly!

Innovation (Score: 10)
A good tester needs to be innovative with the tools that they have. An example would be SaaS (Software as a Service) companies, who are constantly striving to be the next big thing and competing against the likes of Google, Microsoft, Salesforce, etc. This drives innovation in development, which means it drives innovation in testing!

Test Automation (Score: 9)
Every Agile tester needs to be able to do some form of automated testing. I am not saying that the tester must know how to use every tool that is out there, but they need to understand how to create automated tests.

Exploratory Testing (Score: 7)
Agile Testing tends to mean Exploratory Testing. I know that some people will disagree with this because testing tends to happen on the fly and appears ad hoc, but it isn't. Exploratory Testing does have a set of rules that need to be adhered to, and it will mostly bring bugs to the surface just as well as normal scripted testing.

Communication (Score: 7)
One of the main pillars of Agile Development is communication. This is very important in my opinion because a good tester must be able to communicate bugs to developers and requirements to end users. A good tester will also be able to challenge technical architects on their ideas. I know the last sentence can be very hard to achieve in large organisations, but it doesn't mean that you can't send them an email to make suggestions.

Development (Score: 5)
An Agile Tester should not be afraid to look at a developer's code and, if need be (hopefully only in extreme cases), go in and correct it. I am not suggesting for one minute that they must be able to code entire applications, but they must have the confidence to look at a bug, spot the error in the code and either write a unit test to expose it or point out the line where they think the issue is.

An example of this is when, doing Paired Programming with one developer, we shaved nearly 70% off the time it took to run a unit test suite just by making a one-line change to their tests, which prompted them to make one other small change.

Ability to work in pairs (Score: 3)
I am a fan of being able to knock ideas off each other. I wouldn't do paired programming all the time, but I like the concept and how it seems to get things done properly with little fuss.

This is not a comprehensive list but these for me are the most important things. Tester 2.0 is something that a number of testers need to strive for if they do not want to get left behind. There is a definite blurring of lines between development and testing. Gone are the days of "I have finished my part, over the wall it goes!". There is also a move to more "Grey Box" testing because of this blurring.

Monday, 9 June 2008

Interpretations of the V-model

I have recently been doing interviews and one question that I like to ask interviewees is "Where do you feel testers should sit within the V-Model?"

The V-Model (as pictured below) is a representation of how most "development" tasks have an equal "testing" task. I like to think of it as testing's equivalent of Newton's third law.

This should be a good thing because it shows that there are different aspects of testing that need to be done during the software development lifecycle.

All the interviewees seemed to give a different answer to the question above. Being the interviewer, I was hoping for one of two answers.

The first answer would be one that made me rethink my thoughts on the V-Model. To be honest as an interviewer I am always looking forward to someone making me rethink a topic.

The other answer is that testing should encompass the entire V-Model, from the beginning of the Business Requirements stage to the end of the Acceptance Testing stage. The reason I think this is that Quality Assurance should be in every aspect of the software development lifecycle.

I know that this is something from the Waterfall methodology, but it's a good way to see what needs to be done in the lifecycle and can easily be carried over into an Agile development.

Wednesday, 7 May 2008

ISEB/ISTQB - Is it really worth it?

I have been reading James Bach's blog for a while and in the latest post he has commented on Alan Richardson's commentary on ISEB/ISTQB.

Finally I have someone who agrees with me that ISEB/ISTQB, while good in theory, is pretty useless in real life. I know that most people reading this will be saying "That is a very bold statement!". I know that it is, and below I will explain why I agree with Alan.

Background
Testers have been fighting for years to remove the stigma that testing software is easy and that any person can do it. This, I feel, is because software testers used to be glorified data inputters with a little technical background. No wonder developers saw a secondment to testing as a demotion. Then Kent Beck came along and wrote Test-Driven Development: By Example. Here was a developer who made testing fun for developers! Something that testers could never get right! So let's have a look at the pros and cons of having an ISEB/ISTQB certificate.

Pros

  • Let's start with why I think ISEB/ISTQB is good. I just said that testers have been fighting for years to remove the stigma that testing software is easy. ISEB came along, and people who didn't work in the industry suddenly thought that testing was a little harder because you needed a certificate.
  • It also sets out standard names for things, so that when one tester speaks to another they are speaking the same tongue. It's also the latest buzzword, so HR people will be more likely to give you an interview when you apply for a job.

Cons
  • The ISEB/ISTQB foundation course teaches the exact same things that I was taught during my first year of university. So not only do you have to pay your way through university, you then have to pay from £140 to £700 to get the certificate.
  • I met an ISTQB examiner in my last job and he did not seem to understand testing in the real world. So are the testers who are getting these certificates also getting a working understanding of testing? I honestly don't think so!
  • The other thing, and this is a big thing for me, is that the foundation course book mentions Agile Development three times, and those three entries are no more than a quick mention. Is it worth getting a certificate that doesn't give a chapter to what is becoming the most common development methodology?

Results
ISEB/ISTQB is unfortunately going the way of the MCSE certificate of the late 90s. It's becoming no more than a buzzword and I feel it is lowering the standard of testing. I know that I am not the only person who thinks this. I read a blog entry not so long ago saying that testers in India are seen as the poor cousins, something that we've been working against!

This has to stop, and we as testers are the ones who can stop it! Come on testers, let's start a revolt and get certificates to mean something!

Sunday, 27 April 2008

Selenium Grid - Why is this the way of the future?

Update: If you are looking for a Selenium Grid Tutorial I have put it here


I have been working with Selenium Grid since Philippe Hanrigou and others at ThoughtWorks released it at the end of last year. I knew then that it was going to become the standard way for automated GUI web application testing.

As an application it has great potential, since it has all the best bits of Selenium but drives multiple browsers in parallel. Suddenly tests that used to take n units of time to run take 1/n. As a practitioner you can get things done quicker; as a manager you can get things done more cheaply! Everyone is a winner.

The other thing that makes the grid work well, and this is something that makes Selenium Remote Control work brilliantly, is the ability to use any language to control the test. People use Java, Ruby, .Net, Python and many more to create their test scripts and run them with the appropriate client. If you have an appropriate test runner, like TestNG for Java, DeepTest for Ruby or PNUnit for .Net, you can start running tests in parallel from one script. The one limitation of using these test tools is that you only really have one browser application, like Firefox, being tested at a time, because you only have one instance of the Selenium object in your code.

The way that I have got around this is to have many instances of the Selenium object in my code that each control a certain type. This type could be Internet Explorer 7 on Vista or Firefox 2 on XP.

An example of this is below (note: I use C# to create my tests and then run them with NUnit):

using Selenium;
using NUnit.Framework;

[TestFixture]
public class GridTests
{
    private ISelenium VistaIE7selenium;
    private ISelenium XPFirefox2selenium;

    [SetUp]
    public void Setup()
    {
        // Vista IE7 instance - the grid hub routes it to a matching remote control
        VistaIE7selenium = new DefaultSelenium("hub", 4444, "IE7 on Vista", "http://testserver");
        VistaIE7selenium.Start();

        // XP Firefox 2 instance
        XPFirefox2selenium = new DefaultSelenium("hub", 4444, "Firefox2 on XP", "http://testserver");
        XPFirefox2selenium.Start();
    }

    [Test]
    public void GridTest()
    {
        VistaIE7selenium.Open("/index.html");
        XPFirefox2selenium.Open("/index.html");
    }
}

As you can see it has the ability to create multiple instances of Selenium and then run them. This then leaves the handling of the Selenium Remote Control up to the Grid. As Philippe says, "It allows you to gridify your tests".

The main negative of Selenium Grid is that it's still a work in progress. I know, that's hardly a reason to call it a negative. There are also a number of bugs, but it's a growing section of OpenQA.org, so give it a chance! I have 7 Selenium Remote Control instances running against my Grid (all virtualised with MS Virtual Server and VMware) and I haven't had many issues. I have hooked Selenium Grid into our CruiseControl.NET server to test all the different browsers. I still have a lot of work to do with this but I know that it will be worth it in the end!

If you want to automate web testing and you want to do it efficiently use Selenium Grid! It will make your life a lot easier!

Wednesday, 26 March 2008

What makes a good automated test?

I have been asked to try to automate the testing of a wider range of applications for the group. This is going to be quite challenging, since I have not automated desktop applications for about 15 months and the software we make is quite unique. I will also, hopefully, be teaching the other testers how to maintain the automated tests and eventually create their own.

This challenge has made me think back on my automated testing experience and ask "What makes a good automated test?". A good automated test has all the hallmarks of a good manual test, except that it is comparatively quick and cheap to run. This means that an automated test is successful when it finds bugs and is repeatable wherever the software is installed.

In terms of web testing it would be a test script that can be created, stored in a central place and run from that central place. Ideally it would also be run on the continuous integration server when doing builds.

The thing that makes a good automated test great is scalability. I recently watched a YouTube video of the Selenium Users Meet-up. There were Google staff there saying that their testing farm handles 51000 tests and that most Fridays all of these tests run. They then went on to complain that Selenium struggled to scale to that size. To be honest, at that size I am not surprised!




P.S. To those who are waiting for the next Selenium Tutorial I am hoping to have it complete in the next week for everyone to use.

Friday, 7 March 2008

What would people like for the next Selenium Tutorial?

Thanks to everyone who has been using my site and to all those who are subscribed to my RSS Feed. I haven't released another tutorial for a couple of weeks and have started thinking about what people would like next for a Selenium tutorial.

I am struggling to think what people would really like, so I would appreciate it if you would take the time to let me know, via the comments section of this blog, which tutorial you would like next. Please don't think that your questions are too novice. I would like to help as many people out as possible!

I look forward to your replies!

Friday, 15 February 2008

Is documenting test processes a bad idea in an Agile Development Environment?

As some of you know, I work in a smallish development team in Southampton. There are other development teams in the United Kingdom and in France, of different sizes, that all fall under the same group of companies.

The unfortunate problem that we have is that there isn't a properly documented process for getting everything in place and ready for each testing cycle. Once we have everything ready we have another problem that I will hopefully discuss in a future posting.

This is where I had the thought, "Is documenting test processes a bad idea in an Agile Development Environment?". If you have worked in an office with ISO 9001 accreditation you will know that everything needs to be documented, and if you have worked in a Six Sigma office you will have noticed that you have to put everything you do into a database so the "Black Belts" or "Master Black Belts" can see areas that need improvement.

So back to my question, "Is documenting test processes a bad idea in an Agile Development Environment?". My answer is: definitely! And I am going to admit this is a fairly biased answer, having started my career in large financial institutions where I was taught that everyone is expendable but their process knowledge isn't, which means that everything needs to be documented. However, my thoughts on this are that things need to be documented and followed to a point, as long as that point does not impact the creativity of the software developers and the testers.

It's this creativity that allows people to come up with new and exciting ways of developing and testing. Tools like the xUnit frameworks, Selenium and many more have come from this creativity. This then needs to be balanced with some form of Total Quality Management (TQM), so that everything can be reproduced if and when needed.

Documented processes can't be completely wrong; if they were, then large companies like IBM, Citrix and Microsoft would never have got to where they are now!

Thursday, 14 February 2008

What has changed in the last year at work

Not that long ago it was my one-year anniversary with my employer. As with most people it just seemed to blow over, but I have had a chance to reflect on what changes there have been in my time there. The one thing that I can definitely say is that it has been a good year.

When I started there was no testing team, which meant that there were no definitive test processes and no definitive automated testing.

There was however a team of developers eager to make sure that they delivered a good quality product.

In the last year I have managed to automate around 40% of the site with Selenium. This has then been installed on the development machines so that the developers have a way to unit test their front-end code.

I have managed to get the development team to start using NUnit to create their unit tests instead of developing their own applications to unit test their work. This has meant the number of bugs that get into the builds has dropped dramatically.

These two changes have created a mind shift for all of the developers towards a more agile development cycle. This shift has not been that difficult, and the reason for this is the management team that I report to. They are always open to ideas, especially ones that save money, and agile development on web projects definitely has this appeal.

So the next thing in my plan is to create an integration server so that all the different development teams across the UK and France can check in their work and be confident that their changes will work in another environment.

Wednesday, 13 February 2008

Graphical Test Planning

In December I went to a conference in London, and one of the lectures that I went to listen to was about Graphical Test Planning, by Hardeep Sharma of Citrix Systems.

Graphical Test Planning is a way of creating a test plan without having to write entire documents. I am sure that most of the readers reading this practise full agile development and are probably thinking that I am mad.

Maybe I am mad, but I really think that this is a great way to work out what needs to be tested and a good way to track the progress of testing.

The entry below is more my thoughts having tried this technique on web projects, and I have used web examples. I hope you enjoy reading it!

Graphical Test Planning is done by creating structured relationship diagrams. These allow product managers to know what is going to be tested from the day that they decide there is going to be a new version of the product.

What a Graphical Test Plan is or isn't

What a Graphical Test Plan is:

  • A structured relationship diagram

  • A list of behavioural areas that need to be tested

  • A method of getting great feedback about what needs to be tested

  • A method of tracking progress of design and execution of testing

What a Graphical Test Plan isn't:


  • A flow diagram

  • A test manager's thoughts or mind map

  • A feature list

  • A hand-holding exercise for a test engineer on how to use the application

How To Create A Structured Relationship Diagram

A structured relationship diagram will show what behavioural areas need to be tested on different platforms and with different compilers. Below is an example of what an SRD looks like for Selenium.

It's not 100% correct for Selenium, but I wanted to show something with no coverage.

From this you can see which areas need to be tested and which don't. If we carry this theme on, we can start drilling the behavioural areas down into more detailed SRDs, and if we were to carry on drilling down we would start to create another form of these diagrams called a Test Case Diagram.

A Test Case Diagram looks a lot like an SRD, except that at the end of the diagram, instead of showing the behavioural areas, it shows the expected results. The great thing about this is that you can create the SRD in about 10 minutes and then do the Test Case Diagrams in a few hours, which means that you can have all this test information done before any code is written.

So if we use this method to create an entire test strategy for a project, compared to the waterfall way of doing things, you would have saved yourself somewhere in the region of one man-month, and you are likely to have fixed a number of flaws in the design before any code is near completion.

I hope that you found this very interesting, and I would like to thank Hardeep Sharma for showing off this technique at the SIGiST Conference in December.