Testing e-Commerce - in at the deep end!
Around the middle of 1999 I found myself, for the first time, working on a development project, the goal of which was to deliver a new e-commerce application.
Initially I was part of a team that evaluated alternative solutions and put together a proposal that would deliver the application in a 15-month development timescale.
The solution itself used modified package software at the back end and bespoke code for the web interface. The back-end systems were supplied by two separate companies and the front-end by a third.
At that time, I was responsible for defining the Test Strategy, which consisted of conventional Integration Testing, System Functional and Technical Testing, followed by UAT and OAT.
We submitted our proposal in late July and pretty much the whole team went on their summer holidays. When we got back from holiday we were horrified at how badly things had apparently gone wrong while we were away.
The good news was that the proposal had been accepted by the Client. The bad news was that they wanted it in nine months rather than the original 15 - and it wouldn't have been easy doing it in 15.
When we had calmed down, we realised that it wasn’t really as bad as it had seemed. This was all taking place at the height of 'dot com' fever when everyone was desperate to achieve a presence on the web.
The Client knew that if he took 15 months he would be launching after his main competitor, and if that occurred he would lose market share to an unacceptable degree. Naturally enough, he wanted to avoid that and was willing to accept reduced scope if it would achieve an earlier delivery.
In addition to accepting a reduced scope, the Client had also signalled an acceptance of a higher degree of risk, so while the designers took another look at the solution to see what could realistically be achieved in a shorter timescale, I revisited the test strategy.
We submitted our revised solution, meeting the Client's wishes for an early implementation, and obtained agreement at the end of September.
The developers leapt into action at that point. My immediate concern, however, was recruiting and building the test team and making sure that everything was ready to start test execution early in January, which was the delivery date targeted for the first release.
As is often the case, finding good resource proved difficult. The largest single source of test analysts was the 'Test Team' we bought in from one of the consultancies.
We had to get testing staffed and effective in a very short time, and when it became obvious that we weren’t going to be able to recruit enough people from within the Client, the offer of a team as a job lot was very attractive and was eagerly accepted.
In describing them as a 'Test Team', I used inverted commas because it very rapidly became apparent to us that they were not really a team (most of them had never worked with each other before), and most of them had no real test experience.
Though there was a wide range of experience on the team, several of them were on their first assignment since joining their consultancy practice after graduation. We suspected that the 'team' was simply a number of individuals who were between assignments at the time we were looking for staff.
In the event we need not have worried, as they all did an excellent job for us. Those of you reading this who work in Professional Services will not be unfamiliar with the concept of being sold as an 'expert' in something you aren't, so all credit is due to the youngest and least experienced members of that team, who performed far better than anyone could reasonably have expected. Looking back on it now, I realise that recruiting that team represented one of the best pieces of luck we had in testing over the course of the whole project.
On their arrival, the new test team must have wondered what they had let themselves in for. They had no sooner arrived and settled in than they were relocated en masse to another building that gave us the space we badly needed but which didn't as yet have any PCs except for the managers and team leaders. The provision of desktops was to take another two weeks, and for that time all they could do was get their hands on as much reading material as possible.
Solid reading material was, however, in very short supply. As a result of the speed of development there was very little indeed in the way of definition.
Business requirements were vaguely defined - frequently as one-liners that could have been interpreted in a number of different ways. Design specifications were not much better. For me, that was the biggest difference between this project and previous, conventional ones. How do you test when you aren't really sure how the system was meant to work?
We attempted to set up some formal education courses from the suppliers of the packages, but they weren't able to arrange anything in the time available.
In the event, we were able to persuade the suppliers to take responsibility for the definition and initial execution of Integration Testing. Our people would sit alongside them and observe the results of the testing. In doing so, we would obtain skills transfer and would gradually be able to take over execution ourselves.
January came and we went into execution. Overall, Integration Testing was very successful in getting the various parts of the system to talk to each other.
At this stage, we didn't insist that an interface had to be processed successfully, only that it was processed at all. The skills transfer also worked very well, and we were already starting to develop our own experts. Though it went well, Integration Testing ran later than planned, taking five weeks rather than the three in the plan.
From then on until full launch at the start of June (punctuated only by the arrival of the Pilot at the start of April), it was just a blur of activity with time passing more quickly than I had ever experienced before on any other project.
The typical working pattern was 12 hours a day, 6 days a week, which would go a long way towards accounting for that. The main factors that ultimately made the difference between success and failure during that period were:
- As close to a blame-free culture as I have ever experienced on a project. The whole management team genuinely accepted that if we were to operate at the speed we were, then mistakes would inevitably be made.
All we asked in return was that mistakes, when they occurred, should be openly admitted and not hidden. Suppliers 'closing ranks' to defend mistakes made by one of their number was especially frowned upon and strongly discouraged.
- Planning was only ever done at two levels: one was at a very high level and involved being aware of the target dates and driving towards them; the other was 'what are we doing in the next few days'.
There wasn't really anything in between - I did try maintaining proper plans and Gantt charts, but they were always blown within a few days. It was such a dynamic environment that it was impossible to plan beyond a window of that size.
The most useful planning 'tool' we employed was a round table in the corner of the office that we used frequently to resolve issues and crises. We would get together round the table the four or five people who were key to resolving the particular issue and agree a short-term action plan that would resolve it.
- We piloted the live implementation. Of all the decisions made, by far the best was to launch initially with a two-month Pilot restricted to the more inquisitive members of the project team and to their relatives (the development manager's 8-year-old daughter proved very effective in this respect). This had the benefit of allowing us to:
- Make mistakes in a controlled way rather than in the full glare of publicity.
- Attempt things that would otherwise be impossible to test properly, e.g. using your credit card at Disneyland in Florida.
- Mitigate the risks arising from the massive parallelism of the testing and from the lack of proper end-to-end testing, which we were unable to do.
- Incident Management. Generally, we were very light on process, but one process we made sure we tied down tightly before we got into execution was Incident Management. We took all sorts of chances in other areas, but not with this one. Configuration Management was handled by the supplier support staff, so that was one less thing for us to worry about and, by and large, they did a good job for us.
We invested a lot of effort in daily incident reviews with the suppliers, making sure they understood the relative priorities of the different incidents and applying maximum pressure to ensure that turnaround was the very best it could be (a sketch of the kind of triage this involved appears after this list). I recall that in the last week of testing prior to live launch we successfully re-tested 338 incidents. Constant change was very much the order of the day.
One last point on Incident Management - in an environment where components are supplied by different companies, it is essential that there is someone who can arbitrate on those incidents where ownership of the problem is disputed. This person needs to have stature and to be trusted and respected by all parties.
We successfully employed the lead Technical Architect from the Design Authority in this role. It really is important - you can lose a lot of time having incidents bounce between suppliers, and, human nature being what it is, people are always ready to believe that the fault lies with the other group.
- Without a doubt, the biggest single factor in our success was the calibre of the team. It was by far the best team I have ever worked in.
Not just because everyone, almost without exception, was excellent, but because we had somehow contrived to manage things so that we had the right mix of personalities too, balancing the 'go-getters' necessary to drive the project forward with the calming influences that prevented potential flash-points from developing.
It helped enormously that the team was relatively small and that roles and responsibilities were very clear.
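For readers who want something concrete, the daily incident review mentioned above amounted to a simple, rigorously applied triage. The sketch below, in modern Python, is purely illustrative - none of this code existed on the project, and every name in it (the Priority scale, the Incident fields, the "INC-" numbering) is invented for the example. It shows the two rules we actually lived by: disputed incidents surface first, so the arbitrator can settle ownership on the spot, and everything else is worked in strict priority order.

    from dataclasses import dataclass
    from enum import IntEnum
    from typing import List

    class Priority(IntEnum):
        CRITICAL = 1  # blocks test execution or threatens the launch date
        HIGH = 2
        MEDIUM = 3
        LOW = 4

    @dataclass
    class Incident:
        ident: str              # e.g. "INC-0338" (hypothetical numbering)
        summary: str
        supplier: str           # which of the three companies owns the fix
        priority: Priority
        disputed: bool = False  # ownership contested; needs the arbitrator

    def daily_review_agenda(open_incidents: List[Incident]) -> List[Incident]:
        """Order open incidents for the daily supplier review:
        disputed items first, so ownership can be settled immediately,
        then by priority, so pressure lands where it matters most."""
        return sorted(open_incidents,
                      key=lambda i: (not i.disputed, int(i.priority)))

The point is not the code but the discipline it represents: one agreed priority order, reviewed every day with all suppliers present, so that no incident could sit unowned or wrongly ranked for more than 24 hours.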
Finally, although there are clearly differences between e-commerce and conventional projects, it was not the differences that struck me most but the similarities. Good practice in Project and Test Management applies across technologies.
To meet the aggressive timescales of an e-commerce project, there are risks that can be taken and risks that can't, and your experience, however obtained, will guide you.
David Scott, UK BIS Test Services IBM Global Services