The event was on 12 December 2006 at the Royal Society in London.
Two speakers stimulated debate by sharing their thoughts and experiences in short talks, and the 45 participants then discussed the topic in small groups over dinner.
Each table included senior people from IT suppliers, commerce, industry, the public sector and universities. At the end of the evening each table reported back to the entire gathering.
Background
In the past barber-surgeons were known for the sharpness of their instruments, not for their knowledge of anatomy, which seemed to take second place in the practice of surgery.
Imagine the dismay with which these so-called surgeons greeted some ivory-towered academic who told them that the practice of surgery should be based on a long and detailed study of human anatomy and on familiarity with surgical procedures pioneered by the great doctors of the past, and that it should be carried out only in a strictly controlled, bug-free environment, far removed from the hair and dust of the normal barber's shop.
It has been said that software developers are disturbingly similar to these barber-surgeons.
Today this relatively new industry has many practitioners who are specifying software using ambiguous English and vague diagrams, implementing it using ill-defined programming languages with well-known and egregious weaknesses, and hoping against all logic and evidence that testing the resulting mess will somehow lead to a usable product.
And still too many projects get cancelled, while those that do survive are almost all over budget and full of vulnerabilities. The US National Institute of Standards and Technology has calculated that poor-quality software costs the US economy nearly one per cent of GDP.
In fact the software industry is reputedly years behind where it should be, a gap that often leads to unusable products.
Currently software delivers only a fraction of what it is supposed to. Users often put up with bugs in order to get the benefits, and find themselves in a rather nebulous position where they frequently do not know enough about the product to formulate a useful complaint.
What is dependability?
Software 'dependability' is not the same as having the software meet its users' needs. For example, one could have software with a disappointing feature set which fails to meet its users' expectations, but nevertheless is dependable because it never does anything that could not be predicted.
Sometimes it is simply a question of how much of the time a system is 'up' as opposed to 'down'.
Why is dependability important?
CIOs frequently fail to recognise the hidden flaws, and hidden costs, of poor software, and the business suffers as a result. They need to take greater responsibility for selecting and monitoring the development of this critical element of a business's infrastructure.
CIOs need to understand the cost of outages - many do not understand what the lack of dependability is costing their organisation. Procurement often focuses on cost and time scales to the detriment of dependability.
It is often the so-called 'mega-projects' that break down, sending the wrong message to the media and the public at large, who see such disasters as a reflection of the software industry as a whole.
It is unfortunate that, as with many industries, major change only occurs after people have been killed in spectacular ways.
So far this does not appear to have happened because of faults in software; and even if it has, the industry is not transparent enough to allow a clear view.
By its very nature, software development offers no physical view of the product at the outset; hence the end result frequently does not tally with what was intended.
Is dependability always necessary?
Not all users require the software they use to be reliable or dependable; different user communities will have different thresholds of tolerance.
Some participants thought that in many cases a lack of dependability would be tolerated if there were some means of restitution (perhaps through the law).
Rather like the fashion industry, software engineering has delusions of grandeur. When it comes to contracts and procurement this becomes a game that most people lose, putting up with poor service and a lack of warranties. However, defects do get fixed these days, often driven by security issues.
What is needed to improve dependability?
Specification and planning
Most participants agreed that there is a real need for specifications and requirements to be agreed and documented before any software is developed.
There is often a gap between customer expectations and actual deliverables, but attendees believed this could be minimised by having common base specifications for software to ensure a minimum level of dependability.
Regarding the specification of large-scale government projects, which have had a high rate of failure, it was said that the government and civil service are weak in specification and project-management skills and do not really understand technology.
Complacency, particularly with lack of planning, leads to increased costs, many of which are hidden.
If reliability means that the software in question should always behave predictably, a difficulty can arise from any unknowns in the overall architecture. For example, a system may fail not because of flaws in the commissioned software, but because of flaws in the underlying operating system that supports it.
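A minimal sketch, not from the debate itself, of why that boundary matters: the commissioned software can check every status code the layer below returns, but it has no defence if that layer misbehaves while reporting success. The file name here is purely illustrative.

```cpp
#include <cstdio>

int main() {
    // Append to a log file; the name is hypothetical.
    std::FILE* f = std::fopen("audit.log", "a");
    if (f == nullptr) {
        std::perror("fopen");   // OS-level failure, detectable
        return 1;
    }
    if (std::fputs("transaction recorded\n", f) == EOF) {
        std::perror("fputs");   // write failure, detectable
    }
    // fclose flushes buffered data. If a flawed OS reports success
    // here but silently loses the data, the application cannot tell:
    // the fault lies below the interface the program can observe.
    if (std::fclose(f) != 0) {
        std::perror("fclose");
        return 1;
    }
    return 0;
}
```

Checking every return code is necessary but, as the participants' point implies, not sufficient: the dependability of the whole stack is bounded by its least dependable layer.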
Reliability also exists within a context of use: one person used the analogy of reliable cars being driven badly.
Corporate suppliers and customers need to take on greater responsibility in order to ensure greater satisfaction with software overall. Improved documentation would assist this, particularly risk assessments carried out before any development work.
Smaller projects tend to be more successful than larger ones. One participant suggested that when a project becomes 'too big to fit in one person's head', that is when problems start to happen and failure becomes inevitable.
Risk assessments
Managers need to use risk-based assessment to drive programming forward, and also take into account human fallibilities to a greater extent than is done currently.
However, some participants thought that risk assessments are regularly done by the wrong people, and the penalty clauses that are frequently employed generally don't stick due to technicalities.
It was hoped that software designers would adopt a risk-based approach to their development work, but this is frequently not the case, with many opting to use availability (up-time) as the decisive factor rather than reliability.
Unfortunately, the human brain is not very good at assessing risk, especially risks with low probabilities. Some participants thought that managers should probably use availability rather than reliability as the measure of software success.
In other words a single very long outage may be worse than several very short outages. There is a real need to understand and agree with the client what the priorities are.
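A worked illustration of that distinction (the figures are invented for the example, not taken from the debate): two systems can have identical availability while differing enormously in how their downtime is distributed, which is why the client's priorities decide which pattern is worse.

```cpp
#include <iostream>

int main() {
    const double minutes_per_year = 365.0 * 24.0 * 60.0;

    // System A: a single 8-hour outage per year.
    const double downtime_a = 8.0 * 60.0;   // 480 minutes
    // System B: 96 outages of 5 minutes each per year.
    const double downtime_b = 96.0 * 5.0;   // 480 minutes

    // Identical availability (~99.91%) despite very different failure
    // patterns -- availability alone cannot tell the two apart.
    std::cout << "A: " << 100.0 * (1.0 - downtime_a / minutes_per_year) << "%\n";
    std::cout << "B: " << 100.0 * (1.0 - downtime_b / minutes_per_year) << "%\n";
    return 0;
}
```

For an overnight batch system the 96 brief interruptions may be harmless, while the single 8-hour outage could be catastrophic for a trading floor; the measure only has meaning once those priorities are agreed with the client.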
The right model of software development
There was also some discussion of different models of software development. The 'waterfall model', for instance, requires a very formal process of defining the requirements and establishing the process of development, then implementing this step by step and often in a modular fashion.
Problems can arise near the end of this process in integrating the modules, testing the system and tracking down the bugs.
Often there appears to be a 'disconnect' between policy-making and project decision-making, which can lead to not inconsiderable problems for the business.
Recently there have been trends towards iterative models of development, the so-called 'agile methods', in which development proceeds in short time-frames and the team meets frequently face-to-face to review progress. Rather than having an unchanging specification, feedback during development can result in the goals changing.
The best language
It was suggested that industry needs to go back to programming basics, with everyone utilising one language throughout.
Good programmers will always write good code but there is a growing shortage of these skilled workers.
It would seem that industry trains its IT workers to use C++ or, if they are lucky, Java (many participants believed that Praxis SPARK Ada is better).
However, peer pressure to use what is more widely known appears to produce a general dumbing down of software development, frequently led by costs or by a short-term focus, with developers always looking ahead to their next job, which will probably also involve programming in C++.
Hence there is never a real incentive to change the programming language to one which might actually be better for the job in hand. In fact the choice of development tools can be a result of CV-engineering rather than of using the best tool for the job.
For example, programmers tend to want to develop in C++ rather than a more robust language such as Ada because there are more job ads asking for C++ skills.
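A small, hedged illustration of the kind of language weakness the participants had in mind (the snippet is ours, not from the debate): in C++ an out-of-range array access compiles without complaint and is undefined behaviour at run time, whereas SPARK Ada is designed so that such errors are caught by analysis before the program ever runs.

```cpp
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 3> readings{10, 20, 30};
    const std::size_t i = 3;  // one past the end

    // readings[i] would compile and run, silently producing undefined
    // behaviour -- exactly the sort of defect testing may never expose.

    // at() at least turns the fault into a detectable failure:
    try {
        std::cout << readings.at(i) << '\n';
    } catch (const std::out_of_range& e) {
        std::cerr << "out-of-range access caught: " << e.what() << '\n';
    }
    return 0;
}
```

The point is not that C++ cannot be used safely, but that its defaults are unsafe, so safety rests on programmer discipline rather than on the language.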
It was thought by some delegates that we should not place so much significance on tools, and that programmers need to be more jack-of-all-trades, able to adapt to changing environments rather than sticking to just one or two languages.
However, it was recognised that there is often considerable pressure from software vendors to follow one popular route without considering what other options there might actually be.
For comparison there doesn't appear to be the same kind of pressure applied to mechanical engineers and their tools.
There was some debate on whether having code open to inspection, as in the open source model, would improve reliability; the 'many eyes' principle as it were.
The example used was e-voting, with doubts expressed about whether one could trust the security of an undisclosed proprietary system. It was noted that Dutch experience of e-voting had concluded that the machines could be hacked, prompting a return to pencil and paper ballots.
Avoiding issues with internet integration
A worrying trend that was highlighted is the movement to integrate software with the internet, an intrinsically anarchic environment that seems to encourage software failure.
In fact software development is becoming more of an issue as connectivity increases.
Unexpected developments are occurring as systems are connected: designers are unable to predict accurately the outcome of increased connectivity, and the resulting emergent properties can have frightening consequences.
With a greater degree of connection and interconnectivity no software is an island, and asymmetry leads to greater design challenges.
Security is a growing problem, the only foolproof way to avoid certain forms of contamination being to run one PC completely separate from the internet and another connected online.
Secure systems should, therefore, perhaps not connect to the internet for safety reasons.
Less emphasis on cost
It was thought that companies and individuals need to put more value on dependability: cost should not be the only consideration, since a low price is often only a short-term benefit.
Cheap software has hidden costs: some of it has to be fixed with patches, and users have to live with the consequences. The total cost of development therefore has to take into account the impact of failures resulting from short cuts taken during development, and spending should be based on what is appropriate.
Change, however, will not happen unless costs are understood, whether they are hidden or otherwise.
Competence of users
It is estimated that the majority of projects fail because the end consumers use the software incorrectly or because the systems engineering is inadequate for the job it's being asked to support.
In fact the competence of the user is often overlooked in favour of blaming the designer, who might actually have produced good software that was subsequently used incorrectly.
However, users are only as good as the user interface, and usability testing, for example with talk-aloud protocols, could perhaps reveal problems at an earlier stage.
Software designers treating customers fairly
As for customers, they often do not read the small print, which they should do in order to make a risk-based judgement.
Software designers have a habit of pushing customers in the direction they want them to go, leaving people feeling manipulated and 'locked in' to a process. Lock-in should cut both ways, but frequently it does not, with producers treating their customers with contempt, knowing they have them 'over a barrel' once they are signed up.
In the field of consumer-purchased mass-market software, it was noted that the software vendors' terms and conditions routinely deny that the user has any rights, and state that no warranty is expressed or implied.
It was thought by some participants that designers should be encouraged to use systems engineering tools, and that software should only be deployed on a good, solid engineering base.
All agreed that software should perhaps have more rigorous certification, whatever it might be used for.
Encourage professionalism
It was suggested that because parts of the civil service have no real idea about the nature of programming, they are encouraging developers to move in the wrong direction, and that until more informed decisions are made at the highest levels nothing significant will change.
For example, government needs to encourage a more professional approach by creating a syllabus or framework that would help to generate quality across the board.
It was widely thought that industry knows what needs to be done but is averse to any change that might inconvenience it or add extra cost.
Questioning attitude
It is important for designers to question systems as they are developed, constantly asking themselves 'why', and 'what might happen' if they proceed down a particular path, before they allow their system to connect to the internet.
They should also ask more often why they are doing what they are doing, as the requirements of the system and the business will probably be constantly changing.
Different countries have a different approach to software development.
It was noted that India, for example, will supply as required, to the letter of the specifications originally sent, whereas Serbia will continue to ask questions throughout the development process, constantly asking for reasons why the project is headed in a particular direction.
Outsourcing itself can create extra risks if not managed correctly, for example, we can lose control of safety properties when development is outsourced; hence a proper service model is important.
Feedback to individuals
As to whether professionalism and certification of individual practitioners would help to improve software quality, there was general scepticism.
Such schemes could make people victims of the narrow framework of whatever was deemed to be important and flavour-of-the-month. And who could be trusted to devise the syllabus?
Several felt that a model based on feedback and reputation was more appropriate to the software industry.
One radical proposal, put forward by a number of those at the debate, was to set up a system whereby feedback on software would come from the developer's own peer group: a rating system similar to eBay's, allowing people to read feedback on individual developers and their products.
It was suggested that BCS itself, as an independent organisation, could host such a site.
Another, less controversial, suggestion was that industry should employ more women developers, from which improvements would naturally follow, perhaps as a result of women's different work-styles or modes of attention, though the point was not expanded on.
Certification of companies
Others thought that the issue is not the professionalism and certification of individuals but of the companies that undertake software development. Mention was made of CMM, the Capability Maturity Model methodology for refining an organisation's software development processes, pioneered by the Software Engineering Institute at Carnegie Mellon University, Pittsburgh.
This provides for five levels of capability. In Europe most organisations reach Level 3; it was said that Level 5 is reached only by some organisations in the USA, Japan and India.
Success stories
Games software has fantastic specifications in comparison with most of those used by businesses. In fact one participant said they were the best he had seen in his time working within the MOD, where quality control is probably better than normal.
Many developers could learn a lot from the games industry, which seems to have better standards and to deliver more bug-free code. Ultimately, if a game gets a reputation for crashing, the company that developed it can go bust.
However, there are plenty of systems that have been produced and do work, for example software that runs chemical plants, pharmaceutical equipment, etc.
In fact there are many systems that are sufficiently reliable most of the time and are therefore never flagged up as a problem.
Disasters are frequently caused by human error or mechanical failure rather than by software, contrary to what is often reported. Although a number of attendees sounded pessimistic, there are systems that are driving factories and services safely and reliably.
Successes like the Oyster card system and London's congestion charging system are often overlooked and there is a need to celebrate these successes more, which organisations such as the BCS are endeavouring to do.