With this in mind, Robert Cowham, MBCS, Vice Chair of the BCS CMSG, discusses continuous delivery and the state of DevOps.
The initial results were often very positive, but organisations typically realised that the gains applied to only part of the lifecycle. Even with developers delivering regular sprints of new functionality, processes such as testing and release/deployment would cause bottlenecks and delays.
Thus releases to end-users or customers were still only happening every few months at best and the organisation was not getting much overall benefit. Forrester’s 2013 report on the development landscape showed the stalling of agile practices and difficulties in attempting to scale them.
Addressing these issues gave rise to DevOps and continuous delivery (CD) practices, with a focus on how to optimise the whole lifecycle. The term DevOps was first coined in 2009 and started gaining momentum in 2010.
The book Continuous Delivery by Jez Humble and Dave Farley was published in 2010 and was based on some years of best practice experience in different organisations. It covers a range of effective technical practices, from version control to managing environments, databases and other aspects of automating the development pipeline.
In a sense these approaches are going back to some of the roots of agile methods in lean manufacturing principles. I prefer the term CD because it reduces the risk of ignoring important aspects of the whole - it is more than just about improving the cooperation between development and operations, although that is very important.
Delivering new releases of software or new products requires coordination with other parts of your organisation as well, such as support, marketing and sales. However, it is the principles that are more important than the precise name!
The state of DevOps
The ‘State of DevOps’ 2014 report (by Puppet Labs, IT Revolution Press and ThoughtWorks, and freely available) describes the relationship between strong IT performance and business goals such as profitability, market share and productivity - perhaps not surprising considering Marc Andreessen's 2011 proclamation that ‘software is eating the world’.
For example, mobile operators, financial services providers or engineering companies may not sell software, but they certainly depend on it. With the Internet of Things (IoT) or the evolution of hardware vendors into software-centric manufacturers (hardware is the new software), the importance of IT is only increasing.
The other key factors highlighted in the report are that DevOps practices improve IT performance and that organisational culture is also a strong predictor of IT performance. An interesting finding, though perhaps less surprising given the developer-led adoption of agile methods, is that job satisfaction is the number one predictor of organisational performance!
High-performing organisations report deploying code 30 times more frequently, with 50 per cent fewer failures than other organisations. The key practices that underpin such performance are:
- Continuous delivery - this means ensuring that your software is always potentially releasable (the business needs to decide separately how often to actually release, taking into account the ability of customers or end users and other supporting processes to handle releases).
- Version control of all artifacts - this includes environments (treating infrastructure as code). Interestingly, version control of production artifacts is a stronger predictor of IT performance than version control of code - perhaps reflecting the issues and complexity around managing configurations.
- Automation throughout the lifecycle, particularly of testing, including acceptance and performance testing (a simple pipeline sketch follows this list).
- Good monitoring of systems and applications.
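To make the automation point concrete, here is a minimal sketch of a deployment pipeline that runs each stage in turn and stops at the first failure, so a build only counts as releasable when every stage has passed. The stage names and make targets are illustrative assumptions rather than a reference to any particular toolchain.

```python
# A minimal deployment-pipeline sketch: every stage is automated, and a failure
# at any stage means the build is not releasable. The make targets are
# illustrative placeholders, not real project commands.
import subprocess
import sys

PIPELINE = [
    ("build", ["make", "build"]),
    ("unit tests", ["make", "test"]),
    ("acceptance tests", ["make", "acceptance-test"]),
    ("performance tests", ["make", "performance-test"]),
    ("deploy to staging", ["make", "deploy-staging"]),
]

def run_pipeline() -> bool:
    for name, cmd in PIPELINE:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"Stage '{name}' failed; this build is not releasable.")
            return False
    print("All stages passed: this build is a release candidate.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```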
A survey by Evans Data commissioned in 2013 found that 28 per cent of companies were using continuous delivery for all of their projects, yet the same companies believed that 46 per cent of their competitors were doing so - no one wants to be left behind!
The benefits for organisations can be considerable. Being able to release more frequently, and with greater reliability and predictability, allows the business to experiment and evolve towards satisfying real customer needs more quickly and efficiently and with less risk.
By delivering software faster into the hands of end-users, the time to get customer feedback is reduced. Somewhat paradoxically, doing this reliably and consistently usually results in higher quality. This is a side effect of the mantra - ‘if it hurts, do it more often!’
Of course there are many challenges to implementing such methods and it is always going to be harder in organisations with decades of investment in their software systems.
Making progress on the CD journey
So how are organisations successfully implementing CD practices?
The initial driving force behind DevOps and CD tended to be smaller companies with modern ‘on the web’ infrastructures. People on small teams are more likely to be multi-skilled, which makes it easier to apply development practices to operations: version control, treating infrastructure as code, and automated tooling and testing processes.
It can be quite a challenge to change the mindset of administrators from ‘I’ll just login and fix that’, to ‘I’ll make a change to a configuration file, test it locally and check it in to version control, before running my deployment tool to propagate the change.’
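A minimal sketch of that workflow follows, assuming a hypothetical config-lint checker and deploy-tool command; the file path and commit message are also invented for illustration.

```python
# Sketch of the 'config change as code' workflow described above. The
# config-lint and deploy-tool commands, the file path and the commit message
# are hypothetical placeholders, not references to any specific product.
import subprocess
import sys

CONFIG_FILE = "config/webserver.conf"   # hypothetical configuration file

def run(cmd):
    """Echo a command, run it, and stop on the first failure."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # 1. Test the change locally (here, a hypothetical syntax/lint check).
    run(["config-lint", CONFIG_FILE])

    # 2. Check the change in to version control instead of editing the server by hand.
    run(["git", "add", CONFIG_FILE])
    run(["git", "commit", "-m", "Tune webserver keep-alive settings"])
    run(["git", "push", "origin", "main"])

    # 3. Let the deployment tool propagate the change to the environments it manages.
    run(["deploy-tool", "apply", "--target", "staging"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"Step failed: {exc}")
```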
It became clear that the same principles could also scale to larger companies, in spite of some challenges. Organisations of any size will need deep technical expertise in areas such as database administration, technical architecture, testing, security, system administration and operations. Traditionally these skills reside in separate teams, often with quite widely separated reporting structures.
As with most such improvement initiatives, changing people’s mindset is often the biggest challenge. This requires both bottom-up enthusiasm for improving individual processes and top-down management support. The management support is needed to overcome issues such as organisational inertia, reporting structures, and how progress is measured and people are rewarded.
Coordination between teams is vital to CD, so a matrix-type organisation, in which representatives of different technical teams are brought together for particular projects, has been shown to work well. Assigning experienced and respected people within your organisation to any such new initiative is important.
They need an appropriate mandate and ability to work across the organisation and pull together teams to solve problems. Focusing on improving or automating workflow and handovers between teams is often a major win.
While there are plenty of technical challenges to achieving CD, there are also solutions, proven approaches and supporting tools, and steadily more successful case studies being reported.
The role of architecture is fundamental for longer-term success. Appropriate layering and interfaces can help to support practices such as automated testing. While existing systems may not have been designed with this in mind, it is important to come up with an architectural vision and look to evolve, even if it takes years.
Large companies with mainframe-based systems built up over decades of development have managed to do this, for example by adopting a service-oriented architecture.
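To make the architectural point more concrete, the sketch below shows the kind of seam that layering and interfaces provide: business logic written against an interface can be exercised by automated tests using a fake implementation, without touching a real back-end. All of the names (PaymentGateway, FakeGateway, place_order) are invented for illustration.

```python
# A minimal sketch of an interface (seam) that makes business logic testable
# without the real back-end. All names here are illustrative, not taken from
# any real system.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface the rest of the system depends on, rather than a concrete vendor."""
    @abstractmethod
    def charge(self, account_id: str, amount_pence: int) -> bool: ...

class FakeGateway(PaymentGateway):
    """In-memory stand-in used by automated acceptance tests."""
    def __init__(self):
        self.charges = []

    def charge(self, account_id: str, amount_pence: int) -> bool:
        self.charges.append((account_id, amount_pence))
        return True

def place_order(gateway: PaymentGateway, account_id: str, total_pence: int) -> str:
    """Business logic written against the interface, not a particular implementation."""
    return "CONFIRMED" if gateway.charge(account_id, total_pence) else "DECLINED"

def test_order_is_confirmed_when_payment_succeeds():
    gateway = FakeGateway()
    assert place_order(gateway, "ACC-1", 2500) == "CONFIRMED"
    assert gateway.charges == [("ACC-1", 2500)]
```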
It is beneficial for all teams to improve their individual practices, for example by increasing automation within their team and ensuring that the many different tools in use through the lifecycle can co-exist and work together.
Tools range from requirements management through design, coding and testing to continuous integration and automated deployment. Automation capability needs to be a key selection criterion for any tool, and increasingly many offer web-based application programming interfaces (APIs).
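Where a tool does expose a web API, a pipeline step can drive it directly rather than waiting for someone to click a button. The sketch below posts a build request to an entirely hypothetical endpoint; the URL, token and payload fields are assumptions, so consult your own tool's API documentation for the real equivalents.

```python
# Sketch of triggering a lifecycle tool through a web API from a pipeline step.
# The endpoint, token and payload are hypothetical; real tools will differ.
import json
import urllib.request

API_URL = "https://tools.example.com/api/v1/builds"   # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"

def trigger_build(project: str, branch: str) -> dict:
    """Ask the (hypothetical) build service to start a pipeline run."""
    payload = json.dumps({"project": project, "branch": branch}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    print(trigger_build("billing-service", "main"))
```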
As previously mentioned, ensuring that all assets are in version control has a very high correlation with IT performance and is a key foundation for other tools. Change tracking, audit trails and impact analysis for things like risk assessment are all driven from this foundation. It is very important to ensure that any tooling in this area will scale appropriately and perform adequately to meet the needs of the organisation.
There are some powerful workflow tools available to help with implementation, but they need to be used with care. While they may improve visibility and consistency, the resulting process is not always optimal overall. One financial services client I worked with recently had implemented ServiceNow, and creating a new environment for testing involved some 20 different sub-tasks.
These were queued and assigned to different teams as appropriate, with automated onward queuing after each step. In this case, some tasks took only a few minutes to complete, and yet the delay in noticing the request on the queue and processing it sometimes meant that hours passed between steps.
Thus overall it often took two to three days to fully create the environment, when - if the whole thing were automated - it could be done in less than an hour. The tool itself was very capable, but it needed to be used in the most appropriate manner.
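The arithmetic behind that example is worth spelling out. Using made-up but plausible figures for the effort per sub-task and the wait before each queued request is picked up, the elapsed-time difference comes almost entirely from the handovers:

```python
# Illustrative figures only: the point is that the queue waits between manual
# handovers, not the work itself, dominate the elapsed time.
TASKS = 20               # sub-tasks needed to create the environment
WORK_MINUTES = 2         # typical effort per sub-task
QUEUE_MINUTES = 60       # typical wait before a team notices a queued request
WORKING_DAY_MINUTES = 8 * 60

manual_elapsed = TASKS * (WORK_MINUTES + QUEUE_MINUTES)   # queued handovers
automated_elapsed = TASKS * WORK_MINUTES                  # scripted end to end

print(f"Manual with handovers: {manual_elapsed / WORKING_DAY_MINUTES:.1f} working days")
print(f"Fully automated:       {automated_elapsed} minutes")
```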
Aggregating marginal gains
Another way of looking at CD is applying the continuous improvement methods that helped make the British cycling team so successful under the leadership of Sir Dave Brailsford. At the Beijing Olympics in 2008 they won seven out of 10 gold medals on offer in the velodrome (plus one out of four golds on the road), and four years later did exactly the same at the London Olympics.
This was an outstanding achievement by a single country, and it is not as if the other countries were not trying their hardest to improve in the intervening four years! The approach was to break every little action down and attempt to improve it, even if only by one per cent, which resulted in a significant overall gain when put back together.
Applying this sort of mindset to your overall software development will reap dividends and provide a significant competitive advantage for those companies that make the effort. It is about optimising individual parts, but also keeping an eye on the big picture and optimising the whole.
Where are you on your journey towards continuous delivery?