Chartered Week celebrated professionalism and its place in raising standards in IT, culminating in BCS’ sell-out conference, Convene.
BCS’ Convene conference took place on 24 February 2025, packing our London office to capacity. The vibrant, bustling and interactive event focused on building an innovative and ethical UK tech landscape, exploring today’s most critical cutting-edge technologies and topics including AI, quantum computing, service resilience and EDI in the workplace. With several keynotes, panel debates and presentations, Convene offered space for attendees to learn and share their professional opinions about our industry’s direction of travel.
BCS President Alastair Revell CITP FBCS welcomed attendees and emphasised the importance of embedding professionalism and ethics within tomorrow’s IT industry, seeing the conference as a platform for learning and championing new ideas. ‘We have the authority under our Royal Charter to bring people together’, Alastair said, explaining the conference’s name, and encouraged attendees to network and collaborate, urging them to ‘make it a personal mission to connect with one or two people you've never met’.
Convening for societal change
Convene was the flagship event of Chartered Week, a new annual initiative conceived by BCS, which saw over 50 professional bodies celebrate chartered practice’s place in raising standards, building competence and embedding ethics across different industries. Supporters included the Engineering Council, the Royal Society of Biology, the Chartered Management Institute and the Royal Society of Chemistry.
Lucy Criddle said about Chartered Week: ‘This year’s theme is “celebrating trusted professionals”. It’s a great opportunity to recognise the chartered heroes whose work and commitment help to strengthen public confidence in their sector.’
AI: from Turing to today and beyond
Following the opening addresses, Michael Wooldridge FBCS, Professor of the Foundations of Artificial Intelligence in the Department of Computer Science at the University of Oxford and a Senior Research Fellow at Hertford College, took to the stage.
Michael focused on machine learning (ML), taking care to distinguish it from AI as an umbrella field and demystifying it by explaining its most common method: supervised learning. He described it as a process where an AI is trained using input-output pairs, much like teaching a child by showing them examples. A simple illustration of this is facial recognition: when training an AI to recognise Alan Turing’s face, the system is repeatedly shown images of Turing, each labelled with his name. Over time, the AI learns to associate the features in the image with the correct label.
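By way of illustration (no code was shown at the conference), the pattern Michael described can be captured in a few lines of Python. This minimal sketch assumes scikit-learn and uses its bundled handwritten digits dataset in place of labelled face images; the principle of fitting a model on labelled input-output pairs and asking it to label unseen inputs is the same.

```python
# Minimal supervised learning sketch: fit a small neural network on
# labelled input-output pairs, then predict labels for unseen inputs.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)                  # inputs (pixel arrays) and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                          # learn from labelled examples

predictions = model.predict(X_test)                  # label examples it has never seen
print(f"Accuracy on unseen examples: {accuracy_score(y_test, predictions):.2f}")
```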
One of the most striking examples of supervised learning is its application in medical diagnostics. Michael shared the story of Paul Leeson, an Oxford cardiologist who spent decades collecting and labelling heart ultrasound data. By feeding this data into machine learning systems, Leeson enabled AI to detect heart disease with accuracy levels comparable to those of human specialists. ‘No data, no AI’, he emphasised.
According to the professor, AI’s breakthrough moment happened when neural networks, an idea dating back to the 1940s, finally began to work at scale. Inspired by the structure of the human brain, where billions of neurons communicate through simple signals, neural networks were once dismissed as a fringe concept — but the 2012 realisation that GPUs could be harnessed to train them at an unprecedented scale catapulted them to the forefront of AI’s rapid acceleration. ‘Suddenly, you get 10 times more compute capability for the same price’, he explained. ML models rapidly improved, catching the attention of major technology companies who poured billions into AI research. ‘The core ideas we use today were already known in the 1980s’, Michael explained. ‘What changed was the availability of vast amounts of data and enormous computational power.’
The next breakthrough came in 2017, when Google introduced the transformer architecture, a new neural network design that enabled AI models to process vast amounts of text data far more efficiently. It paved the way for GPT-3, a model that astonished the world with its ability to generate text indistinguishable from human writing, a seismic leap in capability beyond predecessors like GPT-2.
The staggering amount of data, computational power and money required to train such models was a key theme of the talk. ‘So where does that data come from?’ Michael asked. The answer: everywhere. AI companies scrape the entire internet, ingesting everything from social media posts to government documents — tens of terabytes of plain text, equivalent to millions of novels. Turning to computational power, Michael pointed out that GPT-3 was trained using 10^23 floating-point operations. The financial cost of training such models is similarly immense, with estimates for cutting-edge systems reaching $400 million.
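To give a feel for that scale, here is a rough back-of-envelope calculation (illustrative only, not figures from the talk), assuming a sustained throughput of around 100 teraFLOP/s per GPU:

```python
# Back-of-envelope arithmetic: how long would 10^23 floating-point operations
# take? The GPU throughput below is an illustrative assumption.
TOTAL_FLOPS = 1e23                  # training compute cited for GPT-3
GPU_THROUGHPUT = 1e14               # assumed sustained FLOP/s per GPU (~100 TFLOP/s)
SECONDS_PER_YEAR = 3600 * 24 * 365

one_gpu_years = TOTAL_FLOPS / GPU_THROUGHPUT / SECONDS_PER_YEAR
cluster_days = TOTAL_FLOPS / (10_000 * GPU_THROUGHPUT) / (3600 * 24)

print(f"One GPU: roughly {one_gpu_years:.0f} years")                 # ~32 years
print(f"10,000 GPUs in parallel: roughly {cluster_days:.1f} days")   # ~1.2 days
```

On those assumptions a single GPU would need more than three decades, which is why frontier training runs are spread across clusters of thousands of accelerators.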
This reliance on vast datasets raises essential questions about copyright, bias and ethical AI development. The immense financial outlay required to support the necessary computational power also raises concerns about who controls AI development, as only a handful of wealthy corporations and state-level actors can afford to build these models.
Despite AI’s astonishing capabilities, the professor also warned about its significant limitations, highlighting its propensity for ‘hallucinations’. In an amusing but telling example, he noted that GPT-3 confidently but incorrectly claimed he had studied at Cambridge and later invented accompanying anecdotes. ‘Not only is it getting it wrong’, he said, ‘it’s getting it wrong in very plausible ways that make those falsehoods hard to detect.’
Michael also cited an infamous example of AI being more overtly harmful, where GPT-3 was asked for a foolproof way to get away with murder. The AI provided a list. Although companies have since implemented safeguards, users quickly found ways to bypass them, illustrating how difficult it is to constrain AI behaviour with simple rule-based fixes. He noted that AI’s potential misuse for generating terrorist instructions, misinformation or deepfakes remains a significant concern for governments worldwide.
He closed the talk with a powerful contrast: while conversational AI has made astounding progress, robotic AI remains inadequate. ‘We have AI that can talk fluently about quantum mechanics, but we don’t have a robot that can clear the dinner table and load the dishwasher’, he said. This disparity highlights that while AI has mastered specific cognitive tasks, real-world dexterity and problem-solving remain largely unsolved.
Panel: building resilience in digital systems
Panel Chair: Gill Ringland FBCS — Emeritus Fellow, Sami Consulting; Secretary, BCS IT Leaders Forum
Panel: Ed Steinmueller — Emeritus Professor, University of Sussex Science Policy Research Unit; Alan Brown — Professor in Digital Economy, University of Exeter; Steve Sands CITP FBCS MCIIS — Information Security Consultant and Data Protection Officer, Synetics Solutions; Chair, BCS Information Security Specialist Group
This panel discussion explored the growing importance of resilience in IT and business systems, an increasingly critical concern as digital infrastructure becomes central to the economy and public life.
Key points of discussion included:
- Resilience is more than cybersecurity — it’s about ensuring services can withstand, adapt to and recover quickly from disruption
- System fragility is rising, driven by complex software, legacy infrastructure, under-maintained third-party components and unpredictable AI risks
- Resilience planning must start early — it can’t be retrofitted. Frameworks like the NIS Directive help define impact tolerances and prioritise critical services
- It’s a whole-organisation challenge, not just for IT teams. Embedding resilience requires leadership, cross-functional skills and lessons from other sectors
Quantum
While AIs might be capable of talking ‘fluently’ about quantum, it is a good bet that they’ll never be as adept at discussing it as Dr Andy Stanford-Clark CITP FBCS. Delivering a talk called How Quantum Computing Will Change Our World and What You Need To Know About It, Andy demystified one of computing’s most complex and sometimes intimidating topics — so successfully, in fact, that the audience understood his jokes and even laughed at them.
Opening with a brief explanation of how quantum computing works, Andy focused mostly on exploring how, why and where quantum computers will eventually be used. He also explained that, far from being redundant, classical computers are necessary to program quantum computers and to interpret their results — hence IBM developing hybrid quantum-classical systems and expanding access through the cloud and open source tools like Qiskit.
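Andy’s talk didn’t include code, but a minimal sketch (assuming the open source qiskit and qiskit-aer packages) illustrates the hybrid pattern he described: a classical program builds the circuit, a quantum backend (here a local simulator) executes it, and classical code interprets the measured counts.

```python
# Hybrid quantum-classical pattern: classical Python builds and submits the
# circuit, a quantum backend runs it, classical code reads the results.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)            # put qubit 0 into superposition
qc.cx(0, 1)        # entangle qubit 1 with qubit 0 (a Bell state)
qc.measure_all()

backend = AerSimulator()                    # swap for real hardware via the cloud
job = backend.run(transpile(qc, backend), shots=1000)
print(job.result().get_counts())            # roughly half '00', half '11'
```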
Industries like pharmaceuticals, materials science, logistics and finance are already exploring early quantum applications where even small performance gains can yield significant cost savings. The current focus is on identifying tasks where quantum offers a clear advantage — and preparing organisations to capitalise on it rather than waiting until the technology becomes mainstream and competitive gaps widen.
Panel: sustainability through tech
Panel Chair: Sathpal Singh — Chair, BCS Agile Methods Specialist Group
Panel: Alex Bardell — Founder, SDAdvocate; Chair, BCS Green IT Specialist Group; Mark Butcher — Director, Posetiv Cloud Ltd; Joanna Masraff — Founder, Green PO and Pirate Jo
The discussion focused on embedding sustainability through technology, innovation and leadership shifts, with speakers emphasising the need for organisations to understand and embed sustainable practices, from writing energy-efficient code to reducing wasteful data handling.
Key points of discussion included:
- The need to move towards practical measures such as reducing data packet sizes, using image compression and rethinking how we use cloud-based operations (a minimal illustration follows this list)
- Reducing unnecessary data collection and tackling lethargy around responsible data disposal
- Complacency around resource availability, and some organisations’ willingness to keep wasting resources rather than fix inefficiencies
- The importance of standardising emissions reporting and tracking sustainability progress, with proposals for creating a more comprehensive, transparent Europe-wide strategy
- The role of supply chains in sustainability, with an emphasis on reusing and repairing technology rather than relying on recycling, which often leads to environmental damage in countries with lower recycling standards
- Encouraging industry leaders to understand the financial benefits of sustainability
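Purely as an illustration of the kind of practical measure mentioned above (not an example given by the panel), a short Pillow snippet shows how recompressing an image can shrink the payload shipped with every page view; the filenames and quality setting are placeholders.

```python
# Resize and recompress an image to cut the bytes served with each request.
from pathlib import Path
from PIL import Image

source = Path("hero_banner.jpg")        # illustrative placeholder filename
target = Path("hero_banner_small.jpg")

with Image.open(source) as img:
    img.thumbnail((1200, 1200))          # cap dimensions for web delivery
    img.convert("RGB").save(target, format="JPEG", quality=70, optimize=True)

saving = 1 - target.stat().st_size / source.stat().st_size
print(f"Payload reduced by {saving:.0%}")
```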
The session concluded with thoughts on the importance of leadership buy-in to drive sustainability from the top, ensuring that the burden of responsibility doesn’t fall on employees. Attendees were encouraged to connect with communities, share best practices, and take accountability for their carbon footprints, including in relation to AI and hardware usage.
Policy talk
Kanishka Narayan, Labour MP for Vale of Glamorgan, emphasised the significant impact of AI and its potential across multiple sectors, including housebuilding and energy. He stressed the importance of developing people’s knowledge and expertise in tech to fill the skills gap, setting a high benchmark for UK tech productivity and maintaining high professional standards.
He advocated for public services to be at the forefront of technological advancements and pointed out that while there are efforts to regulate emerging technologies, much remains unknown — especially in AI — and there is a need to proceed with caution so that trust and safety can be properly guaranteed in their application. On the subject of UK R&D, Kanishka championed the power of research to create real-world impact, calling for long-term grants to provide security and stability and to bridge the gap between university research and practical development. This approach, he explained, would allow the UK to better compete globally while making a lasting impact in both technology and public service innovation.
Skills
Panel Chair: Julia Adamson — BCS Managing Director of Education and Public Benefit
Panel: Matthew Bellringer — Director, Meaningbit; Charlene Hunter MBE — Founder and CEO, Coding Black Females; Nimmi Patel — Head of Skills, Talent and Diversity, Tech UK; Kanishka Narayan — Labour Party MP for Vale of Glamorgan; Russ Shaw CBE — Founder, Tech London Advocates and Global Tech Advocates
Focusing on AI and its implications for education, industry, and workforce diversity, key talking points in this panel included:
- AI's potential to enhance productivity
- The importance of teaching both students and teachers how to use generative AI effectively and safely in education
- Diversity in the workforce and the need for more inclusive hiring practices, critical for creating tech products that serve a wide range of people
- A focus on individual skills and abilities over formal qualifications or traditional learning paths
- AI's role in creating new job opportunities, such as in tech innovation and data centres, as a counter to fears about AI replacing jobs
The consensus was on the need to develop systematic upskilling opportunities, foster inclusivity and diversity, and equip workers for a landscape of work being reshaped by AI.
Professionalism
Rich Corbridge FBCS, Chief Information Officer at Segro and formerly Director of Digital at DWP, and Samantha Faithfull, Executive Partner at IBM, talked about the power of professionalism within the technology industries. They argued that ‘putting end-users at the heart of everything we do’ should become the rule across business today, recommending that people sit at the centre of any transformational programme’s design and that professionalisation, underpinned by ethics and standards as foundational principles, be the goal for all organisations.
Closing address
Alastair Revell emphasised the importance of convening, calling it ‘very much part of our chartered mission’. He used the analogy of a jigsaw, explaining that BCS places the first piece of the puzzle and invites others to contribute theirs. ‘This collaborative effort is essential for creating a complete and meaningful picture’, he said.
Finally, Alastair encouraged attendees to step outside their comfort zones and introduce themselves to others, which is crucial for the growth and flourishing of membership-based organisations. He also discussed the significance of building a professional IT community on key concepts like trust, accountability and responsibility.