The BCS SGAI-2024 conference celebrated groundbreaking work in developing AI and applying the technology. Far from being blind to AI’s drawbacks and limitations, the conference also explored how the AI community can ensure AI becomes a source of social good.

The BCS Specialist Group on Artificial Intelligence (SGAI) 2024 conference saw practitioners from industry and academia gather to share research into new AI techniques, explore novel AI applications, and consider the technology’s place in society.

‘It’s an important conference, and it’s growing every year. This year, over 90 people attended’, said Nadia Abouayoub. ‘AI practitioners are under a lot of pressure doing their day jobs. The SGAI conference is a safe space where ideas can be shared freely and discussed without judgement.’

Founded in 1980, the SGAI conference has become a cornerstone of the international AI scene. Along with presentations from groups around the world, the event emphasises networking and community, encouraging practitioners to share ideas freely. With this in mind, Nadia said that Cambridge — the place that nurtured many AI founders and founding ideas — is very much the natural place for the conference to be held.

Examining AI development

Speaking at the event’s main plenary session, Professor Max Bramer offered a summary of AI’s development in 2024, both as a force for good and as a source of harm.

‘We live in interesting and very exciting times’, Max stated. ‘This year we’ve seen some very significant achievements…AI has gained two Nobel Prizes and I think this really needs to be publicised much more. This really is an amazing time to work in AI.’

John Hopfield and Geoffrey Hinton were awarded the Nobel Prize in Physics for their work in brain-inspired artificial neural networks. Elsewhere, Demis Hassabis, David Baker and John Jumper shared the Nobel Prize in Chemistry. Theirs, Max explained, was pioneering work in computational protein design and prediction.

Along with celebrating AI's potential as a force for good in the world, Max also warned about how the technology could be used, and in some cases already is being used, to do tangible harm.

‘AI clearly has huge potential in healthcare and other fields’, he stated. ‘Alongside all these successes, there’s also a growing recognition of the problems. [Thanks to AI] it’s becoming increasingly difficult to tell fact from fiction online. Fake news, for example, is becoming dangerous and fake videos are becoming increasingly sophisticated.’

Speaking in Peterhouse’s panelled lecture theatre, Max also turned his attention to AI’s place in university life. ‘It’s reported that [many] British university students use AI in their coursework. That’s absolutely fine if they are using it to help them. But it’s certainly not fine if they’re submitting work which has been written by a machine — this doesn’t prove you know anything.’

Max also touched on the conference’s defining and most frequently recurring themes: accountability for AI-based decisions, the explainability of those decisions, and the colossal quantities of natural resources needed to train, deploy and operate today’s biggest large language models.

Summing up, Max said: ‘Ignoring these problems won’t cause them to stop. We are part of the AI community, and if we don’t find answers to these problems…convincing answers…there’s a risk we’ll face a backlash from the public. AI has had many successes, but despite these, if we lose the public’s trust, we risk another AI winter.’

Elsewhere, the conference featured workshops and two headline keynote addresses. The first keynote, from Professor Frans Coenen, asked: ‘What has AI done for healthcare?’ The answer was a lot, with more to come. But Professor Coenen warned that AI isn’t a panacea for the health system’s funding and structural problems.

Professor Steven Meers from the Defence Science and Technology Laboratory (Dstl) explored ‘Harnessing the power of AI and autonomous systems for defence and security’. In these spheres of work — as in healthcare — there is no room for AI hallucinations and errors. For soldiers and people in the field to trust AI, it has to be unquestionably reliable. This places considerable technical, ethical, policy and process pressure on scientists developing defence AI solutions. Along with exploring how Dstl balances reliability against other factors, Steven also discussed why AI is essential when meeting threats levelled by adversaries who are themselves using AI.

Debating AI

The conference’s second working day concluded with a panel debate chaired by Andrew Lea FBCS, a member of the SGAI committee.

Coining the phrase ‘humongous language models’, Andrew challenged the panel with this question: ‘Is large AI good or bad for society, and what should be done to make it beneficial to society?’

Large AIs, Andrew explained, are systems so big or so expensive to train and run that only large organisations and countries can afford the resources to do so.

On the one hand, Andrew noted, today’s very largest AI systems can do seemingly amazing, almost magical, things. On the other, is it healthy for society when only the wealthiest countries and corporations can afford to train and deploy them?

Artificial intelligence systems, particularly large-scale systems like ChatGPT, have heralded groundbreaking innovations capable of revolutionising industries. These tools excel at summarising documents, brainstorming ideas, coding and even creating art.

They’ve transformed research, education and commerce, offering efficiency and creativity at unprecedented scales. Yet, beneath the shiny veneer lies a host of challenges that warrant scrutiny.

Key observations from the panel focused on:

  • Job displacement: concerns about AI replacing jobs dominated the discussion, though some argued AI can, and indeed should, complement human roles. Historical parallels were drawn with past industrial revolutions, where new technologies created opportunities despite initial disruptions
  • Environmental concerns: training and operating large AI models require staggering amounts of energy, contributing to planet-warming emissions. Geographic disparities in energy sourcing exacerbate the problem, with regions heavily reliant on fossil fuels facing higher environmental costs of deploying AI
  • Ethical and social risks: the centralisation of AI under a few large organisations poses risks of power imbalance and potential misuse for propaganda and societal control. Additionally, the mental health impacts of human-AI interaction, such as toxic chatbot relationships, were highlighted
  • Regulatory and moral responsibility: the panel called for better governance and ethical oversight, and stressed the need for transparency and measures to ensure AI benefits society equitably. Proposals included mandatory environmental impact assessments and public collaboration in AI development.

In its conclusion, the panel offered the following recommendations:

  1. Ethical frameworks: develop and implement robust ethical frameworks for AI, particularly in sensitive areas like healthcare, to ensure that AI systems provide accurate and ethically sound responses
  2. Standardisation: establish standardised assessment methods for AI models to ensure consistency, reliability and safety across various applications
  3. Broaden participation: encourage broader participation from diverse stakeholders in the AI space to prevent monopolisation by a few large companies and ensure balanced development
  4. Education and training: invest in education and training programmes that teach students and professionals how to use AI tools effectively, fostering critical thinking and ethical use of AI
  5. Human-AI collaboration: focus on designing AI systems that augment human abilities, and clearly define the roles of humans and AI in various applications to ensure effective collaboration
  6. Transparent green energy practices: promote transparency in green energy claims by data centres and encourage genuine contributions to reducing carbon footprints through solutions like integrating renewable energy sources.

When asked to vote on whether LLMs are good for society, the result was split down the middle, with around half feeling AI is a positive force and the rest seeing it as harmful.