Chris Loveday from Barton Peveril Sixth Form College introduces Martin Cooper MBCS to Barton Buddy, the college’s chatbot, and explains the AI strategy that sets the college apart.

Pick up a newspaper or scroll through a tech news website, and you’ll inevitably come across stories about how schools and AI are awkward bedfellows. On one hand, there are the familiar pieces about pupils using AI to write essays and do homework in just a few clicks. On the other, there are articles about teachers unsure how, where or even whether they should introduce AI into the classroom.

Take a trip to Barton Peveril Sixth Form College in Eastleigh, Hampshire, though, and you’ll find something different: AI helping the college’s support staff, teachers and pupils learn about the technology and become more efficient in their day-to-day school lives.

Why don’t you introduce yourself and your school and tell us about your career?

I'm the Vice Principal for Business Services at Barton Peveril Sixth Form College. My route into educational leadership was unconventional. I left college with no meaningful qualifications, started a career in community lettings in education, and found myself at Swanmore College, a large secondary school.

That unusual route led me to where I am today: responsible for all the business services at the college. I'm often described as entrepreneurial and good at leading projects.

I speak passionately about social mobility and digital equity. Students with a device and internet access have an advantage over those without. Giving those same students a paid-for ChatGPT account creates a significant divide. We're trying to bridge that gap by providing devices, internet connectivity, digital literacy skills, and access to a free large language model. Equity is at the heart of what we're trying to achieve.

Let’s start with a snapshot of now: where are you using AI already?

We decided to push the use of generative AI around 12 months ago, and in that time we've developed approximately 20 bespoke agents to solve problems. We became the first Google Gemini Academy in the UK, providing every staff member with a Gemini Pro licence. We aimed to create a supportive culture where staff felt empowered to harness AI to solve problems.

We're ahead of the curve in adopting generative AI and creating agents. We launched an agent called Barton Buddy, a digital assistant for students. At the time, it felt like a significant achievement and genuinely innovative.

Our model includes enhanced safeguarding features, allowing us to roll it out to students strategically while protecting them from inappropriate content. These safeguards prevent responses to questions about drugs, misogyny and violence. This enables us to explore students' use of and engagement with AI while educating them on critical thinking and human skills.
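The college hasn't published Barton Buddy's internals, but a minimal sketch of the safeguarding pattern described here, in which a filter screens prompts before they reach the underlying model, might look something like this. The categories, keywords and call_model() stub are illustrative assumptions, not the college's actual implementation.

```python
# Minimal sketch of a safeguarding wrapper around a student-facing agent.
# The blocked categories, keywords and call_model() stub are illustrative
# assumptions, not Barton Buddy's real implementation.

BLOCKED_CATEGORIES = {
    "drugs": ["drugs", "cannabis", "cocaine"],
    "misogyny": ["misogyny", "misogynistic"],
    "violence": ["violence", "weapon", "fight"],
}

def screen(prompt: str) -> str | None:
    """Return the name of a blocked category if the prompt matches one."""
    lowered = prompt.lower()
    for category, keywords in BLOCKED_CATEGORIES.items():
        if any(word in lowered for word in keywords):
            return category
    return None

def call_model(prompt: str) -> str:
    """Placeholder for the call to the underlying large language model."""
    return f"[model response to: {prompt}]"

def barton_buddy(prompt: str) -> str:
    """Refuse blocked topics; otherwise pass the prompt to the model."""
    if screen(prompt) is not None:
        return ("I can't help with that. If you need support, "
                "please speak to a member of staff.")
    return call_model(prompt)

if __name__ == "__main__":
    print(barton_buddy("Help me plan my history revision"))
```

In a real deployment the keyword list would be far more sophisticated (a classifier rather than string matching), but the layered shape, screening first and answering second, is the point of the sketch.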

What was the central, singular problem you were looking to solve? And why pick AI as a possible solution?

We held a senior leadership team meeting with a single agenda item: AI. We asked, ‘What can generative AI do to improve operational efficiency and effectiveness?’

We aimed to address workflow problems: repetitive tasks, falling productivity and rising error rates. We also wanted to improve staff's work-life balance and make their work more enjoyable.

Our original work involved a deep dive into our business services to explore how this technology could enhance the efficiency and effectiveness of our teams. This approach proved to be a significant success, internally and in how our work has been perceived externally.

Was AI’s arrival welcomed with open arms, or was there some cultural resistance? And, if so, how did you overcome it?

We prioritised making it clear to staff that this is not about reducing staff numbers but about reducing workload. The premise was to ensure that our staff could do their jobs effectively and happily, using technology to support that and freeing the human up to be a human.

That message must be regularly disseminated because people go through emotional cycles at different speeds. So, we continually reinforced the message that we aren't looking to reduce the number of staff we have. We're looking to make our staff happier, make the workload more enjoyable, and free the humans up for the human element of education.

Working with AI requires different — new — skills. How have you prepared your workforce for AI collaboration, and what has the feedback been like?

For the last two years, we've held annual AI INSET days. These are for all staff, not just teaching staff. We've shared our vision for AI and facilitated lots of training.

We ran our first training session in late June last year and made it optional. Typically, if you make something optional, you don't expect many people to attend; in fact, the training was oversubscribed by both teaching and support staff.

Our feedback from INSET days and training has been overwhelmingly positive. Staff frequently request more time for training and for exploring generative AI. Everyone can see it's going to be transformational.

And the pupils, how has AI been integrated into their education, and what’s the reception been like?

I sat through a lecture from some of our students on our digital assistant, Barton Buddy. They told me it was the worst large language model they'd ever used, and I had to explain that Barton Buddy is an agent, not a large language model. It was evident that many young people associate AI with large language models because of tools like ChatGPT and Copilot, and don't differentiate between the technology and its application.

We've begun educating students on AI: its capabilities, biases, implications, hallucinations and safe use. We're currently piloting and red-teaming Barton Buddy with a team of students who have had some training in its use. With parental consent and controlled access, we encourage them to use it for independent research, planning and other safe purposes. Our filtering system runs in the background, so if anything concerning is put into our large language model, it is flagged immediately.
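The college hasn't detailed how its background filtering works, but a minimal sketch of the flagging idea, under the assumption that concerning prompts are logged for safeguarding staff to review, could look like this. The watch-list, the student identifier and the JSON log file are all hypothetical.

```python
# Illustrative sketch of background flagging: concerning prompts are
# recorded for safeguarding staff to review. The watch-list terms and
# the JSONL log are assumptions, not the college's actual system.
import datetime
import json

FLAG_TERMS = ["self-harm", "weapon", "drugs"]  # assumed watch-list

def flag_if_concerning(student_id: str, prompt: str,
                       log_path: str = "flags.jsonl") -> bool:
    """Log the prompt for review if it matches a watch-list term."""
    lowered = prompt.lower()
    hits = [term for term in FLAG_TERMS if term in lowered]
    if not hits:
        return False
    record = {
        "student": student_id,
        "prompt": prompt,
        "matched": hits,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(record) + "\n")  # queue for safeguarding review
    return True

if __name__ == "__main__":
    flag_if_concerning("student-001", "I want to buy drugs")
```

The design point is that flagging runs alongside the conversation rather than replacing it: the student still receives a safe response, while staff get an auditable trail to follow up on.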

What are the problems you’ve encountered along the way?

It's about educating everybody on the limitations and strengths of generative AI. If you ask a large language model to help write a story where the main character is a three-eyed alien from a specific planet, the model needs to be able to hallucinate to fill in the blanks; it needs a form of imagination. There is a genuine application of hallucination that can be powerful and transformative in the proper context. But it's about understanding that everything has its context, place and limitations.

Finally, talk to us about sustainability.

Sustainability is a pertinent issue, and I don't think its implications get the coverage they should. AI attracts more negative press for hallucination and bias than for carbon emissions. As an organisation, we have an aspiration to be carbon neutral by 2030. It's an ambitious target, and ironically, we've adopted AI enthusiastically, and AI is a significant producer of carbon.

So, we're trying to work with providers who are more careful about their own goals for carbon neutrality. We’re also very serious about offsetting.

Where next in your AI journey?

We're working on a marking tool. Unlike some commercially available ones, this tool will be targeted specifically to our curriculum and courses. It will be able to provide formative and summative feedback, make recommendations to the teacher based on class performance, and potentially give students instant feedback, even during holidays, to support their learning.
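The marking tool is still in development and its design hasn't been published, so the following is a hypothetical sketch only: one plausible starting point for curriculum-specific marking is a prompt template that pairs the course rubric with a student answer before handing both to the model. The rubric, course name and build_marking_prompt() helper are invented for illustration.

```python
# Hypothetical sketch of a curriculum-specific marking prompt. The rubric
# criteria and course are invented; this is not the college's actual tool.

RUBRIC = {
    "knowledge": "Accurate recall of key facts and terminology.",
    "analysis": "Develops a reasoned argument with supporting evidence.",
    "evaluation": "Weighs alternative interpretations and reaches a judgement.",
}

def build_marking_prompt(course: str, question: str, answer: str) -> str:
    """Assemble a marking prompt that embeds the course rubric."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        f"You are marking a {course} answer against this rubric:\n"
        f"{criteria}\n\n"
        f"Question: {question}\n"
        f"Student answer: {answer}\n\n"
        "Give formative feedback against each criterion, "
        "then a summative grade."
    )

if __name__ == "__main__":
    print(build_marking_prompt(
        "A-level History",
        "To what extent was the Reformation politically motivated?",
        "The Reformation was driven partly by...",
    ))
```

Embedding the rubric in the prompt is what would distinguish such a tool from a generic commercial marker: the feedback is anchored to the college's own courses and assessment criteria rather than to a one-size-fits-all standard.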