‘IDEntity-aware Autonomous system’, otherwise known as IDEA, is an AI-powered agent that has been shown to reduce emergency evacuation times by an average of 13.6%. Georgia Smith MBCS spoke with the Open University’s Director of Research for the School of Computing and Communications, Dr Amel Bennaceur, to hear more about it.

‘First responders’ is a phrase most of us will recognise as referring to the emergency services — but what happens before first responders can get there?

Laypeople who find themselves caught in an emergency are known as ‘zero responders’ — and evidence from social research increasingly shows that, far from being a panicking liability, zero responders are a real asset: they are there on the ground, and they want to help. Often, however, they don’t know how — and that’s where IDEA comes in.

‘Being able to communicate how to help [through technology like IDEA robots] enables us to turn zero responders into an effective resource’, Dr Amel Bennaceur explains. She adds that the benefits don’t stop at improving emergency management in the moment: ‘It’s been shown over and over again that after attacks, after natural disasters, communities who come together to help each other will recover better in the long run.’

Human behaviour and social identity

IDEA works by inferring social identities, then using that information to analyse and predict behaviour. It calculates the best steps forward and relays them to zero responders, additionally facilitating communication between zero and first responders to enable smoother emergency management. ‘IDEA is based on the fact that there are many different kinds of social identity or group, and each individual has multiple identities — for example you can be a family member, a friend, a woman, a colleague or an engineer, in different contexts’, Amel explains. ‘We call this a super-identity, and people behave differently depending on which super-identity they’re identifying with in a given moment. It determines what values and what norms you follow.

‘With IDEA, we’re interested in a phenomenon we see when crowds form in emergency situations: people develop a specific super-identity that we call a “pro-social” identity, defined by a sense of togetherness and a desire to help each other. People stop just being part of the crowd in a physical sense and it becomes psychological. A sense of shared identity develops quite quickly as the common experience of danger brings people together.’

‘What IDEA does is pick up on this kind of group membership, partly through linguistic markers; for example, people who belong to a group tend to use certain language, saying things like “we are in this together” or “we will help each other”. They will say “us” and “we” and other phrases conveying togetherness — even if they do use “I” language, it’s within phrases that display that pro-social identity, such as “What do I need to do? How do I help you? How do I…?” It’s all embedded in the language. Some people are more pro-self, prioritising their own or their loved ones’ safety, and that comes across in their language too. IDEA was trained using LLMs to enable it to perform this kind of semantic analysis and to fine-tune the inferences it can make, so that it can pick up on those markers and identify who is willing to help.’
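To make the idea concrete, here is a toy sketch (not the team’s actual pipeline; the marker lists, scoring rule and example sentences are invented for illustration) of how utterances might be scored for pro-social versus pro-self language:

```python
import re
from dataclasses import dataclass

# Toy marker lists; the real system relies on LLM-based semantic analysis
# rather than keyword matching. Everything below is invented for illustration.
PRO_SOCIAL_MARKERS = ["we", "us", "together", "help each other", "how do i help"]
PRO_SELF_MARKERS = ["my family", "get out", "leave me", "on my own"]

def _count(marker: str, text: str) -> int:
    # Match whole words/phrases only, so 'we' does not match inside 'answer'.
    return len(re.findall(r"\b" + re.escape(marker) + r"\b", text))

@dataclass
class IdentityEstimate:
    label: str     # 'pro-social', 'pro-self' or 'unknown'
    score: float   # positive leans pro-social, negative leans pro-self

def estimate_identity(utterance: str) -> IdentityEstimate:
    text = utterance.lower()
    score = sum(_count(m, text) for m in PRO_SOCIAL_MARKERS) \
          - sum(_count(m, text) for m in PRO_SELF_MARKERS)
    label = "pro-social" if score > 0 else "pro-self" if score < 0 else "unknown"
    return IdentityEstimate(label, float(score))

if __name__ == "__main__":
    print(estimate_identity("We're in this together - how do I help you?"))
    print(estimate_identity("I need to get out and find my family."))
```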

Methods of communication

Input is only half the battle — IDEA also needs to communicate with zero responders in order to achieve its purpose of facilitating successful emergency management. Amel explains that the other challenge was the output: ‘Considering how a robot can successfully, clearly communicate with a person who’s in distress required us to develop a more restrictive language. We’ve worked with computational linguists who have guided us on the effects and consequences of different linguistic structures — for example, declarative statements give information and interrogative statements collect information. So we’ve developed the robot to use those structures when communicating with zero responders.’
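As a purely hypothetical illustration (the templates and function names below are invented, not drawn from IDEA), a restricted vocabulary of this kind might separate information-giving declaratives from information-gathering interrogatives like this:

```python
# Hypothetical sketch of a restricted robot vocabulary: declaratives give
# information, interrogatives collect it. All templates are invented.

DECLARATIVE_TEMPLATES = {
    "exit_route": "The nearest safe exit is {exit}.",
    "casualty_location": "A person needs help at {location}.",
}

INTERROGATIVE_TEMPLATES = {
    "willingness": "Are you able to help the person at {location}?",
    "status": "Are you injured?",
}

def declare(kind: str, **slots: str) -> str:
    """Produce an information-giving statement."""
    return DECLARATIVE_TEMPLATES[kind].format(**slots)

def ask(kind: str, **slots: str) -> str:
    """Produce an information-collecting question."""
    return INTERROGATIVE_TEMPLATES[kind].format(**slots)

if __name__ == "__main__":
    print(declare("exit_route", exit="the north stairwell"))
    print(ask("willingness", location="the main foyer"))
```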

Amongst the many complexities of the project comes the question of finding the best method for IDEA to communicate information to zero responders, which Amel explains is something the team is actively exploring. ‘We’ve achieved the initial goal that the robot can successfully decide the best course of action — but how do you package that information so that it has the biggest effect? How do you communicate it? And by what means?

‘One of the things we’re exploring is speech — for example the robot using text-to-speech to pass on information for first responders. We’re exploring two ways of doing that — either you have mini drones that directly communicate, or alternatively something that could be deployed through phones. There is a lot of research going on around what we call interaction designs, so we’re using that research to develop ideas.’

Navigating across disciplines

The presence of not only computer scientists but also linguists, social psychologists and many others in developing IDEA makes it a highly multidisciplinary project — which can be famously hard to navigate. ‘Cross-disciplinary teams can be very tricky — a lot of things can go wrong. But when it works, it really works in wonderful ways. We work very closely with social psychologists and we have a great collaboration because they are curious, they ask a lot of questions — and of course, they are social psychologists, they understand social dynamics! I don’t know whether it’s the same working with other disciplines, but social psychologists are really good at the human side’, Amel laughs.

‘The first practical thing is establishing a common language’, she continues. ‘When a computer scientist says “model”, they mean a completely different thing from what a social psychologist means by “model”. So establishing what everyone means is the first thing. The second thing is that it takes a lot of time to develop trust. There’s no way around it. But it is one of those things which is high risk, high reward.’

The implications of social identities and behaviour

The idea of AI identifying human characteristics has raised eyebrows in other contexts — for example, fears over facial recognition software being used to target activists. However, the software behind IDEA doesn’t focus on protected characteristics such as gender, race or age in order to identify individuals — instead, such identities are taken into account in order to understand and predict what Amel refers to as ‘SLEEC’ (social, legal, ethical, empathetic and cultural) behaviours. She explains: ‘We’re interested in the social psychology [of the group membership] rather than individual identifiers — for example, there is a lot of evidence that men are more likely to help women in emergencies, and that young people will most likely help older people. Those norms impact behaviour and so they need to be taken into account.’
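By way of illustration only (the rules, categories and probabilities below are invented rather than taken from the project’s models), such norms could be represented as simple rules that adjust an estimate of how likely a bystander is to help in a given context:

```python
# Illustrative sketch: group-level norms represented as rules that adjust a
# baseline estimate of the probability that a bystander will help.
# All rules and numbers are invented for this example.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    helper_identity: set[str]   # e.g. {"pro-social", "young"}
    person_in_need: set[str]    # e.g. {"older"}

# Each norm inspects the context and returns an additive adjustment.
Norm = Callable[[Context], float]

def norm_young_helps_older(ctx: Context) -> float:
    return 0.15 if "young" in ctx.helper_identity and "older" in ctx.person_in_need else 0.0

def norm_pro_social_identity(ctx: Context) -> float:
    return 0.25 if "pro-social" in ctx.helper_identity else 0.0

NORMS: list[Norm] = [norm_young_helps_older, norm_pro_social_identity]

def predicted_help_probability(ctx: Context, baseline: float = 0.4) -> float:
    p = baseline + sum(norm(ctx) for norm in NORMS)
    return min(max(p, 0.0), 1.0)   # clamp to a valid probability

if __name__ == "__main__":
    ctx = Context({"young", "pro-social"}, {"older"})
    print(f"Estimated probability of helping: {predicted_help_probability(ctx):.2f}")
```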

There are also other reasons to take social identities into account, she goes on: ‘It’s also important to consider the cultural perception of robots. If we use drones, for example, their reception is going to be very different for a UK concert-goer and someone in a refugee camp for whom it might have traumatic associations. How we might carry out interventions is impacted by that information.’

Legal implications add yet another layer of complexity to implementing technology like IDEA. ‘If the robot asks [a zero responder] to help someone, and that person dies, who’s liable? Where does the accountability lie?’, Amel explains. These kinds of questions mean the broader project goes far beyond social psychology and computer science, necessitating legal perspectives and even philosophers. ‘In the beginning we really only visualised the very quantifiable and functional objectives of the project — reducing evacuation time and increasing the number of evacuees. But there are so many other implications that can’t be so easily computed, and we are exploring [how to work through those].’

Addressing concerns about the biases that can be inherent in data sets, Amel explains that the team is incredibly careful with the training data they use, analysing it closely. ‘We do fine-tune the data. There is always a residual risk of bias, and nothing can completely eliminate that, especially when you’re focusing on group belonging and psychology. But we try to have a diverse team of annotators to mitigate it, and reduce focus on protected characteristics as much as we can.’

Simulations and models: the efficacy of IDEA

Amel explains that modelling is especially vital with this project because it faces the interesting challenge that, since emergencies are critical — often life or death — it’s not ethical to take risks with testing. ‘You need to have a lot of evidence that it works already in order to even pass the ethical approvals to develop the technology [which is where models come in]. It’s costly and time-consuming to do, and it requires a lot of preparation — but it’s also invaluable, because using models we can simulate situations [we’d never otherwise be able to test in] that would be completely unsafe for humans. For example, situations where people are injured and there aren’t enough first responders, or they take too long to get there. That’s where the simulation aspect is really important; we can try multiple scenarios, see where IDEA does and doesn’t work, and ascertain the contexts where it is most and least effective.

‘This is one area where the multidisciplinarity has been really important. We have a team that just focuses on modelling emergency situations: how things work, how people behave and how they help each other (or not) — that work has been done by social psychologists who have analysed a lot of real events to extract that information, which we then put in a model and use to simulate different situations and strategies.’

‘For example, we’ve simulated situations where there were plenty of first responders in good time, so the involvement of zero responders wasn’t critical — the robot’s effect was marginal. It didn't make things worse, but it didn't make them better. At the other end of the scale, in the kind of extreme event where everybody's injured, the robot actually doesn't have enough people to communicate with to enable it to perform effectively and its effect there is also negligible. The robot has the maximum effect in situations where you don't have enough first responders, but there is a good quantity of zero responders willing and able to intervene and help. And there are a lot of situations in reality that are in that category: for example, in the Manchester Arena bombing, the emergency services arrived but weren’t able to enter the venue due to risk levels for them — but bystanders were helping.’

In situations like that, Amel explains, a system like IDEA assisting zero responders and coordinating the action could have made a big difference.
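A heavily stripped-down sketch of that kind of scenario sweep might look like the following; the parameters and probabilities are invented and will not reproduce the project’s findings, but it shows how varying the balance of first and zero responders lets you compare outcomes with and without a coordinating agent:

```python
# Toy scenario sweep: how long does it take to help every casualty when zero
# responders act unaided versus when an agent coordinates them?
# All parameters are made up for illustration; this is not the project's model.
import random

def time_to_clear(casualties: int, first_responders: int, zero_responders: int,
                  coordinated: bool, seed: int = 0, max_steps: int = 200) -> int:
    """Return the number of time steps until every casualty has been helped."""
    rng = random.Random(seed)
    remaining = casualties
    for step in range(1, max_steps + 1):
        # First responders reliably help one casualty each per time step.
        treated = min(first_responders, remaining)
        # Zero responders help probabilistically; coordination raises the chance
        # that a willing bystander is directed to someone who actually needs them.
        p_help = 0.6 if coordinated else 0.2
        treated += sum(1 for _ in range(min(zero_responders, remaining - treated))
                       if rng.random() < p_help)
        remaining -= min(treated, remaining)
        if remaining == 0:
            return step
    return max_steps

if __name__ == "__main__":
    scenarios = {
        "plenty of first responders": dict(casualties=20, first_responders=10, zero_responders=20),
        "few first responders, many bystanders": dict(casualties=20, first_responders=1, zero_responders=20),
        "nearly everyone injured": dict(casualties=20, first_responders=1, zero_responders=1),
    }
    for name, params in scenarios.items():
        base = time_to_clear(coordinated=False, **params)
        coord = time_to_clear(coordinated=True, **params)
        print(f"{name}: {base} steps uncoordinated vs {coord} steps coordinated")
```

Sweeping scenarios in this way is what lets the team identify the middle ground Amel describes, where first responders are scarce but willing bystanders are plentiful.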

Trust and trustworthiness

One of the trickiest parts of developing this technology is understanding the inner workings of shared decisions between humans and robots — and humans will not always comply with what the robot is suggesting.


As well as gathering data on people’s responses and interactions with the robots, Amel explains that understanding the nature of trust is also vital.

‘Something that we need to look at a little bit more now that we have working prototypes and technologies is seeing how people take the robot into account. What we're trying to do is build safe and trustworthy systems that we can prove will do the right thing and which perform the reasoning through a defined process’, she explains. ‘But trustworthiness and trust are two different things in the sense that trust is very subjective; you can have a very reliable, trustworthy person who you don't trust. Understanding the game of trust is very important.

‘Things as simple as dressing the robot a certain way can make people trust it; there are studies that show that if you dress it as a police officer, people will follow the robot even if it’s wrong. That proves the point that trust and trustworthiness are not the same; we want to build a system that is both trustworthy and able to be trusted.’

Game theory and human interactions

Game theory studies scenarios where what we call ‘rational, self-interested agents’ interact and make decisions — their actions or interactions have an impact, and a payoff. Amel uses a game of chess as an example. ‘Chess is a game and the players are the agents; the payoff, winning, is the point of the match. Each agent takes a series of actions, completing a sequence of moves in line with what they calculate to be the best strategy to reach that payoff — but they don’t know how the other player, or agent, is going to move. That’s similar to what we do; what we’re working with is a game of incomplete information. The robot doesn’t know what the other agent — the human — is going to do exactly, but it still needs to be able to make decisions about the best moves and strategies to achieve its payoff of minimising evacuation time and maximising evacuees.

‘It's a game in the sense that the robot has to take an action — make a decision to ask the person to help (or not), based on information about context and identity — and the other player is the human responder who can either then help or not help. So that's how we modelled it, as a multiplayer game. What the robot does have is information to allow it to estimate how humans will behave based on their identities — even without a complete and perfect knowledge of the specific individuals involved, this can still maximise its chance of reaching its payoff. This is a strategy that’s used in so many sectors — a lot of the work around game theory was inspired by a Nobel prize in economics, for example, where market evaluation looks at how people are likely to behave according to their group identity in different situations to maximise their payoff.’
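As a purely illustrative sketch (the probabilities and payoff values are invented), the robot’s choice of whether to ask a particular bystander for help can be framed as maximising expected payoff over its belief about how that person will respond:

```python
# Illustrative one-shot decision under incomplete information: the robot does not
# know whether a bystander will comply, but holds a belief (estimated from the
# inferred identity) and picks the action with the highest expected payoff.
# All probabilities and payoff values below are invented for this example.

# Payoffs expressed against the robot's objective (e.g. time saved in evacuation).
PAYOFFS = {
    ("ask", "helps"): 10.0,        # the bystander assists: large gain
    ("ask", "refuses"): -5.0,      # time lost, and possible distress, on a declined request
    ("dont_ask", "helps"): 0.0,    # kept for completeness; cannot occur without a request
    ("dont_ask", "refuses"): 0.0,  # status quo
}

# Belief about compliance, conditioned on the inferred identity.
P_HELPS = {"pro-social": 0.8, "pro-self": 0.3, "unknown": 0.5}

def expected_payoff(action: str, identity: str) -> float:
    p = P_HELPS[identity] if action == "ask" else 0.0
    return p * PAYOFFS[(action, "helps")] + (1 - p) * PAYOFFS[(action, "refuses")]

def best_action(identity: str) -> str:
    return max(("ask", "dont_ask"), key=lambda a: expected_payoff(a, identity))

if __name__ == "__main__":
    for identity in P_HELPS:
        action = best_action(identity)
        print(f"{identity}: {action} (expected payoff {expected_payoff(action, identity):.1f})")
```

With these invented numbers the sketch recommends asking the pro-social and unknown bystanders but not the pro-self one, which is the flavour of reasoning, under uncertainty about the human player, that the quote describes.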

The next steps for IDEA

Even once the technology behind IDEA robots is working, there is almost as much work to be done in deciding the best way to implement them. There are a lot of possibilities — for example, would there be an IDEA robot at big events as a matter of course, in case something happens? ‘That is something we’ve considered, especially if we’re working with drones’, Amel explains, ‘and we’ve also thought of ideas like the emergency services sending the drones in first if they’re unable to intervene quickly. We’re also thinking about the possibility of deploying some of the IDEA services on phones, so that people inside the situation can access them and collaborate. We’re considering all angles of that aspect currently.’

Amel is also keen for the project to expand and explore different situations: ‘We want to look at different types of emergency — some emergencies are very tangible and immediate, like the Manchester Arena bombing. But there are other emergencies like cyber security incidents where people still need to collaborate, and quickly, to find a solution — and there are different groups within that beyond the security team. At a virtual event you might have layperson attendees, or the general employee base could be helpful in a corporate cyber attack. So we're now exploring how people behave and how the agent can bring people together in other settings like that too.’

Special thanks to Dr Amel Bennaceur for her time and insight, and to Michael Bowkis, Director of External Engagement at the Open University, for providing extensive background information and research summaries of this project.
