While generative AI tools are good at replicating patterns to generate text, fully replicating human reasoning around sustainability is a whole other ball game. A team of five academic writers have their say.

Picture this: a world where artificial intelligence (AI) sits alongside us, grappling with the complexities of sustainability. With advanced AI technology like Gemini (an AI ecosystem from Google) and a tidal wave of data flooding our digital landscape, possibilities seem boundless. However, amidst this whirlwind of innovation, a question arises: can AI ‘grasp’ the complexities of sustainability, aligning with our human values and perspectives?

Achieving the targets of the Sustainable Development Goals requires collaboration among diverse stakeholders. Yet research by Preuss, Fischer and Arora has uncovered a fundamental weakness in this framework: different groups hold divergent perspectives. This lack of a shared understanding poses a significant obstacle, because without alignment among stakeholders, realising sustainable development objectives becomes increasingly difficult.

The point is illustrated well in this quote: ‘…the effectiveness of multi-stakeholder initiatives (MSI) crucially depends on stakeholder cognition, that is, on how different stakeholders make sense of the sustainability issue at hand’ (Preuss, Fischer & Arora, 2023).

Drawing on that research, conducted by a team spanning India, France and the UK with India as its focus, we compared the sustainability priorities of those Indian stakeholders with those expressed by generative AI (GenAI).

Could Generative AI be a solution?

With the rapid advancements in generative AI, we've witnessed its transformative potential across various sectors, from entertainment to finance. Technologies like OpenAI's Sora in entertainment and large language models (LLMs) in research and education have demonstrated AI's ability to streamline processes and enhance productivity.

But can this same technology be harnessed to address the complex challenges of sustainability? Can AI truly reason about sustainability issues with the same depth and nuance as humans? These questions lie at the heart of our exploration into the intersection of AI and sustainability.

Introducing AI into the sustainability conversation opens up possibilities; however, before we entrust AI with a seat in sustainability discussions, we must first understand its compatibility with human perspectives. A comparative analysis between stakeholder groups sheds light on their divergent priorities and perceptions regarding sustainability issues (Preuss, Chhaparia, Arora & Fischer, forthcoming). In India, stakeholders from the private, public, NGO and education sectors participated in a survey to rank 23 sustainability concepts, ranging from the interconnection of environmental, economic and social issues to sustainability indicators. The same questions were then posed as prompts to four prominent GenAI chatbots: ChatGPT, Gemini, Copilot and Claude, allowing us to compare how these AI systems perceive and prioritise sustainability issues relative to human stakeholders.
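The correlation figures reported in the findings below compare two rank orderings of the same set of concepts. Spearman's rank correlation is a standard statistic for this kind of comparison (whether it is the exact method used in the study is an assumption on our part). A minimal sketch in Python, using entirely hypothetical concept names and ranks rather than the study's actual data:

```python
# A minimal sketch of Spearman rank correlation, the kind of statistic
# used to compare human and chatbot priority rankings. All concept names
# and rank values below are hypothetical, not the study's data.

def to_ranks(scores):
    """Convert a list of scores into ranks 1..n (assumes no ties)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(x)
    rx, ry = to_ranks(x), to_ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical priority ranks (1 = highest priority) for five concepts
concepts = ['climate change', 'gender equality', 'education',
            'water', 'poverty']
human_ranks = [1, 2, 3, 4, 5]
chatbot_ranks = [1, 3, 2, 5, 4]

print(f'correlation: {spearman(human_ranks, chatbot_ranks):.2f}')
# → correlation: 0.80
```

A coefficient of 1 means the two rankings agree perfectly, 0 means no relationship, and negative values (such as those some chatbots showed below) mean the rankings tend to run in opposite directions.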

Findings

Among human participants from the business and private sector, we found that climate change, gender equality and education were top priorities. Strikingly, these stakeholder priorities aligned with the rankings produced by Copilot and Claude, which both highlighted climate change as the most pressing issue. ChatGPT, however, took a different view, emphasising the interconnection of environmental, economic and social issues.

Meanwhile, Gemini stood out by placing a strong emphasis on the needs of current versus future generations. On low priorities, ChatGPT matched human stakeholders in identifying the ‘triple bottom line’ as a lower priority. Overall, Gemini showed the highest correlation (0.734) with human stakeholder priorities, indicating significant positive alignment, while Claude had the lowest correlation (-0.54), suggesting a notable disparity in its rankings.

Similarly, we observed a consensus in the public sector on crucial sustainability concerns such as water, healthy ecosystems and climate change. Once more, Gemini proved to be aligned with human reasoning, emphasising climate change as the top priority for this group. The other GenAI chatbots, however, failed to identify this priority accurately, and none of them correctly ranked lower factors such as green chemistry, poverty, the interconnection of environmental factors, and the needs of current versus future generations.

This discrepancy underscores the necessity for further refinement in AI reasoning. Gemini exhibited the highest correlation (0.56) with human stakeholder priorities, indicating a positive alignment. Conversely, Copilot demonstrated the lowest correlation (-0.31), suggesting a notable disparity in its rankings.


NGOs displayed distinct priorities, focusing on healthy ecosystems, environmental interconnection, gender equality and consumption. Each of the chatbots except Gemini identified one of these priorities, but their rankings were inconsistent.

Although ChatGPT, Gemini and Copilot correctly identified the lowest factor as the ‘triple bottom line’, discrepancies persisted, indicating the need to improve AI's understanding of stakeholder perspectives. ChatGPT demonstrated the highest correlation (0.76) with human stakeholder priorities, suggesting a solid alignment. Conversely, Claude exhibited the lowest correlation (0.13), highlighting areas for enhancement in its reasoning capabilities.

Stakeholders in the education sector prioritised water, education and population growth. Interestingly, the GenAI chatbots also emphasised education and environmental interconnection, indicating a partial alignment with human priorities.

Among the chatbots, Claude demonstrated the highest correlation (0.75) with human stakeholder priorities, suggesting a solid alignment in reasoning. Conversely, Copilot exhibited the lowest correlation (-0.22), indicating a mismatch with human perspectives and highlighting areas for improvement in its reasoning capabilities.

Conclusion

It remains essential to recognise the irreplaceable value of human reasoning, especially where diverse perspectives are overlooked by widely available information. While GenAI tools are good at replicating patterns to generate text, fully replicating human reasoning is another ball game. As we look to the future, one thing is certain: the debate about AI and sustainability is far from over. Our path towards a sustainable future hinges on combining human wisdom with collaboration with GenAI. By joining forces, we can harness the full spectrum of innovation and insight, steering us towards a future where sustainability thrives.

About the authors

Isabel Fischer is a Reader in Information Systems at Warwick Business School in the UK (isabel.fischer@wbs.ac.uk). Priyanka Chhaparia is a Bricoleur at the Indian School of Development Management in Noida, India. Lutz Preuss is a Professor of Strategic Management at Kedge Business School in France. Bimal Arora was a Reader in Enterprise at Manchester Metropolitan University in the UK, and Marie-Dolores Ako-Adounvo is a recent graduate of Warwick Business School in the UK.