University of Cambridge researchers have found that harmful climate action can be avoided by combining less-biased Artificial Intelligence (AI) with human input.
The team found that bias in the data on which AI computer programmes depend can limit the usefulness of this tool for scientists guiding global climate action.
The new paper, ‘Harnessing human and machine intelligence for planetary-level climate action’, is published in the Nature Portfolio journal npj Climate Action.
Those in the Global South are likely to be digitally misrepresented
AI computer programmes used in climate science are trained to sift through complex datasets and identify patterns and useful insights.
However, missing data from certain locations on the planet can lead to unreliable climate predictions and, in turn, harmful climate action.
Individuals with access to technology in the Global North are more likely to see their climate priorities and perceptions reflected in the digital information widely available for AI to use.
By contrast, those with the same access to technology in the Global South are more likely to find their experiences, perceptions, and priorities missing from those same digital sources.
“When the information on climate change is over-represented by the work of well-educated individuals at high-ranking institutions within the Global North, AI will only see climate change and climate solutions through their eyes,” said lead author Dr Ramit Debnath.
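The mechanism is easy to demonstrate in miniature. The Python sketch below is our own illustration, not code or data from the paper: a simple regression is trained on temperature anomalies where 95 per cent of the samples come from one region, and its errors are noticeably larger in the under-sampled region, whose warming trend differs. All regions, trends, and figures here are synthetic assumptions.

```python
# Illustrative sketch (not from the paper): how a geographic sampling gap
# skews a model. All region labels, trends, and values are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def observations(n, trend, noise=0.3):
    """Synthetic temperature anomalies: trend * years + noise."""
    years = rng.uniform(0, 30, n)  # years since an arbitrary baseline
    anomaly = trend * years + rng.normal(0, noise, n)
    return years.reshape(-1, 1), anomaly

# 95% of training samples come from the over-represented region;
# the under-sampled region has a steeper trend in this toy setup.
X_north, y_north = observations(950, trend=0.02)
X_south, y_south = observations(50, trend=0.05)

model = LinearRegression()
model.fit(np.vstack([X_north, X_south]), np.concatenate([y_north, y_south]))

# Evaluate on held-out samples from each region.
X_test_n, y_test_n = observations(200, trend=0.02)
X_test_s, y_test_s = observations(200, trend=0.05)
err_north = np.abs(model.predict(X_test_n) - y_test_n).mean()
err_south = np.abs(model.predict(X_test_s) - y_test_s).mean()

print(f"mean error, well-sampled region:  {err_north:.3f}")
print(f"mean error, under-sampled region: {err_south:.3f}")  # markedly worse
```

The model fits the dominant region's trend almost exactly, so its predictions remain plausible where data is plentiful while quietly failing where data is sparse, which is the pattern of bias the paper describes.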
Biased AI can lead to harmful climate actions
Biased AI has the potential to misrepresent climate information.
For example, it could generate inaccurate weather predictions or underestimate carbon emissions from certain industries. This could lead to harmful climate action, such as misguiding governments as they design policies and regulations for mitigating or adapting to climate change.
Data holes can harm under-represented communities
AI-supported climate action that is developed from biased data risks harming under-represented communities, particularly those in the Global South that lack resources.
Often, these are the same communities most vulnerable to climate-driven extreme weather events such as heat waves, droughts, and floods.
The paper warns that this combination could lead to ‘societal tipping events’.
Human knowledge can help fill in the gaps
The authors argue that a human-in-the-loop design should be built into AI climate change programmes to improve the accuracy of predictions and the usefulness of any conclusions.
A human-in-the-loop design allows bias to be noticed and corrected, as sketched below. Users can input critical social information, such as the state of existing infrastructure, allowing the AI to predict any unintended socioeconomic consequences of climate action more accurately.
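As a rough illustration of what such a design could look like in code, the Python sketch below is our own, not the paper's: every name, region, and value is hypothetical. A model's output is routed through a human reviewer who can flag or adjust predictions for data-sparse regions before they inform any decision.

```python
# Minimal human-in-the-loop sketch (our illustration, not the paper's design):
# a reviewer can veto or adjust model outputs before they inform policy.
from dataclasses import dataclass

@dataclass
class Prediction:
    region: str
    value: float          # e.g. projected emissions change, arbitrary units
    flagged: bool = False

UNDER_REPRESENTED = {"region_b"}  # hypothetical label for a data-sparse region

def model_predict(region: str) -> Prediction:
    # Stand-in for a real climate model; values are made up.
    baseline = {"region_a": -0.12, "region_b": -0.02}
    return Prediction(region, baseline[region])

def human_review(pred: Prediction, local_context: dict) -> Prediction:
    # A human expert injects social knowledge the training data lacked,
    # e.g. infrastructure that changes how a policy would play out.
    if pred.region in UNDER_REPRESENTED and not local_context.get("grid_access", True):
        pred.flagged = True   # route for re-analysis rather than direct use
        pred.value *= 0.5     # hypothetical expert correction
    return pred

for region, context in [("region_a", {"grid_access": True}),
                        ("region_b", {"grid_access": False})]:
    reviewed = human_review(model_predict(region), context)
    print(region, reviewed.value, "flagged" if reviewed.flagged else "accepted")
```

The design choice the sketch highlights is that the human is a checkpoint, not an afterthought: outputs for under-represented regions are reviewed before use, which is where local social knowledge can correct what the data never captured.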
“No data is clean or without prejudice, and this is particularly problematic for AI, which relies entirely on digital information,” said co-author Professor Emily Shuckburgh, Director of Cambridge Zero and a climate scientist.
Awareness of data will lead to better climate action
The paper also argues that internet access should be treated as a public necessity rather than a private commodity, helping to engage as many people as possible in the design of AI for contemporary conversations about climate action.
The researchers conclude that socially responsible AI must be developed with human guidance.
Less-biased AI is critical to our understanding of how the climate is changing, and it will guide realistic solutions for mitigating and adapting to the ongoing climate crisis.
Professor Shuckburgh, who also leads the UKRI Centre for Doctoral Training in the Application of AI to the Study of Environmental Risks (AI4ER), said that recognising the issue of data justice is the first step towards better outcomes.
“Only with an active awareness of this data injustice can we begin to tackle it, and consequently, to build better and more trustworthy AI-led climate solutions,” she stated.