The clock is ticking to build guardrails into humanitarian AI

‘In the scramble to adopt AI, humanitarian actors risk neglecting their core values.’

As you read this, scores of philanthropists, donors, and aid agencies are exploring how AI might improve the efficiency, reach, and response times of aid operations.

This is exciting. Developing and deploying innovative and ethical AI solutions could unlock transformative gains for the millions of people who are affected by humanitarian crises.

But in the scramble to adopt AI, humanitarian actors risk neglecting their core values. The seductive search for viable AI applications is supplanting equally critical debates on AI ethics and governance, humanitarian principles and commitments, and accountability.

In one of our many ongoing conversations on the use of AI in humanitarian response, staff at one agency told us – alarmingly – that AI governance and ethics should only be considered after a pilot project has been designed.

Adopting AI without setting the rules and guidelines for its use risks undermining our core, collective principles as well as the hard-won promises we’ve made to shift power and resources to the Global South, to put the voices and agency of crisis-affected populations at the centre of all we do, and to make aid more accountable. 

The humanitarian sector needs to set guardrails around how it uses AI. Call it what you will: guidelines, minimum standards, a code of conduct, or even a manifesto. But this shared AI blueprint must embed our core humanitarian values and principles into any use of the technology. Shared standards can help humanitarians zero in on what safe and responsible uses of AI will look like.

The time to do this is now, while AI-backed projects are being designed – and before widespread adoption leads to consequences we can't reverse.

Benefits, risks, and consent

It’s true that some AI applications could help level the humanitarian playing field and support wider localisation efforts. Communities in crises are using generative AI tools to help with job applications and get more information about their rights; local organisations are navigating complex funding architecture, developing proposals, and creating communications materials.

But risks abound. Many of the AI systems humanitarian actors are experimenting with are procured from firms based largely in Europe or North America, potentially shifting resources and focus away from the Global South. And some of AI’s promised efficiency gains could, in fact, reduce the number of staff employed in the Global South, such as finance or grants management staff.

Additionally, it remains unclear which agencies (if any) have consulted with or engaged communities in the design of their AI initiatives. One machine learning engineer told us privately that community consultations would be too difficult given the complexity of AI.

But this misses the point. It’s not the complexity of AI’s inner workings that needs to be conveyed, but how it is used, its risks and benefits, and its rates of accuracy (and failure). After all, many adults who drive can’t explain how the engines and computers inside their cars work, but they still understand the risks related to driving and can operate a vehicle safely.

This also underestimates what communities already know about AI. Last month, during community consultations on AI in Kenya, the CDAC Network heard that many refugees have high hopes for how it could support their ambitions and make aid more fair. But they wanted to learn more about AI’s opportunities and harms – and to shape its use.

Equally, it’s not clear if any crisis-affected communities exposed to algorithmic decision-making or automated processes within aid have been informed of and consented to the use of AI to this end. This includes situations where, for example, algorithms support decisions about where to allocate emergency funding. The right to opt out of automated decision-making (otherwise known as the “right to a human decision”) is on track to become a recognised right across Europe and the UK. Shouldn’t this right be extended to those receiving humanitarian assistance – arguably some of the most vulnerable people on the planet?

Nor is it clear if these same communities are aware if and when the data related to the services they use comes into contact with an AI model. For example, we are aware of cases where humanitarian agencies are exploring ways to leverage programme-generated data (which sometimes includes information linked or linkable to individuals who use aid) to power AI solutions, or to run AI models over that data to generate insights – all without the explicit consent of the data subjects.

In both cases, these agencies have yet to establish agency-wide policies on the use of AI or on how it fits into their data governance strategies.

A starting point for baseline standards

So, where do we go from here? 

All efforts to adopt or integrate AI into humanitarian operations should be grounded in humanitarian principles and commitments, particularly those related to do no harm, accountability, and localisation. At a minimum, this means humanitarians should:

  • Consult: Engage with communities whose well-being will be impacted by the use of AI, and explain if and how its use will impact the support they will receive. 
  • Make feedback meaningful: Incorporate the views of communities into the design of AI pilot projects and establish means of recourse if and when things go wrong, in line with both existing humanitarian and international AI standards. 
  • Integrate: Ensure any AI accountability initiatives link into established, functioning, and where possible, collective mechanisms for community engagement. 
  • Share resources: International organisations should partner with local and national organisations to increase access to AI solutions and relationships with solution providers. 
  • Diversify: Explore partnerships with AI and tech firms headquartered in the Global South – not only those based in Europe and North America. 
  • Be transparent: Publish how AI tools are being used within organisations, and share what has and hasn’t worked. 

At the same time, donors, philanthropists, and operational actors must work together to develop human-centred guidelines or standards that prioritise transparency, consent, and community engagement. These should draw on existing practice and standards, like the Core Humanitarian Standard (CHS) and the CDAC Network’s best practice on community engagement, as well as newer AI standards like the recently adopted ISO/IEC 42001 standard on AI management systems, which includes guidance on managing risks related to – and establishing accountability for – AI systems within organisations.

On top of this, humanitarians should explore ways to seek the meaningful consent of people who use aid for any automated decision-making – and ways they can opt out.

Humanitarians need a paradigm shift on data collection and ownership, away from today’s common practice of “aid for data”. People whose data comes into contact with AI must be treated as individuals with agency, rather than sources of extraction.

Developing humanitarian AI pilot projects, and ensuring that AI is used safely and ethically, are not mutually exclusive. One doesn’t come before the other: We can design a car and the safety belt at the same time.

The tens of millions of people who use aid are at the most vulnerable moments of their lives. Humanitarian experimentation must not put their well-being – and their trust in the humanitarian sector – at risk.

Prioritising community engagement and getting the governance of AI right are imperative. Setting guidelines and standards for the use of AI in humanitarian settings will ensure AI lives up to its promises, big or small.

There’s a window of opportunity to do this right – and it’s open now.
