
The problem with emergency aid’s growing reliance on corporations

‘Private companies are already fulfilling critical tech needs for aid at the highest levels.’


Humanitarians are growing dependent on corporate giants to power emergency response.

Biometric identification, cash payment systems, digital data sources for remote needs assessment, customer relationship management software, big data storage and analysis, and now the growing role of artificial intelligence and predictive analytics have seeped into everything we do and how we do it.

There’s an inherent problem: Corporations aren’t driven by the same principles that guide humanitarians. As private tech and humanitarian action become intertwined, we urgently need a frank discussion about how relations with corporate partners affect our ability to live our core principles – humanity, neutrality, independence, and impartiality. 

Humanitarians need shared standards to guide our growing reliance. The rise of AI and predictive analytics makes the discussion even more urgent.

Relationships with corporations impact virtually every area of humanitarian aid today, from refugee camps to headquarters.

Microsoft’s AI for Good programme pilots and provides core AI services for various humanitarian purposes, including partnering with OpenStreetMap to map vulnerable areas, and creating an AI mental health counsellor interface for people experiencing domestic violence or food insecurity, or who are in need of housing.

Salesforce provides customer relationship management software that can be used to track beneficiaries, or manage donor relations. 

Amazon Web Services is used for data storage in disaster response; WhatsApp, whose parent company is Meta, is used for communication and coordination; software firms like Palantir are used for analytics. 


Google, data-communications provider O3b, and other partners work with the UN’s refugee agency to provide wifi connections in refugee camps in Chad, Uganda, and Jordan.

Coursera and Rosetta Stone have provided free classes to refugees. Mastercard has provided cash transfers for Ukrainians and Syrians. And data drawn from refugee registration has been used to feed advanced analytics models that identify future trends.

The examples are endless. What’s clear is that private companies are already fulfilling critical tech needs for aid at the highest levels.

So why is it a concern if companies want to use their products for good? It’s a matter of principle versus profit.

Corporations have a fundamental fiduciary responsibility to their shareholders to maximise profit. This does not mean they do not participate in charity, but that their philanthropic actions are by nature aligned with their business interests. 

A 2017 study from the UN’s humanitarian aid coordination arm, OCHA, which examined what drives the private sector to get involved in emergency aid, found that 70% of respondents said they believed it to be important or very important to have a clear return on investment for their relationships. Some even said that humanitarian response provided “opportunities for the development and testing of new products”.

In other words: Humanitarians and the private sector don’t share the same motivations.

Corporations need not follow humanitarian principles. They are not driven by impartiality (some even refuse to operate in conflict zones). They frequently take sides. They can even act as an extension of government entities, as when telecoms firm AT&T reportedly helped the US National Security Agency tap into its customers’ communications – including those of the UN headquarters in New York.

With limited data protection laws, especially in developing countries, private sector actors can test new products and markets on vulnerable populations in conflict situations, sell and use data to any number of actors, and dictate the terms of collaboration to maximise their interests.

On the surface, the fact that humanitarians and corporations have different principles is not necessarily problematic – if there are barriers and protections to ensure these differences don’t bleed into humanitarian work.

But the opposite is far more common.


The current landscape governing humanitarians’ relationships with private companies is the wild west. There are no standard end-user licensing agreements. There is little leverage to dictate terms, and too little pushback to ensure alignment. There are few firewalls, and no widely accepted standards for engaging with corporate partners across the digital landscape.

Some have been calling for change over the past five years, but with little tangible result.

This raises critical questions.

Humanitarians must ask themselves whether product and market-testing – or brokering potentially sensitive data of vulnerable populations without clear consent – truly upholds their dignity. It certainly does not uphold the core principle of humanity. These data-brokering relationships can be found at the highest levels of humanitarian aid, most notably in Palantir’s controversial relationship with the World Food Programme.


Are we choosing where we work based on need alone, or is it dictated by the availability of tech? That’s exactly what’s happening if the tech firms that humanitarians partner with won’t work in certain areas.

Are humanitarians clear about how the data they collect and store is being used by our corporate partners? We fear the answers to these important questions, and so many others, are different for every organisation.

We can’t abandon all partnerships with private tech firms, but our values cannot be hostage to them. The growth of AI will make these questions far more complex, and our answers that much more important.

These relationships are critical and innovative, and they impact the lives and human security of the populations we serve. It is the responsibility of humanitarians to ensure these relationships align with our principles.

The humanitarian community must have common standards for applying the humanitarian principles to corporate relationships in digital technology, equal to those governing how we work with governments and militaries. The absence of such protocols endangers our core values – and the dignity and security of those we serve.
