
How private tech threatens humanitarian principles 

‘Technology is becoming more complex than we used to think, and we cannot afford to be naive anymore.’


Humanitarian aid organisations are risking their neutrality, and potentially even their protection under international humanitarian law, by partnering with some of the same private technology companies that provide services to military, intelligence, and law enforcement agencies.

This is one of many concerns raised about the humanitarian sector’s digital transformation by the report “Mapping Humanitarian Tech”, published last month by the digital rights advocacy group Access Now.

“Humanitarian principles tell you to be impartial, to provide services only based on need. That’s not how businesses work,” Giulio Coppi, senior humanitarian officer at Access Now and author of the report, told The New Humanitarian in a March interview. “What happens when the technology is not aligned with the humanitarian values that should be driving our action?”

The report traces the journey of data that tech companies collect from people experiencing crises through surveys and registration processes: the data is then managed on cloud-based platforms, matched to groups or individuals using biometric systems, and finally analysed with AI and advanced analytics.

Throughout this journey, personally identifiable information about people in need of humanitarian support is exposed to a range of risks.

Some organisations hold on to sensitive data for years after they have stopped using it, sometimes because donors require them to, and sometimes because they lack the ability to delete it automatically or in bulk. Archived data may end up in the hands of companies that were not part of the original humanitarian response, as happened with the survey platform GeoPoll, whose parent company underwent a series of mergers in 2015 and 2019 with no announcement about how those mergers would affect the use of its humanitarian data.

Humanitarian actors also need to be more transparent about the nature of their relationships with tech companies and the criteria they use to choose partners, the report argues, and to share the terms of their agreements.

Many private tech companies also collect both military and humanitarian data, raising questions about whether the data is sufficiently compartmentalised.

To collect evidence from victims of the so-called Islamic State, the United Nations Investigative Team to Promote Accountability for Crimes Committed by Daʼesh (UNITAD) enlisted the Israeli company Cellebrite, which has a record of being used by governments for border control, enabling abuses against activists, and providing services to Bangladesh’s Rapid Action Battalion – “a notorious paramilitary unit accused of carrying out extra-judicial killings, torture, and disappearances”, according to the Access Now report.

Meanwhile, biometric companies point to their activities in humanitarian contexts to demonstrate their trustworthiness and contributions to good causes, while also fuelling mistrust in publicly funded social services – what the report refers to as “a mix of aidwashing and market entry strategy”.

Underlying all of these concerns is the lack of transparency around humanitarian tech partnerships.

“Tech companies fail to disclose the full geographical scope of their humanitarian engagement or to discuss the terms and conditions of their data processing systems, just like humanitarians refuse to publish their data protection or protection impact assessments, or even to reveal the full list of tech vendors,” the report says.

In the interview below, Coppi elaborates on the risks of humanitarian tech partnerships, and proposes ways for the humanitarian sector to be more transparent and more discerning about giving private technology companies access to people living in a crisis.

This interview has been edited for length and clarity. 

The New Humanitarian: Your report begins at the inception of the humanitarian aid sector itself, pointing out that a founder of the Red Cross movement, Henry Dunant, was an “exploitative colonial settler”, and another early leader of the movement, Gustave Moynier, was “commercially associated with the genocidal exploitation of Congo by King Leopold II”. What motivated you to include these historical points in your analysis of humanitarian tech today?

Giulio Coppi: The initial idea was to show that the link between business and humanitarian action is not new. The humanitarian framework actually started with a travelling businessman. But it was pointed out to me that he wasn’t just a businessman. If you scratch the surface, there’s more. The humanitarian sector has come a long way in discussing Dunant’s roots and starting the process of decolonising. But it’s not finished. We haven’t tackled the relationship to business and its colonial components. I thought this anecdote was a good segue to say: not all humanitarian action is good humanitarian action, not all business is good business. And we need to be deliberate about how we define each one.

The New Humanitarian: You’ve said you initially planned to map out which humanitarian organisations are using which technologies, and where. That proved impossible. What happened to that initial plan, and how did you pivot?

Coppi: The more I kept looking, the more I found only press releases and small project reports talking about random developments in a tech partnership. Sometimes there was information about a pilot that had just wrapped up, with big plans for expansion, but then nothing else. Then I started crawling through the pages of the companies involved and the pages of the humanitarian actors involved. There was nothing there about these partnerships. Almost no humanitarian tech project includes information about when and where it started, about the technology they’re using, or even about the partnerships they’re striking. Sometimes they say these corporate actors gave us $5 million for the next five years, but they don’t say how much of that is actually cash and how much is the value of services or software. You are left to guess what might be happening. This has heavy repercussions on the way the media or those inside the humanitarian sector discuss technology. We don’t talk about the details of the project – we only talk about the technology itself. We talk about AI, we talk about blockchain, we talk about chatbots. But we don’t talk about the specifics – what that technology entails.

The New Humanitarian: Why does it matter that we don’t have all of these details?

Coppi: It matters because the humanitarian sector is stuck with a static view of risk – that data, in the form of a piece of paper, or an email, or a spreadsheet, could be snatched, and people might be harmed. But the digital ecosystem is much more complex today, with sub-processors, third parties, and cloud processing activities. Meanwhile, as humanitarian access shrinks, as we see in Syria, Yemen, and parts of Ukraine, reliance on technology is increasing. What happens when technology goes dark? What happens when the technology is not aligned with the humanitarian values that should be driving our action? You can’t just switch your operating system. You can’t just change all your computers. It matters because technology is becoming more complex than we used to think, and we cannot afford to be naive anymore. 

The New Humanitarian: The report points out that there are instances where humanitarian organisations and the tech companies they partner with might have opposite interests, such as when certain technologies have humanitarian applications as well as military applications. Can you highlight an example to illustrate why this is concerning?

Coppi: There is one example which is a little bit more abstract. All the cloud and storage platforms also serve military purposes. All the cloud processing systems that are sold as “AI for good” are also used to do military targeting. That doesn’t mean that the same exact data is used for both things, but again, because of the opacity, we don’t even know. Some humanitarian actors try to push back with their tech providers, and they try to put limits on how their data is used, but those negotiations are unknown to the public, which leaves other NGOs, especially smaller local NGOs, in a vulnerable position because they cannot push back against big tech. 

Another example is Starlink, the satellite internet service provided by billionaire Elon Musk’s aerospace company SpaceX. It was presented as the saviour in Ukraine because it could connect communities and would revolutionise the humanitarian sector. Several UN entities partnered with Starlink but barely ended up using it. Communities did use it, and they loved it, and they wanted more, but it was too expensive, too complicated. It also became risky because both parties in the conflict ended up using it. How do you distinguish between a Starlink terminal for military purposes and a Starlink terminal used by a family to connect with their loved ones abroad? Starlink ended up separating its defence line of business from the civilian one, but this happened under the radar. This is one example of a tech company creating a defence solution that had been tested in a humanitarian context.

The New Humanitarian: What is the likelihood of tech companies coming into line with humanitarian principles? And in the meantime, what would you advise humanitarian organisations in need of tech solutions?

Coppi: I don’t think there is any chance that companies will align with humanitarian principles, for one simple reason – it’s incompatible with their liabilities and shareholder fidelity. Humanitarian principles tell you to be impartial, to provide services only based on need. That’s not how businesses work. The humanitarian principles also tell you to be neutral – don’t provide any military advantage to one of the two parties. How could a tech company do this if they have a line of business that is only focused on defence services? If you talk about the principle of independence, you’re supposed to not be under the influence of one of the parties to the conflict. But what if one of the parties to the conflict is the country where you’re registered as a company, and you’re liable, and you can be called in front of Congress or in front of the parliament, or in front of a court to answer for your actions? That’s not independence.

Clearly, the humanitarian framework does not help humanitarian actors negotiate with tech companies. But what they could do is start applying the UN Guiding Principles on Business and Human Rights in a very strict way. There must be due diligence and accountability for suppliers and third-party providers, which is not happening. Humanitarian actors also need to be more transparent about the nature of their relationships with tech companies and the criteria they use to choose partners, and to share the terms of their agreements.

The New Humanitarian: What feedback did you get from humanitarian organisations or from tech companies on your report?

Coppi: From tech companies, not a lot. Unfortunately, because of liability, because of shareholder interest, because of the complexity of even talking about situations of conflict, companies are neither willing nor interested in sharing where they work in situations of conflict. This is not necessarily because people are bad. It’s because the nature of a business means you cannot say what you want, whenever you want. Our next step should be to build a trust-based relationship with some companies and find a way to talk about these things that is not problematic for them but that gives enough guarantees to the humanitarian sector. 

The feedback from humanitarian actors has been pretty good for those who decided to engage. There are some that just did not respond at all – not answering our queries, not providing feedback on the report itself, which I shared in advance. Some others did engage, pushed back, and provided counterpoints. This was very good, and I corrected some mistakes, thanks to the openness of these organisations. Now, we have a better relationship with those humanitarian organisations that decided to open up.

The New Humanitarian: Do you want to shout out any of the organisations that did engage?

Coppi: UNHCR (the UN’s refugee agency) was very open, and it was a constructive discussion. They don’t agree with all the conclusions, but they are open to keeping the discussion going, and I’m very happy about that. ICRC and IFRC (the International Committee of the Red Cross and the International Federation of Red Cross and Red Crescent Societies), as well, have been the most engaged in this discussion. It’s promising because it’s the beginning of a longer discussion, not just with me, but with the whole sector.
