Humanitarians can’t afford to sit out conversations about Twitter’s next chapter. Too much is at stake.
The internet is ablaze with discussions surrounding Elon Musk’s Twitter takeover. The commentary has largely focused on the potential impacts of Musk’s proposed reforms on Western users. But the conversation shouldn’t stop there.
Countries like India, Brazil, and Indonesia have some of the highest numbers of Twitter users – a whopping 23 million, 19 million, and 18 million respectively – and Mexico, Thailand, the Philippines, and Egypt aren’t far behind.
In these countries and beyond, Twitter has been used to both help and hinder humanitarian efforts.
Local activists and humanitarians have used the platform to help catalyse revolutions, to share and collect lifesaving information, and to flag needs during emergencies. On the flip side, Twitter has also obstructed humanitarian efforts by providing an unchecked platform for misinformation and hate – often with disastrous consequences.
As a humanitarian communications professional who has directly witnessed the influence of social media networks on emergency responses in West Africa, Greece, Lebanon, and beyond, I am concerned about what Musk’s proposed reforms could mean for the future of aid. Here are three implications:
1. Loosening oversight could weaken humanitarian responses
Musk’s apparent intention to loosen content moderation on Twitter is one area where humanitarians need to pay close attention.
While the billionaire’s precise plan to unlock free speech on Twitter has yet to be unveiled, his previous statements suggest a desire to relax community standards and put an end to permanent account bans. As of 23 November, Twitter had stopped enforcing policies that previously labelled and removed misleading COVID-19 information.
Weak social media moderation policies can exacerbate humanitarian crises, from amplifying hate speech during conflict in the Democratic Republic of Congo, to mobilising anti-migration sentiments during COVID-19 in Greece. Twitter has also been weaponised by groups like the so-called Islamic State, or ISIS, which maintains at least 42,000 Twitter accounts for propaganda campaigns, mostly in Syria, Iraq, and Saudi Arabia. Despite scaling up efforts to restrict harmful content in recent years, Twitter’s screening tools remain inadequate for scanning foreign-language posts – so a steer toward further cutbacks is worrying.
Loosening content rules can also slow efforts to share and receive timely and reliable information during crises. When emergencies strike, real-time posting on Twitter helps humanitarians better understand the situation and communicate where and how people can find help like food, shelter, and medical assistance. Misinformation on social media can hamper these efforts – like in the Democratic Republic of Congo, where fake news about the Ebola vaccine contributed to existing mistrust towards health officials and aid workers. Without a more nuanced vision for content moderation, this may become the new norm on Musk’s Twitter.
2. Content editing may erode humanitarian legitimacy and accountability
New efforts to introduce content editing features like an “edit button” on Twitter may also have deep consequences for the humanitarian sector. With new editing abilities, nefarious actors could turn to Twitter to share seemingly innocuous content – and then revise those same posts to include propaganda, hate, or misinformation after they’ve already raked in thousands of retweets. Combined with the platform’s ongoing cybersecurity issues, it could become possible for hackers to edit and change legitimate humanitarian information posted online – for example, from government authorities or UN agencies during a disaster.
In recent years, whistleblowers have increasingly used Twitter to demand greater accountability from humanitarian organisations – an ability that could be eroded by new editing features. For example, critics have often turned to Twitter to call out white saviour tropes and imagery shared by international NGOs in their online campaigns – but what happens when problematic content can simply be edited out in the face of criticism? While this might pave the way for international NGOs to revise old content with a more anti-racist lens, it could also trade meaningful accountability for simply editing away mistakes with the click of a button.
3. Monetising the checkmark: Another roadblock for humanitarians and activists
Finally, Musk’s plans to create a new subscription model for Twitter's coveted checkmarks warrant interrogation. Iranian activists have already warned that a pay-for-verification model could help Iranian government officials and other anti-protest actors appear more legitimate. At the same time, activists who previously gained their blue checkmarks organically may lose them if they aren’t able to afford the fees.
A new model could also snatch the megaphone from the hands of those who have few other ways to participate in global humanitarian decision-making. At a time when access to important humanitarian summits remains restricted – whether by costly travel or prohibitive participation rules – Twitter verification at least provides a platform to voice perspectives that might otherwise be sidelined. During this year’s International AIDS Conference, for example, many activists turned to Twitter when visa issues barred them from attending in person in Montreal. A tiered verification system may weaken this advocacy potential.
Whether we like it or not, Twitter is changing – and humanitarians must change with it. We have a responsibility to use our voices to push for more sophisticated moderation mechanisms, stringent cybersecurity, and equitable verification processes that support people affected by emergencies.
The future of Twitter remains unclear, and we can’t afford to have it determined by the whims of one unpredictable billionaire.
Aanjalie Roane is director of communications at Médecins Sans Frontières (MSF) Canada but is writing this in a personal capacity.