Hard news about humanitarian and social issues is being treated as toxic by overzealous ad technology, undermining corporate social responsibility and effectively punishing publishers for reporting on international crises, researchers say.
Take the World Food Programme’s Nobel Peace Prize win in 2020. It was big news for the agency, but ad technology scanning for gloomy keywords like “famine” and “conflict” meant that big advertisers shied away from the story on major media sites: an upbeat NBC article about WFP’s win was automatically boycotted by dozens of advertisers.
This is just one example revealed by new research on the hidden rules of ad tech. By examining the code of public web pages, analyst Krzysztof Franaszek has compiled hundreds of previously unseen “blocklists” connected to advertisers, containing some 7,000 distinct words and phrases – evidence of what critics say is excessive caution about “brand safety”.
Major companies pay ad tech companies to stop their online ads appearing next to – and therefore being associated with – content that would hurt their brand image. The software automatically scans pages for topics like violence, terrorism, and sex.
When a web page is loaded, “programmatic” ad software selects – in a split second – from a pool of available ads, deciding which to place based on the page’s content. At this stage, ads drop out of contention if the page triggers any of the negative conditions set by the advertiser.
If a page is flagged “unsafe”, only low-priced advertising from less fussy advertisers appears, or none at all. Since web publishers get paid when ads are seen or clicked on, that means less revenue for them whenever a keyword blocks an ad.
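The mechanics described above can be sketched in a few lines of code. The following is a purely illustrative example of exact-match keyword filtering – the advertiser names, blocklists, and function are invented for the sketch, not drawn from any vendor’s actual system:

```python
# Illustrative sketch of exact-match keyword "brand safety" filtering.
# All advertiser names and blocklists below are hypothetical.

def eligible_ads(page_text, candidate_ads):
    """Drop any ad whose advertiser's blocklist matches a word on the page."""
    words = set(page_text.lower().split())
    return [ad for ad in candidate_ads if not words & ad["blocklist"]]

ads = [
    {"advertiser": "BrandA", "blocklist": {"famine", "conflict"}},
    {"advertiser": "BrandB", "blocklist": set()},
]
page = "WFP wins Nobel Peace Prize for work against famine and conflict"

# BrandA is filtered out by "famine"; only BrandB remains in contention.
print([ad["advertiser"] for ad in eligible_ads(page, ads)])
```

With exact matching like this, a page about a Nobel Peace Prize is treated no differently from a page glorifying violence – which is the crudeness critics describe.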
Franaszek’s findings confirm that mainstream brands avoid not just mentions of violence, misery, and pornography in ad placement, but also of racial groups, religions, and sexual orientations. Adding to the political and human rights questions that raises, humanitarian advocates may object on principle to the inclusion of “refugee” and “World Health Organization” on the lists of words apparently blocked by key advertisers.
As well as jarring with corporate brand values and statements, excessive automatic ad blocking “deprives the news organisations which report on humanitarian issues of much-needed income”, Kate Wright, senior lecturer in media and communication at the University of Edinburgh, told The New Humanitarian.
The issue came to the fore in 2020’s febrile atmosphere in the United States, digital advertising consultant Claire Atkin told TNH. Companies tried to keep away from news about the coronavirus, which led to an “alarming” proportion of ads being held back from news pages, Atkin said.
After “coronavirus” was added to advertising blocklists in March, 20 times more ads were withheld than in February, according to BuzzFeed’s account of monitoring by Integral Ad Science (IAS), one of several brand safety technology vendors.
Franaszek’s research reveals that news publishers fared especially poorly because of what they covered: 21 percent of The Economist’s pages, 30 percent of nytimes.com, and over half the pages on Vice were skipped by major advertisers because the content triggered brand safety concerns, according to the data gathered. Even the human rights-focused articles by New York Times columnist Nick Kristof are marked “unsafe” for advertisers most of the time.
Against company values
Franaszek stresses in his blog – named for a software tool he is developing, Adalytics – that this research method is “entirely observational”, and provides only a partial view of how ad tech functions under the hood. Advertising companies, like IAS, and the advertisers themselves, remain tight-lipped.
But after delving into the coding of public websites, Franaszek has published lists of keywords grouped under headings that appear to correspond to company names. Some seem out of tune with the values and partnerships espoused by the companies.
For example, a list attributed to Microsoft (MSFT_Neg) included “interracial” and “world health organization” despite Microsoft’s commitment to diversity and working with the WHO.
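The kind of observational check involved can be illustrated with a short sketch. The HTML snippet and the regular expressions below are invented for illustration – they show the general idea of extracting keyword lists left visible in a page’s source, not the actual structure of any vendor’s code:

```python
import re

# Hypothetical inline-script snippet resembling ad-tech metadata
# left visible in a public page's source (the structure is invented).
html = '<script>var lists = {"MSFT_Neg": ["interracial", "world health organization"]};</script>'

# Find named keyword arrays, then extract the quoted keywords from each.
blocklists = {}
for name, body in re.findall(r'"(\w+)":\s*\[([^\]]*)\]', html):
    blocklists[name] = re.findall(r'"([^"]+)"', body)

print(blocklists)  # {'MSFT_Neg': ['interracial', 'world health organization']}
```

Because the method only reads what a public page already exposes, it is observational in exactly the sense Franaszek describes: it can reveal what the lists contain, but not how advertisers or vendors act on them internally.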
A list marked Mastercard_BlockList2_Dec2020 includes the word “refugee” (along with “hitler” and “pedophile”). Mastercard and its foundation work closely with the UN’s refugee agency, UNHCR, and take a public stance against refugee marginalisation. Responding to the findings on Twitter on 11 January 2021, aid worker William Carter (@WillCarter_NRC) wrote: “Why are ‘#refugees’ on your blocklist? :( You’re better than this. #MakeItBetter @Mastercard”
Seth Eisen, senior vice president for communications at Mastercard, responded via email to questions, saying, “inclusion and exclusion lists for digital media are part of the effort to protect and enhance our brand.” He declined to comment on what he called a “speculative list”, but said the lists help the company’s marketing “focus by having an even more active role identifying where our brand shows up”.
Franaszek’s findings have not been confirmed or denied by IAS or the companies named. However, the US branch of Médecins Sans Frontières (MSF) did confirm to TNH the authenticity of the contents of one of the blocklists, labelled Doctors without Borders.
Questions to IAS and Microsoft went unanswered.
Soon after Franaszek published his findings on Adalytics on 7 January, the code for IAS’ Admantx service – which he had been examining for his research – changed so the lists could no longer be seen. Public web archives have preserved examples validating Franaszek’s methodology. TNH independently reviewed examples, including the NBC article about the WFP, that show how the data had been left in the open.
The use of blocklists has expanded rapidly since reports in 2017 found ads supporting terrorist content on YouTube and elsewhere. Since then, concerns have grown about wrongly blocked content and discrimination – for example, ads being withheld from pages referencing LGBTQI+ issues.
Even before the coronavirus, one study estimated that over $3 billion of potential revenue was lost to “over-blocking”.
In linking companies to keywords and estimating the proportion of news articles shunned by programmatic ad tech, Franaszek is producing “data that is unprecedented”, said Atkin, co-founder of Check My Ads, a marketing advisory firm.
Atkin said it was further evidence that measures promoted by technology companies have the effect that “the news is being demonetised”.
According to an executive at Vice, excessive blocklists mean advertisers are distancing themselves from the urgent social and political news of the day, putting themselves “pretty far removed from the national conversation”. But Atkin went further, saying brand safety techniques “are actively undermining democracy”.
The ad tech industry is increasingly acknowledging that blocklists can be a crude tool: keywords can be ambiguous, and exact matches cannot capture the tone of an article.
New systems work better, according to Joe Barone, who handles brand safety at US-based advertising firm GroupM. Barone told TNH that most techniques in use now are “much more sophisticated than exact-match keyword blocking.
“As these machine learning-based technologies become more prevalent, we are seeing a dramatic decline in reliance on keywords,” he added.
The benefits of blocking
While critics say they have become far too broad, blocklists let advertisers avoid wasting ad spending and protect their reputations.
Some uses are straightforward. For example, a video game called “Marvel Strike Force” – which shares MSF’s initials – is now on MSF’s blocklist: until it was blocked, medical MSF ads were being placed next to content about the game.
MSF told The New Humanitarian it has “thousands” of keywords in blocklists that it reviews quarterly, and said they served a useful purpose.
Kyle Levine, digital marketing manager for Doctors without Borders USA, said its fundraising campaigns are looking for potential donors, and those people need to “feel happy, secure, and confident about their donation”. Levine said that even though the NGO works in situations of conflict and disease, it wants to disassociate itself from graphic, upsetting, or triggering content, such as terrorist attacks, gun rights, and conspiracy theories.
He said MSF’s ads appear on thousands of sites through the use of automatic ad networks. Exclusion and blocklists are an important safeguard against “every digital person’s worst nightmare”: finding the organisation’s ads on a controversial site, Levine added.
MSF confirmed the contents of one of the blocklists discovered by Franaszek, named DoctorsWithoutBorders_NegativeKeywords_Oct2019. It consists of “irc; oxfam; amnesty international; st jude; mercy corps; save the children; unicef; american red cross; unhcr”. This demonstrates another use of blocklists. Levine explained that under a loose agreement between major non-profits, they would refrain from jumping on potential donors who were showing an interest in one of their “peers”.
As well as facing criticism for being too zealous in blocking, advertisers also need to watch out for the reputational risk of spending on sites that promote division, misinformation, or hate.
A January 2021 report by publisher alliance NewsGuard said that 1,668 advertisers were funding disinformation, placing ads on 160 misleading current affairs sites during the US election season.
Activist groups, such as Sleeping Giants, have had visible success in pushing advertisers to drop sites that publish false or misleading information, including commentators on Fox News. Since 2016, thousands of advertisers have pulled their ads from the American far-right news website Breitbart. In 2017, UNICEF, UNHCR, and MSF were among the non-profits that took a cue from the campaigners and pulled ads from Breitbart.
However, TNH has found that ads for the three agencies, and for WFP, can occasionally still appear on Breitbart’s search results page (the loophole, and possible related advertising fraud, is explained here). Text ads for the agencies appear when search terms match keywords they set up in Google’s ad distribution system. Any click on the ad would result in revenue for Breitbart.
Responding to questions from TNH, UNICEF and MSF said they were removing their ads from the search results on Breitbart. WFP declined to comment, and UNHCR did not provide a response before publication.
Opting in, not opting out
The ad tech industry is increasingly aware of the paradox that – in the pursuit of avoiding risk – it might have created a new headache that hurts advertisers and news publishers.
Many advertisers put their campaigns into an ad-selling network – such as Google’s – which allows their ads to appear on a vast number of sites, too many to vet manually. Those networks are set up to achieve a set spending target each month.
Marketing analyst Augustine Fou told TNH that withholding ads from quality publishers increases, ironically, the chance of ads finding their way onto “really crappy sites”.
Fou said the solution was for advertisers to stop the “spray and pray” system that pushes a large number of ad impressions out to a huge number of websites, “most of which they’ve never heard of”. The work of Sleeping Giants, which publicly exposed advertisers of questionable placements, showed that many advertisers “don’t even know where their ads ran”, he added.
Rather than shying away from real news, Atkin said advertisers should want to be seen on reputable news sites. “Advertising on the news helps you get in front of readers who are engaged, and educated, and reading,” she said. “They’re not clicking around uselessly.”
Two-thirds of consumers say they are more likely to look at an ad on a site they know. But ads next to news about conflict and disaster are off-putting for at least a third, according to a September survey commissioned by ad tech company DoubleVerify.
According to a guide by the Interactive Advertising Bureau, an industry group, a more subtle approach of context-sensitive “brand suitability” is part of the solution.
Both Atkin and Fou argue that advertisers should opt in and select sites where they want to appear, rather than try to exclude an unknown number of “unsafe” sites and pages using automation.
Fou said brand safety companies have “overpromised” and “ended up defunding real news and funding fake news”.