The monitoring programme is a transparency measure to ensure accountability to the public for the efforts made by platforms and relevant industry associations to limit online disinformation related to COVID-19. Today’s reports initiate the second phase of the monitoring programme set out in the Joint Communication.
For the first phase of the monitoring, the platform signatories Facebook, Google, Microsoft, TikTok and Mozilla provided baseline reports setting out information on relevant policies and procedures that platforms have put in place in Europe from the start of the COVID-19 pandemic through 31 July 2020. The Commission published these baseline reports on 10 September.
The second phase of the monitoring programme uses specific indicators to provide a monthly overview of the effectiveness and impact of the signatories’ policies in curbing COVID-19 related disinformation from August to December 2020. The reports published today highlight the actions taken by Facebook, Google, Microsoft, TikTok and Twitter since the baseline period, with a focus on August 2020.
In addition, this set of reports includes reports by the World Federation of Advertisers (WFA), the European Association of Communications Agencies (EACA), and the Interactive Advertising Bureau Europe (IAB Europe), which are trade association signatories to the Code of Practice on Disinformation, overviewing activities and developments in the advertising industry in relation to the COVID-19 pandemic.
The Platforms’ Reports
The reports indicate efforts by the platforms to carry out policies and actions to address COVID-19 related disinformation. Broadly, the platforms reported on:
- Their continuing efforts to increase the visibility of authoritative information sources during the ongoing COVID-19 crisis;
- Improvements in their use of panels, popups, information centres and hubs promoting authoritative information about COVID-19 from the WHO and national health organisations as well as on improved tools for users (tags, labels and notifications aimed at providing relevant and contextual information about the crisis);
- Their actions to limit the appearance or reduce the prominence of content containing false or misleading information on their channels. This has been done primarily by updating their terms of service and more rigorously applying machine and human controls to demote or remove content violating those terms, including content liable to cause physical harm or impair public health policies. The reports also provide some information about instances of manipulative behaviour on their services, without, however, much detail at Member State level;
- Broader and more effective collaboration with fact-checkers and researchers, and greater promotion of content that is fact-checked;
- Initiatives aimed at providing grants and free ad space to governmental and international organisations to promote campaigns and information on the pandemic, and to journalist organisations and media literacy actions to sustain good journalism;
- Actions taken to update advertising policies and provide relevant information to advertisers and to step up efforts to block or remove advertising that exploits the crisis or spreads disinformation about COVID-19.
The reports also provide some quantitative data illustrating these actions and their impact through August 2020. For example:
- From January to August 2020, Google blocked or removed over 82.5 million COVID-19 related ads, suspended more than 1,300 accounts from EU-based advertisers trying to circumvent Google’s systems, and took action on over 1,700 URLs with COVID-19 related content.
- From 1 to 31 August 2020, over 4 million EU users visited sources of authority on COVID-19 identified by search queries on Microsoft’s Bing. In addition, Microsoft Advertising prevented 1,165,481 ad submissions related to COVID-19 from being displayed to users in European markets.
- Facebook and Instagram reported that more than 13 million EU users visited their COVID-19 “Information Center” in July and 14 million EU users in August. Also, Facebook displayed misinformation warning screens associated with COVID-19 related fact-checks on over 4.1 million pieces of content in the EU in July and 4.6 million in August.
- Twitter reported that 949 Promoted Tweets violated their COVID-19 policy in August, estimating that 80% of the violating content was detected by their automated systems. During the same period, 4,000 tweets were removed and 2.5 million accounts challenged under Twitter’s COVID-19 guidance.
- In July and August, TikTok applied a COVID-19 sticker to more than 86,000 videos across its four major European markets (Germany, France, Italy and Spain), while removing more than 1,000 videos related to COVID-19 in violation of their policies or containing medical misinformation.
In general, the reports provide a good overview of actions taken by the platforms to address disinformation around COVID-19. However, there are still substantial gaps. In some instances, data has been provided at global, rather than at EU or Member State level. In other instances, it is unclear whether the data provided relates to actions taken to address COVID-19 disinformation, or a broader range of objectives.
The Trade and Advertising Associations’ Reports
The WFA report refers to progress made within the Global Alliance for Responsible Media (GARM). Through GARM, the advertising industry and platforms are working to improve the scrutiny of ad placements and reduce revenues for purveyors of harmful content, including disinformation, through the development of common industry standards that can be enforced and monitored across advertising platforms in a transparent and consistent manner. The report also provides a broad overview of the impact of the COVID-19 crisis on the advertising industry, including the prospect of substantially reduced advertising budgets in 2020. It notes that advertisers have been criticised for using exclusion lists to avoid ad placements next to disinformation and harmful content related to COVID-19, which may also inadvertently result in blocking news relating to the pandemic.
The EACA report offers an overview of policies and practices used by its members to disrupt the monetisation of disinformation. In particular, advertising agencies in Europe are intensifying efforts to develop and encourage the use of brand safety and verification tools, increasing their collaboration with third-party verification companies, and assisting advertisers in assessing media buying strategies and online reputation risk. Moreover, the report notes the use by EACA members of exclusion lists, inclusion lists and grey lists in connection with ad placements. EACA also highlights that it and some of its members have joined GARM.
The IAB Europe report provides an overview of the association’s ongoing efforts to reduce the incidence of disinformation in the digital advertising ecosystem through awareness actions, initiatives by its national-level associations and its collaboration in GARM. It provides some broad estimates of the impact of the COVID-19 crisis on the digital advertising industry and the media sector in general. It notes, in particular, how the use of exact match words (e.g., “crisis,” “COVID-19”) in avoidance technologies has had the unintended consequence of blocking advertising from appearing next to COVID-19 related news, thus reducing the availability of monetisable inventory and impacting technology providers in the sector as well. Among other things, IAB Europe recommends that brands work closely with their partners to implement smarter solutions to ensure that their advertising continues to reach the right audiences during COVID-19.