GDPR and AI - unfair to controllers and product users

  • Stefan Keller
    12 February 2019

Ethics in AI (Artificial Intelligence) appears to be an area of focus for the EDPS and EDPB today, as this and the next few years may prove decisive for whether or not our societies suffer a lasting negative impact from AI tools. (Think of social media and the upcoming elections, but also the overall influence of "BigTech" algorithms on everyday life.)

At the IAPP Europe conference and in draft papers, I have seen arguments that "ethics-by-design" is somehow embedded in "privacy-by-design", and that AI could therefore be controlled by skillful application of the GDPR.

AI enters our daily lives in different shapes. We find it embedded in finished end-consumer products (e.g. smart home surveillance cameras or software products), or in the corporate world as part of Software-as-a-Service offerings or locally installed software.

Some companies develop their own AI tools and integrate them into their own products and services. Many others (probably the majority) buy AI components from various vendors and then integrate them into something new.

AI systems have some interesting properties:

  • AI implicitly holds personal data - even if no personal data is seemingly present. In a way, AI dormantly embodies categorizations, views, and decisions about data subjects without ever having interacted with them.
  • AI components can be bundled.
  • A good AI might also be used to train another, inferior one.

Overall, there can be quite some distance between the original developer of an AI, the integrators, and the final manufacturer of a product or the final service provider. None of these parties might actually be affected by the GDPR directly.

The primary obligations under the GDPR lie with the data controller - who could also be the end-user of an AI-enabled product, e.g. a gadget or toy.

In many situations, it is unreasonable to expect that the data controller would be able to enforce privacy/ethics-by-design for an AI product/service in a practical way.

  • Very often there won't be any "golden records" to verify that the AI is fair and unbiased.
  • In many cases, the controller wouldn't even know whether and how many AI components are in the underlying product or service - and would have no clue about their pedigree.
  • The practical influence of controllers on vendors will be small - due to their limited understanding of the product/service, as well as the market dynamics associated with disruptive AI-based solutions.

To my mind, regulators must address this unfair position of the controller under the GDPR by

  • introducing an obligation for vendors to declare any AI in products and services - accompanied by adequate information to allow for privacy-compliant usage
  • requiring a pedigree to be provided for any AI component - including evidence that the component is fair and unbiased
  • establishing a licensing scheme for certain high-risk types of AI - maybe similar to the existing CE mark for medical devices
  • emphasizing the importance of privacy seals for AI-based products and services