Dr Melanie Peters is part of the AI Alliance. In this post, she talks about her experience at the Rathenau Instituut and new developments in the field of Artificial Intelligence.

Digital Society

A couple of years ago, Edward Snowden leaked classified data from the American intelligence services and published them for everyone to see. He shocked the world. How could so much data have been collected? How could these data have been taken? Big data, and storing them safely, became an issue. We reassured ourselves: that was America and those data belonged to important people; this could not happen to us.

The 2016 US presidential election and the Brexit referendum proved us wrong: it could happen to anyone. We too are interesting targets. Our online behaviour can be analysed to profile us and sort us into groups. This information, in turn, can be used to target us, for example by modifying the messages that appear in our Facebook feeds. This means we are of interest to those who want to manipulate our thoughts and personal behaviour, even though we don’t know them personally.

The issue of big data, privacy and security has since become broader. These incidents showed that combining, analysing and using different data sets without our knowledge, to influence what we see on social media, has a huge impact on our personal lives and on our democratic society as a whole, even when no data are released and our privacy is not violated in a direct way.

Human Rights

At the Rathenau Instituut, we analysed different cases, ranging from care robots to local government services, to establish how we are affected by new technologies that can be used for good or ill. Our studies show that they affect not only us as individuals, but also our society. Sometimes we find this technology convenient, for instance when we learn that "other readers of this book were also interested in...", but we are mostly unaware of the data being captured and used.

Our studies show that people’s most intimate rights are at stake: the rights that the European Union declared fundamental and the rights adopted by the United Nations. New technologies affect our autonomy when they take decisions for us, like determining which news items we see. When they take over decisions in our workplace, they can erode our own professional decision-making skills. They might lead to exclusion and discrimination in the marketplace when services are offered to some and not to others. They also affect our rights as consumers to buy what we like and to access services without paying with our data.

Therefore, they affect both our individual and our collective rights as citizens: the right to freedom of speech and the right to take part in public life. In fact, these technologies fundamentally change many relationships: between workers and employers, patients and doctors, citizens and government. In this way they affect the rights already defined in these domains.

So what do we need to do?

First, we need to see the consequences for what they are. Digital technologies and data are not magically going to fulfill our basic needs, even though this is what we often hear. On the contrary, a lot of misuse is possible. We need conceptual clarity about the new players: those who collect data and those who design and own the new technologies. They bear real responsibility for our wellbeing and may not violate our rights.

Conceptual clarity also points to which laws or rules apply and which agencies need to supervise their behaviour, just as the ruling of the European Court of Justice showed that Uber has responsibilities towards its drivers and passengers and towards road safety. Clarification is needed first, but it does not always mean we need to design new rules.

New rights and rules

We found that we needed two rights to satisfy our basic needs in the data society, in addition to our existing rights:

  1. the right not to be monitored in certain situations, and
  2. the right to meaningful human contact in certain important situations.

The introduction of new technologies takes effort: we have to rethink how to incorporate them so that they work for society as a whole and for our personal lives. We will need additional rules in some domains, but it all starts with actors taking responsibility and respecting their duties.

Competition rules

Competition is an area where we do need to think about new rules. We need to answer the question: how can we create fair competition in a data society?

We know that data companies and their platforms tend to evolve into monopolies. They become like utilities, and we need to ask how they can be privately owned yet governed to serve the common good. This question requires very fundamental rethinking. Data ownership, as it relates to competition, is another issue.

Under the European GDPR, your personal data are treated as yours to control. Will you be able to exercise that ownership in practice?

What about artificial intelligence (AI)?

In the last few months, Artificial Intelligence has become a buzzword. In fact, what I described above is AI: an artificial system is intelligent when it is able to sense (collect data), decide (profile) and act (advise us or show us a piece of news).
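To make this sense-decide-act framing concrete, here is a minimal sketch of how a news recommender fits the definition. It is purely illustrative: the function names, the profiling rule and the catalogue are assumptions made up for the example, not a description of any real system.

```python
# A minimal, hypothetical sketch of the sense-decide-act loop described above.
# Illustrative only: the profiling rule and catalogue are invented for the example.

from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    clicks: list = field(default_factory=list)  # observed behaviour (sensed data)

def sense(user: User, event: str) -> None:
    """Sense: collect behavioural data about the user."""
    user.clicks.append(event)

def decide(user: User) -> str:
    """Decide: build a crude profile from the collected data."""
    # Invented rule: the most frequently clicked topic becomes the profile.
    return max(set(user.clicks), key=user.clicks.count) if user.clicks else "general"

def act(profile: str) -> str:
    """Act: choose which news item to show, based on the profile."""
    catalogue = {"politics": "Election special", "sports": "Match report"}
    return catalogue.get(profile, "Front page")

user = User("alice")
for event in ["politics", "sports", "politics"]:
    sense(user, event)
print(act(decide(user)))  # -> Election special
```

Even a system this trivial meets the definition: it collects data, profiles, and acts on the profile, which is exactly why the definition covers far more than futuristic robots.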

Everyone talks about algorithms and AI as novelties, when in fact a calculator is a form of AI and an IQ test designed to select children for certain schools is an algorithm. We have been using algorithms for centuries.

The scary bit is not AI or the algorithm, but automatic decision-making that we cannot question or appeal. When our children take an entry test, we discuss the result with the teacher, and everyone agrees that one bad test should not determine the future of the child. This is why interpreting the test is always the teacher’s job. Even if the test algorithm is self-learning, we would want the teacher to remain in charge.
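As an illustration of this "teacher in charge" principle, the hypothetical sketch below keeps the automated score purely advisory: the algorithm proposes, the human decides. The scoring rule and function names are assumptions for the example, not an actual system discussed in this post.

```python
# Hypothetical human-in-the-loop pattern: the algorithm's output is advice,
# and a human makes (and can override) the final decision.

def score_test(answers: list) -> float:
    """Automated part: an invented scoring rule for illustration."""
    return sum(answers) / len(answers)

def teacher_decision(auto_score: float, teacher_view: str) -> str:
    """Human part: the teacher interprets the automated score in context."""
    if teacher_view == "bad day":          # one bad test should not decide
        return "retest recommended"
    return "admit" if auto_score >= 0.6 else "discuss with parents"

score = score_test([True, False, True, True, False])  # 3 of 5 correct -> 0.6
print(teacher_decision(score, "normal"))              # -> admit
print(teacher_decision(score, "bad day"))             # -> retest recommended
```

The point of the pattern is that the automated score never leaves its advisory role; the human judgement remains the authoritative decision.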

What is new is that we will be able to collect more and more data. With 5G networks we will be able to collect data from our homes, devices, cars and environment, and combine them instantly. Algorithms can then use these data to profile us and influence our behaviour in real time.

Our cases show that 5G will accelerate what we see today. Algorithms crunching big data will spread through many sectors, including banking, health, justice, government services and commerce. The examples above show which human rights are at stake. It is not the use of AI per se that puts them at risk, but the unknown ways in which data are used and combined. Such decisions can have an unacceptable impact on our personal lives and on public life.

AI for the good of all

The High-Level Expert Group on AI faces a difficult question: in which situations or domains should AI be deployed, and under which conditions? When do we really need to be sure how decisions were taken and which data were used? Who is responsible? How can we prevent commercial or other misuse, such as cyberattacks on the data? How can we trust our government with our data?

We will have to ask which standards already apply, which options for appeal exist and how to form new practices around the use of these tools. We should keep asking the teacher for advice, even when school entry is based not on one test but perhaps on data from a child’s whole school career. It is about human judgement of what is best for the child.

As in education, extra safeguards are needed in banking, health, justice and government services. We believe that extra care should be taken in these domains when innovating, generating data and sharing them with third parties. We think this should be done through embedded innovation, in which human rights are not an afterthought but part of the design. This has to be tested in practice, monitored, and embedded in the democratic decision-making process.

What we learned is to differentiate between the public sector, which creates public goods and needs democratic control, and the private sector. Private actors working for governments need to know that even more safeguards are required in that context. We also need to look at dual-use applications (products that can be used and misused), which could pose security threats, and at vulnerable sectors such as our energy networks. Decisions that affect these networks could, when tampered with, become geopolitical hazards.

All of the above means that we need to increase our knowledge of these technologies and of how they can be used for the good of all. We should acknowledge the impact they can have on our lives if we do not design and use them the right way.

We also need good governance frameworks that apply to the different actors. Many companies put their ethical codes on AI and data technologies at the top of their corporate social responsibility codes. International legislation and soft law already bind governments too. The question here is one of collaboration. How will all these actors work together to protect us as citizens and to create a world in which we are not controlled by data, but in which data help us shape our world and fulfill our basic needs? How can we build trust in the digital society?

It is good to see that the UN, ISO, the OECD, UNESCO, the Council of Europe and the European Union have put AI and a safe and inclusive digital society high on the agenda. They are each building part of the governance framework in which companies, governments, civil society groups and we as citizens can take responsibility for this development.

More information

Join the European AI Alliance to comment on this blog or discover other blog posts by Loubna Bouarfa and other members of the High-Level Expert Group on Artificial Intelligence, and to contribute to their work on the future AI ethics guidelines and policy and investment recommendations.

The European AI Alliance is a multi-stakeholder forum for engaging in a broad and open discussion of all aspects of AI development and its impact on the economy and society. It is steered by the High-Level Expert Group on AI (AI HLEG), which consists of 52 experts who have been selected by the Commission for this task.

Go to the European AI Alliance platform