Let’s talk about AI

9 October 2018

by Gry Hasselbalch, co-founder of DataEthics.eu and member of the AI High-Level Expert Group


The way we talk about AI limits what we think we can do with it. If we want AI that benefits human evolution, we need a way of talking about it that respects our human values.

AI is everywhere. And nowhere. Because what do we actually mean when we talk about AI? Is it a sophisticated improvement of our outdated human software? Is it a sci-fi scenario in which a machine beyond human control outcompetes humankind? Or is it a commercial trade secret?

Words are very powerful, and as abstract as they might sometimes seem, they have real consequences. Real laws are implemented based on the particulars of language, real business decisions are made, and real people’s lives are affected by the specific uses of words and the worlds they portray. Evidently, the way we talk about AI defines what we think we can do with it and ask of it. Here are a few musings on AI:

The founder of the singularity movement, Ray Kurzweil – Humans are machines: “Biology is a software process. Our bodies are made up of trillions of cells, each governed by this process. You and I are walking around with outdated software running in our bodies, which evolved in a very different era.” (2013)

The late scientist Stephen Hawking – AI is a free agent: “The development of full artificial intelligence could spell the end of the human race (...) It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” (2014)

The co-founder of Google, Larry Page – AI is a (Google) service: “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” (2000)

These are kaleidoscopic views on AI, and they are representative of common ways of describing AI development today. But they do not address the core issue at stake in this debate: What does it mean to be human? What role should technologies play in human evolution? What do we want AI to do for humanity?

One might even argue that they describe AI in ways that cloud our judgement and limit us, as humans, in what we think we can do with AI:

If humans are just software, then of course we need an update. All software does. Doesn’t it? Say no more.

If machines are our superiors, then it is already too late. We are doomed. Let it go. 

If AI is just one company’s great business venture (a better search engine, a smarter health care solution, etc.), then it is also the greatest trade secret. So keep your nose out of it.

These ways of describing AI leave us powerless. Before we can move on to a constructive discussion of the ethical implications of AI, we need to choose our words with care. Let’s start from this:

Respect ourselves for what we are – humans with specific qualities (not predictable software): we have will, creativity, unpredictability, intuition, consciousness. (The day we have managed to understand these human qualities fully in science, we can start talking about replicating them.)

Let’s approach AI as man-made data-processing systems that can be managed and directed, not as an uncontrollable free agent.

And lastly, AI is a shared good in society. It is not a trade secret, nor one company’s success and property.

And then there are myriad things we might ask of the development of AI, as individual human beings and as human communities. For example:

We could think of innovation as a human endeavor, build technologies that extend human agency, and design built-in means for individuals to influence and determine the values, rules and inputs that guide the system.
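As a thought experiment, here is a minimal Python sketch of what such built-in means could look like. Every name in it is hypothetical; it stands for the design principle, not any real system: the individual, not the vendor, holds the values and rules that gate what the system does.

```python
# Hypothetical sketch: the individual owns the values, rules and inputs
# that guide the system; the system consults them before acting.

from dataclasses import dataclass, field


@dataclass
class UserValues:
    """Rules the individual sets and can inspect or change at any time."""
    allowed_data_sources: set[str] = field(default_factory=lambda: {"user_input"})
    blocked_topics: set[str] = field(default_factory=set)


class RecommenderSketch:
    """A toy recommender whose behaviour is gated by the user's own values."""

    def __init__(self, values: UserValues):
        self.values = values

    def recommend(self, candidates: list[dict]) -> list[dict]:
        results = []
        for item in candidates:
            # Only use data sources the individual has allowed.
            if item["source"] not in self.values.allowed_data_sources:
                continue
            # Skip topics the individual has explicitly ruled out.
            if item["topic"] in self.values.blocked_topics:
                continue
            results.append(item)
        return results


values = UserValues(blocked_topics={"gambling"})
engine = RecommenderSketch(values)
print(engine.recommend([
    {"source": "user_input", "topic": "news"},       # kept
    {"source": "data_broker", "topic": "news"},      # dropped: disallowed source
    {"source": "user_input", "topic": "gambling"},   # dropped: blocked topic
]))
```

The point is the direction of control: the rule set lives with the person, and the system has to read it – not the other way around.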

We can think outside the box of profit-oriented AI to the development of non-profit AI, AI built for social good or even AI designed, owned and controlled by citizens. 

We can think of a plethora of AI governance approaches that respond to many interrelated components and human and non-human factors – hardware, software, laws, standards, people, education – representing a complexity of interests.

But we can also create laws that address and support the distinction between human and non-human actors – for example, that an AI system should always make itself known as an AI agent.
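To make that rule concrete, here is a minimal sketch (with hypothetical names, not a reference to any existing law or API) of an agent obliged to identify itself before any exchange:

```python
# Hypothetical sketch of a disclosure rule: any AI agent must make
# itself known as an AI agent at the start of an interaction.

class DisclosingAgent:
    """Wraps any reply-producing function and prepends a one-time disclosure."""

    DISCLOSURE = "[Automated system] You are interacting with an AI agent."

    def __init__(self, generate_reply):
        self._generate_reply = generate_reply
        self._disclosed = False

    def reply(self, message: str) -> str:
        answer = self._generate_reply(message)
        if not self._disclosed:
            # Disclose once, before the first substantive answer.
            self._disclosed = True
            return f"{self.DISCLOSURE}\n{answer}"
        return answer


agent = DisclosingAgent(lambda msg: f"Echo: {msg}")
print(agent.reply("Hello"))   # first reply carries the disclosure
print(agent.reply("Thanks"))  # later replies do not repeat it
```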

And we can even find areas of human evolution where AI should play no role (the “red lines”).