The appearance of strong AI, or artificial general intelligence (AGI) - a machine that can understand or learn any intellectual task that a human being can - has been predicted several times in the history of computing.
In 1955, McCarthy, Minsky, Rochester and Shannon coined the term 'Artificial Intelligence' in their proposal for what became the 1956 Dartmouth Conference:
"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Since then we have seen several updates to this prediction, including the concept of an 'intelligence explosion' by I. J. Good in 1965, and Kurzweil's update to a 'Singularity' in 2005.
A new wave of enthusiasm and new technological concepts appeared in the 1980s, after neural networks became better understood.
The current incarnation of AI enthusiasm is accompanied by an allocation of private and public budgets - including by the EU - as well as media attention on a scale not seen before.
Many publications address the ethical implications of strong AI and its effects on human society - including here at the European AI Alliance.
Personally, I have conducted new research and published in the field since 2018, having first lectured on it in 1988. From this vantage point I am - still - surprised by the current attention given to AI.
Does strong AI actually work?
Is there any evidence or sign of it actually occurring any time soon?
Do we have evidence from 'less intelligent' systems that demonstrates progress toward strong AI?