Should we welcome and trust Artificial Intelligence (AI)?

Vasco Gonçalves
21 February 2020

It depends on whom you ask [1]:

  • Industrialists, driven by profit, would say ‘Yes’.
  • End consumers, concerned about privacy, would say ‘No’.
  • Others would say ‘I don’t know’.

We know that AI could bring health and prosperity to anyone willing to use it. Why, then, such division?

The problem was, and still is, science-fiction films, which many people take as literal depictions of reality, together with the most obvious cause: the data leaks of the last two decades, which continue to this day. We know that the press favours sensational articles and documentaries, and this coverage instils fear in people who are new to the subject.

The GDPR was created to prevent such leaks, and it has been partly successful, but it overlooked the manufacturers of information-technology devices and software, who invest little or nothing in security testing; the result is firmware vulnerabilities that are exploited by rogue hackers and governments. A legitimate question comes to the fore: ‘Where is the responsibility of the manufacturers?’


Where should the AI be regulated?
Many speak about high-risk areas, but what are they? To mention a few:

  • Militarised weapons, the so-called ‘Lethal Autonomous Weapon Systems (LAWS)’ [2] (it was a big mistake not to include military AI in the EU Commission guidelines!)
  • The health industry, e.g. hospitals, social security agencies, private physicians, …
  • The banking industry, e.g. financial reports, credit card information, …
  • The justice system, e.g. information pertaining to children
  • People’s personal data, e.g. photographs, documents, recorded conversations, …

You might say that these areas are already covered by the GDPR, but suppose a minor wants to disclose private information about himself or his family, and asks the AI to do it. Here lies the problem: the action was authorised by a human, yet a child is not aware of the dangers of disclosing information to the public.


Will people trust the EU Commission on the AI guidelines?
Very difficult to tell. According to many news articles, the Commission and other politicians were “inactive” for a long time, not listening to European citizens; only in recent years have European politicians “woken up”. Many still do not trust the GDPR, despite its having been transcribed into law. This distrust is still visible in the ongoing data leaks and in corporations going “unpunished” with their lame excuses (monetary fines do not solve the problem).

I think the Commission will have a hard time.

Trust must be earned. It is not acquired.


I would like to hear your thoughts about these ideas and experiences (most of which I have experienced myself).


[1] We conducted a non-scientific survey about the EVA Smart City project and got similar responses, even though people were told that the AI holding the inhabitants’ data would be self-contained and accessed only if a legitimate court warrant was issued, or in an extreme emergency, e.g. an abused child, a disappearance, or someone in danger.
To learn more about this project, follow the hashtag #EVASmartCity on LinkedIn (be aware that these articles and posts are outdated, and that our proposal, concerning climate change, was largely rejected by politicians and corporations who “love the status quo”).

[2] Wikipedia, 2020