Some thoughts on the Assessment List

Gabriele Trovato
2 February 2020

I am not sure if this is the right place to report this, but I would like to add my two cents to the revision of the assessment list, based on comments I gathered from my AI class.

In general, some students and I had the impression that this list may be too strict and could hinder the development of AI. In particular regarding:

- 6. Societal and environmental well-being - Social impact – ”signals that its social interaction is simulated”: this goes against a lot of research in Social Robotics. Robots and artificial agents simulate many aspects of communication in order to be credible, and target users should NOT know that the interaction is simulated, in order to maintain the "suspension of disbelief". On the other hand, some emphasis should be added regarding the problem of nudging by developers, and regarding an explicit warning to the final user not to take any personal advice from the AI (possibly this could be added into 4. Transparency - Communication).

Other small things:

- 1. Human agency and oversight - Human agency: this sounds like the confinement of industrial robots in cages; it should be clarified.

- 2. Technical robustness and safety:

  Resilience to attack and security: how can we know in advance what the unexpected situations are?

  Fallback plan and general safety – “unintentional behaviour of the AI system”: does an AI system have an intention? How is unintentional behaviour defined?

- 4. Transparency - Explainability: why should the business model be explainable? Business models are generally sensitive company information.

That's all.

Best regards

Gabriele Trovato

School of International Liberal Studies

Waseda University

Tokyo, Japan