Anyone external to JRC interested in attending, please contact us.
"Beyond Performance: Capability vs. Generality"
Prof. José Hernández-Orallo, Polytechnic University of Valencia and University of Cambridge
One major phenomenon regulating behaviour is the trade-off between specialisation and generalisation. This spectrum underlies the view of intelligence as a general cognitive ability that is effective across a variety of environments. In this talk we analyse how this phenomenon can be seen universally. We first reconsider the space of all possible environments in terms of the space of all policies and their difficulty, and we look at environment diversity from this resource-bounded perspective. This leads us to new notions of capability and generality, and to their interpretation for humans and AI. We discuss the role generality plays in social contexts and in the workplace, as well as its relation to cognitive development and diversity. This interplay is not only compatible with, but also unifying for, natural and artificial intelligence. Looking ahead, generality helps us define and measure artificial general intelligence and gives us insight into the future landscape of cognition.
Further information at https://riunet.upv.es/bitstream/handle/10251/100267/secondbest.pdf
Dr. Fernando Martínez-Plumed
The impact AI can have on society depends, in the first place, on the technical progress of AI itself. However, any measure of progress must consider how easily, economically and broadly a new technology can be applied to a wide range of social needs and potential new applications, and whether it has other side effects or footprints for society. In addition to the prevailing metrics of performance, in this talk we highlight the usually neglected costs paid in the development and deployment of an AI system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, calendar time, etc. These costs are paid throughout the life cycle of the system and may fall differentially on various individuals and parts of society. The multidimensional performance-and-cost space we present can be collapsed into a single utility metric measuring the impact for different stakeholders. Even absent a single utility function or a societal application, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. In this regard, we will explore a subset of these neglected dimensions using two case studies: Alpha*, in which we analyse the series of algorithms developed by DeepMind, a Google company, to master Go; and ALE, in which we analyse almost all the AI systems that have addressed Atari 2600 games in the Arcade Learning Environment. Finally, we will argue that this new comprehensive framework has the potential to change the conception of progress in AI, leading to novel ways of estimating the potential impact of an AI technology and helping to set milestones for future progress.
This is joint work with Shahar Avin, Miles Brundage, Allan Dafoe, Sean Ó hÉigeartaigh and José Hernández-Orallo. Further information at https://arxiv.org/abs/1806.00610
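The Pareto-surface test mentioned in the abstract can be illustrated with a minimal sketch. Assuming, for illustration only, a two-dimensional space of (performance, cost) points (the talk considers many more cost dimensions), a new system "expands" the frontier if no existing system dominates it; the function names and data below are hypothetical, not from the talk or paper:

```python
def dominates(a, b):
    """True if point a dominates point b, where each point is a
    (performance, cost) tuple: higher performance is better, lower
    cost is better, and a must differ from b in at least one axis."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def expands_pareto(frontier, candidate):
    """A candidate expands the Pareto surface if no existing
    point on the frontier dominates it."""
    return not any(dominates(p, candidate) for p in frontier)

# Illustrative (performance, compute-cost) points for existing systems:
frontier = [(0.90, 100.0), (0.80, 10.0), (0.60, 1.0)]

print(expands_pareto(frontier, (0.85, 5.0)))   # a new trade-off: expands
print(expands_pareto(frontier, (0.70, 50.0)))  # dominated by (0.80, 10.0)
```

Generalising this dominance check to all the cost dimensions listed in the abstract (data, oversight, calendar time, etc.) is straightforward: compare each cost coordinate with `<=` and performance with `>=`.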