JRC Science Hub Communities
The European Commission's science and knowledge service

2nd HUMAINT Winter school on Fairness, Accountability and Transparency in Artificial Intelligence

Joint Research Centre, European Commission 

Wednesday 22 January to Friday 24 January 2020, Seville, Spain 

 

The HUMAINT winter school is conceived as an interdisciplinary course about the impact of Artificial Intelligence (AI) systems on human behaviour. After a successful first event, this year's edition focuses on the topic of fairness and accountability in AI and takes place one week before the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), a computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. The winter school is part of the ACM FAT* network.


Seville and Barcelona are about 5 hours apart by train, or 80 minutes by plane.

 

Index

 

 

Context

This school in Seville is organized by the HUMAINT project (Centre for Advanced Studies, Joint Research Centre, European Commission). This is the 2nd edition of the winter school. The material of its 1st edition can be found here, and you can also read some of our student testimonies.

>> Back to index

 

 

Participants

Around 35 students: early-stage researchers (PhD candidates and postdocs) from computer science, SSH (social sciences, including economics, and humanities) and law.

Academic coordination: Prof. Virginia Dignum (Umeå University) and Dr. Emilia Gómez (JRC and Universitat Pompeu Fabra).

    Lecturers:

    • Dr. Cosmina Dorobantu, Deputy Programme Director for Public Policy, and Policy Fellow, The Alan Turing Institute.

    Biography: Prior to joining the Turing, Cosmina Dorobantu was a Director and co-founder of Aurora Energy Research, one of the UK's leading energy market modelling and analytics firms. She also spent five years at Google, defining the company's investment and business strategy in Eastern Europe, the Middle East, and Africa. 
    Cosmina is interested in working with unconventional data to improve policy makers’ understanding of the world. In the past, she collaborated with the European Commission on quantifying the impediments to cross-border e-commerce within the digital single market by using Google data. The findings of her research informed the EU-wide digital agenda policy package and represented the first use of Google data to inform policy design at the EU level. 
    Academically, Cosmina’s expertise lies in international economics and econometrics. She has an MPhil (obtained with Distinction) and a DPhil in Economics from the University of Oxford.

    • Prof Gina Neff, Senior Research Fellow & Associate Professor, Oxford Internet Institute and Department of Sociology. 

    Biography: Professor Gina Neff is a Senior Research Fellow and Associate Professor at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. She studies the future of work in data-rich environments. Professor Neff leads a new multinational comparative research project on the effects of the adoption of AI across multiple industries. She is the author of Venture Labor (MIT Press, 2012), which won the 2013 American Sociological Association Communication and Information Technologies Best Book Award, and, with Dawn Nafus, Self-Tracking (MIT Press, 2016). Her writing for the general public has appeared in Wired, Slate and The Atlantic, among other outlets. She holds a Ph.D. in sociology from Columbia University, where she remains a faculty affiliate at the Center on Organizational Innovation, and she serves as a strategic advisor on the social impact of AI for the Women's Forum.

    • Marion Oswald, Vice-Chancellor's Senior Fellow in Law, Northumbria University, Solicitor (non-practising), Associate Fellow, RUSI.

    Biography: Marion is Vice-Chancellor's Senior Fellow in Law at the University of Northumbria. She researches the interaction between law and digital technology and has a particular interest in the use of information and innovative technology by criminal justice bodies and the wider public sector. Marion regularly writes, speaks and advises on the legal and ethical implications of new technologies. She is a solicitor (non-practising), with previous experience in legal management roles within private practice, international companies and UK central government, including national security. She has worked extensively in the fields of data protection, freedom of information and information technology, having advised on a number of information technology implementations, data sharing projects and statutory reforms. Marion chairs the West Midlands Police & Crime Commissioner and West Midlands Police data ethics committee. She is an Associate Fellow of the Royal United Services Institute for Defence and Security Studies, an executive member of the British & Irish Law, Education & Technology Association, a member of the National Statistician's Data Ethics Advisory Committee, and a member of the Council of Europe Working Group of Experts on Artificial Intelligence and Criminal Law.

    • Dr. Manuel Gomez Rodriguez, tenure-track faculty, Max Planck Institute for Software Systems.

    Biography: Manuel Gomez Rodriguez is a tenure-track faculty member at the Max Planck Institute for Software Systems. Manuel develops human-centered machine learning models and algorithms for the analysis, modeling and control of social, information and networked systems. He has received several recognitions for his research, including an Outstanding Paper Award at NeurIPS'13 and Best Research Paper Honorable Mentions at KDD'10 and WWW'17, and has served as Track Chair for FAT* 2020 and as Area Chair (Senior PC) for every major conference in machine learning, data mining and the Web. Manuel has co-authored over 50 publications in top-tier conferences (NeurIPS, ICML, WWW, KDD, WSDM, AAAI) and journals (PNAS, Nature Communications, JMLR). Manuel holds a BS in Electrical Engineering from Carlos III University in Madrid (Spain), an MS and PhD in Electrical Engineering from Stanford University, and has received postdoctoral training at the Max Planck Institute for Intelligent Systems. You can find more about him at http://learning.mpi-sws.org.

    • Dr. Andreas Theodorou, postdoctoral researcher, Umeå University.

    Biography: Dr. Andreas Theodorou is a postdoctoral researcher at the Responsible AI Research Group, led by Prof. Virginia Dignum, at Umeå University. Dr. Theodorou works on producing techniques and tools for the design, implementation, and deployment of intelligent systems, while taking into consideration the socio-economic, legal, and other ethical issues and challenges that arise from integrating AI into our societies. His ongoing work is funded by the Horizon 2020 AI4EU project. In parallel to his current post, he is an active member of AI policy initiatives (e.g. the IEEE SA P7001 series, ISO/IEC JTC 1/SC 42, BSI ART/1) and of AI ethics boards (e.g. the ROXANNE project). Prior to his current post, Dr. Theodorou undertook his doctoral studies in the University of Bath Intelligent Systems research group under the supervision of Dr. Joanna J. Bryson. His doctoral thesis, titled AI Governance Through a Transparency Lens, includes the development of tools and methodologies to provide algorithmic transparency, while also exploring moral decision-making and human cooperative behaviour. Finally, he has been a Teaching Fellow at the University of Bath (2017-18), a visiting scholar at the Georgia Institute of Technology (2016), and a machine learning researcher at the University of Surrey (2015).

     

    Organizing committee:

    • Dr. Songül Tolan
    • Dr. Marius Miron 
    • Paula Galnares
    • Marina Escobar Planas
    • Dr. Vasiliki Charisi

      >> Back to index

       

       

      Preliminary program

      • Lectures: each lecturer will give a 3-hour morning or afternoon course, structured in different ways depending on the topic (e.g. 1 hour of lecture/theory, 2 hours hands-on). 
      • Tutoring: there will be time for short student presentations and one-to-one mentoring sessions.
      • Abstracts:

      Responsible AI frameworks - the current state (Virginia Dignum)
      AI ethics is increasingly recognised as important, if not critical. We have many offerings of AI ethical codes and frameworks. Central features of ethical AI are that it is transparent and not biased. Explainability, accountability and safety are also often listed. But what do these kinds of features mean when translated into action? How are they understood when dealing with a machine? How do we create trust among citizens that these principles are effective in protecting them in their use of AI, and in preserving fundamental social values and rights?
      >> to Schedule

       

      Human Centric Machine Learning: Feedback loops, Human-AI Collaboration and Strategic Behavior (Manuel Gómez)
      With the advent of mass-scale digitization of information and virtually limitless computational power, an increasing number of social, information and cyber-physical systems evaluate, support or even replace human decisions using machine learning models and algorithms. Machine learning models and algorithms have traditionally been designed to take decisions autonomously, without human intervention, on the basis of passively collected data. However, in most social, information and cyber-physical systems, algorithmic and human decisions feed on and influence each other. As these decisions become more consequential to individuals and society, machine learning models and algorithms have been blamed for playing a major role in an increasing number of missteps, from discriminating against minorities, causing car accidents and increasing polarization to misleading people in social media. 

      In this lecture, you will learn about a new generation of human-centric machine learning models and algorithms for evaluating, supporting and enhancing decision-making processes where algorithmic and human decisions feed on and influence each other. These models and algorithms account for the feedback loop between algorithmic and human decisions, which currently perpetuates or even amplifies biases and inequalities; they learn to operate under different automation levels; and they anticipate how individuals will react to their algorithmic decisions, often strategically, in order to receive beneficial decisions. Moreover, you will get to know a wide range of applications, from content moderation, recidivism prediction, and credit scoring to medical diagnosis and autonomous driving, where human-centric machine learning can make a positive difference.

      >> to Schedule

      Rethink government with AI (Cosmina Dorobantu)
      Data science and AI have enormous potential to make government better. The reams of data that governments collect about citizens could, in theory, be used to tailor education to the needs of each child or to fit health care to the genetics and lifestyle of each patient. They could help to predict and prevent traffic deaths, street crime or the necessity of taking children into care. Huge costs of floods, disease outbreaks and financial crises could be alleviated using state-of-the-art modelling. Yet alongside their potential to drastically improve policy-making, when used in the public sector, data science and AI bring with them a host of ethical concerns. These range from replicating existing biases to threatening individuals' privacy and generating unexplainable outcomes. In the first half of the session, we will explore the ways in which data science and AI can foster government innovation. In the second half, we will bring to light the ethical dimension by examining a concrete example: if tasked with writing a machine learning system to identify children at risk, how should we proceed?

      >> to Schedule

       

      Studying AI in its Social Context (Gina Neff)
      Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. This approach can be expanded with research on AI in social contexts to inform the responsible integration and use of AI in workplaces, governments and societies. This session will introduce early career researchers to a theoretical and methodological toolkit from the social sciences for studying emerging AI technologies in use. We will begin with the assumption that socially informed technological affordances, or “imagined affordances,” shape how people understand and use AI in practice, and place social institutions and their power at the centre of research on technologies-in-practice. Students will work through a framework for evaluating AI that includes the following five questions: 1) What and whose goals are being achieved or promised through 2) what structured performance using 3) what division of labour 4) under whose control and 5) at whose expense?

      The objectives of the first session are to 1) develop skills for mapping shifting accountabilities that pattern how technologies are adopted, modified and used; 2) introduce tools for identifying social dynamics that shape technological affordances of AI; 3) introduce practice theories for studying AI use in context; and 4) introduce normative scholarly tools for intervention in AI design and policies.

      In the second session we will practice these tools by identifying them in a series of cases and by practicing generating our own research questions and contexts.
      >> to Schedule

       

       

      Considering the 'FAT' implications of the use of machine learning within police decision-making (Marion Oswald)
      This session focuses on machine learning algorithms within police decision-making, specifically in relation to predictive analytics. It first reviews the state of the art regarding the implementation of algorithmic tools underpinned by machine learning to aid police decision-making, often linked to the prevention and public protection duties and functions of the police. The potential implications of these tools for the appropriate application of decision-making discretion within policing, as well as their potential impact on individual human rights, are then considered. Students will have the opportunity to review a fictionalized case study in a workshop environment. Students will identify the legal issues arising and consider ways that 'FAT'-influenced machine learning design and methods of human-computer interaction could mitigate those issues. 
      >> to Schedule

       

      Assessing the impact of AI on human behaviour (Emilia Gómez)
      In this last session we will first discuss the impact that AI systems have on individuals, in particular on our cognitive capabilities and socio-emotional development. We will tackle this research question by presenting four concrete examples: (1) the definition of fairness requirements and metrics in different application domains, e.g. criminal justice or medical diagnosis; (2) how to relate human cognitive capabilities, AI computational tasks, and the tasks we do at work; (3) how AI systems affect music listening, creation and culture; and (4) the potential impact of AI on children. For each of these scenarios, we will present concrete research outcomes, propose some short group exercises, and discuss the need for interdisciplinary and diverse views, impact assessment frameworks and specific adaptations to the concrete application domains.
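      To make the first example (fairness requirements and metrics) concrete, here is a minimal sketch of one widely used metric, the demographic parity difference, i.e. the gap in positive-decision rates between two groups. The function name and data below are invented for illustration:

      ```python
      # Minimal sketch: demographic parity difference between two groups.
      # All decisions and group labels below are invented for illustration.

      def demographic_parity_difference(decisions, groups):
          """Difference in positive-decision rates between group 'a' and group 'b'."""
          rate = {}
          for g in ("a", "b"):
              outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
              rate[g] = sum(outcomes) / len(outcomes)
          return rate["a"] - rate["b"]

      # e.g. binary release/detain or approve/deny decisions
      decisions = [1, 0, 1, 1, 0, 0, 1, 0]
      groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
      print(demographic_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
      ```

      A value of zero would indicate that both groups receive positive decisions at the same rate; which metric is appropriate, however, depends on the application domain, which is exactly the point of the session.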

      In the second part of the session, we will present the divinAI initiative, which researches and develops indicators of diversity of AI research events, including gender, academia vs industry, and geographical location. We will work together on a small data gathering exercise and discuss the results and implications.
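      As a hypothetical illustration of the kind of indicator such a data gathering exercise might feed (this is not divinAI's actual methodology, just a sketch with invented data), one can compute simple representation shares over a speaker list:

      ```python
      # Hypothetical sketch: representation shares over a speaker list.
      # Names, labels and values are invented for illustration only.

      speakers = [
          {"name": "A", "gender": "f", "affiliation": "academia"},
          {"name": "B", "gender": "m", "affiliation": "industry"},
          {"name": "C", "gender": "f", "affiliation": "academia"},
          {"name": "D", "gender": "m", "affiliation": "academia"},
      ]

      def share(items, key, value):
          """Fraction of items whose `key` field equals `value`."""
          return sum(1 for s in items if s[key] == value) / len(items)

      print(share(speakers, "gender", "f"))              # 0.5
      print(share(speakers, "affiliation", "academia"))  # 0.75
      ```

      Real indicators aggregate such shares across events and dimensions; the exercise in the session gathers the underlying data.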

      >> to Schedule

       

       

      • Schedule:

      • Registration: Wednesday, January 22nd, 8:30 - 9:00, reception at the main entrance.

       

      Wednesday, January 22nd

      09:00-10:00   Emilia Gómez (introduction), followed by Virginia Dignum (ethical perspective): Responsible AI frameworks - the current state
      10:00-10:30   Coffee and discussion
      10:30-12:15   Virginia Dignum (ethical perspective): Responsible AI frameworks - the current state, continued
      12:15-13:00   Flash presentations
      13:00-14:00   Lunch
      14:00-15:00   Manuel Gómez (technical perspective): Human Centric Machine Learning: Feedback loops, Human-AI Collaboration and Strategic Behavior
      15:00-15:30   Coffee and discussion
      15:30-17:15   Manuel Gómez (technical perspective), continued
      17:15-18:00   Chaired discussion

      Thursday, January 23rd

      09:00-10:00   Cosmina Dorobantu (policy perspective): Rethink government with AI
      10:00-10:30   Coffee and discussion
      10:30-12:15   Cosmina Dorobantu (policy perspective), continued
      12:15-13:00   Chaired discussion
      13:00-14:00   Lunch
      14:00-15:00   Gina Neff (social perspective): Studying AI in its Social Context
      15:00-15:30   Coffee and discussion
      15:30-17:15   Gina Neff (social perspective), continued
      17:15-18:00   Chaired discussion
      Evening       Self-paid dinner

      Friday, January 24th

      09:00-10:00   Marion Oswald (legal perspective): Considering the 'FAT' implications of the use of machine learning within police decision-making
      10:00-10:30   Coffee and discussion
      10:30-12:15   Marion Oswald (legal perspective), continued
      12:15-13:00   Chaired discussion
      13:00-14:00   Lunch
      14:00-15:00   Emilia Gómez (interdisciplinary perspectives, wrap-up): Assessing the impact of AI on human behaviour
      15:00-15:30   Coffee and discussion
      15:30-17:15   Emilia Gómez (interdisciplinary perspectives, wrap-up), continued
      17:15-18:00   Chaired discussion

      >> Back to index

       

       

      Videos (recorded sessions)

      >> Back to index

       

       

      Slides

      >> Back to index

       

      Application and registration

      The winter school is free of charge. 
      Due to space limitations, there is an application process. 
      We will provide financial support for a selected group of participants based on CV, degree of financial need and diversity.

      • Application deadline: October 25th 2019 (CLOSED)
      • Notification of acceptance: November 20th 2019   

      >> Back to index

       

       

      Contact

        jrc-humaint@ec.europa.eu 

      >> Back to index

       

       

      Dates: 
      Wednesday, 22 January, 2020 - 09:00 to Friday, 24 January, 2020 - 18:00
      Location: 
      Seville, Spain
      Organiser(s): 
      Virginia Dignum
      Emilia Gómez