By Franco Accordino, Head of Unit - Knowledge Management and Innovative Systems at DG CONNECT, European Commission
For many years, automation in public administration was synonymous with optimising existing routine tasks, procedures and workflows. Today, with the massive explosion of data and the growing take-up of AI technologies, the scope is becoming much wider than office automation. It encompasses tasks that require high-level cognitive skills and that were for a long time the prerogative of public servants, such as understanding texts, drafting summaries, recognising trends in data or supporting decisions. In future, we can imagine entire parts of the policy-making lifecycle, from evidence gathering up to impact analysis, being facilitated by AI agents, or even delegated to them.
As a public service, the European Commission is pursuing these developments, in line with the e-Government Action Plan. Here are a few illustrative examples.
1) Where does this document go?
Every day, public administration departments handle thousands of documents that, for transparency or auditing purposes, may need to be registered, categorised and filed into Document Management Systems. Whether or not a document has to be filed depends on the "importance" of the information it contains, i.e. whether it is used to support a policy decision, a legal or financial transaction, etc. Until now, this time-consuming and error-prone activity has been done manually by employees on the basis of their personal assessment, because the relevance of documents is rather subjective and "undecidable", i.e. it cannot be determined "programmatically". As part of a corporate initiative on data, information and knowledge management, we are exploring the possibility of using machine learning and natural language processing algorithms to predict how a document should be treated and advise public servants accordingly.
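To make the idea concrete, here is a toy sketch of such a classifier, using only Python's standard library. The training examples, the labels ("file"/"discard") and the naive bag-of-words scoring are all invented for illustration; the actual models and features used in the Commission's systems are not described in this post.

```python
from collections import Counter
import math

# Toy training set: documents already triaged by staff (labels invented for illustration).
TRAINING = [
    ("contract signed with supplier for budget line", "file"),
    ("decision adopted on state aid case", "file"),
    ("lunch menu for the canteen this week", "discard"),
    ("reminder to update your password", "discard"),
]

def train(examples):
    """Count word frequencies per label: a minimal naive-Bayes-style model."""
    word_counts, doc_counts = {}, Counter()
    for text, label in examples:
        word_counts.setdefault(label, Counter()).update(text.lower().split())
        doc_counts[label] += 1
    return word_counts, doc_counts

def predict(word_counts, doc_counts, text):
    """Score each label by the log-probability of the document's words."""
    n_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label, words in word_counts.items():
        total = sum(words.values())
        score = math.log(doc_counts[label] / n_docs)  # class prior
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out a label.
            score += math.log((words[w] + 1) / (total + len(words)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, doc_counts = train(TRAINING)
print(predict(word_counts, doc_counts, "signed decision on budget"))  # → file
```

In practice a system of this kind would only advise, leaving the final filing decision to the public servant, as described above.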
2) What is this text talking about?
Public services are increasingly confronted with the need to quickly process large amounts of data - evidence - to support their decisions. For instance, when preparing a policy proposal, the European Commission has to gather legal and scientific evidence on the subject matter, e.g. a competition case or a funding decision, which can result in millions of documents and terabytes of data. Furthermore, under the "Better Regulation" agenda, the European Commission has to consult stakeholders through a structured survey that can gather millions of inputs; for instance, the recent 'Summer Time' consultation received 4.7 million replies. Such a wealth of data and documents has to be analysed and summarised in a short timeframe and reflected in the policy proposal prepared by the public servants. Recently, this intellectual effort has been facilitated by the Data Analytics Services (DORIS), a set of tools based on machine learning and natural language processing developed by DG CONNECT to address these challenges. The same tools have been re-used for several other business needs, such as analysing the impact of large public investments in specific policy areas.
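The internal workings of DORIS are not detailed in this post, but the flavour of one of the most basic building blocks of any such pipeline - aggregating recurring terms across free-text replies - can be sketched in a few lines of standard-library Python. The sample replies and the stopword list below are invented for illustration.

```python
from collections import Counter
import re

# Invented sample of free-text consultation replies.
replies = [
    "Please keep summer time, the long evenings are good for health.",
    "Abolish the clock change, it disrupts sleep and health.",
    "The clock change is bad for children and for health.",
]

# A minimal stopword list; real pipelines use much larger, language-specific ones.
STOPWORDS = {"the", "and", "for", "is", "it", "are", "a", "of", "to", "in"}

def top_terms(texts, n=3):
    """Aggregate word counts across all replies, ignoring stopwords."""
    counter = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counter.update(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counter.most_common(n)]

print(top_terms(replies))  # most frequent terms first, e.g. "health"
```

Scaled to millions of replies, the same counting step would typically be distributed across machines and complemented by clustering, topic modelling and sentiment analysis rather than raw frequencies.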
3) Are we ready to delegate more policy making tasks to AI?
A longer-term scenario sees artificial intelligence used even more systematically throughout the entire policy-making lifecycle. Our ability to gather accurate data from the real world will allow policy processes to be continuously fed with fresh evidence about ongoing trends and events. For instance, we will be able to more easily evaluate the impact of previous policy measures, design new policies and test their effectiveness "in silico", discover unforeseen correlations between different policy domains and actions, etc.
As a consequence of such an "extreme" digitisation of policy processes, we can envisage that certain decisions, merely requiring the analysis of evidence, could be delegated to AI agents. For instance, an algorithm could decide whether traffic in a city has to be limited to green vehicles, e.g. during peak pollution hours, whether a certain piece of public infrastructure requires maintenance, or whether economic policies need adaptation based on the variables and equations that describe those systems. In such a context, the quality of data as well as the transparency, robustness and liability of AI algorithms will become crucial.
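A deliberately simple sketch of such a delegated decision is given below. The thresholds, sensor values and function name are all hypothetical, chosen only to illustrate the idea of a rule acting on measured evidence; real air-quality decisions involve legally defined limit values, validated sensor networks and human oversight.

```python
# Hypothetical pollution thresholds, invented for this example.
PM25_LIMIT = 25.0  # fine particulate matter, in µg/m³
NO2_LIMIT = 40.0   # nitrogen dioxide, in µg/m³

def restrict_to_green_vehicles(pm25_readings, no2_readings):
    """Decide whether to limit city traffic to green vehicles,
    based on the mean of recent sensor readings."""
    mean_pm25 = sum(pm25_readings) / len(pm25_readings)
    mean_no2 = sum(no2_readings) / len(no2_readings)
    return mean_pm25 > PM25_LIMIT or mean_no2 > NO2_LIMIT

# A fictitious peak-hour sample exceeding the PM2.5 threshold:
print(restrict_to_green_vehicles([30.1, 28.4, 26.0], [35.2, 33.0, 38.8]))  # → True
```

Even for such a trivial rule, the point made above holds: the decision is only as good as the data feeding it, and the transparency and accountability of the rule itself must be guaranteed.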
However, as policy decisions also rely on factors other than factual evidence, there will be a need to build "non-measurable" elements into the policy-making model, for instance the legitimate interests of stakeholders, the influence of ideologies, or the emotional flows that characterise today's public discourse on social networks. In order to capture this type of knowledge, AI must be able to discern the rational, evidence-based and emotional components of policymaking. This can be facilitated by the introduction of semantic models, such as the Policy Making 3.0 model introduced here.
The greater human ability to interconnect with peers, co-create, co-decide and, in theory, have a say on any policy matter raises the question of whether such processes can be made "scalable" and sustainable for the common public interest.
Our ability to adapt policy-making processes to this new hyper-connected world, particularly the ability of AI to discern emotional and rational components, and to make sense of large conversations on social networks, will determine the accuracy and effectiveness of policy decisions and the robustness of democratic processes.
Finally, and most importantly, we can delegate more policy-making tasks to AI only by ensuring that the underlying technologies are trustworthy and embed the EU's fundamental values. This requirement applies to AI in general, but it is even more crucial in a domain as complex as policymaking.