As Abraham Lincoln said: "Elections belong to the people. It's their decision." The problem today is that this sacred link is under threat from those seeking to manipulate our elections, democratic processes and institutions.

It is always an election year somewhere in Europe, of course. Voters in Sweden, Latvia and Luxembourg - at least - go to the polls this year, while the next European Parliament elections follow next spring.

As well as traditional canvassing, these electoral campaigns will be fought online in a way that would have been hard to imagine even a few years ago. Never has it been easier for political parties to get their messages across using the internet and social media, tools which have not only made it possible to reach large numbers of people but also, increasingly, to micro-target individuals with tailor-made messages.

But let's be honest, that has drawbacks as well as benefits. The same tools can easily be hijacked by malicious actors – both state and non-state – to subvert our democratic systems and use them as a weapon against us. What once might have been dismissed as fantasy has rapidly entered the world of fact, with everything from a referendum on an EU agreement with Ukraine, through a referendum on EU membership, to a Presidential election apparently being fair game.

The public are increasingly aware of the challenge posed by such cyberattacks and cyber interference, which have become more frequent and more damaging in their impact. They are too easy to perpetrate and too hard to trace and attribute. And our publics are pressing us to do more to tackle this challenge.

As in other areas of security, it is national governments and public authorities that are in the frontline. But the cross-border nature of the cyber threat means that cooperation has never been more important, and the EU has a clear role to play in leading efforts both at home and internationally.

So, we need to respond. The mechanics of how such attacks are carried out are, increasingly, well understood.

They fall into two main categories: those based on systems and those based on behaviours. The first category includes cyberattacks that manipulate the electoral process or voting technology to change the number of voters or the number of votes. This either involves hacking the electoral system to prevent voters from registering, or hacking it to manipulate the results or to obtain voter data. Although this approach is relatively crude, even the suggestion that it has happened can undermine trust.

But the second category of threat is much more subtle and pernicious – attempts at manipulating voting behaviour. In my view, this can take three forms: hacks and leaks designed to change public opinion by revealing damaging information at a crucial point during a campaign; the use of fake news to sway public opinion and influence results; and the misuse of targeted messaging based on psychometrics derived from mined user personality trait data – or Cambridge Analytica for short. All three are different forms of cyber-enabled manipulation; all are designed to skew the results in a particular direction.

So what have we been doing to counter this threat?

In terms of the systems threat, we are working with Member States on common guidelines in the context of the NIS Cooperation Group on how to secure the whole election lifecycle from cyberattacks. This work, carried out by Member States under the leadership of Estonia, will result in a concrete set of recommendations and measures for national authorities in order to protect against 'physical' cyber threats, i.e. the hacking of electronic tools, systems and databases used in the election process. I understand the recommendations are about to be finalised and I am very much looking forward to their publication.

To counter behavioural threats, the Commission proposed a number of measures in April against disinformation and behavioural manipulation, including important steps that we expect the internet platforms to take to ensure that social media cannot be turned into a weapon against democracies – a Weapon of Mass Disinformation; a WMD for the modern age, if you will.

We want to see genuine transparency, traceability and accountability online. Users should know who has created the content they are seeing, who might gain from it, and why it is being shown to them.

This is key in trying to combat the spread of deliberate disinformation – Fake News – intended to sow doubt and division in our societies. We are drawing up a Code of Practice, to be adopted by internet platforms, requiring them to improve how adverts are placed, to restrict targeting options for political advertising, and to reduce the revenues made by those behind disinformation. It will also promote greater transparency around sponsored content – marking it clearly as such, and stating who has paid for it.

We want platforms to ramp up their efforts to identify and delete fake accounts, and establish clear rules around bots so that they cannot be passed off as human online.

We want – working with the platforms – to make it easier for users to assess the trustworthiness of content, while also reducing the visibility of disinformation. And we'd like to see greater clarity around how algorithms work – with more information available on how they prioritise what content to display, for example. We want to help users discover content and access news sources representing alternative points of view more easily, as well as to report disinformation.

We believe the platforms, who make so much money from our online lives, have a responsibility to society to help counter disinformation.

We are not asking them to judge what is true or false, correct or not. But you should be able to know where the information you are seeing online comes from, who is funding it, and why it is being presented to you, so that you can make an informed decision about whether to take it at face value. This is especially true when it comes to political messaging.

If we are not careful, Fake News can be more influential than professional journalism. We clearly need to build more resilience into the system. So we will increase our support to the work of those spotting and rebutting fake news, the so-called 'fact checkers', and we will continue to support quality journalism and promote media literacy and critical thinking.

And, to counter the threat posed by concerted, state-sponsored disinformation campaigns targeting our Member States, we have set up the East Strategic Communication Task Force in the European External Action Service, to strengthen quality media in the region and to improve our capacity to respond to Russian disinformation.

Since its establishment in 2015, East Stratcom has catalogued over 4,000 examples of disinformation, including for example 31 disinformation narratives around the chemical attack in Salisbury and 57 around the downing of flight MH17. Thanks to this, we now have a better idea of the main tools, channels and messages of the Kremlin's disinformation campaign. We know that this disinformation is out to confuse us, slow our consensus and slow our response. It seeks to undermine faith in mainstream politics, media and democracy. It's a threat to all of us.

Across the EU there are many initiatives at national level where Member States are taking measures ahead of upcoming elections. Sweden, for example, which is due to hold parliamentary elections in September, has been working with the private sector, social media companies, broadcasters and newspapers. A "Facebook hotline" has been created to allow officials to report fake Swedish government Facebook pages quickly. Facebook itself has pledged to report suspicious behaviour around the election.

A nationwide education programme has been launched to teach high school students about propaganda, and a leaflet has been distributed to 4.7 million homes which includes tips on spotting such disinformation.

It's not just the EU that has woken up to this threat - there was a strong message from the recent G7 summit in Canada, where the participants endorsed measures to protect elections from foreign interference. They undertook to share information between themselves and work with internet companies to combat meddling in elections.

And there is also strong transatlantic cooperation on this issue. It is discussed in the EU/US security and cyber dialogues. We welcome the creation of the Transatlantic Commission on Election Integrity, co-chaired by former US Homeland Security Secretary Michael Chertoff and former NATO Secretary General and Danish Prime Minister Anders Fogh Rasmussen, and also including former US Vice President Joe Biden and former UK Deputy Prime Minister Nick Clegg. The group will meet for the first time in Copenhagen later this week [21-22 June].

So what more needs to be done? The work underway is good, but we need to ramp it up and ensure that public authorities as well as other actors – both public and private – are as prepared as possible. In the EU, that means establishing plans at national level to guard against cyberattacks and election interference.

To this end, we need every Member State to comprehensively assess the threat to their democratic processes and institutions, whether from more traditional cyberattacks or from the manipulation of information. They should have a national action plan and a task force bringing together representatives from all relevant authorities – cybersecurity, intelligence, law enforcement, electoral commissions and the private sector – with the task of countering these cyber threats.

Above all, they should treat elections as a central component of their critical infrastructure protection and resilience planning.

Political parties themselves need to set an example – they could consider committing to certain standards of transparency and openness in their own online campaigns, such as on the targeting of political messages via social media.

From our side, together with my colleague Mariya Gabriel, we will continue to press online platforms to commit to an ambitious Code of Practice on online disinformation which we want agreed in the coming weeks so that it can start delivering concrete, effective and measurable results by the autumn.

Looking further ahead, the Commission will convene a high level meeting in the autumn, bringing together national players in order to take stock of progress on the various fronts and to identify and share best practices for election security. This will build on the work done by the NIS Cooperation Group on a Compendium on Cyber Security of Election Technology to define the key resilience measures to combat cyber threats to elections at national level. These include a response protocol in case of incidents, training and exercises on possible scenarios at all levels, and a robust and trusted network across the relevant authorities at national level to deal with incidents.

Abraham Lincoln was right when he said that "the ballot is stronger than the bullet". That, after all, is the foundation of our democratic societies. But we can take nothing for granted. Particularly today, when we are dealing with increasingly sophisticated digital ammunition rather than old-fashioned lead. We need to step up and build an effective shield to safeguard the integrity of our electoral systems and processes. Frankly, this has never been more vital or more pressing.