Security tools to fight online predators and fake news
An EU-funded project is analysing malicious cyber activity and developing parental control tools to protect minors and to help fight fake news, other malicious content and online sexual predators. The project's recommendations for policymakers will also help combat harmful online behaviour, protecting Europeans.
© Brian Jackson, source: fotolia.com
Updated on 01/02/2019
Offensive, abusive and hateful language, sexism, racism, fake news, cyberbullying and other types of aggressive and sometimes criminal behaviour have been appearing with increasing frequency on social media platforms such as Twitter and YouTube.
In response, the EU-funded ENCASE project has been conducting analysis into these online threats. The project team's analysis and eventual recommendations aim to help EU policymakers and internet regulators develop better approaches to protect people, including minors, from such activity.
The team is also developing commercial tools to help empower both parents and children to detect this type of malicious behaviour, including grooming attempts online by sexual predators and other malicious users.
The software tools will offer three distinct services. A browser add-on analyses a user's actions while online and can detect incidents of aggressive or distressed behaviour. Another tool analyses social web data to detect fraudulent activity. The third warns users when they are about to share sensitive content, such as photos or their home address, with an inappropriate audience, and can also alert them or their parents to an imminent privacy threat.
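The article does not describe how the sensitive-content warning works internally. As a rough illustration only, a very simple version could scan a post for patterns that suggest personal information before it is shared; the pattern names and regular expressions below are entirely hypothetical, and a real tool would rely on trained classifiers rather than hand-written rules.

```python
import re

# Hypothetical patterns for sensitive content; illustrative only.
SENSITIVE_PATTERNS = {
    "street address": re.compile(
        r"\b\d{1,4}\s+\w+\s+(street|st|avenue|ave|road|rd)\b", re.IGNORECASE),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def check_post(text: str) -> list[str]:
    """Return a warning for each kind of sensitive content found in a post."""
    return [f"Post may reveal a {label}"
            for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

warnings = check_post("Come visit me at 42 Oak Street, call 555-123-4567")
```

In practice the warning would be surfaced in the browser add-on before the post is submitted, giving the user (or a parent) a chance to intervene.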
‘The consortium is working hard on exploiting and commercialising this research as a complete package of an advanced parental control tool that will allow custodians to monitor and suppress the threats that minors face in online social networks,’ says project coordinator Michael Sirivianos from the Cyprus University of Technology in Cyprus.
Detecting aggressive behaviour
The ENCASE team has been conducting in-depth research in two main areas. The first is the detection of cyberbullying, hate speech and aggressive behaviour on mainstream social media sites like Twitter.
The second studies how alternative social media and fringe online communities can influence mainstream networks like Twitter and Facebook and distribute fake news through them. These fringe communities include 4chan, an anonymous image-based bulletin board, and Reddit, where some channels actively share extreme views.
Despite the seriousness of the problem, there have been very few successful efforts to detect abusive behaviour and disinformation, from either the research community or the online social networks themselves, says Sirivianos. In contrast, the project's research aims to automatically detect, flag and filter fake news, hate speech and cyberbullying before it reaches mainstream audiences and vulnerable minors.
To create tools that can detect cyberbullying, the project began by collecting and annotating a large collection of tweets, to determine whether cyberbullies shared any specific attributes. For example, the team discovered that bullies have a propensity to use fewer hashtags within their tweets and have fewer friends in their online network.
Sirivianos says the project's research is among the few efforts that focus on characterising the bullying users themselves, not only their abusive content.
‘Our results indicate that we can distinguish between aggressive, bully and normal users with 87.8 % precision,’ Sirivianos says.
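The article mentions two user-level signals (fewer hashtags per tweet and fewer friends) but not the actual model. As a purely illustrative sketch, a rule-based flag on those two features might look like the following; the thresholds and the `UserProfile` type are invented for this example, and the project's real classifier would combine many more features with a trained model.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Two of the reported signals; a real model would use many more.
    avg_hashtags_per_tweet: float
    friend_count: int

def flag_possible_bully(user: UserProfile,
                        max_hashtags: float = 1.0,
                        min_friends: int = 50) -> bool:
    """Flag users matching the reported tendencies: fewer hashtags
    per tweet and fewer friends than typical users.
    Thresholds here are made up for illustration."""
    return (user.avg_hashtags_per_tweet < max_hashtags
            and user.friend_count < min_friends)

suspect = flag_possible_bully(UserProfile(0.2, 12))    # True
normal = flag_possible_bully(UserProfile(2.5, 300))    # False
```

A trained classifier over such features is what would plausibly yield the precision figure quoted above, rather than fixed thresholds like these.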
Fake news and hateful memes study
In two other studies, ENCASE’s researchers examined the propagation of fake news and of hateful or politics-related internet memes and how fringe groups such as 4chan, The_Donald subreddit and Gab influenced mainstream social network sites like Twitter.
The study helped the team gain insight into which web communities are the most influential in the dissemination of fake news and politics-related memes.
They discovered that the propagation of many hateful and racist meme images began on some of these fringe communities. The images were then disseminated across mainstream sites, often using humour to spread the message more easily, and often shared unwittingly by normal, unsuspecting users.
‘It’s a type of organic propaganda,’ Sirivianos says.
Some of these fringe sites, despite having far fewer users than sites like Twitter, were found to be responsible for around 6 % of mainstream news posts.
‘This means that these fringe web communities punch above their weight when it comes to spreading political propaganda,’ Sirivianos says.
To conduct the research, the team developed software tools to automatically detect and track suspected racist and hateful images across online communities. The researchers shared the code library and technical tracking tools for use by other researchers and policing authorities, together with a database of the research data.
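The article does not say how the image tracking works. One common way to recognise the same meme across sites, even after resizing or recompression, is perceptual hashing: images whose hashes differ in only a few bits are treated as near-duplicates. The toy average-hash below is an assumption about the general approach, not the project's actual method, and it simplifies an image to an 8x8 grid of grayscale values.

```python
# Toy average-hash ("aHash"): each pixel contributes one bit, set when
# the pixel is brighter than the image's mean. Near-duplicate images
# produce hashes at a small Hamming distance from each other.

def average_hash(pixels: list[list[int]]) -> int:
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits in which two hashes differ."""
    return bin(h1 ^ h2).count("1")

original = [[10 * i + j for j in range(8)] for i in range(8)]
# A uniformly brightened copy, as reposted memes often are.
repost = [[min(p + 3, 255) for p in row] for row in original]
distance = hamming_distance(average_hash(original), average_hash(repost))
# Brightening shifts every pixel and the mean equally, so distance == 0.
```

Production systems typically use more robust perceptual hashes (such as pHash) computed from the full-resolution image, then cluster images by hash distance to follow a meme from fringe communities to mainstream sites.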
In another study, the team analysed how online trolls act differently from regular users. Agents of various governments have been found to be manipulating social media, thereby influencing people's thinking and posing a threat to democracy. This study revealed some behaviours that might help identify trolls who spread discord across the political spectrum.
‘Hate speech and fake news are the emerging problems of our era,’ says Sirivianos. ‘This can have dire consequences in real life, like changing the outcomes of elections or leading to violence.’
The research motivated the team to design the detection and protection tools. The ENCASE project hopes to publish a functioning prototype of these tools by the end of December 2019. The team is in contact with telecommunications providers in Cyprus and Spain to commercialise the tool as a complementary service to the internet packages they offer their customers.
‘These solutions can be extremely important for safeguarding specific sensitive population groups like teenagers or minors, who might not have the critical thinking skills to discern whether a story or a meme is manipulated, biased, or fake,’ says Sirivianos.
The project’s research results have been widely covered by the international press, with around 40 articles published so far by media organisations, including the BBC, Business Insider and the Washington Post.
The project also fosters an exchange of knowledge between industry and academia. Specifically, 36 researchers and 12 industry experts from four universities and four industrial organisations respectively were seconded within the consortium to exchange knowledge and gain experience. The project received funding from the EU's Marie Skłodowska-Curie Actions programme.