What happens when Twitter takes down terrorism

Dr Suraj Lakhani's research on the effectiveness of disrupting the online activities of terrorist and violent extremist groups is having an impact on international counter-terrorism policies.

Picture credit: Philipp Katzenberger

From the Manchester Arena bombing of 2017, to the gunning down of worshippers at mosques in Christchurch, New Zealand, in 2019, callous acts of terrorism continue to destroy lives and create fear across the globe.

While governments are constantly reviewing their counter-terrorism strategies to protect us from future atrocities, extremist organisations are turning to ever-more covert methods to recruit followers and pursue their agendas.  

Dr Suraj Lakhani, a Lecturer in Criminology and Sociology at the University of Sussex and director of the Sussex Terrorism and Extremist Network (STERN), has spent more than a decade looking into the causes and consequences of radicalisation by jihadist networks, including Islamic State (IS). In the past four years he has turned his attention to the online activity of IS and other pro-jihadist groups.

His interdisciplinary research, in collaboration with researchers in the Department of Informatics at Sussex, has helped to create a more targeted approach by the UK Government to disrupt violent extremism online. It has also increased awareness of how small and obscure digital media channels are now used by groups to spread hate and incite heinous acts.

“Social media is a powerful tool for terrorist organisations,” Lakhani says. “Platforms are used for recruitment, for attack planning, and for the dissemination of propaganda.

“By disrupting these groups and fragmenting them, you’re breaking up this core base. However, when they move to smaller sites, you can more easily lose track of what they are discussing and the propaganda they are disseminating.”

Effectiveness of disruption measures

In 2015, Twitter was made aware of how its platform was being used by pro-IS users to recruit followers. The company eventually began removing posts and closing down accounts – and has continued to do so for many far-right, conspiracy and extremist networks.

Two years later, Lakhani led a consortium of academics from Sussex and from Dublin City University on a UK Home Office-funded project that analysed the effectiveness of these disruption measures.

Working with colleagues in the Text Analysis Group (TAG) Laboratory at Sussex, and Professor Maura Conway at Dublin City University, Lakhani and his co-researchers were able to draw on large volumes of data to check how much pro-IS content on Twitter was being removed.

“It was thanks to the fantastic software developed by David and Jeremy that we were able to do this,” says Lakhani, referring to the Natural Language Processing (NLP) tools developed by the TAG Laboratory. “The more data we fed into the system, the more it became aware of what an IS account looked like. This was based on profile names, hashtags, and connections they might have to other known accounts. The system would then continually monitor those accounts and log when they were taken down.”
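To make this concrete, the sketch below shows one way such a pipeline could work in principle: learn from labelled examples what a pro-IS account "looks like" based on profile text and hashtags, then flag new accounts for monitoring. It is a minimal illustration, not the TAG Laboratory's actual software; the library choice (scikit-learn), the feature design and all of the example data are assumptions invented for demonstration.

# Minimal illustrative sketch (not the actual research pipeline).
# Train a simple text classifier on account-level features such as
# profile names and hashtags, then score new accounts for monitoring.
# All accounts and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: each string concatenates an account's
# profile name, hashtags and a known connection; labels mark accounts
# identified by analysts as IS-linked (1) or unrelated (0).
accounts = [
    "wilayat_news #khilafah #baqiya follows:known_is_account",
    "daily_cooking #recipes #foodie follows:chef_account",
    "al_ghuraba_media #dawla #baqiya follows:known_is_account",
    "football_fanzone #matchday #goals follows:sports_account",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[#@]?\w+"),  # keep hashtags as tokens
    LogisticRegression(),
)
model.fit(accounts, labels)

# Score a new, unseen account; high-scoring accounts would be queued for
# continuous monitoring so that later suspensions can be logged over time.
candidate = "nashir_updates #khilafah follows:known_is_account"
probability = model.predict_proba([candidate])[0][1]
print(f"Estimated likelihood of being a pro-IS account: {probability:.2f}")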

From the data, they saw that Twitter was more effective at shutting down IS accounts than those of other pro-jihadist groups, such as Hay'at Tahrir al-Sham (HTS), Ahrar al-Sham (AAS) and the Taliban.

Lakhani says: “It was evident that, while Twitter were disrupting IS because it was seen as posing the greatest threat, other groups continued to exploit the platform.”

The danger of echo chambers

A second project, also funded by the Home Office, built on the success of the first and looked at how these extremist groups were using Twitter to connect with other social media platforms – both large (eg Facebook) and small and obscure (eg 8chan, where the Christchurch mosque attacker Brenton Tarrant announced his massacre of 51 worshippers and posted a link to its livestream).
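As a rough illustration of what this kind of cross-platform analysis can involve, the short sketch below extracts outbound URLs from tweet text and tallies which external domains they point to. It is a simplification under assumed conditions, not the project's actual method; the tweets and domains are invented.

# Illustrative sketch: count which external platforms a set of tweets
# link out to, as a crude signal of where a community is migrating.
# The tweets and domains below are invented examples.
import re
from collections import Counter
from urllib.parse import urlparse

tweets = [
    "New statement here https://example-large-platform.com/group/123",
    "Join the channel https://obscure-chat.example.net/invite/abc",
    "Mirror of the video https://obscure-chat.example.net/v/xyz",
]

def outbound_domains(texts):
    """Pull URLs out of tweet text and return a count of target domains."""
    url_pattern = re.compile(r"https?://\S+")
    domains = []
    for text in texts:
        for url in url_pattern.findall(text):
            domains.append(urlparse(url).netloc)
    return Counter(domains)

# Domains that are linked to most often suggest where activity moves
# when it is disrupted on the larger platform.
print(outbound_domains(tweets).most_common())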

Lakhani says: “The danger is that the more we disrupt organisations on big social media platforms, the more they go into echo chambers and onto more obscure platforms, where they are not being disrupted as much.”

The research demonstrated how smaller social media companies need to protect their platforms from being exploited by extremists, feeding into the important work undertaken by organisations such as Tech Against Terrorism. It has also informed Home Office ministerial briefings, broadening the understanding of online extremism, and the Home Office is now using the research to guide internet companies in the UK and internationally on how to disrupt extremist activity on their sites.

Beyond the UK, the European Parliament Policy Department for Citizens’ Rights and Constitutional Affairs drew on the research when making recommendations in its ‘Countering Terrorist Narratives’ proposal, while the UN Security Council Counter-Terrorism Committee also cited the research – notably drawing attention to the support smaller social media platforms needed to remove illegal content.

Lakhani, who intends to continue using the methodology to look at far-right extremism, says: “We must remain vigilant about any method extremist groups try to use to spread their propaganda. As we’ve seen, these organisations are astute about how they undertake various activities online.  But we also know that disrupting their online activities is one effective way to stop and hinder their plans, and ultimately save lives.”
