Addressing youth violence on social media: time to prioritise digital sanitation

Unregulated digital spaces are increasingly recognised as places where harmful content, such as violence, disinformation and hate speech, is amplified, potentially fuelling antisocial behaviour among youth, including knife crime.1 2 Just as 19th century public health officials cleansed water systems to ensure clean drinking water, we now need to ‘clean’ digital spaces. Yet, while earlier public health officials could easily test water quality, today’s experts lack the data and tools needed to ensure effective ‘digital sanitation’. This urgently needs to change.

Platforms like TikTok and X (formerly Twitter) use algorithms that prioritise sensationalist content to maximise engagement, exposing youth to material that may reinforce harmful beliefs, limit exposure to diverse perspectives, reduce empathy and foster distrust among communities.1 This raises concerns about social media’s role in inciting real-world violence, especially among youths, with potentially devastating short-term and long-term consequences for individuals and society.2

Much like water companies, the technology industry resists regulation, potentially fearing reduced profits. The power and global reach of social media platforms make corrective action even more challenging. The UK Online Safety Act is now in force, but its detailed implementation is still in progress, and much harmful content—especially ‘lawful but awful’ content—will likely evade regulation. Glorification of harmful content, such as knife crime, often appears in music or is hidden in youth slang, emojis and subcultures, making detection difficult. Concerns also remain about the lack of a clear process to evaluate the Act’s effectiveness or to ensure cooperation by leading platforms. Meanwhile, Meta’s recent relaxation of content regulations in the USA—replacing third-party fact-checking with a community notes system—may encourage such content in other regions.

An …
