Algorithmic radicalization refers to the process by which social media[2] platforms, through their personalized recommendation algorithms, guide users toward increasingly extreme content. These algorithms, designed to maximize user engagement, inadvertently create echo chambers and filter bubbles that reinforce users' existing beliefs and foster confirmation bias and group polarization. The process is notably prevalent on platforms such as Facebook[3], YouTube[4], and TikTok[5], and has been criticized for promoting misinformation, hate speech, and extremist ideologies, sparking legal debates. Because these algorithms reward engagement, false news and extremist content often spread faster than the truth. The phenomenon has been studied extensively, with researchers highlighting its societal impact and arguing that new regulations are needed to govern advanced artificial intelligence[1].
Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. These algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, in order to generate an endless stream of media aimed at keeping users engaged. Within the resulting echo chambers, users become more polarized as the algorithm reinforces their existing media preferences and confirms their beliefs.
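The feedback loop described above can be illustrated with a minimal sketch. The item data, the engagement model, the extremity bonus, and the profile-update rule below are all illustrative assumptions, not any platform's actual system: an engagement-optimizing ranker repeatedly serves the items most similar to what a user has already engaged with, and the user's profile drifts toward whatever is served.

```python
# Minimal sketch of an engagement-driven recommender feedback loop.
# All item data, the engagement model, and the update rule are illustrative
# assumptions, not any platform's actual algorithm.
import random

# Each item has a political "stance" in [-1, 1]; more extreme items are
# assumed (for illustration) to hold attention slightly longer.
items = [{"id": i, "stance": random.uniform(-1, 1)} for i in range(500)]

def predicted_engagement(user_profile, item):
    """Predict engagement as similarity to the user's profile plus a small
    bonus for extremity (a stand-in for dwell-time signals)."""
    similarity = 1.0 - abs(user_profile - item["stance"])
    extremity_bonus = 0.1 * abs(item["stance"])
    return similarity + extremity_bonus

def recommend(user_profile, k=5):
    """Rank all items by predicted engagement and return the top k."""
    ranked = sorted(items, key=lambda it: predicted_engagement(user_profile, it),
                    reverse=True)
    return ranked[:k]

# Simulate a user who starts with a mild preference and always consumes
# the top recommendation; the profile follows the consumed content.
profile = 0.1
for step in range(20):
    top = recommend(profile)[0]
    profile = 0.8 * profile + 0.2 * top["stance"]
    print(f"step {step:2d}: recommended stance {top['stance']:+.2f}, "
          f"profile {profile:+.2f}")
```

Under these assumptions the profile drifts toward one extreme even though each recommendation is only marginally more engaging than the last, which is the feedback dynamic the echo-chamber critique describes.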
Algorithmic radicalization remains a controversial phenomenon, in part because removing echo chamber channels is often not in the financial interest of social media companies. Although these companies have acknowledged that algorithmic radicalization exists, it remains unclear how each will manage this growing threat.