Amazon’s Algorithms, Conspiracy Theories and Extremist Literature

4th May 2021

By Elise Thomas

The role of algorithms in boosting conspiracy theories and radicalisation has been brought into sharp focus by several interlocking crises over the past 12 months. While social media platforms have sought to clamp down on algorithmic recommendation of conspiracy theories and extremist content, they are far from the only tech companies to use algorithms at scale for content curation and recommendation.  

New research from ISD uses Amazon’s book sales platform to illustrate how problems with algorithmic recommendation extend far beyond social media platforms. Many online book retailers use algorithmically driven recommendations to direct potential customers who have shown interest in one book towards other, similar books. For most users, this is a harmless and often genuinely helpful way to discover new books they might want to read. In the context of conspiracy theories or extremist content, however, it can rapidly become problematic, driving users towards more extreme content, misleading beliefs or factually wrong information.

This article lays out some of the ways in which this can happen, highlighting that search results are not the only way in which algorithmic recommendations on Amazon direct users towards potentially harmful content.

_______________________________________________________________________

On book landing pages, Amazon recommends other books to users in several ways

These are ‘Customers who bought this item also bought’, ‘Customers who viewed this item also viewed’, ‘What other items do customers view after viewing this item?’ and paid ads, sometimes billed as ‘Products related to this item’. For most users, these recommendations are a useful way of finding new content they are interested in, or at worst an irritation to be simply ignored. However, for conspiracy theorists, white nationalists and curious users perhaps only dipping a toe in the murky waters of extremist or conspiratorial content, these recommendations can serve as a gateway into a broader universe of conspiracy theories and misinformation, or into increasingly radical far-right and white nationalist content.

Cross-pollinating conspiracy theories

One effect of Amazon’s recommendation algorithms is to cross-pollinate conspiracy theories. Users who view a book about one conspiracy theory are not only likely to be recommended more books about that theory, but also books about entirely different conspiracy theories. In this way, the recommendations both drive users deeper into conspiratorial content and spread them laterally across conspiracy theories.
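Amazon has not published the internals of its current recommender, but this behaviour is characteristic of item-to-item collaborative filtering, which surfaces products frequently viewed or bought by the same customers. The minimal sketch below uses invented book identifiers and purchase baskets to show why co-purchase data alone is enough to cross-pollinate: if the same customers buy books on two unrelated conspiracy theories, each book begins to recommend the other.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase baskets: each set holds book IDs bought together.
# A production system would draw on views, carts and purchases at huge scale.
baskets = [
    {"moon_hoax", "flat_earth"},
    {"moon_hoax", "flat_earth", "chemtrails"},
    {"flat_earth", "chemtrails"},
    {"gardening_basics", "composting_101"},
]

# Count how often each ordered pair of books co-occurs in a basket.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(book, k=3):
    """Return the k titles most often bought alongside `book`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == book}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A user viewing one conspiracy title is steered towards others,
# including theories the original book never mentions:
print(also_bought("moon_hoax"))  # ['flat_earth', 'chemtrails']
```

Nothing in this logic distinguishes a conspiracy title from a gardening manual; the cross-pollination falls out of the co-purchase statistics on their own.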

Propelling ideological extremism

Amazon’s proactive promotion of new or additional conspiracy theories to users via book landing page recommendations such as ‘Customers who bought this item also bought’ and ‘Customers who viewed this item also viewed’ becomes particularly problematic where books form part of a broader ideological and political project.

Auto-complete: recommending rabbit holes

Another way in which Amazon draws on algorithms to recommend content, albeit indirectly, is through its search auto-complete function. As on other platforms such as Google Search, Amazon suggests completed searches to users as they type into the platform’s search bar. ISD’s findings indicate that these suggestions could direct users from a search for ‘election’ to being sold baseless and harmful conspiracy theories in just two clicks. From there, as described above, Amazon’s recommendations would show them a range of books on US political conspiracy theories, COVID-19 disinformation, and sovereign citizen and New World Order conspiracy theories.
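Amazon does not document how its suggestions are generated, but a common baseline for auto-complete is to rank past user queries by frequency under the typed prefix. The toy query log below is invented for illustration; it shows how that baseline alone can reproduce the failure mode ISD observed, surfacing a conspiracy-adjacent suggestion under a neutral prefix purely because enough users have searched for it.

```python
# Invented query log: past searches and how often users ran them.
query_log = {
    "election results": 120,
    "election fraud": 95,         # conspiracy-adjacent, but popular
    "election night recipes": 4,  # benign, but rare
}

def autocomplete(prefix, k=3):
    """Suggest the k most frequent past queries starting with `prefix`."""
    matches = {q: n for q, n in query_log.items() if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:k]

# A neutral prefix surfaces a conspiracy-laden suggestion on its own:
print(autocomplete("election"))
# ['election results', 'election fraud', 'election night recipes']
```

Because popularity is the only ranking signal here, widespread or coordinated interest in a conspiracy theory is enough to place it in front of users who never asked for it.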

Amplifying authors

Amazon’s recommendation systems can also point users towards extremist content via author pages. There, users are shown a side panel titled ‘Customers also bought items by’, recommending other authors whose work might be of interest. Customers who buy books by an author who produces extremist content are also likely to buy books by other extremist authors, thereby training the algorithm to proactively recommend those authors to new users interested in similar topics.
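Again assuming a co-purchase mechanism (the book and author identifiers below are hypothetical), such a panel can be derived by rolling book-level co-purchases up to the author level. The sketch also makes the feedback loop above concrete: every basket containing two extremist authors strengthens the link that recommends one to future readers of the other.

```python
from collections import defaultdict

# Hypothetical catalogue: which author wrote which book.
author_of = {
    "book_a1": "author_a", "book_a2": "author_a",
    "book_b1": "author_b", "book_c1": "author_c",
}

author_co = defaultdict(int)

def record_purchase(basket):
    """Every multi-author basket strengthens links between its authors."""
    authors = {author_of[book] for book in basket}
    for a in authors:
        for b in authors - {a}:
            author_co[(a, b)] += 1

record_purchase({"book_a1", "book_b1"})
record_purchase({"book_a2", "book_b1", "book_c1"})

def also_bought_items_by(author, k=5):
    """Authors most often bought alongside `author`'s books."""
    scores = {b: n for (a, b), n in author_co.items() if a == author}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(also_bought_items_by("author_a"))  # ['author_b', 'author_c']
```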

 

In driving users towards conspiracy theories, disinformation and extremist books, Amazon’s recommendation algorithms are potentially directing its customers toward content which could lead, directly or indirectly, to harm. The damage that individuals sucked into conspiracy theories like QAnon can cause to themselves and others is increasingly apparent. In the wake of the storming of the US Capitol on 6 January 2021, the damage such theories can cause to entire communities and nations is also plain to see. Health disinformation can cost lives in the context of a global pandemic, and could set back the path to recovery for entire communities if it results in even a relatively small proportion of the population refusing to be vaccinated. The promotion of racist and white nationalist propaganda is always abhorrent, but is perhaps particularly concerning amid warnings of rising far-right extremist threats in countries around the world. Currently, Amazon’s recommendation algorithms actively promote books that spread each of these potentially dangerous viewpoints.

At the core of this issue is a failure to consider what a system designed to upsell customers on fitness equipment or gardening tools would do when applied to products espousing conspiracy theories, disinformation or extremist views. The entirely foreseeable outcome is that Amazon’s platform is inadvertently but actively promoting conspiracy theories and extremism to its customers.

The question of whether books promoting conspiracy theories, disinformation and extremist ideologies should be sold on Amazon’s platform is a complex and challenging one. Banning books is a contentious issue, and innately and reasonably stirs fears of censorship. Authoritarian regimes throughout history have themselves relied on the banning of books to protect their causes and power structures.

However, there is a relatively simple solution to the problem of algorithmic amplification: turning off recommendations on products that espouse conspiracy theories, disinformation or extremist viewpoints. This would at least prevent Amazon’s own algorithms from promoting the beliefs such books espouse. Likewise, moderating search auto-complete suggestions would help avoid Amazon recommending conspiracy theories to users, particularly those who have searched only for general terms. Such changes could go a long way towards reducing the role Amazon plays in spreading harmful content, as well as the profits flowing via Amazon to the authors selling it.
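As a sketch of what that mitigation could look like, reusing the toy `also_bought` and `autocomplete` functions from the sketches above (the flag lists and function names here are hypothetical, not Amazon features): recommendation panels are suppressed on flagged products, flagged titles are filtered out of panels shown elsewhere, and flagged queries are dropped from auto-complete.

```python
# Hypothetical human-curated flag lists; not actual Amazon features.
FLAGGED_PRODUCTS = {"moon_hoax", "flat_earth", "chemtrails"}
FLAGGED_QUERIES = {"election fraud"}

def safe_also_bought(book, k=3):
    """Suppress the panel on flagged products, and filter flagged
    titles out of panels shown on other product pages."""
    if book in FLAGGED_PRODUCTS:
        return []  # no recommendation panel at all on this landing page
    return [b for b in also_bought(book, k) if b not in FLAGGED_PRODUCTS]

def safe_autocomplete(prefix, k=3):
    """Drop flagged suggestions before they reach the search bar."""
    return [q for q in autocomplete(prefix, k) if q not in FLAGGED_QUERIES]
```

The filtering itself is trivial; the hard and contestable work is curating the flag lists, which is a content moderation question rather than an engineering one.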

 

Elise Thomas is an OSINT Analyst at ISD. She has previously worked for the Australian Strategic Policy Institute, and has written for Foreign Policy, The Daily Beast, Wired and others.