Should platforms like Meta and 4chan be held responsible if their algorithms radicalize mass shooters?

Lawyers representing victims of the mass shooting in Buffalo, New York have filed suit, arguing that roughly ten companies, including Meta, Amazon, Google, Discord, and 4chan, should be held responsible for radicalizing the shooter. Attention is now focused on whether courts will accept the claim that the algorithms these companies use to recommend content radicalized the shooter, and tech outlet The Verge has summarized the background.
If algorithms radicalized the Buffalo mass shooter, are companies to blame? | The Verge
The incident in question took place at a Buffalo supermarket in 2022. The perpetrator, Payton Gendron, then 18 years old, drove several hours from his home to the supermarket and opened fire on shoppers, killing 10 people and wounding three.
Beyond the gruesomeness of the crime itself, the case drew public attention because Gendron had live-streamed the attack on the streaming service Twitch and had written a lengthy manifesto and diary on the chat service Discord, in which he said he had been inspired by racist memes and radicalized to deliberately target a predominantly Black community.

The incident led to two lawsuits, filed in 2023 by survivors and relatives of the victims.
The plaintiffs argue that the recommendation algorithms on YouTube and other platforms, which are designed to stimulate user interest, significantly shaped the perpetrator's ideology, and that the platforms therefore cannot escape responsibility. Although some of the defendant platforms, such as Discord and 4chan, have no recommendation algorithms tailored to individual users, the plaintiffs contend that these platforms are nonetheless 'designed to attract users' and that the resulting harm was foreseeable.
Gendron himself has admitted to being influenced by 4chan, and his manifesto quotes the site extensively.
Gendron was convicted on murder and terrorism charges in 2022 and is serving a life sentence; the open question is whether the platforms also bear blame.

Lawsuits like these usually run up against Section 230 of the Communications Decency Act, which shields platforms from liability for content posted by their users.
The plaintiffs understand this and argue that the issue in their lawsuit is not liability for the posted content itself, but the platforms' liability for continually serving content designed to keep users engaged. They deliberately do not allege that displaying racist, white supremacist, or violent content is illegal, because doing so would invite dismissal under Section 230.
Instead, the plaintiffs emphasize that the algorithms are 'products,' noting that they are patented. They argue that the recommendation algorithms and engagement-driven product designs are 'dangerous and unsafe,' making them 'defective' under New York product liability law and allowing consumers to sue for damages.
For example, the plaintiffs argue that YouTube relies on algorithms more heavily than other social media sites, and that its governance practices, which tolerate white supremacist content, make the platform less safe. 'While it could have been designed to be safer, it did not mitigate the risks, in order to maximize user engagement and profits,' they contend.
Twitch, the plaintiffs note, does not rely on recommendation algorithms, but they argue it could have reduced the danger by broadcasting live streams on a delay. As for Reddit, they point out that its voting feature on posts 'creates a loop that encourages continued use of Reddit.' They also argue that 4chan, which can be used without registering an account, is defectively designed because it allows users to post extremist content anonymously.

The companies, however, argue that they are not responsible: while algorithms do determine which content each user sees, they are not legally 'products.' Eric Shumsky, a lawyer for Meta, said, 'The services customize the experience based on user behavior. The algorithms may have influenced Gendron, but Gendron's beliefs also influenced the algorithms.'
A well-known precedent is Gonzalez v. Google, brought by the family of Nohemi Gonzalez, a victim of the Islamic State (ISIS) terrorist attacks in Paris in November 2015. The plaintiffs sued Google, alleging that YouTube had supported ISIS by recommending the terrorist group's videos to users. There, too, how algorithms should be treated under Section 230 of the Communications Decency Act was at issue and sparked much debate, but the Supreme Court ultimately sidestepped the Section 230 question, disposing of the case on other grounds in 2023.

Whether the plaintiffs' theory will hold up depends on how courts interpret Section 230 of the Communications Decency Act, the cornerstone of internet law. A New York judge allowed the lawsuits to proceed in 2024, but no ruling on the merits has yet been issued.