Worrying about the algorithm is a threat to information awareness
I consume and discuss a probably much greater than average amount of hate propaganda. A fair amount of this is anti-trans material from the “TERF” (trans-exclusionary radical feminist) world, but as much or more comes from following the activities of religious conservatives, usually organising in the background, who get far less attention.
Over the last year or so, anxious people on Twitter have been telling me that I shouldn’t link to sources because I’m driving traffic to them, and consequently “The Algorithm” is likely to push more such hate material at people from there. In some way, then, I’m helping radicalise new transphobes, as well as helping people monetise their hate content through clicks. Both of these are true, to an extent. My own YouTube recommendations are an absolute sewer of right-wing anti-feminist reactionary content, on account of the material I consume doing my research, and the fact that I sometimes forget to switch to “Incognito” mode to view it. Surveillance capitalism doesn’t provide good means of repudiating stuff you watch online in order to reduce your contribution to its further proliferation in the information ecosystem (dislikes still count as engagement in the surveillance economy). In fact it’s worse than that: if someone has similar tastes and hobbies to you, and you’re in the business of following hate movements, you’re almost certainly affecting their recommended content as well!
So what are our solutions to this problem? We can screenshot. We can capture snippets of video to share as evidence. We can discuss the content abstractly, and do our best to only ever access hate material through archival services so as to avoid sending ad-click money. We can download online media and upload edited versions containing the evidence we need. But while those tools are available to us, I want to highlight some of the pitfalls:
- Screenshots and video clips are, by their nature, taken out of context. The work hate researchers are doing is reporting on reality, and some amount of accountability to sources and facts is necessary. It’s important to at least provide links/citations on demand so that others can replicate our findings. Information chains on the internet are a sewer of rumour and gossip, and it’s extremely common for people already fed a steady diet of misinformation to draw entirely false conclusions from a true piece of data taken wildly out of context. We have a duty to each other to curate a faithful awareness of reality. This obliges us to be open to interrogation about our claims and sources, and to help others follow our working transparently.
- Screenshots and photographs have been subject to manipulation for as long as we have had them, but we now live in a world of deepfakes, and detecting human-imperceptible modifications to video clips is a serious challenge with millions of dollars being thrown at it by major industry bodies, precisely because the move to an information economy has made business interest in deception explode. This doesn’t just operate at the level of the national-security threats grabbing news headlines. Falsifying a video is comparatively easy; expecting every viewer to individually verify the source for themselves is not. Where there’s a lot of clout (social media capital) to be had from discovering something particularly awful said by a political adversary, there is potentially money to be made from subtly or dramatically exaggerating aspects of that awfulness, in ways which damage the wider quality of social information.
- It’s not just the algorithm which rewards extremely strong negative or positive engagement: this is a function of media in general, and of the ability to reach a lot of people even without algorithmic recommendations. Columnists like Richard Littlejohn and Julie Burchill have known this for years, making careers and reputations out of incensing target groups for the entertainment of the middle class. There is much more to the problem of trying to avoid radicalising more people than attempts to undermine it at the point where it is surveilled by a tech giant, and many of us were aware of this in previous decades. The risk of hate clicks in the “pre-algorithm” era, or of sharing a physical copy of a newspaper containing a hate article before that, seems to have been forgotten. AI recommendations have accelerated aspects of this issue, but people seem to fail to place it in the context of a problem that goes back as far as media has existed. Which leads us to…
- Many of the concerns currently expressed about social media are near-identical to the propaganda fears that surrounded the early invention of the Wireless (which was immediately exploited for misinformation campaigns similar to the ones we see today).
- The vast majority of hate research being done today is done by amateurs, with limited equipment and no budget for stronger systems of attestation, for recording information chains, or for demonstrating provable transparency to viewers. In the best cases, most of us are trying to show good faith, doing our best in our spare time to facilitate attestation via archive services. In the worst cases, there are big context-less mosaics of violently threatening Twitter screenshots sitting in open Google Drives, linked to without context, over half of them from deleted accounts, with no third-party attestation data available to confirm they ever existed in the first place.
I’m amenable to the idea that we shouldn’t be linking hate to large audiences, and that we should find a balance between source transparency for verification and reducing the harm and reach of online hate. But I think the public conversation about it is far too simplistic and ahistorical. It ignores the fact that the alleged problems of technology are actually extensions of human sociological problems which people have embedded into technology, and it disregards the necessity of actually providing source transparency and information-verification procedures, which are what let us continuously improve the quality of information on social media against the natural tendencies towards disinformation which the information market bakes in.
I ask people online who’ve discovered the perils of the alt-right pipeline to extend that knowledge and understanding of the problem beyond the alt-right pipeline.