Slow progress on algorithms

In her Harvard University commencement address last week, Prime Minister Jacinda Ardern emphasised the urgent need for responsible online algorithm development and deployment.

"Let’s start with transparency in how algorithmic processes work and the outcomes they deliver. But let’s finish with a shared approach to responsible algorithms because the time has come," she told the prestigious gathering.

The forums for online providers and social media companies to work on these issues alongside civil society and governments already existed, and "we have every reason to do it", she said.

But it is hard not to be sceptical about the speed of any progress on this.

The commercial success of internet platforms depends on the very processes which encourage our eyeballs to stay online as long as possible by offering us more and more content which the technology has determined will interest us. The substance of that content matters little to the bottom line.

This month, following the killing of 10 people at a Buffalo grocery store in the United States, it was reported the 18-year-old accused of the crime had drawn inspiration for livestreaming the attack from the 2019 Christchurch mosques massacre.

Much was made of the early removal of the Buffalo livestream from Twitch, a gaming streaming service owned by Amazon, but there are also reports of some video clips and images still circulating.

The fact that the New York Times this month, in a search spanning 24 hours, found more than 50 clips and online links featuring the Christchurch gunman’s footage on at least nine platforms illustrates that action so far has achieved reduction rather than eradication.

As the New York Times said, these clips and links were not difficult to find, even though Facebook, Twitter and other platforms pledged to eradicate the footage in 2019 as part of the Christchurch Call.

The reporters also noted some of the methods those sharing the toxic content used to evade detection by the large platforms. This raises the question of why the platforms are not doing more on this themselves.

As well as the internet’s contribution to the sort of radicalisation which leads to such atrocities as occurred at Christchurch, there is increasing concern about the impact of the spread of disinformation on democracy.

We need look no further than the lengthy protest at Parliament earlier this year to gain some understanding of the risk. Research from The Disinformation Project showed how the more moderate goals of the original convoy were supplanted by extremist, more violent, xenophobic and supremacist views. It found a dozen individuals at the occupation created the most widely viewed online content. In many instances, mis- and disinformation pages on Facebook received greater engagement than mainstream media, which is subject to checks and balances.

Ms Ardern encouraged her Harvard audience to consider how they choose to engage with information, deal with conflict and respond to "being baited or hated", and to make the choice to treat difference with empathy and kindness.

That might be much easier for someone with the benefit of a Harvard education than for someone with poor literacy skills who has not developed critical thinking. Their understanding of the power of the algorithm might extend no further than annoyance at the plethora of advertisements they receive for something they have just bought online.

It seems fanciful to think more sweet-talking, or even hard-talking, whether from our Prime Minister or other world leaders, will prompt the technology giants to work more urgently on stopping the spread of disinformation or to allow transparency around their algorithms. They have had years to do this, and they are clearly not short of money to fund real change if they were so inclined.

Regulation is fraught and complex, of course, coming as it does with cries about the denial of freedom of speech, as if any freedom were absolute and came without responsibilities. However, that should not rule it out.