Liability for algorithms

We live in a world where many people now rely upon social media for their news, which brings attention to the means by which the news they see is decided.

Before getting into the how, we need to be aware of the realities of media. While the printing press did provide an opportunity to democratise the dissemination of opinions, and maybe facts, by the 20th century it had largely resulted in newspapers being owned by a few wealthy people, who were the ones deciding what was newsworthy. They had to pander to public tastes to get readership, but they also engaged in a hefty amount of propaganda that suited them and those who shared their lifestyles and politics, to make sure they weren't challenged.

Come this century, Google and Facebook have eaten print media's lunch and dinner, shifting the balance of power to social media, which has made opinions far more democratic, at least in the making of them. Unfortunately, the expansion into the digital world has also made the sources of opinions far more numerous, and even non-human. The transnational ubiquity of access to social media has enabled many more well-funded corporations and nations to join the propaganda game, and on a worldwide scale.

However, it is in the distribution of those opinions that many of the problems lie. Due to the huge numbers of people and their opinions, the only way to manage them is through algorithms: the programmatic rules and methods that decide what appears where, and for whom. While they could be kept simple by using only popularity as the principal decider, that would mean those interested in less popular topics would never see them. Of course, there could be tags that define topics, so that people could blacklist those they were not interested in and whitelist those they were.
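To make that idea concrete, here is a minimal sketch of how a feed could combine popularity with per-person topic whitelists and blacklists. It is purely illustrative, not any platform's actual code, and the post structure, tag names and scoring are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    tags: set[str]        # topics the post is about
    popularity: int       # e.g. likes plus shares

@dataclass
class Member:
    whitelist: set[str] = field(default_factory=set)   # topics to always surface
    blacklist: set[str] = field(default_factory=set)   # topics to never show

def build_feed(posts: list[Post], member: Member, limit: int = 10) -> list[Post]:
    """Drop blacklisted topics, boost whitelisted ones, then rank by popularity."""
    visible = [p for p in posts if not (p.tags & member.blacklist)]
    # Whitelisted topics rank first regardless of raw popularity.
    visible.sort(key=lambda p: (bool(p.tags & member.whitelist), p.popularity), reverse=True)
    return visible[:limit]

# Example: a member interested in gardening but not celebrity gossip.
posts = [
    Post("Celebrity feud erupts", {"celebrity"}, popularity=9000),
    Post("Pruning roses in winter", {"gardening"}, popularity=120),
    Post("Local election results", {"politics"}, popularity=800),
]
member = Member(whitelist={"gardening"}, blacklist={"celebrity"})
for post in build_feed(posts, member):
    print(post.title)
```

Even this toy version shows the editorial power involved: changing one line of the ranking changes what millions of people would see first.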

One way to deal with the issue is to gather data about what people have preferred in the past, though that is only an indication of what they might want in future. The companies also had to have a means of monetising their membership, and while they could have just gone for subscriptions, that would have put off a lot of their users, who weren't used to paying. They went for advertising instead, but advertising is too expensive if it has to target everybody.

That started the push to gather as much data about people as possible, which included offering those who had websites a very small cut of the advertising revenue if they allowed the company to track their site visitors. This extended data harvesting gave a much richer profile of each individual, providing multiple criteria for advertisers to reach only those of particular interest to them, and so to pay only for the people they wanted. This evolved into a very precise way to target people, right down to individuals.
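As a hypothetical sketch of what such criteria-based targeting amounts to in practice, the following matches invented profiles against an advertiser's criteria; the profile fields and campaign values are made up for the illustration.

```python
# Hypothetical profiles assembled from tracking data; field names are invented.
profiles = [
    {"id": 1, "age": 34, "country": "AU", "interests": {"fishing", "camping"}},
    {"id": 2, "age": 52, "country": "AU", "interests": {"politics", "gardening"}},
    {"id": 3, "age": 29, "country": "NZ", "interests": {"camping", "politics"}},
]

# An advertiser's criteria: only pay to reach people matching all of these.
campaign = {"country": "AU", "min_age": 30, "interest": "camping"}

def matches(profile: dict, campaign: dict) -> bool:
    """True if the profile satisfies every criterion the advertiser set."""
    return (profile["country"] == campaign["country"]
            and profile["age"] >= campaign["min_age"]
            and campaign["interest"] in profile["interests"])

audience = [p["id"] for p in profiles if matches(p, campaign)]
print(audience)   # [1] — the advertiser pays to reach only this one person
```

The richer the harvested profiles, the narrower the criteria can be, which is how targeting ends up reaching individuals rather than demographics.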

This all allows businesses to accurately target those most likely to want their products or services, based on the types of things they are interested in at the time. Of course, this targeting is not confined to commercial interests, and is particularly useful for political purposes, whether to prompt those more sympathetic to a party to get out and vote, or to dissuade those who disagree with it from voting.

The worst of this was Cambridge Analytica, because Facebook made it very easy for them to use its platform to influence the 2016 US election, and other elections at the time. State actors like Russia also used the abundance of personal information to sow dissent against governments and the electoral process, weakening respect for democracy. All democratic countries now have considerable numbers of people whose feeds have been pumped with destabilising disinformation, made possible by copious fine-grained private information about them.

Excessive data and ruthlessly exploitable algorithms have highlighted how social media companies are not neutral, but actively engaged in making it easier for those with money and power to manipulate what information people get to see, without those people being aware of how much they are being manipulated. Social media companies have been shielded from a lot of government scrutiny because they have managed to get themselves treated as platforms rather than publishers, so they are not liable for the opinions of those who post on them. But for that shield to be justified, they would have to be scrupulously fair in what information reaches people.

Selling people's data so they can be targeted down to the individual for exploitation is not fair, because those individuals are not made aware of how vulnerable they are and do not have the resources to prevent such targeting. In the days when newspapers dominated, everybody knew their owners, and thus who was manipulating them, and so could choose whether to indulge them. But social media companies mask who is doing the manipulation while making it easier for them to do it. That is actively supporting exploitation on a worldwide scale, which means they must be reined in.

Certainly, social media companies could be regulated, and that must be done, but how is the real issue. Making them responsible for what people post on them would mean a lot more censorship. Some censorship happens now, but only because rich and powerful advertisers pull their advertising if their ads appear alongside posts that undermine their brand. That doesn't help the small players, who don't have the threat of withholding money to back them up.

One obvious way is to limit what data is collected about people and who it is made available to. Reducing that would go a long way towards blunting the accuracy of targeting, while also reducing the amount of sensitive member data that would be leaked if the companies are hacked, which is mostly a matter of when rather than if. Facebook has already had a breach of the data of 520 million of its members, whom it refused to notify of the breach, leaving them open to more exploitation.

Specifying the types of data that can be used, requiring companies to publicly display those types in prominent places, and enforcing the restrictions with severe penalties would discourage them from being so cavalier with the data. For multi-billion dollar companies, the penalties have to be commensurate with the size of their turnover, as with the latest federal data breach legislation in Australia, which pegs the penalty at the largest of AU$50m, 30% of adjusted annual domestic turnover, or three times the value of any benefit obtained through the misuse of the leaked information. That is what being serious means.
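As a rough worked example of that "largest of three" penalty formula, with turnover and benefit figures invented purely for illustration:

```python
def max_penalty(adjusted_turnover: float, benefit_from_misuse: float) -> float:
    """Largest of AU$50m, 30% of adjusted turnover, or 3x the benefit obtained."""
    return max(50_000_000, 0.30 * adjusted_turnover, 3 * benefit_from_misuse)

# Hypothetical company: AU$2bn adjusted turnover, AU$100m benefit from misuse.
print(f"AU${max_penalty(2_000_000_000, 100_000_000):,.0f}")   # AU$600,000,000
```

For a company of that size, the 30%-of-turnover limb dominates, which is the point: the floor of AU$50m stops small offenders shrugging it off, while the turnover and benefit limbs stop large ones treating penalties as a cost of doing business.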

Newspaper tycoons were bullies in their day, but pretended to be protecting people from their governments while actually manipulating their readers into supporting those politicians who would do the tycoons' bidding. Rupert Murdoch is the prime example of such manipulators, and one who has successfully carried that into television and online worldwide, though in his native Australia his ability to manipulate people has severely waned, despite 70% of the country's mastheads actively promoting right-wing talking points and loony conspiracy theories.

Social media companies have tried to hide behind their algorithms, but they create those algorithms to be severely biased towards helping those who can pay the price to use them in knowingly nefarious ways. While there are efforts to rein them in, they are still running free and have the money to keep governments off their backs. However, Elon Musk has shown how easily a social media company can be taken off the rails to become a conspiracy haven, and that might speed up legislative efforts, just to avoid them all going that way.
