Tackling Algorithmic Manipulation
Social media companies are influencing elections by manipulating their algorithms. How can we deal with them? Say hello to LawBots!
The advent of deep-learning algorithms, together with the infiltration of political activism and ideological biases into social and traditional media, has prompted these companies to interfere heavily in India's general elections.
Since the start of the Indian election process, we have found that those supporting Prime Minister Modi's re-election face many difficulties:
Their videos on YouTube get demonetised (and later remonetised): the algorithm automatically flags certain words and phrases and demonetises the content, and a human reviewer reverses the decision only after the channel complains.
The reach of their tweets (on X) seems restricted: the algorithms pick up handles and topics from a database created by the erstwhile Twitter and use that list to suppress certain tweets.
Google search results for their blog posts and media articles are restricted, while "preferred" content providers are highlighted. For instance, search the term Vedas and see how many non-Indian links appear on the first page.
Payment gateways such as PayPal simply stop working for them.
WhatsApp numbers of political commentary websites have been disabled.
This is not new.
In her book For Love of Country, Tulsi Gabbard exposed the exact same biases in Google Ads during her first campaign. Other candidates, mostly Republicans, have also been subject to social media nudge tactics, but Tulsi was a Democrat. The critical difference, probably, was that Tulsi was not malleable. This influence operation has now become a big worry. So, how can we tackle it?
It is not just money either
The power of social media algorithms is far-reaching.
They are manipulating the financial returns and advertisement revenues of their content creators.
They are actively shepherding viewers toward content chosen by someone else.
They are introducing bias into the minds of unsuspecting viewers.
They encourage hate and amplify outrage.
They create fertile ground for creating and exploiting social divisions.
But most of all, it is a problem of scale.
Algorithmic manipulation works at a scale that makes it impossible to control, or even to determine where and how much harm it is causing.
Law and order machinery developed to tackle traditional threats and illegalities. It has expanded to handle coordinated threats, and again to counter financial threats and frauds. However, algorithmic manipulation presents an exponential-scale problem, beyond what current law and order tools and mechanisms can handle. It presents a few unique challenges:
The victim does not know she is being manipulated.
The amount of harm or extent of damages is difficult to determine.
The sheer number of victims.
How do we deal with this?
Let me lay out this scheme.
Create legal space for the fight.
There is no law that recognises algorithmic shepherding, bias seeding, hate incitement, outrage amplification, or the creation and exploitation of social divisions. We need to recognise these as legitimate legal wrongs for which we can pursue legal remedies.
Algorithms to fight Algorithms - LawBots
Imagine we develop separate algorithms running on server farms, mining content from these social media sites. Let us call them LawBots.
Once a citizen (not any random user, but a citizen; this includes companies registered in India) enlists, we create LawBots to test the hypothesis that the content creator's content (a) is indeed being suppressed or amplified unfairly, (b) through algorithmic manipulation, and (c) with the potential for the social harms described above.
Such algorithms and LawBots can effectively detect illegality and estimate the damage to the content creator and the social damage caused by manipulating society. The damages can then be charged to the relevant media company.
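To make the idea concrete, here is a minimal sketch of one test a LawBot might run: compare an enrolled creator's per-post reach against a matched cohort of similar creators, and flag a large, consistent gap as evidence of possible suppression worth judicial review. Everything here is hypothetical: the function names, the 0.5 threshold, and the data are all invented for illustration, not part of any real system.

```python
# Hypothetical LawBot suppression test (all names, thresholds, and
# data are illustrative assumptions, not a real implementation).
import statistics

def suppression_score(creator_reach, control_reach, threshold=0.5):
    """Compare the creator's median per-post reach with that of a
    matched control cohort. Returns (ratio, flagged): flagged is True
    when the creator's reach falls below `threshold` times the
    cohort's, suggesting possible suppression."""
    creator_med = statistics.median(creator_reach)
    control_med = statistics.median(control_reach)
    ratio = creator_med / control_med
    return ratio, ratio < threshold

# Synthetic example: per-post impressions for one creator versus a
# cohort matched on subscriber count and topic.
creator = [1200, 900, 1100, 800, 950]
controls = [5000, 4200, 6100, 3900, 5500]

ratio, flagged = suppression_score(creator, controls)
print(f"reach ratio = {ratio:.2f}, flagged = {flagged}")
# prints: reach ratio = 0.19, flagged = True
```

A production version would of course need far more: a defensible way to build the matched cohort, statistical significance testing over time, and controls for content quality, before any finding could carry legal weight.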
Secrecy of LawBot Algorithms
Just as social media companies keep their algorithms secret, the LawBots' algorithms will also be kept secret. Their objectives and filters will be public and open to judicial scrutiny, but not the code. A Judicial Technology Committee will validate each algorithm; its details will stay secret, and only the final verdict will be shared.
The LawBots' findings will be conclusive proof for the Courts to act upon. They cannot be challenged.
The burden of proof will be on the social media companies to prove that their algorithms did not suppress or amplify content, demonetise content, or prevent access to content deliberately.
In Sum
We need to think innovatively to solve these modern technology problems. We will have to experiment with the LawBot system, and it will surely need iterative improvement. But I think it can solve this problem. What do you think? Help me improve it, and share it with relevant people.
I often think of a quote from B.F. Skinner:
“The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man.”
Instead of more Bots and laws, what if we stepped up and took personal responsibility for our use of the World Wide Web? What if we used the tool for connection and relationship building over manipulation?
And one from Maya Angelou: "Pick up the battle, and make it a better world, just where you are."
https://www.youtube.com/watch?v=bxrV2J_OjGo