A question has arisen, quite expectedly: Is it safe to use artificial intelligence (AI) in an election? Specifically, speech and images based on large language models (LLMs) and POS (Photoshop on steroids), broadcast during election campaigns.
The question, while timely in the media, is premature for the policy community. There are two camps looking at AI: the Rationalists and a smaller group, the Empiricists. Both admit to being behind the arc of AI’s adoption.
The Rationalist extrapolates from the still early evidence that AI will lead to a flood of lies, exaggerations, doctored photographs and videos. Digital tabloids will arouse citizens into reckless unprotected voting, electing, if not a dead Elvis, then something worse, a live Donald Trump. Information’s virtue must be protected!
Now, the Empiricist simply says to wait and see, then react to the evidence at hand.
Here are the opening remarks from each.
From the U.K.’s Westminster Foundation for Democracy, Ben Graham Jones and Tanja Holstein present a distillation of the Rationalist’s position in “How the election support field must adapt to artificial intelligence.”
“One of the most common fears people express about artificial intelligence (AI) is that its development will lead to less human control over our lives. We have long had an answer to threats to our autonomy as individuals and societies – democracy. Those who serve democratic processes have a special responsibility to shape a democratic future in which to the greatest extent possible AI benefits, rather than harms, our societies.”
Jones and Holstein frame their argument around the need for safety to protect our control of our own lives. Presumably, without safety precautions, we might trade away that autonomy for elixirs peddled by political entities “Driven by the will to win an election” and “often quick to integrate technologies into their work.” “Quick” being code in bureaucratic circles for knee-jerk, self-interested ideas by civilians.
They recommend five responses, sensible if ever passed by quick-minded ministers and an even quicker-minded public. They are international standards, counter-disinformation, strategic communications, voter education, and personal data protection.
What they don’t mention are the existing remedies in libel and slander laws. Why the courts cannot deal with the onslaught is unclear. The legal system is slow and expensive, but given time and money it works.
A detailed Rationalist policy exercise will get underway at the end of the month, led by International IDEA: “Protecting Elections through Safeguarding Information Integrity: Launch of the Policy Brief.”
The Empirical School of digital electoral safety — both skeptics and cynics — is neither large nor vocal. The demise of Mad magazine signalled the end of “What, me worry?” as an accepted design principle in public policy.
As a result, I tuned into the debate within the AI community itself. Tom McGrath, a prominent AI entrepreneur, published on Substack a strong argument for an empirical approach to AI safety. He sees an answer grounded in open source and market competition for quality control.
First, he lays out the difference between the rational and the empirical approach.
“The AI community runs along an ancient major philosophical faultline, with much of the safety community on one side and much of the ‘builder’ community on the other. This faultline is the divide between rationalism and empiricism.”
“A key practical distinction between the rationalist and empiricist worldviews is what to do in the presence of uncertainty: empiricists seek to reduce uncertainty and then act, whereas rationalists marginalise over the possibilities (weighted by utility).”
“A reasonable summary of the typical rationalist viewpoint on AI safety might be: ‘It’s logically possible (and perhaps even likely for arbitrarily powerful predictors) that LLMs could become deceptively aligned/have inner misalignment/exhibit a sharp left turn. It might even be happening now! Because this outcome is so bad, and has some nontrivial (maybe even large) weight in our probability distribution of possibilities, we should stop building AI until we know how to make this situation logically impossible.’”
McGrath, more generously, encapsulates empiricism as something far more complex than the all-too-human doubt about the riskiness of one’s own behaviour.
“I would summarise the empiricist worldview as roughly: “It’s certainly possible that AI could be dangerous, even catastrophic. How would we know? What relevant properties do current AI systems have? What do we need to know about them to reduce possible dangers? How can we maximise the rate at which we collectively learn important true things about the systems we’re building? What decisions do we need to make soon, and what do we need to learn to make them?””
In contrast to a detailed five-prong policy exercise, McGrath’s solution is maddeningly simple. He recommends greater openness about AI’s large language models and learning protocols.
“The first and obvious change would be to demand evidence for claims about the dangers of particular systems. This could be for claims of current harms, or potential harms that could occur from changing policies on current systems (e.g. open-sourcing). These experiments should be constructed in a way that puts the central hypothesis at risk, rather than ones where the outcome is known in advance and essentially baked into the setup.”
Openness, he argues, is key even if it carries risks, though those seem more relevant to bio-weaponry than to the average chatroom. Openness, teamed with strong adversarial competition, would uncover bias and flaws that could, in time, lead to self-generating safety protocols. Marine lighthouses were, after all, originally small businesses selling information about the location of the rocky shore. That would lessen drawn-out reviews and policy revamps by a bureaucracy.
Philosophically, I am from Missouri, the ‘Show-Me’ State. Before I jump into the O.K. Corral, I want to know who will be there and what they are packing. I won’t panic if they are Moscow Poly dropouts with 2016 hot-button spam or Chinese models with plates of BBQ duck. If you ask me to believe people are easy marks, I’d have agreed when I was younger. In the years since, I have been amazed how much the average man and woman in the street has learned.
However (and this is way above my skillset), McGrath does not address the Russian and Chinese states, which have no intention of making “their” LLMs open to anybody. The observation holds also for the Rationalist approach: Russia and China ignore NGO policy briefs.
In the end, McGrath echoes Friedrich Hayek’s analysis of why the nature and use of knowledge itself prevents governments from effective risk reduction.
“Although we might want to anticipate and forestall every issue, in practice this will be impossible – problems and opportunities that never even occurred to us happen all the time.”
Friedrich Hayek wrote that “such knowledge is essentially dispersed, and cannot possibly be gathered together and conveyed to an authority charged with the task of deliberately creating order.”
The protean ability of AI, not to create new knowledge, but to re-arrange existing knowledge in novel ways, will elude the consistent enforcement of AI safety.
The question reduces to the compatibility of long-term electoral safety with the integrity of the individual’s choice. Can they survive together? Is it okay to accept subtle “official guidance” in order to ensure data integrity? Or is the goal of safety itself a risk? Probably not at the practical level, but it will annoy people and provoke counter-productive suspicions of a “Big Brother” power play.
Is it possible for deliberative and informed democracy to survive if the government leaves voters alone to find their own way through toxic FB messages, TikTok agitprop, game show bribery, and their own Gantry-scale gullibility along the road to casting a free ballot?
It frightens me to answer “Yes.” But, yes, democracy has nothing to fear simply because oddball emotions and ideas reach the voters’ eyes and ears.
What is to be feared lies in the hearts of autocrats ready to sacrifice the freedoms of others for power.
It is those who conspire to gain by force, not those who read and watch, who pose the threat to elections safe for democracy.