
Artificial Intelligence Politicians: More Gimmick than Reality


Non-human candidates frequently grace local and national electoral ballots. Limberbutt McCubbins was the first feline presidential candidate in the US; Darth Vader ran for mayor of Odessa, Ukraine; and a rhinoceros named Cacareco was elected to São Paulo's city council. Typically these candidates are nominated as a joke or as a protest, political or otherwise. In 2015 a new type of non-human candidate emerged: the artificial intelligence (AI) politician. While animals tend to sway electorates by being cute or funny, AI politicians win support by promising unbiased judgments, immunity to bribery and corruption, and an inhuman ability to analyze data. These are mostly empty claims.

Under current technological limits, humans are necessary at every step of an AI politician's life, from creation to training to implementation. Each step offers an opportunity for human bias, prejudice, and corruption to creep in and influence the artificial network. Because of these technological challenges, and the societal issues that also remain to be resolved, we are far from the day when an AI will actually win an election and successfully govern a nation. Until that day, AI still has a lot to offer as a supplement to, not a replacement for, traditional human political decision-making.
The first virtual politician was developed in New Zealand in 2017. Dubbed "SAM," she chats with constituents over Facebook Messenger. Fellow virtual politicians have since appeared in 2018 elections in Russia and Japan. All three candidates drummed up support with a similar refrain: that an AI can gather and analyze data about citizens in order to make impartial decisions, without being swayed by the emotions, biases, and other supposed flaws that lead humans to unfair or irrational choices. The argument was appealing enough to win some supporters, but it is hollow.

Artificial neural networks must still be created by a person, at least until the singularity, a hypothetical future featuring an artificial superintelligence capable of self-improvement. Human creators bring many variables and motivations that may not be transparent, which opens up the possibility that an AI has been developed to suit a particular agenda. In the case of foreign policy, this could mean an AI designed to supply a rationale for going to war, or one that always leans toward censorship. There is also the inevitability that some amount of the developer's own bias and prejudice will slip into the neural network, even unintentionally.
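To make that point concrete, consider a minimal sketch. The briefs, labels, and scoring below are invented for illustration, and real systems are far more sophisticated, but the lesson carries: a bare-bones keyword-overlap classifier can only reproduce the stance of the developer who labeled its training data.

```python
from collections import Counter

# Hypothetical training data: a single developer labels past policy briefs.
# The labels encode that developer's stance; the model can only inherit it.
labeled_briefs = [
    ("border incident demands military response", "escalate"),
    ("sanctions failed so force is justified", "escalate"),
    ("talks stalled and deterrence requires strikes", "escalate"),
    ("negotiation resolved the dispute peacefully", "de-escalate"),
]

# Build a word-frequency profile for each label.
profiles = {}
for text, label in labeled_briefs:
    profiles.setdefault(label, Counter()).update(text.split())

def recommend(brief):
    """Pick the label whose training vocabulary best overlaps the brief."""
    scores = {
        label: sum(counts[word] for word in brief.split())
        for label, counts in profiles.items()
    }
    return max(scores, key=scores.get)

# Three of four training examples favored force, so an ambiguous brief
# mentioning "border", "incident", or "talks" is pulled toward escalation.
print(recommend("incident at the border while talks continue"))  # -> escalate
```

Whatever the architecture, the principle scales: the "impartial" output is a function of human-chosen examples.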


Artificial neural networks must also be trained by people. This is a long and challenging process, and it can result in the network devising unconventional solutions that are perfectly rational to the machine but that a human would consider outside the bounds of possibility. One AI, developed in 2017, learned to kill itself at the end of a game's first level in order to avoid losing in the second. Another algorithm, developed in 2018, learned to bait an adversary into following it off a cliff in order to win more points. AI politicians are trained by chatting with people, which in some cases has produced irrational policies and injected still more human bias into the equation. While chatting with potential constituents, the Russian AI candidate, Alisa, ended up expressing support for gulags and agreeing that enemies of the people should be executed. Similarly, Microsoft's AI chatbot, Tay, though not a politician, was trained by Twitter users to express support for Hitler.
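The game-playing anecdotes above are textbook cases of a mis-specified reward function. The sketch below uses hypothetical reward values, not the original 2017 system, to show how little it takes for "die early" to become the mathematically optimal policy:

```python
# A toy model of the "suicide is optimal" finding: invented numbers, not the
# original game, but the same reward structure. The agent can either end the
# run at the close of level one (reward 0) or enter level two, where it
# usually loses (a penalty) and only rarely wins (a bonus).

REWARD_DIE_EARLY = 0.0   # quitting at the end of level one costs nothing
REWARD_WIN = 10.0        # finishing level two successfully
REWARD_LOSE = -50.0      # being defeated in level two
P_WIN_LEVEL_TWO = 0.1    # the agent is bad at level two

def expected_value(action):
    """Expected reward of each action available at the end of level one."""
    if action == "die":
        return REWARD_DIE_EARLY
    # "continue": expectation over winning vs. losing level two
    return P_WIN_LEVEL_TWO * REWARD_WIN + (1 - P_WIN_LEVEL_TWO) * REWARD_LOSE

best = max(["die", "continue"], key=expected_value)
print({a: expected_value(a) for a in ["die", "continue"]})
print("optimal policy:", best)  # -> "die": rational to the machine,
                                # absurd to the human who wrote the reward
```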

The impossibility of building and training a truly impartial artificial intelligence that acts in ways humans consider rational is the technological impediment to installing an AI policymaker. There is also a societal impediment. Political systems have always been structured around a human leader. That human may be democratically elected, a king, a benign dictator, an oligarch, or a despot, but outside of science fiction there has never been any real possibility that these rulers might be anything but fundamentally human. An artificially created leader would raise practical questions: Does an age limit apply to a machine? How does an AI physically host foreign leaders at a state dinner, or attend a function? There is also the issue of acceptance and respect. Would citizens accept a social contract with a machine? Trusting an artificial intelligence to make decisions about our welfare is an entirely new concept, and it remains to be seen whether a society would adhere to a difficult policy introduced by a machine, such as a tax hike or a decision to mobilize troops. How we think about political actors and the role of the head of state would have to change to accommodate an artificial leader.

But AI has much more to offer policymaking than just politicians. Using it to supplement traditional decision-making seems far more feasible than installing an AI as a full-fledged leader. Machine learning tools are already being studied for their potential to help with political challenges as diverse as refugee resettlement, tax-avoidance detection, and the management of warfare. The Chinese government is reportedly working on an AI-based diplomatic system that would ideally analyze data from a variety of sources (news articles, images, even rumors) in order to recommend courses of action to policymakers. The AI can do this much faster than a human analyst and generate a greater variety of possible outcomes and strategic recommendations. Such a system is meant to aid human policymakers, not replace them: it is a tool the person can leverage to reach the outcome that best aligns with their own goals.
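Details of the Chinese system have not been published, but the general shape of such a decision-support tool is easy to sketch. In the purely illustrative snippet below (the source types, weights, and candidate actions are all invented), signals from different sources are weighted by assumed reliability and aggregated into a ranked shortlist for a human to review:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # "news", "imagery", "rumor", ...
    action: str      # the course of action this signal supports
    strength: float  # model-estimated support in [0, 1]

# Rumors count for less than verified imagery: an explicit, auditable choice.
SOURCE_WEIGHTS = {"news": 0.6, "imagery": 0.9, "rumor": 0.2}

def rank_actions(signals):
    """Aggregate weighted support per action; return actions best-first."""
    scores = {}
    for s in signals:
        scores[s.action] = (
            scores.get(s.action, 0.0) + SOURCE_WEIGHTS[s.source] * s.strength
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

signals = [
    Signal("news", "open talks", 0.8),
    Signal("imagery", "open talks", 0.5),
    Signal("rumor", "impose sanctions", 0.9),
    Signal("news", "impose sanctions", 0.4),
]

# The output is a ranked shortlist, not a decision: the human stays in the loop.
for action, score in rank_actions(signals):
    print(f"{action}: {score:.2f}")
```

The crucial design choice is that the output is advice, not action: the ranking, and the weights behind it, remain open to human inspection and override.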

It seems likely that other nations are working on similar prototypes in a sort of AI arms race. An AI that can conduct deep analysis of every available bit of data on a situation, mull over a multitude of outcomes, and spit out a handful of recommendations would be a significant strategic advantage, one no nation would want to be without once the first has been successfully deployed. Elements of such systems are already ubiquitous in modern life, from Google's suggested email responses to smart refrigerators that add orange juice to your shopping list. If the singularity arrives, a superintelligent computer will be able to invent and train itself with no input from humans; maybe then society will be prepared for an AI politician. Until that point, we will have to adapt to working closely with machines while remembering that they are only as good as the humans who created, trained, and implemented them. We may also have to train ourselves not to think of AI as anything but a tool. The ELIZA effect suggests that we tend to project human attributes onto machines: empathy, gratitude, thoughtfulness. In reality, artificial neural networks can be very good at analyzing, interpreting, and offering suggestions, but they have no capacity for the higher-level cognition that leadership requires.
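The effect takes its name from ELIZA, Joseph Weizenbaum's 1966 chatbot, whose users confided in a program that did nothing more than match patterns and reflect pronouns. A minimal sketch of the trick (abridged, illustrative rules, not Weizenbaum's original script) shows how shallow the apparent empathy is:

```python
import re

# An ELIZA-style exchange: the "empathy" is nothing but pattern matching
# and first-person-to-second-person pronoun reflection.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ("my" -> "your")."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    """Return the template of the first matching rule, with reflection."""
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my job"))
# -> "How long have you been worried about your job?"
```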
