Hail The Machine Overlord-In-Chief?
A small, loosely affiliated group wants a robot president. That’s not going to work out well for any of us…
Recently, Politico indulged an interesting what-if scenario. What if, instead of worrying that the presidency may be at the mercy of someone lacking the temperament for it, the United States were run by a cold machine incapable of embarrassing us or of making knee-jerk decisions without thinking through the consequences? It could take trillions of data points into account and consider a thousand different possible outcomes without bias, tribalism, or self-interest. Doesn’t that sound great, even just as a hypothetical?
If you’re not a computer scientist and have a fairly cynical view of politics and of the office of the president, then absolutely. It does sound like a better approach to making crucial decisions about the fate of our nation and, given our alliances and military commitments, a great deal of the world. If you do a little digging into how AI works and the flaws in the concept of using it for anything other than suggestions and conceptual modeling, however, it becomes a lot more comforting to think that a human is in charge.
As Michael Linhorst noted in his analysis, computers are good at solving very specific problems with very clear and distinct parameters, and just because a computer is incapable of irrational biases doesn’t mean it can’t emulate them based on the data fed into it by someone who does have them. And that’s where the real danger of the notion of a robot president lies: in thinking we can have a perfect fount of logic that could guide us through things humans struggle to even comprehend.
AI Could Replicate And Compound Bias
This has been a particular problem in the criminal justice system, where the systemic racism we see in policing and sentencing creates data sets such that an AI trained on them to build ostensibly useful models ends up unwittingly replicating those biases.
For example, take marijuana use by white and black people. Whites report a higher rate of usage than blacks, but blacks are arrested far more often for possession. Likewise, blacks are sentenced more harshly for the same crimes as whites. Plug the raw data about arrests, convictions, and prison terms into some sort of experimental AI judge, and the computer will also start giving a black defendant a harsher sentence than a white one while expecting him or her to be a prolific drug user. Why the data looks the way it does is beyond its ability to understand. All it has are the misleading numbers.
Unfortunately, since people tend to trust computers to be impartial and free of bias, our hypothetical AI judge would actually exacerbate the problem by making it seem as if math itself says that some minorities are far more likely to be drug offenders in need of harsher punishment, in effect giving systemic racism the rhetorical armor of data and cold, hard logic.
There’s a well-known acronym in computer science that describes this problem: GIGO, or garbage in, garbage out. Only here, the garbage out would destroy lives.
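To see how easily this happens, here’s a minimal, purely illustrative sketch in Python. The numbers, groups, and setup are all invented for the example, not drawn from any real data set: two groups use drugs at identical rates, one is policed far more heavily, and a simple classifier trained only on the resulting arrest records dutifully learns the enforcement bias as if it were a fact about behavior.

```python
# Illustrative sketch only, with invented numbers: a classifier trained on
# arrest records where two groups use drugs at the same rate, but one group
# is policed far more heavily. The model learns the enforcement bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
uses_drugs = rng.random(n) < 0.12      # identical 12% usage rate in both groups

# Biased enforcement: users in group B are arrested four times as often.
p_arrest = np.where(group == 1, 0.40, 0.10)
arrested = uses_drugs & (rng.random(n) < p_arrest)

# The "AI judge" only ever sees arrests, never actual behavior.
X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

print("Learned weight on group membership:", model.coef_[0][0])
print("Predicted 'offender' risk, group A:", model.predict_proba([[0.0]])[0, 1])
print("Predicted 'offender' risk, group B:", model.predict_proba([[1.0]])[0, 1])
# Despite identical real usage, the model rates group B several times riskier:
# garbage (biased arrests) in, garbage (biased risk scores) out.
```

The model isn’t malicious; it simply has no way to know that the arrest rate reflects policing priorities rather than drug use.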
The Devil Is In The Data
Apply the same kind of computer logic to the economy. Now, it’s important to remember that the international economy is extremely complex, with countless variables that may be impossible to fully quantify, so the model you get will be incomplete at best. But even a partial model may be better than no model at all, provided you know what it should treat as its economic priorities. Should inflation control be its end goal? GDP growth? Growth of the median household income? Reduction in volatility? What trade-offs should it make in its calculations to achieve as many of those goals as it can?
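To make the point concrete, here is a hedged sketch of that trade-off problem, with entirely made-up policies, figures, and weights: whatever an economic AI ends up calling "optimal" is a direct function of weights a human picked.

```python
# A toy example of how human-chosen priorities decide what an "optimal"
# economic policy even means. All figures and weights are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    inflation: float             # annual %, lower is "better"
    gdp_growth: float            # annual %, higher is "better"
    median_income_growth: float  # annual %, higher is "better"
    volatility: float            # arbitrary index, lower is "better"

def score(o: Outcome, w: dict) -> float:
    # A simple weighted sum; the weights ARE the politics.
    return (w["gdp"] * o.gdp_growth
            + w["income"] * o.median_income_growth
            - w["inflation"] * o.inflation
            - w["volatility"] * o.volatility)

policies = [
    Outcome("stimulus",   inflation=3.5, gdp_growth=3.0, median_income_growth=2.0, volatility=1.5),
    Outcome("austerity",  inflation=1.0, gdp_growth=0.5, median_income_growth=0.2, volatility=0.8),
    Outcome("status quo", inflation=2.0, gdp_growth=1.8, median_income_growth=1.0, volatility=1.0),
]

hawkish = {"gdp": 0.5, "income": 0.5, "inflation": 2.0, "volatility": 1.0}
growth_first = {"gdp": 2.0, "income": 1.5, "inflation": 0.5, "volatility": 0.5}

for weights in (hawkish, growth_first):
    best = max(policies, key=lambda o: score(o, weights))
    print(weights, "->", best.name)
# Same data, same machine, different human-chosen weights, different "optimal" policy.
```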
There is no correct answer, and the right balance of these positive economic goals, both long term and short term, depends not just on how we weigh our priorities but can shift based on what happens to global markets and the global order in general. There would be a temptation to consult an oracular economic AI every time some bit of news comes out, only to receive rather moot and incomplete answers based on potentially biased data from the economists in charge of training it. And those economists could be spectacularly wrong without even noticing it, as has already happened in austerity research.
Imagine for a moment that we have a robot president that is supposed to implement austerity measures to pull an economy back from the brink. Austerity generally involves politicians blaming high rates of debt and demanding that, in order to avoid default, nations cut services and slash any budget in sight, citing studies which claim that high debt pummels economic growth. However, one of the biggest and most often cited papers in that bunch came to its conclusions thanks to an Excel error. No, this is not a joke, and no, there’s nothing more nuanced to the error in question.
The economists in question simply didn’t extend a formula to cover all of their data, and doing so flips their conclusion completely on its head, showing that it’s not debt itself that is economic poison; it’s the inability to service the debt that hurts GDP. In this light, austerity advocates have been focused on very myopic ideas about the role debt plays in the macroeconomy, and an AI with a training data set created by pro-austerity economists would constantly recommend more and more painful cuts to public services, sure that this should improve the economy, while all it would actually do is send more and more people to the unemployment line.
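For the sake of illustration, here is what that kind of mistake looks like in miniature, with invented growth figures rather than the paper’s actual data: a summary formula that stops a few rows short can turn "slower growth" into "negative growth."

```python
# Invented figures, shown only to illustrate how a formula that isn't
# extended over every row can flip a conclusion.
growth_of_high_debt_countries = {
    "Country A": -0.3,
    "Country B": -0.1,
    "Country C":  0.2,
    "Country D":  2.4,   # the rows below were accidentally left out of the average
    "Country E":  2.6,
    "Country F":  3.1,
}

rows = list(growth_of_high_debt_countries.values())

broken_average = sum(rows[:3]) / 3        # formula stops three rows short
correct_average = sum(rows) / len(rows)   # formula extended over every row

print(f"Broken average:  {broken_average:+.2f}%  -> 'high debt kills growth'")
print(f"Correct average: {correct_average:+.2f}%  -> growth slows, but doesn't turn negative")
```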
Really, we can go down the list of possible topics and see AI after AI make very bad decisions thanks to incomplete or biased data, decisions that, applied to international affairs, could create bad trade deals, blow up good ones, start a war, or forge an alliance that becomes a grave burden. Humans are not perfectly logical actors, but they understand that they’re dealing with data that may not be pristine and honest, to put it mildly. Computers lack that capability, and there really isn’t a way to make them more aware of this.
The only way we could try to mitigate this training data quality problem is by letting the AIs collect the data themselves, but that would mean building the kind of ultimate real-time, omniscient spying tool that would drive the director of an intelligence agency to climax, and it would still be a very open question what kind of conclusions the AIs would draw and how. There would always need to be a human to decide whether the presented conclusions make sense and to evaluate whether the decision-making process was sound.
AI As Empirical Advisers
Despite what you might hear, computers could actually explain how they would make the choices in question. AIs use mathematical models loosely patterned on how our brain cells work. They weigh not only the data but how important the data points are in relation to each other, and we could simply have them log those weights once training is complete, in effect telling us their priorities as an array of numbers.
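As a rough sketch of what that transparency could look like, using a toy linear model and invented data since any real "presidential" AI would be vastly more complex, we can train a model and then dump its learned weights, sorted by how much each input matters to it.

```python
# A minimal sketch with invented data: after training, we can read a model's
# learned weights as the relative importance it assigns to each input.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
features = ["inflation", "gdp_growth", "median_income", "volatility"]

X = rng.normal(size=(500, len(features)))
# Hypothetical relationship the model is trained to imitate: volatility and
# median income dominate, inflation barely matters.
y = (0.1 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] - 2.0 * X[:, 3]
     + rng.normal(scale=0.1, size=500))

model = Ridge().fit(X, y)

for name, weight in sorted(zip(features, model.coef_), key=lambda p: -abs(p[1])):
    print(f"{name:>15}: {weight:+.2f}")
# The printout tells us what the model prioritizes, not whether those
# priorities are actually correct.
```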
That kind of logging would allow us to know that, say, our economic AI treats inflation control as far less important than reducing volatility over the long term, or that our military AI favors shows of force over direct challenges or blockades, or that our AI intended to handle criminal justice reform thinks rehabilitation is the key to reducing recidivism and wants to avoid death sentences due to their cost. This would be a fairly transparent process, even if it might take an expert to review the output first. But transparency here doesn’t mean correctness.
Inevitably, those who disagree would want to review the data sets and debate how the computers were trained. This is the same situation we have now, just with fewer AIs building models and simulations for our decision-makers. Why would we want to put the very mathematical entities we create in charge just as we’re hotly debating whether they’re correct on a fundamental level or were set up to fail by biased humans?
Very little would change from the situation we’re dealing with today, save for more comp sci jargon entering mainstream news and political analysis, and fallible humans would still have to make the final calls while computers continue in the role of empirical advisers.
No discussion of putting AI in charge of anything even remotely as critical as a government would be complete without mentioning the consequences of a hack. Imagine an AI president with command of a massive military and final approval over the allocation of trillions of dollars being compromised by some enemy state. Should a human be relieved of command, we would effectively put a giant target on the system now in control and create far too many incentives to break in and hijack the AI. By the time the hacker is caught and could face punishment, it would be far, far too late for that punishment to mean anything.
To paraphrase a famous quote, rule by other humans is the least worst system we have, and considering how we build AIs, it’s destined to remain as such for the foreseeable future. We will always know that humans can be corrupted or dishonest, and their recommendations must be challenged and taken with a big grain of salt, if not a whole heap. Computers are far too often seen as pure and perfectly logical arbiters of truth, when in fact, they’re simply extensions of their programmers’ will.