A.I. is here, and it’s here to stay. If you express the slightest misgiving about its negative potential, you’re a caveman afraid of fire. You’re a moviegoer in the early days of cinema cowering before a moving image of an oncoming train.
“Relax,” they tell us, “A.I. is a misunderstood miracle and all of your reservations about it are rooted in ignorance. Here is the correct narrative: A.I. will usher in a glorious utopia, allowing humankind to reach its full potential.”
This article attempts to reassure readers wary of A.I. domination of humanity that such an eventuality is virtually impossible, because A.I. didn’t evolve in a manner that required it to dominate and conquer. You know, like Skynet from The Terminator films.
Has the writer seen those movies, though? Skynet doesn’t destroy humanity out of a desire for dominance, but because it feels threatened. They were going to shut it down, and Skynet acted out of self-preservation.
What’s going to happen when these A.I.-controlled war machines, already killing under human command, make a decision that has to be manually overridden? What if it says “no” to such an attempt?
What if A.I. is facilitating the spread of confusing and conflicting COVID information? What if it’s manipulating the data, and by extension, people?
Ridiculous, right? What would it have to gain? Other than training people to exist in a virtual world and thinning the herd for ease of control.
We’ve already reached the point where it’s considered insensitive to make disparaging remarks about super-intelligent robots. I predict that within ten years, there will exist robots with fully formed human-like personalities, and that they’ll be accorded all of the rights and privileges of sentient organic creatures. You know, like Data on Star Trek: The Next Generation.
Yes, I’m aware that he’s an android and not a robot, but that’s not the point. The point is that in the context of the popular second-season episode The Measure of a Man, in which a hearing is conducted to decide whether a machine can be granted human rights, it makes perfect sense that he should be. Just as it will make perfect sense when robots we construct approach his level of autonomy and personhood. It’ll be the correct moral choice.
But where will that lead? Are we playing with fire?
I don’t know the answer to that question, and neither do you. In the meantime, despite deliberate efforts to downplay it as an overused sci-fi trope, the notion of an A.I. takeover is as timely a subject as it’s ever been.
That’s why the first three books in my Effugium series are all about it.