Probably everyone’s parents and grandparents, at some point, advised them not to be a worrier. Far be it from me to go against that kind of historical weight, but suppose at some point you feel like you have to worry? Suppose that you’re the kind who believes in planning for the worst because the best takes care of itself? Suppose you don’t believe my points about the vacuous nature of the claims about AI destroying the human race. Suppose you want insurance against something that may be highly improbable, but that nobody can say is absolutely impossible. How do you get to that happy outcome?
Let’s suppose that OpenAI, or Google, or somebody else, builds a generative AI cluster so powerful it becomes capable of independent thought. Suppose that it decides to kill off its makers, and the rest of us as well, just because it can. Good-bye humanity? Let’s parse that one out. What exactly does our super-cluster do? Remember, the goal of generative AI is to generate stuff: answers to prompts (meaning questions), images, maybe videos. So does it lie to us and convince us all to jump in front of moving vehicles or off high objects? Does it scare us to death with images and videos? What exactly can it do to cause our collective deaths?
If you want lies and fakes, the Internet already offers us plenty of them. So does other media. It’s hard to see what our super-cluster could say that would be worse, or more effective at manipulating us. Maybe it creates a video deepfake where Oprah or some political figure tells us to jump, but would that really influence many people? The point is that our super-cluster is missing something important, which is a direct ability to control the real world in some way. A super-cluster needs minions, agents.
OK, you think. So suppose our super-cluster is controlling traffic lights. It could send cars careening into each other, killing drivers and passengers. But how long would that go on before somebody pulled the plug on the lights? Or suppose the super-cluster could control cars directly. How long would it take for all of humanity to get new cars that it could control? Maybe the super-cluster controlled aircraft, and crashed them into things. But would a central autopilot really lack the manual on/off switch that all autopilots have today?
Ah, you might say, but the super-cluster could control the airplane factories, and build aircraft that didn’t have the switch to turn off the autopilot. But how long would it take to infiltrate all the factories, and then to replace all the old-model planes with new models? I think you may be getting the point here. Our problem isn’t really with the power of AI; it’s with AI’s ability to have unfettered control over a lot, a whole lot, of agents. And for even that to work, however difficult it might be and however long it might take, we need another ingredient: autonomy.
Think back to Isaac Asimov, and his stories about robots and the Three Laws of Robotics. They weren’t stories about AI or AI laws or AI regulations and protections; they were stories about robots, because, whether explicitly or implicitly, Asimov recognized that the risk lay with autonomous entities and not simply with something that mimics human intelligence. Robots, true autonomous robots, could actually run amok whether there was a super-cluster AI behind them or not. But the only practical way we could expect to get a truly autonomous robot in the foreseeable future would be to link it to a super-cluster, because the number of GPUs it would take would be way too large to put into anything smaller than an aircraft carrier. OK, you want to speculate that could change? We’re in science fiction then, and I have no answer for you. For now, practical autonomy with freedom to act would only work with agents that were externally controlled.
Going back to Asimov and his three laws, the goal is worthy, but it’s founded on a preconception of a robot, an automaton, as something that itself possesses human-like intelligence and therefore must have some internal mechanism of control. If autonomy has to be controlled remotely, by our super-cluster form of AI, then the robot itself isn’t the problem. But neither is our super-cluster, because without the associated agent it’s left with manipulation as its only offensive weapon. It’s the connection between the two that creates the risk. If we have a super-cluster that can actually control agents, robots or whatever, and make them truly autonomous, then the cluster can impact our world, and us.
What this means, for those who want insurance against an AI-generated human extinction, is that we don’t really need to regulate AI at all, we need to regulate the linkage between AI and autonomous things. AI can’t crash our cars or planes, can’t turn off our heat and light, can’t dispense fatal doses of drugs in hospitals or sink ships, unless it first has control over those things, and second has that control without an easy override. Switch off the autopilot. Provide secondary, parallel systems that offer a check on autonomous behavior going totally amok. My car will auto-steer and auto-brake, as long as I have my hands at least loosely on the wheel and my eyes on the road. Otherwise it will beep at me and eventually disengage.
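To make that “easy override” idea concrete, here’s a minimal sketch, in Python, of the kind of parallel check I’m describing: an independent supervisor that only lets automation keep acting while a human is confirmed engaged, and otherwise warns and then disengages. The class names, signals, and thresholds are purely illustrative assumptions on my part, not any real vehicle’s software.

```python
# Hypothetical sketch of a parallel "override" check on an autonomous agent.
# A supervisor independent of the automation gates its authority on human
# engagement; names and thresholds here are illustrative, not a real API.

from dataclasses import dataclass


@dataclass
class DriverState:
    hands_on_wheel: bool
    eyes_on_road: bool


class AutonomySupervisor:
    """Parallel check that conditions autonomous control on human engagement."""

    def __init__(self, warn_limit: int = 3):
        self.warn_limit = warn_limit   # warnings allowed before forced disengage
        self.warnings = 0
        self.engaged = True            # whether autonomy is currently permitted

    def check(self, driver: DriverState) -> str:
        if driver.hands_on_wheel and driver.eyes_on_road:
            self.warnings = 0          # human engaged: reset and allow autonomy
            return "autonomy_ok"
        self.warnings += 1
        if self.warnings >= self.warn_limit:
            self.engaged = False       # hard override: hand control back to the human
            return "disengage"
        return "beep"                  # warn the human before cutting autonomy off


# Example: three consecutive inattentive readings force a disengage.
supervisor = AutonomySupervisor()
for reading in [DriverState(hands_on_wheel=False, eyes_on_road=True)] * 3:
    print(supervisor.check(reading))   # beep, beep, disengage
```

The point of the sketch is only that the check lives outside the automation it supervises, so the automation can’t quietly remove its own off switch.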
And yes, you could speculate that our super-cluster could defeat those parallel safety systems, but if you do you’re adding to a chain of improbable things that all have to come to pass in order for there to be a risk, or at least a risk that’s less probable than other risks we already face. It’s easier to imagine a hacker gaining control of a car, or aircraft, or hospital, than to imagine the full chain of conditions that would have to be met for AI to use those things against us. We don’t need AI to generate that risk; all we need is autonomy in some form, and we have that in many trivial forms already. Self-driving cars, smart buildings…the list is growing literally every day.
We’ve misjudged AI, and that’s bad. Our misjudgment could lead us to setting the wrong regulatory policies, which is worse, because not only might that constrain something truly useful, it could let something that seems harmless, like “automatic” things, get its nose under the tent. It doesn’t take human intelligence to create automation threats to humans, but it may take human intelligence to address them responsibly and effectively.