I’ve read a lot recently about the emerging danger of increasingly powerful artificial intelligence. Are there dangers? Of course, but I don’t think we have to worry about machines suddenly deciding it’s in their best interest to end humanity. Here’s why:
The debate first assumes that machines develop a “self-interest” that’s distinct from their programming. Leaving aside all the research demonstrating that the relationship in humans between self-interest, rationality, and intelligence is weak at best, let’s assume that machines do “learn”:
- the need to protect “themselves”;
- acts that can protect them from humans;
- the ability to foresee the consequences of those acts; and
- enough control to execute those acts.
Big ifs, but should all of these circumstances come to pass, we might conclude that we’re doomed.
But when might that happen? The simplest test is this: machines become an extinction threat to humans when machines, in aggregate, can create software faster than humans can.
Why? Because so long as humans can generate software faster than machines can, it is reasonable to assume that humans will be able to write software that can counter any threat posed by machine-written software.
But for how long will we collectively be better at writing software than machines? I don’t know, but I also don’t think it matters, because the nanosecond self-interested machines can write software faster than humans, those machines are going to worry about each other, not us.
The real question regarding AI, then, is: When will machines start attacking each other? Again, I don’t know, but when it happens – when my iPhone decides to take out my iPad – my calendar will go haywire and I’m going to miss a lot of appointments. Sorry for that, in advance.