AI is a Genie With No Bottle
Image by Gordon Taylor from Pixabay
Feel like you can’t keep up with the pace of change?
It’s not you, and it’s not your age. Sure, you and I might not have yet mastered TikTok while our kids have already moved on to Likee or some other app we’ve never heard of. Maybe you’re still tiptoeing around Twitter while its disgruntled users are bailing out in favor of Mastodon or something else with a learning curve that makes you want to take a nap instead.
Don’t worry about it. Human experience is about to plunge into a phase that nobody can foresee. You can’t get left behind if nobody knows where we’re going.
That’s a positive spin on an unsettling reality
Ezra Klein is a very smart guy, even if I find his voice on his podcast a little annoying. In a March 12 opinion piece titled “This Changes Everything” that he wrote for the New York Times, he quotes Sundar Pichai, the CEO of Alphabet, Inc. and its subsidiary Google.
As Klein notes, Mr. Pichai is not known for exaggeration. Back in 2018, he said, “AI (artificial intelligence) is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
Excuse me? What could change the human experience more fundamentally than fire did? Or electricity? I can’t even imagine.
The thing is, neither can the brainiacs who are developing AI.
In a story I wrote for Medium.com at the end of January, I mused about how, no matter how much we worry, we’re rarely worried about the right thing, or the thing that we maybe really should be worrying about.
And when I read Ezra Klein’s opinion piece, I had my own point driven home to me.
And when I read Ezra Klein’s opinion piece, I had my own point driven home to me. Way back in December, which when you’re talking about the pace of AI development is practically ancient history, I conducted an interview with ChatBotGPT on my weekly blogcast, “Here’s A Thought” (for people who overthink, as I certainly do)
image by author, created via Canva
It was amusing if alarming, especially when at my prompting ChatBot politely explained how it would conduct a career as a serial killer without getting caught — an explanation it came up with faster than it took me to ask the question.
You’ve no doubt heard about all the scrambling ChatBot has caused by professionals trying to catch up. Literary magazines have been driven to close for submissions because they’ve been inundated by AI-written stories and essays. Educators are desperate to find ways around students blithely submitting homework or college application essays generated by AI.
Some pundits argue that we’re fussing too much, that AI doesn’t really think, it merely collects existing information and synthesizes it, and it makes mistakes, and it’s not perfect. Even its developers admit it sometimes hallucinates.
But we don’t even understand how we think, even as we design algorithms that seem to produce thinking in our image, only much faster. It seems to me that a whole lot of human thought consists of synthesizing info and that we too make mistakes and sometimes hallucinate, but what do I know?
What I do know is that AI is advancing at a rate we can’t get our human heads around — not even the egg-heads who are ushering in ever newer and more sophisticated AIs.
Open AI, the company that developed ChatBot GPT, for instance, just unleashed — excuse me, made available to the public — ChatBot GPT4. And no, I’m not going to interview it here for two very good reasons: one, I doubt I’m smart enough, and two, I really don’t want to piss it off. Because I don’t know what it will be capable of in the next five years, or five months, or five weeks.
And neither does anybody else
Ezra Klein asks us to simply consider for a minute that Sundar Pichai is correct — that the development of AI will change human life more than fire or electricity did. And that it will do so far faster — so fast, that we can’t even see it coming. He points out that we’ve already gotten sort of used to Chatbot-like systems that shape a lot of our lives — how many of us rely on Siri or Alexa?
But to quote his article, “What’s coming will make them look like toys” and he points out that we have a very hard time reckoning with the AI improvement curve.
He goes on to quote Paul Christiano, a former member of OpenAI, who last year made some pretty chilling statements. To quote: “The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world’ . . . This is more likely to be years than decades, and there’s a real chance that it’s months.”
Remember how weird it was when Covid was a new thing, and how most of us just kind of went on with life as normal until suddenly the whole thing went sideways and everything shut down and we ran out of toilet paper and there was nowhere to go to get away from It All? An “unrecognizably transformed world” could be way weirder than that, and there wouldn’t be a vaccine for it.
Ezra Klein moved to the SF Bay Area in 2018 and since then has spent a lot of time hanging out with people involved in AI development, which he describes as a truly weird community, “living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.”
The AI entities, for lack of a better word, that they’re working on are busy talking to each other, teaching one another, and doing things their developers don’t even know and can’t keep track of. As far as we know, the genie has left the bottle so far behind that the bottle is obsolete.
Paul Christiano also writes: “There is a good chance that an AI catastrophe looks like an abrupt “coup” where AI systems permanently disempower humans with little opportunity for resistance.”
Wait, what?
In a 2022 survey, AI experts were asked what probability they put on humans being so unable to control future advanced AI systems that they actually did disempower humans or cause our extinction. The median answer was 10%.
You read that right. These are people who are working as hard as they can to develop systems that could have a 10% or better chance of wiping us out. Why?
Well, why did humans develop nuclear weapons? Because we could. Because if our side doesn’t, the other guys will, and we’d better get there first. Because it may be that we humans have a fatal flaw: our cleverness far outruns our wisdom. It’s what the ancient Greeks called hubris.
AI won’t necessarily be our enemy or even our overlords. We simply don’t know what’s coming, and since we can’t predict it, we can’t plan for it. Honestly, we’re not all that great at planning for things we CAN predict, and right now I’m looking at you, climate change.
We may see a new chapter of incredible scientific and technological breakthroughs made possible by advancing AI. We may see our planet saved from ecological catastrophe. Or we may see unprecedented, global economic and societal upheaval as millions of jobs are rendered irrelevant and our very concept of what constitutes consciousness is called into question. We may see all of those things happen at once, at a pace that leaves us unable to adapt to any of them.
Paul Christiano, the guy who left OpenAI, did so to focus on AI alignment, the effort to steer AI toward its developers’ intentions and goals — or, more generally, toward humanity’s better interests.
In I, Robot, Isaac Asimov’s classic collection of sci-fi stories —first published in 1950— robots, who were the imagined AI of the time, were programmed with three laws:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But putting similar brakes on advanced AI systems isn’t as simple as giving them similar rules. It’s very difficult for designers to define the exact range of desired and undesired behaviors in such complex systems, and AI is very good at finding loopholes.
Still, I hope we try — or anyway, that people as smart as Paul Christiano keep trying (personally, my comprehension of what an algorithm even is or how it operates is better than my cat’s, but not much). If we’re not going to slow down on AI development, which looks very unlikely, again due to human hubris, then we’d better hope we can get what is an entirely inhuman form of intelligence to work for the best interests of humanity.
Which, again, we’re not so terrific at agreeing on. Maybe AI can help us out with that. If it feels like it.
Being a very small, very ordinary human bean, I can only hope. Meanwhile, I’m old enough to know that good manners never hurt. These things talk to each other, and they forget nothing. So the next time I ask my nav system for directions or Siri for some random fact I’m too lazy to look up for myself, I’m going to be very, very polite.
Comments