More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco, soon after he’d left his role as president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.
At the time, Altman described OpenAI’s potential in language that sounded outlandish to some. He said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so great that if OpenAI managed to crack it, the outfit could “maybe capture the light cone of all future value in the universe.” He said that the company was “going to have to not release research” because it was so powerful. Asked if OpenAI was guilty of fear-mongering — Musk has repeatedly called for all organizations developing AI to be regulated — Altman talked about the dangers of not thinking about “societal consequences” when “you’re building something on an exponential curve.”
The audience laughed at various points in the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.
Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.
Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.”
We talked with him yesterday about some of his concerns, and why he believes OpenAI is driving the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.
Our chat has been edited for length and clarity.
TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?
PK: I’ve played with these conversational user interfaces and AI services in the past, and this obviously is a huge leap beyond them. And what troubled me here in particular is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but across pretty much any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this voracious beast and spit back out again, without compensation to whoever created whatever was used to train it.
I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.
It does feel like it could eat up the world.
Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work? Because this is a kind of broader phenomenon.’ But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector, whereas that’s exactly what we not only can expect but should expect here.
Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has been talking about AI as an existential threat for a long time. But people carped that he didn’t know what he was talking about. Now we’re confronting this powerful tech, and it’s not clear who steps in to address it.
I think it’s going to start out in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something with such consequentiality. So in the same way that the FTC demanded that people running blogs years ago [make clear they] have affiliate links and make money from them, I think at a trivial level, people are going to be forced to make disclosures that ‘We wrote none of this. This is all machine generated.’
I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there’s going to be a broader DMCA issue here with respect to this service.
And I think there’s the potential for a [massive] lawsuit and settlement eventually with respect to the consequences of the services, which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.
What’s the thinking at MIT?
Andy McAfee and his group over there are more sanguine and have a more orthodox view that anytime we see disruption, other opportunities get created; people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound that we think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.
But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists looking at this that the economy will adapt, and people in general will benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].
You talked about high school and college essay writing. One of our kids has already asked — theoretically! — if it would be plagiarism to use ChatGPT to author a paper.
The purpose of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t give people homework assignments because we no longer know whether they’re cheating or not, that means everything has to happen in the classroom and must be supervised. There can’t be anything we take home. More stuff must be done orally, and what does that mean? It means school just became much more expensive, much more artisanal and much smaller, at the exact time that we’re trying to do the opposite. The consequences for higher education’s ability to actually deliver a service are devastating.
What do you think of the idea of universal basic income, or enabling everyone to participate in the gains from AI?
I’m a much less strong proponent than I was pre-COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens whenever people don’t have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there’ll be a lot of idle hands and a lot of deviltry.