The Disturbing Sam Altman Interview That Should Alarm Us All
It's a bad sign that OpenAI's CEO can't answer basic questions about his AI's ethics
In the story Alice in Wonderland, the lost little girl meets the Cheshire Cat and says, “Would you tell me which way I ought to go from here?”
“That depends a good deal on where you want to get to,” says the Cat.
“I don’t much care where…” she says.
“Then it doesn’t matter which way you go…”
I couldn’t help thinking about this exchange when I watched Tucker Carlson interview Sam Altman last week. The interview is worth watching in its entirety, especially the part where Altman squirms uncomfortably when asked about the death of Suchir Balaji, a former OpenAI programmer and whistleblower who died under mysterious circumstances.
However, the most critical part of the interview concerns Altman’s responses regarding OpenAI’s morality—or lack thereof. Before getting to those disturbing responses, I’d like to acknowledge the elephant in the room. Carlson is not everyone’s cup of tea, to put it lightly. In recent years, he’s become increasingly polarizing for his views on a host of issues.
That’s not my concern here.
However you personally feel about Carlson, we must engage with the substance of this interview. At one point, Carlson makes the valid point that Altman and his company possess profound power to shape the thinking of billions of people. His company’s product, ChatGPT, is now being integrated into nearly every aspect of government, including the military. It’s being rolled out in educational settings ranging from high schools to preschools.
Meanwhile, many, many people, our youth especially, are using it for innumerable personal reasons—including love advice, therapy, and friendship.
ChatGPT is even reshaping our language. As Newsweek reports: “In the first peer-reviewed analysis to test whether conversational AI systems are influencing how we speak, Florida State University's study showed that some of the words frequently suggested by these tools are surfacing more often in everyday spoken English, due to a ‘seep-in effect.’”
It’s no hyperbole to suggest OpenAI is already massively transforming society. Therefore, Carlson was well within his rights to ask Altman, perhaps one of the most powerful people on earth, just where ChatGPT gets the ethics that inform its many suggestions.
At one point, Carlson asks Altman whom OpenAI consulted to train its AI. (Right before this, Carlson drew a stark comparison between relying for one’s morality on the Gospel of John, on the one hand, and the Marquis de Sade on the other.)
“Uh,” says Altman. “We consulted like hundreds of moral philosophers. People who thought about like ethics of technology and systems and at the end we had to like make some decisions. The reason we try to write these down is because we won’t get everything right. We need the input of the world.”
Altman then offers a very broad, very unhelpful word-salad follow-up:
“And we have found a lot of cases where there was an example of something that seemed to us like you know a fairly clear decision of what to allow or what not to allow where users convinced us like ‘Hey, by blocking this thing that you think is an easy decision to make—um, you are not allowing this other thing which is important and there’s like a difficult trade-off there in general…’
So a principle that I normally like is to treat our adult users like adults with very strong guarantees on privacy, very strong guarantees on individual freedom and this is a tool we are building—you get to use it within a very broad framework…”
I’m not kidding.
This is Altman’s rambling answer to a very basic question: who decided OpenAI’s moral framework?
The first thing that jumps out at me is that Altman is hiding something. Second, these are not the confident answers of a man eager to share his thinking about some of the most important questions of our time.
Instead, his incoherent reply comes across much like the equally bizarre response of Palantir co-founder Peter Thiel when New York Times columnist Ross Douthat interviewed him for his recent podcast.
Here’s how that went:
Douthat: “You would prefer the human race to endure, right?”
Thiel: “Er . . .”
Douthat: “You’re hesitating. Yes..?”
Thiel: “I dunno... I would... I would... erm...”
Douthat: “This is a long hesitation... Should the human race survive?”
Thiel: “Er... yes, but...”
Wait. What?
Let’s remember who Peter Thiel is and what his company does. The billionaire, a veteran of the so-called “PayPal Mafia,” co-founded Palantir in 2003. It’s now run by CEO Alex Karp. Here’s how NPR explains Palantir’s societal significance today:
Palantir—the name comes from the "seeing stones" from "Lord of the Rings"—has been booming: Its stock market valuation has climbed from $50 billion a year ago to approaching $300 billion today. A company that few outside tech and national security circles would recognize is now worth more than Verizon or Disney and nearly as much as Bank of America….
While the company is famously secretive, it does, at times, lift the veil on its technology. Palantir's AI software is used by the Israel Defense Forces to strike targets in Gaza; it's used to assist the Defense Department in analyzing drone footage; and the Los Angeles Police Department relied on Palantir's "predictive policing" tools to forecast crime patterns.
There’s a lot more to say about Palantir and its increasingly cozy relationship with the government, including our military. For the purposes of this article, let us simply acknowledge:
1) OpenAI’s Sam Altman cannot clearly state who/what informs the ethics of his AI product.
2) Palantir’s Peter Thiel is not 100% convinced the human race should continue.
While I am aware that not everyone is a gifted communicator, I am troubled by these recent interviews, and I think you should be too. I created the AI Philosopher to fill a void. The fact is, we are not talking to each other at the very time we should be.
Instead, questionable men like Altman and Thiel are at the helm, shaping society through AI and their own personal philosophies.
To return to the Cheshire Cat: his point is clear even if Alice is not.
It appears humanity is in a similar quandary as Alice. Do we know where we are headed? More importantly, do we know who is driving us there?