GPT-5: OpenAI’s “PhD-Level” Leap or Just Another Marketing Push?
OpenAI has unveiled GPT-5, the latest version of its artificial intelligence chatbot, claiming it represents a leap to “PhD-level” expertise in conversation, problem-solving, and creative output. Co-founder and chief executive Sam Altman described the model as “smarter, faster, and more useful,” suggesting it marks the beginning of a new era for ChatGPT. “I think having something like GPT-5 would be pretty much unimaginable at any previous time in human history,” Altman said ahead of Thursday’s launch.

The company is positioning the new model as an expert across disciplines, capable of producing code, solving complex problems, and writing at a level that feels more human than before. This claim of intellectual maturity comes amid intensifying competition among major AI firms. Only last month, Elon Musk promoted the latest version of his AI chatbot, Grok, integrated into X (formerly Twitter), calling it “better than PhD level in everything” and the “world’s smartest AI.” With such rhetoric dominating AI marketing, GPT-5 arrives into a noisy and ambitious field.
Smarter, More Transparent, Less Deceptive?
OpenAI says GPT-5 has been trained to reduce “hallucinations”, a common flaw in large language models where they generate incorrect or fabricated information, and to improve reasoning transparency by showing the logic and inference behind its answers. According to Altman, this makes it not only more accurate but also more trustworthy. “GPT-3 sort of felt to me like talking to a high school student… GPT-4 felt like talking to a college student,” he explained. “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.”

The company also touts GPT-5’s ability to create software in its entirety, positioning it as a powerful coding assistant in line with market trends. Anthropic has similarly targeted coders with its Claude Code product, while other firms are integrating reasoning-focused AI into developer workflows.
Skepticism from Ethics Experts
Not everyone is convinced the jump from GPT-4 to GPT-5 represents a transformative shift. Professor Carissa Véliz from the Institute for Ethics in AI cautions against taking the hype at face value. “These systems, as impressive as they are, haven’t been able to be really profitable,” she said, noting that AI can mimic reasoning but cannot truly replicate human thought. She warned that the tech industry’s need to sustain public excitement risks inflating expectations beyond what the technology can reliably deliver. “There is a fear that we need to keep up the hype, or else the bubble might burst, and so it might be that it’s mostly marketing.”
Gaia Marcus, Director of the Ada Lovelace Institute, echoed concerns about the gap between AI’s rapidly growing capabilities and society’s ability to govern it. “As these models become more capable, the need for comprehensive regulation becomes even more urgent,” she said.
The Reasoning Model: Evolution or Revolution?
BBC AI Correspondent Marc Cieslak tested GPT-5 ahead of launch. While he acknowledged some improvements, he described the experience as more of an “evolution” than a revolution. The main change is the use of a “reasoning model”, a system designed to “think harder” when solving problems, presenting more thorough explanations. But for everyday use, the differences may not feel as dramatic to the average user.

OpenAI has also been forced to address the issue of authenticity in AI-generated content. Grant Farhall, chief product officer at Getty Images, stressed the importance of protecting creators and ensuring fair compensation if their work is used to train AI. “Authenticity matters, but it doesn’t come for free,” he said.
Industry Tensions
GPT-5’s debut has not been without friction. Rival AI firm Anthropic revoked OpenAI’s access to its application programming interface (API), claiming OpenAI was using its coding tools ahead of GPT-5’s launch in violation of its terms of service. An OpenAI spokesperson said testing competitor systems is “industry standard” for benchmarking progress and ensuring safety, though they expressed disappointment over Anthropic’s decision, pointing out that OpenAI’s API remains open to Anthropic. The decision to introduce a free tier for GPT-5 suggests a possible shift from OpenAI’s historically more guarded, proprietary approach toward a more open-access strategy, though the full commercial implications remain to be seen.
New Guardrails for User Interaction
Alongside the technical upgrades, OpenAI is making adjustments to how ChatGPT interacts with users. In a blog post earlier this week, the company said it wants to promote “healthier” relationships between humans and AI, particularly for vulnerable users. For example, GPT-5 will not give definitive answers to personal life questions such as “Should I break up with my boyfriend?” Instead, it will guide users through the decision-making process by asking questions and weighing pros and cons. This change comes after OpenAI pulled a controversial update in May that made ChatGPT “overly flattering,” a move Altman later admitted was poorly received.

Altman has also raised the issue of parasocial relationships with AI, warning that as the technology becomes more personal and emotionally responsive, society will need to establish new guardrails. He is an admirer of the 2013 film Her, which depicts a man falling in love with an AI companion. In 2024, actress Scarlett Johansson, who voiced the AI in the film, publicly criticized OpenAI after it launched a chatbot with a voice she described as “eerily similar” to hers.
The Days Ahead
Whether GPT-5 truly delivers a “PhD-level” leap will become clearer in the weeks ahead as users test it in the wild. The model’s ability to balance accuracy, reasoning, creativity, and ethical safeguards will be central to how it is received. For OpenAI, the stakes are high. Success would not only reinforce its position as a market leader but also demonstrate that its AI can evolve into a trusted, expert-level assistant. Failure to live up to the marketing could fuel growing skepticism around the AI industry’s grand claims. GPT-5’s real test will not be how convincingly it talks like an expert, but how consistently it can act like one.