How OpenAI Trained to Beat the World’s Best Coders: Interview With Research Lead Ahmed El-Kishky


TechnologyAdvice's Corey Noles (left) and Grant Harvey (right) interviewed OpenAI Lead Researcher Ahmed El-Kishky on Sept. 24, 2025 for The Neuron podcast. Source: The Neuron/YouTube

Written By
Megan Crouse
Oct 1, 2025

OpenAI and Google DeepMind stood out at the International Collegiate Programming Contest world finals in Baku, Azerbaijan, last week, with OpenAI’s models solving all 12 of the problems at the world’s top university coding competition.

In attendance was OpenAI Research Lead Ahmed El-Kishky, who spoke to our colleagues at The Neuron in a podcast interview about the win and what OpenAI learned from the competition. 

OpenAI’s preparation for the contest 

Generative AI has made significant progress in competitive math and programming over the last year. Earlier models, such as GPT-4, did so poorly that their attempts would crash the sandbox computer used in the competitions, El-Kishky said.

This year, “people were nervous” on the small OpenAI team that traveled to Azerbaijan, El-Kishky noted. 

Still, circumstances were very different from those in 2024. This time, the researchers had already trained the models they would use for the competition, including models refined with reinforcement learning.

A week before the event, the team did a dry run, “spending long nights trying to make sure everything’s working,” El-Kishky said. 

Two models tackled the problems: GPT-5 and an experimental reasoning model that is not publicly available.  

Testing GPT-5 against a new reasoning model

“In this case, we knew that GPT-5 can go a long way, but we wanted to not just limit it there,” said El-Kishky. “We wanted to also test how well reasoning models perform. So we tried out both.” 

Both models attempted each problem multiple times, and the experimental reasoning model decided which solution to submit. GPT-5 was faster, but the experimental model was more powerful: GPT-5 couldn't complete the last problem, while the experimental model solved it on its second attempt.
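El-Kishky did not describe the submission harness in detail, but the setup he outlines, where both models draft several candidate solutions and the experimental reasoning model picks the one to submit, can be sketched roughly as follows. Everything in this sketch, including the function names, the selector, and the attempt counts, is a hypothetical illustration rather than OpenAI's actual code.

```python
# Hypothetical sketch of the two-model setup described in the interview:
# each model drafts several candidate solutions per problem, and the
# stronger reasoning model selects the single candidate to submit.
# All names and parameters here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    model_name: str
    source_code: str


def solve_problem(
    problem_statement: str,
    draft_with_gpt5: Callable[[str], str],           # assumed: returns candidate source code
    draft_with_reasoner: Callable[[str], str],       # assumed: returns candidate source code
    select_with_reasoner: Callable[[str, List[Candidate]], Candidate],  # assumed selector
    attempts_per_model: int = 3,
) -> Candidate:
    """Collect candidates from both models, then let the reasoning model pick one."""
    candidates: List[Candidate] = []
    for _ in range(attempts_per_model):
        candidates.append(Candidate("gpt-5", draft_with_gpt5(problem_statement)))
        candidates.append(Candidate("experimental-reasoner", draft_with_reasoner(problem_statement)))

    # Per the interview, the experimental reasoning model made the final call on what to submit.
    return select_with_reasoner(problem_statement, candidates)
```

In practice such a selector might also run candidates against the sample test cases before choosing, but the interview only confirms that the experimental model made the final submission decision.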

The competition was an interesting example of problems that are difficult for humans but not so difficult for an AI — and vice versa, El-Kishky said. 

It also showed how far AI reasoning capabilities have come. Reasoning models and models trained with reinforcement learning perform better because their approach resembles human problem-solving, El-Kishky explained.

“When you have a difficult problem, no human just immediately gives an answer,” he said. “If I asked you to multiply a four-digit number by a five-digit number, you wouldn’t give me an answer instantly.” 

The breakthrough was reinforcement learning, which lets models "mimic how a human thinks through these problems. They try things out, and maybe they go down wrong directions to dead ends, they course correct," El-Kishky said.
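As a loose illustration of that try-test-course-correct loop, and not anything El-Kishky described in implementation terms, an agent working a programming problem might iterate as shown below; the `propose_solution` and `run_sample_tests` callables are purely hypothetical.

```python
# Loose illustration of trial-and-error with course correction, the behavior
# El-Kishky attributes to reasoning models. Names below are hypothetical.
from typing import Callable, List, Optional, Tuple


def iterate_until_passing(
    problem: str,
    propose_solution: Callable[[str, List[str]], str],    # assumed: takes problem + past feedback
    run_sample_tests: Callable[[str], Tuple[bool, str]],  # assumed: returns (passed, error report)
    max_attempts: int = 5,
) -> Optional[str]:
    """Try a solution, test it, and feed failures back as context for the next attempt."""
    feedback: List[str] = []
    for attempt in range(max_attempts):
        code = propose_solution(problem, feedback)
        passed, report = run_sample_tests(code)
        if passed:
            return code  # a candidate that survives the sample tests
        # Course correction: the failure report becomes context for the next try.
        feedback.append(f"Attempt {attempt + 1} failed: {report}")
    return None  # no passing candidate within the attempt budget
```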

Programming competitions are a chance to benchmark AI progress 

True to OpenAI’s mission statement, El-Kishky said the real purpose of AI models is to advance human knowledge, not to win programming contests.

“The competitions are not really an end goal in themselves,” El-Kishky said. “They’re just sort of a way to benchmark progress.” 

The experimental model used in the competition won’t be released to the public, but the research behind it may one day be applied to other models and, eventually, to ChatGPT. 

Teaching models to code autonomously is an important research topic for the company as a whole, El-Kishky noted. OpenAI aims to develop AI agents that can operate independently for days, weeks, or months. 

More from El-Kishky 

Check out The Neuron’s full interview with El-Kishky, which covers what it’s like to work at OpenAI and other competitions the company’s employees have participated in.

Spotlight on another episode of The Neuron podcast: Microsoft’s AI chief Mustafa Suleyman outlines his vision for the future of artificial intelligence and the responsibilities that come with it. His remarks could shape industry expectations on innovation and governance.


Megan Crouse has a decade of experience in business-to-business news and feature writing, including as first a writer and then the editor of Manufacturing.net. Her news and feature stories have appeared in Military & Aerospace Electronics, Fierce Wireless, TechRepublic, and eWeek. She copyedited cybersecurity news and features at Security Intelligence. She holds a degree in English Literature and minored in Creative Writing at Fairleigh Dickinson University.
