Covering Scientific & Technical AI | Saturday, January 18, 2025

Sam Altman Says No GPT-5 This Year in Reddit AMA 

A small team of OpenAI executives participated in an “Ask Me Anything” (AMA) session on Reddit today. The company’s CEO, Sam Altman, and CPO, Kevin Weil, fielded most of the questions, with other executives chiming in as well.

OpenAI's AMA post encouraged users to “Ask us anything about” topics like the new o1 and o1-mini models, the future of AI agents, AGI, and what’s coming next.

Here are a few highlights:


Timeline for GPT-5, o1 Full Release and AVM Vision 

Reddit User laggymaster: “Release date of ChatGPT-5 or its equivalent? What are its features?” 

CEO Sam Altman: “We have some very good releases coming later this year! Nothing that we are going to call GPT-5, though.” 

Reddit User Alternative-Alarm-95: “Any timeline on when we'll get AVM (Advanced Voice Mode) Vision? Why is GPT-5 taking so long? What about full o1?” 

Altman: “We are prioritizing shipping o1 and its successors. All of these models have gotten quite complex, and we can't ship as many things in parallel as we'd like to. (We also face a lot of limitations and hard decisions about how we allocate our compute across many great ideas.) Don't have a date for AVM vision yet.”

New Text-to-Image Model? 

Reddit User SkibidiMog: “When will you guys give us a new text-to-image model? Dalle-3 is kinda outdated.” 

Altman: “The next update will be worth the wait! But we don't have a release plan yet.”


Moving Toward a Closed Model and the End of Hallucinations?

Reddit User Available-Resort-951: “OpenAI has shifted from a more open-source approach to a more closed model in recent years. Can you elaborate on the reasoning behind this change, and how you weigh the trade-offs between openness and the potential risks associated with widely accessible advanced AI technologies? Is it inevitable in the long run that powerful models end up in the hands of bad actors?”

Altman: “I think open source plays an important role in the ecosystem and there are great open source models in the world. We also think there’s an important role in the world for powerful and easy-to-use APIs and services, and given what we are good at, we see an easier way to hit the safety threshold we want to hit this way. We are pretty proud of how much value people get out of our services.

“I would like us to open source more stuff in the future.”

Reddit User Only-Tells-The-Truth: “Are hallucinations going to be a permanent feature? Why is it that even o1-preview, when approaching the end of a ‘thought,’ hallucinates more and more?

“How will you handle old data (even two years old) that is now no longer ‘true’? Continuously train models or some sort of garbage collection? It’s a big issue in the truthfulness aspect.”

SVP of Research Mark Chen: “We're putting a lot of focus on decreasing hallucinations, but it’s a fundamentally hard problem - our models learn from human-written text, and humans sometimes confidently declare things they aren’t sure about.

“Our models are improving at citing, which grounds their answers in trusted sources, and we also believe that RL will help with hallucinations - when we can programmatically check whether models hallucinate, we can reward them for not doing so.”


OpenAI Agentic AI ... Coming Soon? 

Reddit User potato3445: “Will ChatGPT eventually be able to perform tasks on its own? Message you first?”

CPO Kevin Weil: “IMHO this is going to be a big theme in 2025.”

Reddit User demondehellis: “What's one thing you wish ChatGPT could do but can't yet?”

VP of Engineering Srinivas Narayanan: “I'd love for it to understand my personal information better and take actions on my behalf.”

Reddit User Ok_Course6476: “What's the next breakthrough in the GPT line of products, and what's the expected timeline?”

Altman: “We will have better and better models, but I think the thing that will feel like the next giant breakthrough will be agents.”

ChatGPT Plus Context Increase 

Reddit User Ok-One4382: “When will you increase the context window for the Plus version?”

Weil: “Working on it! I'm excited for longer context.”

Reddit User Mediocre_Line7407: “Hello, I would like to ask when the token context window of GPT-4o will be increased. In my opinion, 32k is way too small for longer coding or writing tasks compared to other AI models out there.”

Weil: “Agree. We're working on it!”

AGI 

Reddit User Used_Steak856: “Is AGI achievable with known hardware or will it take something entirely different?”

Altman: “We believe it is achievable with current hardware.”

Reddit User Repulsive-Outcome-20: “Once AGI is achieved, what's the first thing you would like to apply it to? Is there a certain field on speed dial for that moment?”

Narayanan: “I'd love for it to accelerate scientific discovery. I'm personally very interested in health/medicine.”

One of the many “What did Ilya see?” memes.

Altman Addresses Sutskever Departure

For the final highlight in this list, Altman addressed the “What did Ilya see?” meme that emerged after the departure of Ilya Sutskever, co-founder and former chief scientist of OpenAI, in May. He was succeeded by Jakub Pachocki. The meme suggests that Sutskever may have witnessed something alarming within OpenAI's AI developments or leadership.

Reddit User vigneshwarar: “Seriously though — what did Ilya see?”

Altman: “The transcendent future.

“Ilya is an incredible visionary and sees the future more clearly than almost anyone else. His early ideas, excitement, and vision were critical to so much of what we have done. For example, he was one of the key initial explorers and champions for some of the ideas that eventually became o1.

“The field is very lucky to have him.”

These questions and responses have been lightly edited for grammar and punctuation. Visit the full AMA on Reddit for the complete picture.

AIwire