OpenAI Group PBC is reportedly developing a new artificial intelligence model optimized for audio generation tasks. The Information today cited sources as saying that the algorithm will launch by the ...
OpenAI published a new paper called "Monitoring Monitorability." It offers methods for detecting red flags in a model's reasoning. Those shouldn't be mistaken for silver-bullet solutions, though. In ...
OpenAI wanted GPT-5 to be less warm and agreeable than its predecessor. Some people with conditions such as autism struggled with the change, showing the tricky balance AI companies must strike when ...
After upgrading ChatGPT with its latest GPT-5.2 model last week, OpenAI has rolled out a major improvement to the chatbot’s image generation capabilities, positioning it as a strong competitor to ...
OpenAI is rolling out a new version of ChatGPT Images that promises better instruction-following, more precise editing, and up to 4x faster image generation. The new model, dubbed GPT Image 1.5 ...
OpenAI on Thursday announced GPT-5.2, its most advanced artificial intelligence model. The company said the model is better at creating spreadsheets, building presentations, perceiving images, writing ...
The Walt Disney Company and ChatGPT/Sora parent company OpenAI unveiled one of the most significant generative AI agreements in the technology’s young history today, with the former investing $1 ...
OpenAI on Thursday launched GPT-5.2, an updated version of the model that runs its ChatGPT chatbot and other AI tools. It scores higher than previous models on OpenAI's tests of performance for code ...
Dec 10 (Reuters) - OpenAI on Wednesday warned that its upcoming artificial intelligence models could pose a "high" cybersecurity risk, as their capabilities advance rapidly. The AI models might either ...
OpenAI, Anthropic, and Block have teamed up to co-found a new foundation that promises to help standardize the development of AI agents. The new Agentic AI Foundation (AAIF) will operate under the ...
OpenAI researchers have introduced a novel method that acts as a "truth serum" for large language models (LLMs), compelling them to self-report their own misbehavior, hallucinations, and policy ...