OpenAI has returned to open source and released new AI models.

For the first time since 2019, OpenAI has released open-source language models. The new gpt-oss-120b and gpt-oss-20b neural networks are now available for download on Hugging Face and can run on both powerful GPUs and regular laptops.
gpt-oss-120b is the more powerful of the two models. According to the company, it is capable of running on a single Nvidia GPU, while the lighter gpt-oss-20b version is designed to run on devices with 16 GB of RAM.
Both models are based on a mixture-of-experts (MoE) architecture, which activates only a fraction of the parameters for each query: gpt-oss-120b has 117 billion parameters in total, but only 5.1 billion are active per token. Post-training relied on reinforcement learning in a simulated environment.
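The idea behind sparse activation can be illustrated with a toy routing step. This is a minimal sketch, not gpt-oss internals: the expert count, dimensions, and random router below are all hypothetical, chosen only to show how a token is sent to a few experts while the rest of the parameters stay idle.

```python
# Toy mixture-of-experts routing sketch (illustrative; all sizes are made up).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # router score for each expert
    top = np.argsort(logits)[-top_k:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only top_k of the n_experts matrices are used: most parameters stay idle,
    # which is why total and per-token parameter counts differ so sharply.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
```

In a real MoE transformer the router is learned and the experts are full feed-forward blocks, but the mechanism is the same: a large total parameter count with a much smaller active set per token.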
The new models performed well in tests. In the Codeforces programming competition, they outperformed DeepSeek’s R1, but lost to the closed models o3 and o4-mini. On Humanity’s Last Exam, gpt-oss-120b scored 19%, and gpt-oss-20b scored 17.3%. Their propensity to hallucinate remains high, however: on the PersonQA benchmark, they produced errors in roughly half of the cases.
OpenAI released the models under the Apache 2.0 license, which allows businesses to use them without restrictions, including commercially. However, the company did not disclose the datasets used for training, citing legal risks. OpenAI had previously postponed the release over safety concerns, fearing the technology could be used in cyberattacks.
Amazon has already confirmed that gpt-oss will be available on AWS. Microsoft has introduced versions optimized for Windows devices. According to OpenAI, the new models will be suitable for creating AI agents: they can access web search or run Python code.
The release coincided with reports in the Financial Times that OpenAI is in talks to sell shares at a valuation of $500 billion. If the deal goes through, the startup will surpass SpaceX to become the most valuable private technology company in the world.