A team of researchers at the University of California, Berkeley, has successfully recreated the core technology behind DeepSeek AI for an astonishingly low cost of just $30. Led by Ph.D. candidate Jiayi Pan, the team managed to replicate DeepSeek R1-Zero’s reinforcement learning capabilities using a small language model with just 3 billion parameters.
Despite its modest size, the model demonstrated self-verification and search abilities, allowing it to refine its responses iteratively. To test its problem-solving skills, the researchers used the Countdown game, a mathematical puzzle in which players must reach a target number by combining a set of given numbers with basic arithmetic operations. Initially, the model made random guesses, but through reinforcement learning it learned to revise and optimize its answers.
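Because a Countdown answer can be checked mechanically, the reward signal for reinforcement learning is simple to compute. The sketch below shows one way such a rule-based reward could be implemented, assuming the model emits a plain arithmetic expression; the function names and scoring scheme are illustrative, not taken from the Berkeley team’s code.

```python
import ast
import operator

# Illustrative rule-based reward for the Countdown task: reward 1.0 only
# if the expression reaches the target using each given number exactly
# once. Names and scoring are hypothetical, not the Berkeley team's code.

OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mul: operator.mul,
    ast.Div: operator.truediv,
}

def _eval(node):
    """Evaluate a parsed arithmetic expression, collecting the numbers it uses."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value, [node.value]
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        left, used_l = _eval(node.left)
        right, used_r = _eval(node.right)
        return OPS[type(node.op)](left, right), used_l + used_r
    raise ValueError("disallowed expression")

def countdown_reward(expression: str, numbers: list[int], target: int) -> float:
    """Return 1.0 if the expression hits the target with exactly the given numbers."""
    try:
        value, used = _eval(ast.parse(expression, mode="eval").body)
    except (ValueError, SyntaxError, ZeroDivisionError):
        return 0.0  # unparseable or illegal output earns no reward
    if sorted(used) != sorted(numbers):
        return 0.0  # must use each given number exactly once
    return 1.0 if abs(value - target) < 1e-9 else 0.0

# Example: reach 24 from the numbers 2, 3, and 4.
print(countdown_reward("2 * 3 * 4", [2, 3, 4], 24))  # 1.0
```

A binary reward like this gives no partial credit, which is consistent with the behavior described above: early attempts look like random guessing until the policy stumbles onto verifiably correct answers and learns to reproduce them.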
Pan’s team experimented with different model sizes, noting that a 500-million-parameter model struggled to refine its responses, while a 1.5-billion-parameter model began incorporating revision techniques. Once scaled to between 3 and 7 billion parameters, the models solved problems markedly more efficiently.
The affordability of this recreation raises questions about the true cost of AI development. OpenAI currently charges $15 per million tokens via its API, while DeepSeek offers a far lower rate of $0.55 per million tokens. However, AI researcher Nathan Lambert is skeptical of DeepSeek’s claimed affordability, arguing that its operational costs may run from $500 million to over $1 billion annually.
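For a sense of scale, here is a quick back-of-the-envelope comparison using the per-token rates quoted above; the 10-million-token workload is a hypothetical figure chosen for illustration, not data from either company.

```python
# Back-of-the-envelope cost comparison at the quoted per-token rates.
OPENAI_RATE = 15.00    # $ per 1M tokens (OpenAI API, as quoted above)
DEEPSEEK_RATE = 0.55   # $ per 1M tokens (DeepSeek, as quoted above)

tokens = 10_000_000    # hypothetical monthly usage, for illustration only
print(f"OpenAI:   ${tokens / 1e6 * OPENAI_RATE:,.2f}")    # $150.00
print(f"DeepSeek: ${tokens / 1e6 * DEEPSEEK_RATE:,.2f}")  # $5.50
```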
Additionally, concerns over data privacy and national security have led to DeepSeek being banned in parts of the U.S. Some reports suggest DeepSeek may have been trained on outputs from OpenAI’s ChatGPT, a practice known as distillation, which could help explain its lower training costs. While questions remain, Berkeley’s findings suggest that high-performance AI models could become far more accessible in the near future, potentially disrupting the dominance of AI giants like OpenAI, Google, and Microsoft.