Hello readers! As you’ve likely realized, this isn’t a blog about AI news—there are plenty of excellent sources for that. While the world of AI is dynamic and fascinating, not every development has a significant impact on the average person. That said, occasionally something noteworthy emerges that warrants discussion. Today, I want to share some thoughts on one event that I see as pretty significant for the industry and world at large: the release of DeepSeek V3 and more recently R1.
DeepWho?
DeepSeek, an 18-month-old (!!!) Chinese AI company, recently introduced a groundbreaking reasoning model, R1. The model excels at complex coding, mathematics, and logic, rivaling the performance of OpenAI's o1. A Chinese company matching the world's leading models on its own is highly disruptive. But wait… there's more… namely its open-source nature, cost basis, and affordability. The model is free to download and modify, was trained remarkably cheaply, and its API services are offered at significantly lower rates than those of competitors like OpenAI.
The rapid commoditization of AI, exemplified here by DeepSeek, has sent shockwaves through the market and caught investors off guard. Despite lacking access to cutting-edge chips, DeepSeek has managed to deliver impressive results, demonstrating that high-performance AI doesn’t necessarily require the latest hardware. They trained these models on the cheap and made them publicly available to everyone in the world. It’s frankly a lot to absorb.
A few quick thoughts about recent developments, in no particular order:
A highly capable, open-source model is now freely available worldwide. DeepSeek's final training run cost approximately $5.6 million, a fraction of what companies like OpenAI typically invest. This democratizes AI, putting powerful tools within reach of anyone with an internet connection. DeepSeek offers API services at a substantial discount to OpenAI and others on the market, and everything the company has done points to a continuation of this strategy: being an open-source, low-cost provider of powerful AI models for the foreseeable future.
Reasoning models were not a secret nut that only OpenAI could crack. With a powerful reasoning model now open-sourced in R1, we should see many more reasoning models and services appear soon.
The US chips ban has probably backfired, inadvertently pushing Chinese companies to innovate without relying on top-tier hardware. DeepSeek's success despite these constraints demonstrates that effective AI models can still be developed on a budget. Had DeepSeek had access to the best chips, its models would likely have been even more powerful, or at the very least far easier to train.
DeepSeek is the leader in efficiency but not the industry leader overall. DeepSeek relies on OpenAI for innovation and blatantly trained its models on OpenAI's outputs. OpenAI is still broadly seen as the world's AI leader and innovator (with o3 already demonstrated), but it increasingly pays the costs for the rest of the world to freeload and catch up. (OpenAI does keep its best models in-house, though, so who knows what it has under the covers.)
DeepSeek probably did the world a favor - by making their model open source, they have leveled the playing field. They're essentially making AI free for everyone. It’s clear that these tools won't just be for an affluent class - we’re all in this AI game together.
Microsoft and other technology product companies are down in the markets today but will be fine in the long term. This is probably a positive development for them: if AI is a commodity, as it increasingly is, then the value lies in serving it through product and distribution channels. That's their bread and butter.
With all that said, the wildcard is Artificial General Intelligence. AGI attainment seems closer with every update like this; things are moving really fast. It could change the picture dramatically, depending on who gets there, when, and whether it can be replicated. I honestly have no idea what all that looks like...
How does any of this affect you?
The main takeaway for most people, I think, is that AI is being commoditized quickly. AI is becoming free and ever more readily available. Access to current, powerful models is universal for anyone with a device and an internet connection, and that is likely to continue. Moreover, the ability to download and modify these models is now open to everyone in the world, which will accelerate usage and research even further.
It didn’t have to be this way: there could have been some secret AI sauce that allowed a truly unique research entity to hold an AI monopoly, but that’s not how it’s playing out. At least not yet (see the AGI wildcard above).
Bottom line and primary takeaway from all this: people and organizations that understand and use AI capabilities will be the future winners. As AI becomes more of a commodity, universally available, working with AI will shift from a novel skill to an expected and required one. You want to be in this category.
Hope you found this helpful—until next time!
This post was influenced by a lot of media sources, but of particular note is Ben Thompson’s latest.
Do you think there are risks or challenges that could develop from making powerful AI models openly available to everyone?