OpenAI’s Shift from Openness to Secrecy
OpenAI began with a mission of openness. Its name itself promised transparency. Early on, the company publicly released research and some code. But as its AI systems grew more powerful, OpenAI turned secretive, guarding its best models. This sharp turn from open to closed highlights a growing divide in how AI is built.
Open-Source Catches Up Fast
While some companies locked their models behind closed doors, the open-source community went to work. They iterated quickly, drawing on public research and even leaked model weights. The results were striking.
Models like Vicuna and Mistral began as small projects but in mere months reached performance close to the top proprietary systems. Not long ago, only a tech giant could build a top-tier model; now a small team can achieve similar results by pooling open resources. One open model, DeepSeek, even matched a leading closed AI at a fraction of the cost. Open-source AI caught up to the state of the art in record time, undercutting the claim that only closed labs can build powerful AI. With enough collaboration, open projects can rival or surpass what a single company can do.
AGI as Critical Infrastructure
Artificial General Intelligence won’t be just another app or gadget. If achieved, it will become part of our core infrastructure – as pervasive as electricity or the internet – underpinning countless systems and decisions. We trust infrastructure only when we can inspect and understand it. In short, trust requires transparency.
An AGI that affects millions of lives cannot be a secret black box run by a single corporation. We don’t accept opaque control in other vital systems. When the stakes are high, people must be able to see how and why an AI makes its decisions.
The Black Box Problem
Closed AGI is a mystery by design – outsiders can’t inspect its code or training data. That’s a huge problem when something goes wrong. If a closed AI causes harm or bias, we may never know why. You can’t debug what you can’t see. And if you can’t debug it, you can’t truly trust it.
With open models, developers everywhere can examine the system. They can audit its behavior, trace issues, and propose fixes. Transparency turns a black box into an open book. When code and weights are public, issues that would stay invisible in a closed system can be identified and fixed. For AGI, that difference is the line between safety and blind risk.
Many Eyes Make AI Safer
Open-source AI isn’t just transparent; it’s also collaboratively tested. An open AGI would be probed by thousands of researchers and hobbyists. They’d try countless scenarios, edge cases, and stress tests, uncovering bugs and vulnerabilities no single team could catch on its own.
In software, more testers means more robust code – the same principle applies to AI models. A closed AGI, no matter how brilliant its creators, is tested by a relatively small group under one roof. They will miss things — not for lack of skill but for lack of perspective and sheer numbers. In contrast, an open AGI benefits from the wisdom of the crowd. Problems get flagged early and fixed faster. Safety comes not from secrecy, but from collective vigilance.
Closed West vs. Open East
Ironically, as U.S. labs double down on secrecy, elsewhere the trend is the opposite. Notably, AI teams in China have been openly releasing advanced models. Open-sourcing is now seen as a strategic move: by sharing models freely, researchers gain global users and contributors, accelerating development.
Meanwhile, Western companies that keep models closed may fall behind in adoption. There’s a geopolitical angle too. If cutting-edge AI remains locked behind American corporate firewalls, the rest of the world will gravitate toward alternatives they can access and trust. That could shift AI leadership toward open-source efforts blossoming in Asia and beyond. In a global race, openness can be an advantage, not a handicap.
First to AGI vs. First to Trust
It’s possible the first true AGI will come from a closed project. A heavily funded lab might hit that milestone first. But even if a closed system crosses the finish line first, can the world trust it? A secret model that declares itself the first general intelligence will meet skepticism and fear. Trust isn’t awarded to whoever is first; it’s earned by being accountable. And only an open system can be fully accountable. The world’s first AGI might be closed, but the only AGI people will ultimately accept is one that’s open-source and transparent
In the end, “winning” doesn’t just mean creating an AGI that works. It means creating an AGI that people feel safe using and integrating into society. By that metric, open-source AGI isn’t just better – it’s the only approach that can succeed in the long run. Open-source AGI must win because humanity’s trust is at stake, and trust can’t be won in the dark.