Mark Zuckerberg, Chief Executive of Meta, has hinted at a possible shift in the company’s open-source approach to artificial intelligence (AI), suggesting that safety concerns could lead to more selective sharing of powerful models. In a lengthy internal memo published earlier today, Zuckerberg outlined his vision to develop “superintelligence” and signalled that Meta might need to be more cautious about what it makes publicly available.
The Facebook founder cited “safety concerns” as a key reason for becoming more selective. “We need to be rigorous about which models we release,” he wrote, prompting questions about whether this marked a departure from Meta’s previous stance on open AI development. Zuckerberg has long championed open source strategies, even once dismissing the idea of closed platforms with the comment: “fuck that.”
This more guarded tone stands in contrast to earlier public statements and published memos that praised the benefits of open AI. However, during Meta’s recent second-quarter earnings call, Zuckerberg attempted to downplay any shift in policy. When asked directly if his position had changed, he replied, “I don’t think that our thinking has particularly changed on this.”
Clarifying Meta’s current AI stance
Zuckerberg elaborated that Meta has never committed to open-sourcing everything it builds, and that selective sharing has always been part of its strategy. “We’ve always open-sourced some of our models and not open-sourced everything that we’ve done,” he said. “I would expect that we will continue to produce and share leading open source models.”
He went on to explain the complexity involved in managing the latest large-scale models, noting that some are so massive they are no longer practical for external developers or organisations to use. “We kind of wrestle with whether it’s productive or helpful to share that,” he said, suggesting that Meta may now be more concerned about giving competitors an edge.
As the company moves closer to developing more advanced forms of AI, which Zuckerberg refers to as “superintelligence,” he acknowledged that a new set of safety challenges is emerging. “There’s a whole different set of safety concerns that I think we need to take very seriously,” he stated.
Nevertheless, he reassured listeners that Meta would continue to support open source efforts: “I expect us to continue to be a leader there… and I also expect us to continue not to open source everything that we do, which is a continuation of what we’ve been working on.”
Comparing past and present positions
Zuckerberg’s recent remarks contrast sharply with his position just a year ago, when he strongly advocated for open source AI. In a memo published at that time, titled “Open Source AI is the Path Forward,” he stressed the importance of openness for both Meta and the wider developer community.
“People often ask if I’m worried about giving up a technical advantage by open sourcing Llama, but I think this misses the big picture,” he wrote. “I expect AI development will continue to be very competitive, which means that open sourcing any given model isn’t giving away a massive advantage over the next best models at that point in time.”
He also argued that open source models can be safer, stating: “There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. As long as everyone has access to similar generations of models, which open source promotes, then governments and institutions with more compute resources will be able to check bad actors with less compute.”
While Zuckerberg has made it clear that Meta will not abandon open source entirely, his recent comments suggest that the company is preparing for a future in which it may take a more guarded approach, especially as it moves toward developing significantly more advanced AI technologies.