
Meta’s AI research head wants open source licensing to change

In July, Meta’s Fundamental AI Research (FAIR) center released its large language model Llama 2 relatively openly and for free, a stark contrast to its biggest competitors. But in the world of open-source software, some still see the company’s openness with an asterisk.

While Meta’s license makes Llama 2 free for many, it’s still a limited license that doesn’t meet all the requirements of the Open Source Initiative (OSI). As outlined in the OSI’s Open Source Definition, open source is more than just sharing some code or research. To be truly open source, a license must allow free redistribution, provide access to the source code, permit modifications, and must not be tied to a specific product. Meta’s limits include requiring any developer with more than 700 million monthly active users to request a special license from Meta and barring the use of Llama to improve other language models. IEEE Spectrum reported that researchers at Radboud University in the Netherlands called Meta’s claim that Llama 2 is open source “misleading,” and social media posts questioned how Meta could describe it that way. 

FAIR lead and Meta vice president for AI research Joelle Pineau is aware of the limits of Meta’s openness. But she argues that it strikes a necessary balance between the benefits of information-sharing and the potential costs to Meta’s business. In an interview with The Verge, Pineau says that even Meta’s limited approach to openness has helped its researchers take a more focused approach to their AI projects. 

“Being open has internally changed how we approach research, and it drives us not to release anything that isn’t very safe and be responsible at the onset,” Pineau says. 

Meta’s AI division has worked on more open projects before

One of Meta’s biggest open-source initiatives is PyTorch, a machine learning framework used to develop generative AI models. The company released PyTorch to the open source community in 2016, and outside developers have been iterating on it ever since. Pineau hopes to foster the same excitement around its generative AI models, particularly since PyTorch “has improved so much” since being open-sourced. 

She says that choosing how much to release depends on a few factors, including how safe the code will be in the hands of outside developers. 

“How we choose to release our research or the code depends on the maturity of the work,” Pineau says. “When we don’t know what the harm could be or what the safety of it is, we’re careful about releasing the research to a smaller group.” 

It is important to FAIR that “a diverse set of researchers” gets to see its research and offer better feedback. Meta invoked the same ethos when it announced Llama 2’s release, framing the company as one that believes innovation in generative AI has to be collaborative.

Pineau says Meta is involved in industry groups like the Partnership on AI and MLCommons to help develop foundation model benchmarks and guidelines around safe model deployment. It prefers to work with industry groups because it believes no single company can drive the conversation around safe and responsible AI in the open source community. 

Meta’s approach to openness feels novel in the world of big AI companies. OpenAI began as a more open-source, open-research company. But OpenAI co-founder and chief scientist Ilya Sutskever told The Verge it was a mistake to share the company’s research, citing competitive and safety concerns. While Google occasionally shares papers from its scientists, it has also been tight-lipped around developing some of its large language models.

The industry’s open source players tend to be smaller developers like Stability AI and EleutherAI, which have found some success in the commercial space. Open source developers regularly release new LLMs on code repositories like Hugging Face and GitHub. Falcon, an open-source LLM from the Dubai-based Technology Innovation Institute, has also grown in popularity and is rivaling both Llama 2 and GPT-4. 

It is worth noting, however, that most closed AI companies do not share details on data gathering to create their model training datasets.

Pineau says current licensing schemes were not built to work with software that takes in vast amounts of outside data, as many generative AI services do. Most licenses, both open-source and proprietary, give limited liability to users and developers and very limited indemnity against copyright infringement. But Pineau says AI models like Llama 2 are trained on far more data and expose users to potentially greater liability if they produce output considered infringing. The current crop of software licenses does not cover that eventuality. 

“AI models are different from software because there are more risks involved, so I think we should evolve the current user licenses we have to fit AI models better,” she says. “But I’m not a lawyer, so I defer to them on this point.”

People in the industry have begun examining the limitations of some open-source licenses for LLMs in the commercial space, while some argue that pure open source is, at best, a philosophical debate that developers don’t care much about. 

Stefano Maffulli, executive director of OSI, tells The Verge that the group understands that current OSI-approved licenses may fall short of certain needs of AI models. He says OSI is reviewing how to work with AI developers to provide transparent, permissionless, yet safe access to models. 

“We definitely have to rethink licenses in a way that addresses the real limitations of copyright and permissions in AI models while keeping many of the tenets of the open source community,” Maffulli says. 

The OSI is also in the process of creating a definition of open source as it relates to AI. 

Wherever you land on the “Is Llama 2 really open-source” debate, it’s not the only potential measure of openness. A recent report from Stanford, for instance, showed that none of the top companies with AI models disclose enough about the potential risks of their models or how they can be held accountable if something goes wrong. Acknowledging potential risks and providing avenues for feedback isn’t necessarily a standard part of open source discussions — but it should be a norm for anyone creating an AI model. 
