Pineau has helped change how research is published at several of the largest AI conferences by introducing a checklist of things researchers must submit alongside their results, including code and details of how experiments were run. Since joining Meta (then Facebook) in 2017, she has championed that culture of openness in its artificial intelligence lab.

“This commitment to open science is the reason I’m here,” she says. “I wouldn’t be here on any other terms.”

Ultimately, Pineau wants to change how we evaluate AI. “What we call state of the art nowadays can’t just be about performance,” she says. “It has to be state of the art in terms of responsibility as well.”

Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”

Weighing the risks

Margaret Mitchell, one of the AI ethics researchers forced out of Google in 2020 and now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms, such as the generation of misinformation or racist and misogynistic language?

“Publishing a large language model in a world where a wide audience is likely to use it or be affected by its results comes with responsibilities,” she said. Mitchell notes that this model will be able to generate harmful content not only on its own, but also through downstream applications that researchers build on it.

Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, warts and all, says Pineau.

“There have been a lot of discussions about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She rejects the idea that a model shouldn’t be released because it is too dangerous, which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.

https://www.technologyreview.com/2022/05/03/1051691/meta-ai-large-language-model-gpt3-ethics-huggingface-transparency/
