The famously secretive Meta has set a milestone for transparency.
This week, the company offered the entire research community access to a fully trained Large Language Model (LLM).
Called the Open Pre-trained Transformer (OPT), the system matches the performance and size of OpenAI's vaunted GPT-3 model.
This mimicry is intentional. Although GPT-3 has a stunning ability to generate human-like text, it also has a powerful capacity for bias, bigotry and misinformation.
The creators of OPT said their system could reduce these risks:
Our goal in developing this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs.
In addition to sharing OPT for non-commercial use, Meta has released its pre-trained models, the underlying code, and a logbook of their development. No other company has ever provided this level of access to an LLM.
Such openness may seem uncharacteristic.
After all, Meta has often been accused of concealing its algorithms and their harmful effects. And yet the move may not be entirely altruistic.
Meta stands to benefit greatly from external experts probing OPT for flaws, uses and fixes – without having to pay them.
A public embrace of transparency may also blunt criticism of the company's secrecy.
Meta researchers acknowledge that OPT has major shortcomings.
They note that the system does not work well with declarative instructions or precise questions.
It also tends to generate toxic language and reinforce harmful stereotypes – even when fed relatively innocuous prompts.
"In summary, we still believe this technology is premature for commercial deployment," they wrote in their research paper.
Contributions from the wider research community could accelerate that maturation – and the benefits may extend well beyond Meta.
We hope that this move will show that both business and society benefit from transparency.