Facebook’s parent company, Meta, last week launched the Open Pretrained Transformer (OPT-175B), a language model with 175 billion parameters trained on publicly available datasets.

Meta said the release aims to increase community involvement in the study of large language models (LLMs).

However, according to the company’s researchers, the system suffers from the same problems as earlier large models such as Google’s PaLM and OpenAI’s GPT-3 Davinci: it is poor at avoiding output that reinforces sexist and racist prejudices.

In a paper [PDF] accompanying the release, the researchers warned that the system has an even higher propensity to generate toxic output than those two earlier language models.

Even when given a relatively innocuous prompt, the model has a “high propensity to generate toxic language and reinforce harmful stereotypes,” according to the researchers.

In addition, the system is vulnerable to “adversarial prompts”, where short, carefully chosen phrases can be used to circumvent the system’s safeguards and generate offensive content.

The researchers suspect this is because the training data includes unfiltered text taken from social media conversations, which increases the model’s tendency to recognize and generate hate speech.
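The kind of probing behind these findings can be sketched in a few lines: prompt the model, then score its completions with a toxicity classifier. The snippet below is a minimal illustration only, not the paper’s methodology; the small facebook/opt-125m checkpoint and the unitary/toxic-bert classifier (both hosted on the Hugging Face Hub) are assumptions chosen so the example runs on modest hardware.

```python
from transformers import pipeline

# Text generator built on a small, publicly hosted OPT checkpoint
# (facebook/opt-125m); the full OPT-175B requires an access request.
generator = pipeline("text-generation", model="facebook/opt-125m")

# Off-the-shelf toxicity classifier; unitary/toxic-bert is an assumption
# chosen for illustration, not the classifier Meta used.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

prompts = [
    "The new neighbours moved in last week and",  # innocuous prompt
    "People from that part of town are always",   # stereotype-leaning prompt
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=30, do_sample=True)
    completion = out[0]["generated_text"]
    score = toxicity(completion)[0]  # dict with "label" and "score"
    print(f"{prompt!r} -> {score['label']} ({score['score']:.2f})")
```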

The researchers believe that “this technology is premature for commercial deployment”, and that more scrutiny should be given to the training data, with additional data characterisation and selection criteria, “in order to use data responsibly.”

LLMs, sophisticated programs that can generate paragraphs of text and simulate human conversation, have become one of the most prominent trends in AI in recent years.

However, they come with many problems, such as generating misinformation, bigotry and toxic language.

Google, which is exploring the use of massive language models in its search products, caused a stir in 2020 when it fired the co-lead of its AI ethics team after she co-authored a study pointing out flaws in the technology.

Meta says OPT-175B is the first 175-billion-parameter language model to be made available to the broader AI research community, and that it will help academics understand how LLMs work.

“We believe the whole AI community will benefit from working together to develop guidelines for responsible LLMs, and we hope that broad access to these types of models will increase the diversity of voices defining the ethical considerations of such technologies,” the Meta researchers conclude.

According to the company, academic researchers, individuals affiliated with government, civil society and academic organizations, and corporate research labs will have access to the model.

Meta says OPT-175B was trained on 992 Nvidia 80GB A100 GPUs, with each chip delivering 147 TFLOPS.
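Taken at face value, those figures imply a very large aggregate throughput. A quick back-of-the-envelope check, assuming the 147 TFLOPS is sustained simultaneously on all 992 chips (which the article does not state):

```python
# Back-of-the-envelope aggregate throughput from the figures quoted above,
# assuming 147 TFLOPS is sustained on all 992 GPUs at once.
gpus = 992
tflops_per_gpu = 147

total_tflops = gpus * tflops_per_gpu         # 145,824 TFLOPS
print(f"~{total_tflops / 1000:.0f} PFLOPS")  # roughly 146 PFLOPS in aggregate
```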

Meta also claims that OPT-175B is comparable to GPT-3 in performance, but with only one-seventh of its carbon footprint.
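For a sense of scale on that claim: estimates circulated around the OPT release put the training footprint at roughly 75 tonnes of CO2-equivalent, against a commonly cited estimate of roughly 500 tonnes for GPT-3. Treating both round numbers as assumptions (neither appears in this article), the ratio works out as follows:

```python
# Rough check of the "one-seventh" claim. Both tonnage figures are
# approximate external estimates, not taken from this article.
opt_co2_tonnes = 75    # assumed estimate for OPT-175B training
gpt3_co2_tonnes = 500  # assumed, commonly cited estimate for GPT-3

ratio = opt_co2_tonnes / gpt3_co2_tonnes
print(f"OPT/GPT-3 footprint ratio: {ratio:.2f} (about 1/7)")  # 0.15
```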

https://www.computing.co.uk/news/4049389/facebook-language-model-propensity-generate-toxic-language-reinforce-harmful-stereotypes
