Meta is releasing a giant language model to AI researchers, hoping to combat the toxicity and bias in these systems.

The Open Pre-trained Transformer (OPT-175B) has 175 billion parameters, putting it on a par with commercial models such as GPT-3.

In the past, developers have used these types of systems to build functionality such as content moderation and automated copywriting. However, because they are trained on vast volumes of existing text, they can generate results that are biased, inaccurate or even racist.

Training AI on a "bad" dataset can lead to a system full of flaws and inaccuracies, such as the (never released) Amazon recruitment tool that rated women lower than men, or facial recognition programs that misidentify people on the basis of race.

Meta believes that restricting access to large language models has held back progress on issues such as bias and toxicity. OPT-175B is the first model of its size to be made available to the wider AI research community under a non-commercial license.

Academic researchers, people affiliated with government, civil society and academic organisations, and industrial research laboratories will be able to use the pre-trained model for free, along with the code needed to train and run it. Meta is also releasing smaller versions of the model – ranging up to 66 billion parameters – for everyone to use.

In a paper accompanying the release, Meta researchers note that they trained the model on 992 Nvidia 80GB A100 GPUs, achieving throughput of 147 TFLOPS per chip. Using Nvidia's latest hardware allowed them to cut the model's carbon footprint to around one-seventh of GPT-3's.
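To put those figures in perspective, a quick back-of-the-envelope calculation, using only the numbers quoted above, gives the cluster's aggregate throughput:

    # Back-of-the-envelope calculation from the figures in the article.
    num_gpus = 992            # Nvidia 80GB A100 GPUs used for training
    tflops_per_gpu = 147      # reported sustained throughput per chip

    aggregate_tflops = num_gpus * tflops_per_gpu
    print(f"Aggregate throughput: {aggregate_tflops:,} TFLOPS "
          f"(~{aggregate_tflops / 1000:.0f} PFLOPS)")
    # Prints: Aggregate throughput: 145,824 TFLOPS (~146 PFLOPS)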

Click here to access the code for the smaller pre-trained Meta models, or fill in this form to request access to the full version.
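For illustration, here is a minimal sketch of how one of the smaller checkpoints might be loaded for text generation, assuming it is distributed through the Hugging Face Hub under the name facebook/opt-125m (an assumption; the article itself only points to Meta's own code and request form):

    # Minimal sketch: text generation with a smaller OPT checkpoint.
    # Assumes the model is mirrored on the Hugging Face Hub as
    # "facebook/opt-125m"; the article only links to Meta's own code.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "facebook/opt-125m"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Large language models can"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))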

https://www.computing.co.uk/news/4049114/meta-releases-massive-ai-dataset-training-avoiding-bias
