“Nature is trying to tell us something here, which is that this doesn’t really work, but the field believes so much in its own press clippings that it just can’t see it,” he added.

Even de Freitas’ DeepMind colleagues Jackie Kay and Scott Reed, who worked with him on Gato, were more careful when I asked them directly about his claims. When asked whether Gato was a step toward AGI, they would not be drawn. “I don’t actually think it’s really feasible to make predictions about these things. I try to avoid that. It’s like predicting the stock market,” Kay said.

Reed said the question was a tough one: “I think most machine-learning people will studiously avoid answering. It’s very hard to predict, but, you know, I hope we get there one day.”

In a sense, the fact that DeepMind called Gato a “generalist” may have made it a victim of the AI sector’s own excessive hype around AGI. Today’s AI systems are called “narrow,” meaning they can only do a specific, restricted set of tasks, such as generating text.

Some technologists, including some at DeepMind, believe that one day humans will develop “broader” AI systems that will be able to function as well as or even better than humans. Though some call this artificial general intelligence, others say it is like “belief in magic.” Many top researchers, such as Meta’s chief AI scientist Yann LeCun, question whether it is even possible at all.

Gato is a “generalist” in the sense that it can do many different things at the same time. But that is a world apart from a “general” AI that can meaningfully adapt to new tasks different from those the model was trained on, says MIT’s Andreas: “We’re quite far from being able to do that.”

Making models bigger will also not address the problem that models lack “lifelong learning,” which would mean that if taught something once, they would understand all its implications and use it to inform every other decision they make, he says.

The hype around tools like Gato is harmful to the general development of AI, says Emmanuel Kahembwe, an AI and robotics researcher and part of the Black in AI organization co-founded by Timnit Gebru. “There are many interesting topics that are left to the side, that are underfunded, that deserve more attention, but that’s not what the big tech companies and the bulk of researchers in such tech companies are interested in,” he says.

Tech companies ought to take a step back and take stock of why they are building what they are building, says Vilas Dhar, president of the Patrick J. McGovern Foundation, a charity that funds AI “for good” projects.

“AGI speaks to something deeply human, the idea that we can become more than we are by building tools that propel us to greatness,” he says. “And that’s really nice, but it also is a way to distract us from the fact that we have real problems facing us today that we should be trying to address using AI.”
