The world of AI research is in shambles. From academics who prioritize easy-to-monetize schemes over the development of new foundations, to a Silicon Valley elite using the threat of job loss to promote corporate-friendly hypotheses, the system is a broken mess.

And Google deserves the lion’s share of the blame.

How it started

There were about 85,000 scientific articles on AI/ML published worldwide in 2000. Fast-forward to 2021, and nearly twice as many were published in the United States alone.

To say the field has exploded would be a gross understatement. This influx of researchers and new ideas has turned deep learning into one of the most important technologies in the world.

Between 2014 and 2021, big tech all but abandoned its "web-first" and "mobile-first" principles to adopt "AI-first" strategies.

Now, in 2022, AI developers and researchers are more in demand (and better paid) than workers in almost any other technology role outside the C-suite.

But this kind of unchecked growth also has a dark side. In the scramble to meet market demand for products and services built on deep learning, the field has become as cutthroat and volatile as professional sports.

In the past few years alone, we have seen the "father of GANs," Ian Goodfellow, jump from Google to Apple; Timnit Gebru and others fired by Google over differing opinions on the integrity of its research; and a virtual torrent of questionable AI papers somehow clearing peer review.

The flood of talent that arrived with the deep learning explosion also brought a mudslide of bad research, fraud, and corporate greed.

How it's going

Google, more than any other company, is responsible for the modern AI paradigm. That means we have to give the big G full credit for bringing natural language processing and image recognition to the table.

It also means we can credit Google with creating the researcher-eat-researcher environment in which some students and their big-tech-partnered faculty treat research papers as little more than bait for venture capitalists and corporate headhunters.

At the top of its game, Google has shown a willingness to hire the world's most talented researchers. And it has also shown, time and again, that it will fire them in a heartbeat if they don't toe the company line.

The company made headlines around the world after firing Timnit Gebru, a researcher it had hired to help lead its AI ethics team, in December 2020. Just months later, it fired another member of the team, Margaret Mitchell.

Google says the researchers' work did not meet its standards, but both women and many supporters say the firings came only after they raised ethical concerns about research signed off on by the company's AI chief, Jeff Dean.

Now, more than a year later, history is repeating itself. Google fired another world-renowned AI researcher, Satrajit Chatterjee, after he led a team of scientists in challenging another paper Dean had signed off on.

The mudslide effect

At the top, this means that competition for high-paying jobs is fierce. And the hunt for the next talented researcher or developer starts earlier than ever.

Students pursuing advanced degrees in machine learning and AI who eventually want to work outside academia are expected to author or co-author research papers that demonstrate their talent.

Unfortunately, the pipeline from academia to big tech or the VC-led startup world is littered with crappy papers written by students whose focus is on writing algorithms that can be monetized.

For example, a quick Google Scholar search for "natural language processing" returns nearly a million hits. Many of the listed papers have hundreds or thousands of citations.

At first glance, this would seem to indicate that NLP is a thriving subset of machine learning research that has attracted the attention of researchers around the world.

In fact, searches for "artificial neural network," "computer vision," and "reinforcement learning" all turn up similarly oversaturated results.

Unfortunately, much of the research on AI and ML is either deliberately deceptive or full of bad science.

The scientific paper, a format that may have served the field well in the past, is fast becoming an outdated way of communicating research.

Stuart Ritchie of The Guardian recently wrote an article wondering whether we should do away with scientific papers altogether. As he puts it, the problems with science are deeply rooted:

The system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce "better" results, and sometimes even commit fraud in order to impress these all-important gatekeepers. This drastically distorts our view of what really went on.

The problem is that these gatekeepers, whom everyone is trying to impress, tend to hold the keys to students' future careers and to acceptance in academia's prestigious journals and conferences; researchers fail to win their approval at their own peril.

And even when a paper manages to pass peer review, there is no guarantee that the people running things aren't asleep at the switch.

That's why Guillaume Cabanac, an associate professor of computer science at the University of Toulouse, created a project called the Problematic Paper Screener (PPS).

The PPS uses automation to flag papers that contain potentially problematic code, math, or verbiage. In the spirit of science and fairness, Cabanac ensures that every flagged paper receives a manual review by humans. But the workload is likely too much for a handful of people to handle in their spare time.

According to a report from IEEE Spectrum, there is no shortage of problematic papers. And most are related to machine learning and AI:

The screener estimates that about 7,650 studies are problematic, including more than 6,000 flagged for tortured phrases. Most of the papers containing tortured phrases appear to come from the fields of machine learning, artificial intelligence, and engineering.

Tortured phrases are terms that raise red flags for researchers because they attempt to describe a process or concept that already has a well-established name.

For example, the use of terms such as "fake nervous system" or "man-made nervous system" in place of "artificial neural network" could indicate the use of a thesaurus plugin by bad actors trying to get away with plagiarizing previous work.
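The core of this kind of screening can be sketched as a simple lookup against a list of known tortured phrases. This is only an illustrative sketch, not the PPS's actual implementation, and the phrase list here is a hypothetical sample rather than the project's real database:

```python
import re

# Illustrative mapping: known tortured phrase -> the established term
# it likely replaced. A real screener would use a large curated list.
TORTURED_PHRASES = {
    "fake nervous system": "artificial neural network",
    "man-made nervous system": "artificial neural network",
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, established term) pairs found in text."""
    hits = []
    lowered = text.lower()
    for phrase, term in TORTURED_PHRASES.items():
        # Word boundaries avoid matching inside longer words
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, term))
    return hits

abstract = "We train a fake nervous system using profound learning."
print(flag_tortured_phrases(abstract))
```

Any paper that trips such a filter would then, as Cabanac insists, still need a manual review by humans before being labeled problematic.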

The solution

While Google can't be blamed for everything untoward in the machine learning and AI field, it has played an outsized role in devaluing peer-reviewed research.

That's not to say Google doesn't also support the scientific community through open source, financial aid, and research funding. And we're certainly not trying to imply that everyone studying AI is just out to make a quick buck.

But the system is set up to encourage the monetization of algorithms first and the advancement of the field second. To change this, big tech and academia need to commit to wholesale reform in how research is presented and reviewed.

There is currently no widely recognized third-party body for vetting papers. The peer-review system is more of an honor code than an agreed-upon set of principles enforced by institutions.

However, there is precedent for the establishment and operation of an oversight committee with the scope, influence, and expertise to govern across academic borders: the NCAA.

If we can unite thousands of amateur athletics programs under a system of fair competition, surely we could form a governing body to establish guidelines for academic research and review.

And as for Google, there's a better-than-zero chance that CEO Sundar Pichai will be summoned before Congress again if the company keeps firing the researchers it hires to oversee its ethical AI programs.

American capitalism means that businesses are usually free to hire and fire whoever they want, but shareholders and workers also have rights.

Eventually, Google will have to commit to ethical research or find itself unable to compete with the companies and organizations that do.
