MIT researchers recently made one of the boldest claims about artificial intelligence we’ve seen so far: they believe they’ve built an AI that can identify a person’s race using only medical imaging. And according to the popular media, they have no idea how it works!

Sure. And I’d like to sell you an NFT of the Brooklyn Bridge.

Let’s be clear up front: per the team’s paper, the model can predict a person’s self-reported race:

In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities.

Prediction and identification are two entirely different things. When a prediction is wrong, it’s still a prediction. When an identification is wrong, it’s a misidentification. These are important distinctions.

AI models can be fine-tuned to predict anything, even concepts that aren’t real.

Here’s an old analogy I like to draw in these situations:

I can predict with 100% accuracy which lemons on a lemon tree contain aliens from another planet.

Since I’m the only person who can see the aliens in the lemons, I’m what you’d call a “database.”

I could stand next to your AI and point out all the lemons that have aliens in them. The AI would then try to figure out what it is about the lemons I’m pointing at that makes me think there are aliens in them.

Eventually, the AI would look at a new lemon tree and try to guess which lemons I would say have aliens in them.

If it guessed with 70% accuracy, it would still be 0% accurate at determining which lemons have aliens in them. Because there are no aliens in lemons.

In other words, as the sketch after this list illustrates, you can train an AI to predict anything, as long as you:

  • Don’t give it the option to say “I don’t know.”
  • Keep adjusting the model’s parameters until it gives you the answer you want.
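To make that concrete, here is a minimal, hypothetical sketch (not the MIT team’s code, just scikit-learn on made-up data): fit a classifier to labels that describe nothing real, and it will still happily “predict” them.

```python
# Hypothetical sketch: train a model on labels that describe nothing real.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1,000 "lemons", each described by 20 meaningless measurements.
lemons = rng.normal(size=(1000, 20))

# I declare that 10% of them contain aliens. The labels are pure fiction,
# but the model is never given the option to say "I don't know."
has_alien = rng.random(1000) < 0.10

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(lemons, has_alien)

# Against my own labeled database the model looks impressively "accurate".
# It has learned to reproduce my labels, not to detect aliens. There are none.
print("Accuracy against my labels:", model.score(lemons, has_alien))
```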

No matter how accurate an AI system is at predicting a label, if it can’t demonstrate how it arrived at its prediction, those predictions are useless for identification purposes, especially when it comes to individual human beings.

Furthermore, claims of ‘accuracy’ don’t mean what the media seems to think they mean when it comes to these kinds of AI models.

The MIT model achieves less than 99% accuracy against labeled data. That means that in the wild (looking at images that aren’t labeled), we can never be sure whether the AI has made a correct assessment unless a human reviews its results.

Even at 99% accuracy, MIT’s AI would still misidentify 79 million human beings if it were given a database containing an image of every living person. Worse, we would have absolutely no way of knowing which 79 million people it had mislabeled unless we went to all 7.9 billion humans on the planet and asked them to confirm the AI’s assessment of their particular image. That would defeat the purpose of using AI in the first place.
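The arithmetic behind that figure is simple, assuming a 99% accuracy rate and a world population of roughly 7.9 billion:

```python
# Back-of-the-envelope math behind the 79 million figure.
world_population = 7_900_000_000  # roughly 7.9 billion people
accuracy = 0.99                   # granting the model 99% accuracy

misidentified = world_population * (1 - accuracy)
print(f"{misidentified:,.0f} people mislabeled")  # -> 79,000,000
```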

The important point: training an AI to identify labels in a database is a trick that can be applied to any database with any kind of labels. It is not a method by which an AI can determine or identify a specific quality of an object in a database; it is simply trying to predict, to guess, what label the human developers used.

The MIT team concluded in their paper that their model could be dangerous in the wrong hands:

The results of our study emphasize that the ability of AI deep learning models to predict self-reported race is not, in itself, the important issue.

However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.

It is important for AI developers to consider the potential risks of their creations. But this particular warning has little basis in reality.

The model the MIT team built can achieve benchmark accuracy against large databases but, as explained above, there is absolutely no way to determine whether the AI is correct unless you already know the ground truth.

In essence, MIT is warning us about the possibility of evil doctors and medical technicians practicing racial discrimination at scale using a system similar to this one.

But this AI cannot determine race. It predicts labels in specific datasets. The only way this model (or any model like it) could be used to discriminate is by casting a wide net, and only if the would-be discriminator truly doesn’t care how often the machine gets it wrong.

The only thing you can be sure of is that you cannot trust an individual result without double-checking it against the ground truth. And the more images the AI processes, the more mistakes it is sure to make.

In summary: MIT’s “new” AI is nothing more than a magician’s illusion. It’s a good one, and models like this are often incredibly useful when getting things exactly right isn’t as important as getting them done quickly, but there’s no reason to believe bad actors could use it as a race detector.

MIT could apply exactly the same model to a grove of lemon trees and, using the database of labels I created, train it to predict which lemons have aliens in them with 99% accuracy.

This AI can only predict labels. It does not identify race.

https://thenextweb.com/news/mits-ai-cant-determine-a-persons-race-from-medical-images
