The biggest challenge to the health of the Internet is the power disparity between those who benefit from AI and those who are harmed by it, Mozilla’s new 2022 Internet Health Report reveals.
Once again, the report puts the spotlight on how companies and governments are using AI. Mozilla examines the nature of the AI-driven world, citing real-world examples from various countries.
TechRepublic spoke with Solana Larsen, editor of Mozilla’s Internet Health Report, to shed light on the concept of “Responsible AI from the ground up,” black-box AI, the future of regulation, and how some AI projects are leading by example.
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
Larsen explains that AI systems need to be built from the ground up with ethics and responsibility in mind, not tacked on at a later date when harms begin to emerge.
“As logical as that sounds, it really doesn’t happen enough,” Larsen said.
According to Mozilla’s findings, centralizing influence and control over AI does not work in most people’s favor. As AI is adopted worldwide and the technology grows in scope, the problem has become a major concern.
Market surveillance
The AI disruption report reveals just how big AI is. 2022 kicked off with more than $50 billion in new opportunities for AI companies, and the sector is expected to jump to $300 billion by 2025.
Adoption of AI at all levels is now inevitable. Thirty-two countries have already adopted AI strategies, more than 200 projects with over $70 billion in public funding have been announced in Europe, Asia and Australia, and startups are raising billions in thousands of deals worldwide.
More importantly, AI applications have moved from rule-based systems to data-driven models, and much of the data those models rely on is personal. Mozilla acknowledges the potential of AI but warns that it is already causing harm on a daily basis around the world.
“We need AI creators from different backgrounds who understand the complex interaction of data, AI and how it can impact different communities,” Larsen told TechRepublic. She called for regulations to ensure AI systems are built to help, not harm.
Mozilla’s report also focuses on AI’s data problem: large, commonly used datasets are convenient, but they don’t guarantee the results that smaller datasets designed specifically for a project can deliver.
The data used to train machine learning algorithms is often pulled from public sites like Flickr. The organization warns that many of the most popular datasets are made up of content culled from the internet that “heavily reflects words and images that distort the English, American, white and male gaze”.
Black box AI: Demystifying artificial intelligence
AI seems to get away with much of the damage it does, thanks to its reputation for being too technical and advanced for humans to understand. In the AI industry, a system built on a machine learning model that humans cannot interpret is known as black-box AI, and it is criticized for its lack of transparency.
Larsen says that to demystify AI, users need to have transparency about what the code is doing, what data it’s collecting, what decisions it’s making and who’s benefiting from it.
“We really have to reject the idea that AI is too advanced for people to have an opinion unless they’re data scientists,” Larsen said. “If you experience harm from a system, you know something about it that maybe even its own designer doesn’t.”
Companies such as Amazon, Apple, Google, Microsoft, Meta and Alibaba top the lists of those who benefit the most from AI-driven products, services and solutions. But other sectors and applications such as military, surveillance, computational propaganda — used in 81 countries in 2020 — and disinformation, as well as AI bias and discrimination in the health, financial and legal sectors, are also raising alarms about the harm they are causing.
AI regulation: From talk to action
Big tech companies are notorious for often defying regulation. Military and government-run AI also operate in an unregulated environment, often clashing with human rights and privacy activists.
Mozilla believes that regulations, far from being barriers to innovation, can help facilitate trust and level the playing field.
“It’s good for business and good for consumers,” says Larsen.
Mozilla supports regulations such as the Digital Services Act (DSA) in Europe and closely follows the EU Artificial Intelligence Act. The organization also supports bills in the US that would make AI systems more transparent.
Data privacy and user rights are also part of the legal landscape that can help pave the way for more responsible AI. But regulations are only one part of the equation. Without enforcement, regulations are nothing more than words on paper.
“There’s a critical mass of people calling for change and accountability, and we need AI creators who put people before profit,” Larsen said. “Right now, a lot of AI R&D is funded by big tech, and we need alternatives here, too.”
SEE: Metaverse Cheat sheet: Everything You Need to Know (Free PDF) (TechRepublic)
The Mozilla report links AI projects causing harm to several companies, countries and communities. The organization cites AI projects that affect gig workers and their working conditions. That includes the invisible army of low-wage workers who train AI technology on sites like Amazon Mechanical Turk, with an average wage of just $2.83 an hour.
“In real life, time and time again, the harms of AI disproportionately affect people who do not benefit from global systems of power,” Larsen said.
The organization is also taking action of its own.
One example is Mozilla’s RegretsReporter browser extension, which turns everyday YouTube users into watchdogs by crowdsourcing data on how the platform’s recommendation AI works.
Working with tens of thousands of users, Mozilla’s investigation revealed that YouTube’s algorithm recommends videos that violate the platform’s own rules. The investigation yielded results: YouTube is now more transparent about how its recommendation AI works. But Mozilla has no plans to stop there, and today it continues its research in other countries.
Larsen explains that Mozilla believes it is paramount to shed light on and document AI systems that operate in shadowy conditions. The organization also calls for dialogue with technology companies in order to understand the problems and find solutions, and it talks with regulators about the rules that should apply.
AI that leads by example
While the Mozilla 2022 Internet Health Report paints a rather bleak picture of AI, one in which the technology compounds problems the world has always had, the organization also highlights AI projects created and designed for a good cause.
For example, The Drivers Cooperative in New York, an app used and owned by over 5,000 rideshare drivers, is helping gig workers gain real agency in the ridesharing industry.
Another example is Melalogic, a Black-owned business in Maryland that crowdsources images of dark skin to better detect cancer and other skin problems, in response to severe racial bias in machine learning for dermatology.
“There are many examples around the world of AI systems being built and used in reliable and transparent ways,” Larsen said.