Breakneck speed

The rapid pace of AI research doesn’t help either. New breakthroughs come thick and fast. In the past year alone, tech companies have unveiled AI systems that generate images from text, only to announce, just weeks later, even more impressive software that can generate video from text as well. That’s impressive progress, but the potential harms associated with each new breakthrough pose a relentless challenge. Text-to-image AI can infringe copyright, and it can be trained on data sets full of toxic material, leading to unsafe outcomes.

“Chasing everything that’s really trendy, the hot-button issue on Twitter, is exhausting,” Chowdhury says. Ethicists can’t be experts on the myriad problems that every new breakthrough poses, she says, yet she still feels she has to keep up with every twist and turn of the AI news cycle for fear of missing something important.

Chowdhury says working as part of a well-resourced team at Twitter has helped, reassuring her that she doesn’t have to bear the burden alone. “I know that I can go away for a week and things won’t fall apart, because I’m not the only person doing it,” she says.

But Chowdhury works at a large tech company with the means and the desire to hire an entire team to work on responsible AI. Not everyone is as lucky.

People at smaller AI startups face a lot of pressure from venture capitalists to grow the business, and the checks written by investors often don’t reflect the extra work required to build responsible tech, says Vivek Katial, a data scientist at Multitudes, an Australian startup working on ethical data analytics.

The tech sector needs to demand more from venture capitalists, pushing them to “recognize the fact that they have to pay more for technology that’s going to be more responsible,” Katial says.

The problem is that many companies can’t even see that they have a problem in the first place, according to a report published by MIT Sloan Management Review and Boston Consulting Group this year. AI was a top strategic priority for 42% of the report’s respondents, but only 19% said their organization had implemented a responsible-AI program.

Some may believe they’re considering mitigating AI’s risks, but they simply aren’t hiring the right people into the right roles, or giving them the resources they need, to put responsible AI into practice, Gupta says.

https://www.technologyreview.com/2022/10/28/1062332/responsible-ai-has-a-burnout-problem/