Microsoft will discontinue the sale of its controversial emotion-detecting artificial intelligence (AI) software and will limit the use of its facial recognition tools as part of a new framework for the responsible use and deployment of AI. It becomes the latest Big Tech company to move away from contentious techniques and try to counter the prospect of bias and discrimination in AI.
Emotion detection software has been controversial since its inception, with many researchers claiming it has no scientific basis. Last year, Microsoft launched a review of the technology’s accuracy, and Google blocked certain emotion categories that proved inaccurate from Google Cloud’s own AI emotion detection tools.
“These efforts have raised important questions about privacy, the lack of consensus on a definition of ’emotions’, and the inability to generalize the linkage between facial expression and emotional state across use cases, regions and demographics,” Sarah Bird, principal group product manager for Microsoft Azure AI, said in a blog post.
With the prospect of stricter regulation looming, a number of technology providers have begun to withdraw from these uses of AI. Last week, Clearview AI, the startup that scraped billions of images of people from the public internet and made them searchable by customers including police agencies, reportedly cut most of its sales team as it struggles with litigation and difficult economic conditions.
Clearview was fined £7.5m by the UK Information Commissioner last month for collecting and storing images of UK citizens without consent. The company has also stopped selling to private companies in the United States due to lawsuits.
Microsoft AI standards: moving to a “more reliable AI”
Microsoft’s latest move will remove open access to the facial recognition technology offered through its Azure cloud platform, with existing customers given one year before access is withdrawn entirely. In future, Azure customers who want to use facial recognition, including to unlock doors or grant access to websites, will have to apply for access.
This forms part of the wider publication of Microsoft’s new standard for responsible AI, which sets out its current best thinking on how AI can “respect enduring values”, including fairness, reliability, inclusiveness, privacy, transparency and accountability.
Microsoft says its framework will “guide how we build AI systems,” describing it as “an important step in our journey to developing better, more reliable AI.”
Facial recognition technology will only be available for narrower use cases that respect end-user privacy, similar to the rules governing Azure’s speech technology, which allows the creation of synthetic voices almost identical to a source speaker.
Speech-to-text technology will also be covered by the framework, as “the potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems,” the company said in a statement.
Microsoft says this is only a first step, explaining in a statement: “As we move forward with deployment, we expect to encounter challenges that require us to pause, reflect and adjust. Our standard will remain a living document, evolving to address new research, technologies, laws and lessons from within and outside the company.”
Google carried out a similar review of its own AI operations at Google Cloud last year, which saw it reject a number of applications of the technology. These included a request from an unnamed financial firm that wanted to use AI to decide whom to lend money to, turned down on the grounds that Google could not guarantee the system would be race- and gender-neutral. Google also blocked a feature that analyzes emotions over fears of cultural insensitivity.
Read more: Discrimination law needs to change to combat the impact of AI bias