The tools have repeatedly been criticized by experts. Northeastern University psychology professor Lisa Feldman Barrett says the company's technology can recognize a "frown," but that is not the same as identifying anger. Experts note that facial expressions are not universal across population groups, and outward displays of emotion cannot be equated with internal feelings.
Microsoft's current decision is part of a review of the company's AI ethics policy. The corporate guidelines emphasize accountability: knowing who is using these services, and tightening control over the contexts in which the tools are applied.
This means that Microsoft will restrict access to some features of its Azure Face facial recognition service and remove others. To use the restricted Azure Face functionality, customers must submit an application describing where and how it will be applied. Basic uses of the service will remain openly available.
In addition, Microsoft will remove the ability to identify gender, age, emotions, hair and makeup from Azure Face.
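To illustrate what is being retired, here is a minimal sketch of how a client assembled a detection request to Azure Face with attribute inference turned on. The endpoint and key are placeholders, and the helper function is hypothetical; the attribute names reflect the `returnFaceAttributes` options the API offered before the change.

```python
# Hypothetical sketch: assembling (not sending) an Azure Face "detect" request
# that asks for the identity/emotion attributes Microsoft is removing.
# The endpoint, key, and helper are illustrative placeholders.

RETIRED_ATTRIBUTES = ["age", "gender", "emotion", "hair", "makeup"]

def build_detect_request(endpoint: str, api_key: str, attributes: list) -> dict:
    """Assemble the pieces of a Face detect call for inspection."""
    return {
        "url": endpoint + "/face/v1.0/detect",
        "headers": {
            # Azure Cognitive Services authenticate with this header.
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "application/json",
        },
        # Comma-separated list of attributes to infer for each detected face.
        "params": {"returnFaceAttributes": ",".join(attributes)},
    }

request = build_detect_request(
    "https://example.cognitiveservices.azure.com",  # placeholder endpoint
    "<placeholder-key>",
    RETIRED_ATTRIBUTES,
)
print(request["params"]["returnFaceAttributes"])
```

Under the new policy, requests for these attributes will no longer be served; face detection itself (location, landmarks) is unaffected.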
Experts inside and outside the company highlight the lack of scientific consensus on the definition of emotions, as well as the difficulty of generalizing findings across individual cases, regions, and demographics, said Natasha Crampton, Microsoft's chief responsible AI officer.
The corporation will cut off access to the features for new customers starting June 21, 2022, and for existing users on June 30, 2023.
Microsoft will continue to use these features in the Seeing AI app for the blind and visually impaired.
Microsoft's new restrictions will also affect the Custom Neural Voice feature, which lets users create synthetic voices, in effect audio deepfakes. Sarah Bird, senior product manager for the Microsoft Azure AI group, says the tool has great potential in education and entertainment, but it can also be used to deceive.