Microsoft ‘mostly’ opposes using artificial intelligence (AI) to read your emotions
Microsoft has announced that it is, for the most part, retiring the emotion and facial recognition features of Azure Face as part of a revision of its Responsible AI Standard.
The Responsible AI Standard lays out Microsoft’s internal guidelines for building AI systems. The company intends for AI to benefit society and never to be misused by bad actors. Until now, the standard had never been disclosed to the general public, but with this revision, Microsoft decided the time had come.
Emotion and facial recognition software has, to put it mildly, generated controversy, and numerous organizations have called for the technology to be banned outright.
For instance, in an open letter published back in May, Fight for the Future labeled Zoom’s development of emotion-tracking software “intrusive” and “a breach of privacy and human rights.”
Microsoft will update its Azure Face service to comply with its new Responsible AI Standard. First, the company is cutting off general public access to the AI’s emotion-scanning capability. Azure Face will also lose the ability to identify a person’s facial characteristics, such as “gender, age, [a] smile, facial hair, hair, and makeup.”
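For context, this is roughly what a request for those attributes looks like today. The minimal sketch below assumes the classic azure-cognitiveservices-vision-face Python SDK; the endpoint, key, and image URL are placeholders, and the attribute list mirrors the capabilities Microsoft says it is retiring from public access.

```python
# Sketch of an Azure Face detection call requesting the attributes Microsoft
# is retiring from public access. Endpoint, key, and image URL are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Request the soon-to-be-restricted attributes alongside face detection.
faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image URL
    return_face_attributes=[
        FaceAttributeType.emotion,      # being retired from public access
        FaceAttributeType.gender,       # being retired
        FaceAttributeType.age,          # being retired
        FaceAttributeType.smile,        # being retired
        FaceAttributeType.facial_hair,  # being retired
        FaceAttributeType.hair,         # being retired
        FaceAttributeType.makeup,       # being retired
    ],
)

for face in faces:
    attrs = face.face_attributes
    print(attrs.age, attrs.gender, attrs.emotion)
```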
The reason for the retirement is that there is still no “clear consensus on the notion of ‘emotions’” among scientists worldwide. Microsoft’s Chief Responsible AI Officer, Natasha Crampton, stated that experts both inside and outside the company have raised concerns, pointing to “the difficulties in generalizing across use cases, locations, and demographics, as well as the elevated privacy concerns.”
Similar limitations will apply to Microsoft’s Custom Neural Voice, a startlingly realistic text-to-speech service, in addition to Azure Face. The service will now only be available to a small group of “managed customers and partners,” i.e., those who work closely with Microsoft’s account teams. According to the company, the technology has a lot of potential but could also be used for impersonation. Existing Neural Voice customers must fill out an intake form and receive Microsoft’s approval by June 30, 2023; those not approved by then will lose access to the service.
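For those who are approved, using a deployed custom voice looks roughly like the hypothetical sketch below, which assumes the azure-cognitiveservices-speech Python SDK; the key, region, deployment endpoint ID, and voice name are all placeholders that an approved customer would receive after Microsoft’s vetting.

```python
# Hypothetical sketch of synthesizing speech with a deployed Custom Neural
# Voice model. Key, region, endpoint ID, and voice name are placeholders;
# Microsoft's approval gate applies before any of this works.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-key>",  # placeholder
    region="<your-region>",            # placeholder, e.g. "westus2"
)
# A deployed Custom Neural Voice is addressed by its deployment endpoint ID
# and the voice name chosen at deployment time (both placeholders here).
speech_config.endpoint_id = "<your-custom-voice-deployment-id>"
speech_config.speech_synthesis_voice_name = "<YourCustomVoiceName>"

# audio_config=None keeps the synthesized audio in memory instead of
# playing it through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Hello from a custom neural voice.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis complete:", len(result.audio_data), "bytes of audio")
```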
Despite what has been reported, Microsoft isn’t completely giving up on its facial recognition technology; the announcement covers public access only. Sarah Bird, Principal Group Product Manager at Azure AI, wrote a piece on ethical facial recognition in which she notes that “Microsoft understands these features can be helpful when used for a set of restricted accessibility scenarios.” A representative cited Seeing AI, an iOS app that helps blind and low-vision users identify the people and objects around them, as one such scenario.
It’s encouraging to see yet another tech behemoth acknowledge the drawbacks of facial recognition and its potential for abuse. IBM did something similar in 2020, though its approach was more absolute.
In 2020, IBM announced it was abandoning facial recognition research altogether out of concern that the technology would be abused for mass surveillance. Pullbacks like these by two titans of the industry are a victory for opponents of facial recognition. If you want to learn more about AI and what it can do for cybersecurity, TechRadar has published a feature on the topic.