Microsoft Calls Again for Facial Recognition Regulation
Microsoft President and Chief Legal Officer Brad Smith called on lawmakers this month to regulate facial recognition, repeating an appeal he first made in July 2018. His goal was not to slow the technology’s development but to start an essential conversation about its risks and shortcomings.
For example, one civil rights organization found that a UK-based facial recognition system produced incorrect matches 98% of the time. A failure rate that high is not merely a shortcoming; a system that misidentifies people at that rate actively harms those it flags rather than helping anyone.
After Smith’s first call for regulation, Microsoft set out to discuss the privacy risks that facial recognition poses. The company spoke with technologists, companies, civil society groups, academics, and public officials around the world.
Microsoft’s conversations led to the belief that governments need to address three problems with facial recognition:
- The technology’s current limitations can lead to biased outcomes that may violate anti-discrimination laws
- Its widespread use could lead to invasions of privacy
- Government use for mass surveillance can encroach on democratic freedoms
Smith has solutions. Many come back to the idea that companies need to be transparent about how they develop and use facial recognition technologies. For example, companies like Microsoft and Amazon are developing stores that use facial recognition to track customers. Smith proposes that these big players be required to inform shoppers that the technology is in use, so shoppers can decide whether or not they want to shop there.
The big-ticket issue is government surveillance. According to Smith, the key will be new legislation to “ensure that governmental use of facial recognition technology remains subject to the rule of law.”
Smith’s proposals are not aimed solely at regulators, however. He also published Microsoft’s own principles for developing facial recognition technology.