Microsoft took an ethical stand on facial recognition just days after being blasted for a sinister AI project in China
Microsoft President Brad Smith announced on Tuesday that the company refused a request from a US police department to install its facial recognition software, citing human rights concerns, Reuters reports.

Speaking at a Stanford University conference on ethical AI, Smith said Microsoft had received the request from a California law enforcement agency to install the technology in officers’ cars and body cameras.

“Anytime they pulled anyone over, they wanted to run a face scan,” Smith said, adding the officer would check the person’s face against a database.

Read more: Artificial intelligence experts from Facebook, Google, and Microsoft called on Amazon not to sell its facial recognition software to police

He said the company concluded that the inherent bias in facial recognition — the technology is largely trained on white male faces — meant it would be less accurate at identifying women and people from ethnic minorities, who would therefore end up being held for questioning more frequently.

Smith called for tighter regulation of facial recognition and AI in general, warning that data-hungry companies could end up in a “race to the bottom.” His comments come as pressure builds on Amazon to stop selling its facial recognition software, “Rekognition,” to police.