While Pichai cited the potential for nefarious uses of the technology as a reason for a moratorium, Smith said a ban was akin to using a meat cleaver instead of a scalpel to solve potential problems.
“I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it,” Pichai told a conference in Brussels organised by think-tank Bruegel.
“It can be immediate but maybe there’s a waiting period before we really think about how it’s being used,” he said. “It’s up to governments to chart the course” for the use of such technology.
Smith, who is also Microsoft’s chief legal officer, however, pointed to the benefits of facial recognition technology in certain cases, such as NGOs using it to find missing children.
“I’m really reluctant to say let’s stop people from using technology in a way that will reunite families when it can help them do it,” Smith said.
“The second thing I would say is you don’t ban it if you actually believe there is a reasonable alternative that will enable us to, say, address this problem with a scalpel instead of a meat cleaver,” he said.
Smith said it was important to first identify problems and then craft rules to ensure that the technology would not be used for mass surveillance.
“There is only one way at the end of the day to make technology better and that is to use it,” he said.
The European Commission is considering taking a tougher line on artificial intelligence (AI) than the United States, one that would strengthen existing regulations on privacy and data rights, according to a proposal paper seen by Reuters.
Part of this includes a moratorium of up to five years on using facial recognition technology in public areas, to give the EU time to work out how to prevent abuses, the paper said.
Pichai urged regulators to take a “proportionate approach” when drafting rules, days before the Commission is due to publish proposals on the issue.
Regulators are grappling with ways to govern AI, encouraging innovation while trying to curb potential misuse, as companies and law enforcement agencies increasingly adopt the technology.
There was no question AI needs to be regulated, Pichai said, but rule-makers should tread carefully.
“Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities. This is especially true in areas that are high risk and high value,” he said.
Regulators should tailor rules according to different sectors, Pichai said, citing medical devices and self-driving cars as examples that require different rules. He said governments should align their rules and agree on core values.
Earlier this month, the US government published regulatory guidelines on AI aimed at limiting authorities’ overreach and urged Europe to avoid an aggressive approach.
Pichai said it was important to be clear-eyed about what could go wrong with AI, and while it promised huge benefits there were real concerns about potential negative consequences.
One area of concern is the so-called “deepfakes” - video or audio clips that have been manipulated using AI. Pichai said Google had released open datasets to help the research community build better tools to detect such fakes.
The world’s most popular internet search engine said last month that Google Cloud was not offering general-purpose facial-recognition application programming interfaces (APIs) while it worked out policy and technical safeguards.