The tech ethics group Centre for Artificial Intelligence and Digital Policy is asking the US Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, which has wowed some users and caused distress for others with its quick and human-like responses to queries.
In a complaint to the agency on Thursday, a summary of which is on the group's website, the Centre for Artificial Intelligence and Digital Policy called GPT-4 "biased, deceptive, and a risk to privacy and public safety."
OpenAI, which is based in California and backed by Microsoft Corp. (MSFT.O), unveiled GPT-4, the fourth iteration of its Generative Pre-trained Transformer AI program, in early March. It has excited users by engaging them in human-like conversation, composing songs and summarising lengthy documents.
The formal complaint to the FTC follows an open letter signed by Elon Musk, artificial intelligence experts and industry executives that called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.
The group said in its complaint that OpenAI's GPT-4 fails to meet the FTC's standard of being "transparent, explainable, fair and empirically sound while fostering accountability."
"The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4," Marc Rotenberg, president of CAIDP and a veteran privacy advocate, said in a statement on the website.
Rotenberg was one of the more than 1,000 signatories to the letter urging a pause in AI experiments.
The group urged the FTC "to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace."