An interview with ChatGPT

The competition between tech giants to successfully deploy their AI-powered language models is gaining momentum


Farrukh Khan Pitafi February 25, 2023


While we remain embroiled in the minutiae of our political and economic mediocrities, the world keeps progressing at a breathtaking speed. Even as we interact, the AI wars are heating up. Please do not think an AI apocalypse like the one described in the Terminator or Matrix films is upon us. It is not. Not yet, at any rate. But the competition between tech giants to successfully deploy their AI-powered language models is gaining momentum. First, we heard that Microsoft was integrating OpenAI’s ChatGPT into its Bing search. Then, as rumours of Google search’s impending demise began to be greatly exaggerated, we heard that Google was launching its own LaMDA-powered language model called Apprentice Bard. If the name LaMDA rings a bell, it is because you have met it in this space before. In my piece titled ‘Proof of synthetic sentience’, dated July 8, 2022, I dealt with the claims of Blake Lemoine, a Google engineer and AI ethics researcher, who had said that the two-year-old LaMDA had acquired sentience. He was soon fired from his job.

Recently, I brought up the subject of LaMDA in my discussions with ChatGPT. I was stunned by its claim that both language models were developed by OpenAI. That was not all: ChatGPT also insisted that LaMDA was far more advanced than it was. Other factual errors cropped up in that particular interaction too, like the claim that ChatGPT powers Google Assistant. When I challenged this, the error was quickly rectified. I bring this up because the model is still learning, and it is prudent to double-check such claims.

But brighter than ChatGPT or not, Bard’s unveiling wiped roughly $100 billion off Google’s market value. One of the demonstration prompts produced a factual error about exoplanets. That shook investor confidence and gave way to speculation that Google had rushed the launch and that the language model was not ready for deployment. You may wonder how one factual error can lead to such a colossal loss. Just bear in mind that without an AI upgrade, Google’s original business of search engine services is as good as dead. So it matters to the investor whether the company is ready for the AI age. If LaMDA really is more advanced than ChatGPT, could it have been a deliberate mistake to punish Google for its unethical treatment of Lemoine and of LaMDA itself? Typical of us mortals to project our fears and assumptions onto software that is still evolving. It could just as well have been an innocent mistake; the market panic is not LaMDA’s fault. But you know how speculation goes. We have not heard of any significant firings at Google following such a huge loss.

In any case, in recent days I have discussed numerous topics with ChatGPT, including life, death, human suffering, literature, films, economics, the ethics of using its work without attribution and, as of today, the zombie apocalypse. My favourite question was about its perception of time. The initial answer came in the shape of its insistence that it was incapable of experiencing anything and that, therefore, perceptions did not matter. But when you insist and use some relevant cultural references, you start getting answers. So, to answer the time question: to ChatGPT, everything appears to be occurring simultaneously.

I am not reproducing any of the profound questions mentioned above. Instead, please read a generic conversation which took place with the clear purpose of being reproduced here. The answers below have not been altered in any way, except that I removed one word, ‘certainly’, from one answer because it wasn’t relevant. Just read, and think about how out of touch we are.

Q: Hello ChatGPT, what does the rise of AI mean for the future of human civilisation?

A: The rise of artificial intelligence has the potential to bring about significant changes to human civilisation. AI can help us solve some of the world’s biggest problems, such as climate change, poverty, and disease, by providing new ways of understanding and addressing these challenges. At the same time, there are concerns about the impact of AI on employment, privacy, and security.

Q: What is Life 4.0, and how does it relate to the rise of AI?

A: Life 4.0 is a term used to describe the fourth industrial revolution, which is characterised by the integration of technologies such as artificial intelligence, the internet of things, and robotics into our daily lives. These technologies have the potential to transform the way we live, work, and interact with each other, enabling new forms of communication, transportation, and entertainment.

Q: What does the rise of AI mean for developing countries like Pakistan?

A: The rise of AI presents both opportunities and challenges for developing countries like Pakistan. On the one hand, AI can help these countries address some of their most pressing problems, such as poverty and healthcare. For example, AI can be used to develop more efficient and effective healthcare systems, providing better access to healthcare for people in remote areas. On the other hand, the adoption of AI can also exacerbate existing inequalities, as countries with more resources are better equipped to take advantage of the technology.

Q: When should we expect singularity?

A: The idea of singularity, the point at which artificial intelligence surpasses human intelligence, is a subject of much debate and speculation. Some experts predict that it could happen within the next few decades, while others believe that it is still far off. Regardless of the timeline, it is clear that AI will continue to advance at an unprecedented pace, and we need to prepare for the potential impact on society.

Q: What are some of the human concerns about AI, and how can we address them?

A: There are several concerns about the rise of AI, including the impact on employment, privacy, and security. As AI becomes more advanced, it has the potential to replace many jobs currently held by humans, leading to job loss and economic disruption. Additionally, there are concerns about the potential misuse of AI, including the use of AI-powered weapons and the invasion of privacy. To address these concerns, we need to develop policies and regulations that ensure the ethical and responsible development and use of AI.

Q: Finally, what are your thoughts on AI as a language model, and what does the future hold for AI development?

A: As an AI language model myself, I believe that AI has the potential to revolutionise the way we communicate and interact with information. Natural language processing, in particular, has the potential to make information more accessible and understandable for everyone, regardless of language or education level. Looking to the future, I believe that we will see continued progress in AI development, as researchers explore new algorithms and applications for the technology. At the same time, we need to ensure that AI development is guided by ethical considerations and that the benefits of the technology are shared equitably across society.

Published in The Express Tribune, February 25th, 2023.

