
There is emerging consensus among experts working in the area that the rapid development of artificial intelligence (AI) poses real dangers for humanity. Historians begin their assessment of what AI could do by going back several centuries. Before 1700, the world economy grew on average at the rate of 8 per cent a century. Over the next 300 years, the Industrial Revolution reshaped the global economy, moving it from a basically agricultural one to one in which manufacturing played a greater role. With this change in place, growth averaged 350 per cent a century. Higher growth had demographic consequences: both death and birth rates declined. Growth also boosted living standards, with income per head of the population increasing steadily at 2 per cent a year. Eventually the more developed countries had ageing and declining populations.
AI may overcome this demographic challenge. There is the possibility of a second explosion of economic growth, with the development of AI bringing about technological advance without human involvement. According to a projection by Epoch AI, a bullish think-tank, once AI can carry out 30 per cent of the tasks currently done by human beings, the annual growth rate could exceed 30 per cent. AI would cause the wages of workers at the lower end of the income distribution to decline, while the owners of capital, especially those who have invested in AI, would see a sharp increase in their incomes. The overall effect will be to widen the income divide, generating demands for government action that even the large AI firms favour.
Fears about the damage that the growth of AI, especially artificial general intelligence (AGI), could do are not stopping the leaders in the field from making further investments. The logic is simple. As a review of the situation in the British magazine The Economist put it: "They are all convinced that even if their firm or country were to pause or slow down, others would press ahead, so they might as well push on too." The belief that the benefits of attaining AGI or superintelligence are likely to accrue chiefly to those who make the initial breakthrough spurs even more effort to advance and innovate. All this leaves relatively little time and capacity to reflect on matters of safety. More data and more computing power at one end of the training pipeline has led, over and over again, to more intelligence at the other end.
According to Bloomberg, total investment in AI in 2025 was projected to reach $320 billion, more than ten times the amount invested a decade earlier. Amazon, with $100 billion invested in 2025, leads the pack, followed by Alphabet ($75 billion), Meta and Microsoft. In 2025, the AI Futures Project, an American research group, predicted that by the beginning of 2027 top AI models would be as capable as a technical worker in an AI lab.
By then, AI would be capable of improving itself without human intervention, a process called "recursive self-improvement". Those working in the field have coined the term "artificial general intelligence" for an AI capable enough to replace more or less anyone with a desk job, and "superintelligence" for an AI so smart no human being can understand it. Labs that possess such systems would be capable of doing a great deal of good as well as a great deal of harm.
The same competitive dynamic propelling the development of AI applies even more strongly to governments. Policymakers in both Beijing and Washington realise that they are in competition in the area of AI. President Donald Trump has vowed that America would "do whatever it takes" to lead the world. JD Vance, his Vice President, was even more explicit while addressing a policy conference in Paris: "The AI future will not be won by handwringing about safety." He spoke after it was revealed that China had developed a model called DeepSeek that matched the performance of the leading American systems at a fraction of their cost.
Contributors to the AI field in America are fully aware of the problems faced by the systems they are developing. Shane Legg, a co-founder of Google DeepMind, identified four ways AI systems could go wrong. The most obvious is misuse, when a malicious individual, or a group of malicious people working together, uses AI to cause deliberate harm. Another is "misalignment", the idea that the AI and its creators might not want the same things. AI might cause harm by "mistake" if real-world complexity prevents systems from understanding the full implications of their actions. Finally, there is a nebulous set of "structural risks": events where no one person or model is at fault, but harm still occurs. One example is the huge amount of power that large systems need to run, which may contribute to climate change if it is generated by burning fossil fuels.
There is ongoing work in most AI labs to prevent their products from going rogue. Among the approaches being adopted by developers is to build on the breakthrough of "reasoning" models, which tackle complex problems step by step, by keeping their chain of thought "faithful", meaning that the model's expressed reasoning for taking an action must be its actual motivation. A similar approach is already being used to keep reasoning models "thinking" in English, rather than in an unintelligible jumble of languages that has been called "neuralese".
For every expert predicting that AI will do great harm to humanity, there are many working in the field who maintain there is nothing to worry about. Yann LeCun of Meta thinks such fears are highly exaggerated, indeed absurd. "Our relationship with future AI systems, including superintelligence, is that we are going to be their boss," he has declared in response to the fears many have expounded. "We are going to have a staff of superintelligent people working for us to prevent AI models from going rogue." The same sentiment has been expressed by Sam Altman, a founder of OpenAI: "People working in the AI area will still love their families, express their creativity, play games and swim in lakes." They, in other words, would not be working to destroy mankind.