The rapid growth of artificial intelligence (AI) technologies is shaping economies worldwide, though significant uncertainties remain over whether AI could trigger global economic crises.
Current and emerging AI technologies offer the potential to meet the security, efficiency, economic, and social goals of today's digital economy. However, experts warn of ethical and privacy concerns, as well as worries over misinformation.
Recent research shows that companies investing in AI-powered automation to boost profits lay off workers during crises to cut costs, raising concerns that AI poses a wider threat to the job market.
Sebnem Ozdemir, associate professor and head of the Data Science Department at Istanbul-based Istinye University, told Anadolu that the type of AI in question must first be identified in order to discuss the potential threats each poses.
Ozdemir said that data-based AI, which produces outputs from collected data, raises issues of unfair competition and explainability.
She noted that AI models already in long-standing use have had serious applications, for instance in forecasting and steering markets with available data.
“A few months ago, there was a serious discussion in the UK about the crises these solutions may bring and the ways we can combat them. The most important point was the need to regulate the manipulation power of companies holding data monopolies, though data manipulation is only one of the issues,” she said.
Ozdemir said black box AI systems fuel even further concerns, as they are opaque technologies whose inputs and inner workings cannot be examined, and whose algorithms make decisions without clear or deducible reasoning.
“For example, in the case of the economy, a black box AI system can suggest an idea to direct the market, and let’s imagine the decision to be made turns out to be a big one. I mean, isn’t that just scary,” Ozdemir said.
She said that with generative AI technologies, such as generative pre-trained transformers (GPTs) and large language models (LLMs), decisions and suggestions may lead to even bigger crises.
“Although such AI tools are supposed to act like intelligent humans who have read all the books in the world, they still have human problems, such as making things up and believing them, whether or not their conceptualized ideas reflect real life or have real use cases in the economy, for instance,” she said, warning that such tools can lead to further economic crises.
'Financial expert AI machines making financial decisions'
Ozdemir said that although the use of AI has disadvantages, its advantages are plentiful, as companies can use AI to boost their profit margins and attain market dominance while relying on it in part for risk management.
She argued that a lack of data resources leads AI to generate biased and incorrect responses, as its decision-making ability is severely constrained, rendering it unsafe.
She added that training data is extremely important in making AI models effective.
She noted that AI tools are currently at too early a stage of development to completely replace human workers. However, we are at a new point where the competence of the people who train AI comes to the forefront.
“Imagine a digital entity making financial decisions with no salary expectations. Of course companies will flock to such solutions,” said Ozdemir.
Financial expert AI systems are being created and developed very rapidly through human involvement in the process, Ozdemir said.
“However, it is currently too early to leave all decisions to AI, and the current practice is to have human teams at the wheel, steering AI in the right direction, though this arrangement of constant human intervention in AI decision-making is estimated to last only three years or so,” she said.