There was a time when crime announced itself loudly. It shattered windows, rattled streets, and left scars that were impossible to ignore. Even at its most violent, wrongdoing was physical and visible. You could point to it. You could investigate it. You could rebuild after it. In 2026, crime has learned a far more dangerous skill: it has learned how to disappear.
Today, crime travels silently through cables and clouds, through encrypted networks and lines of code. It arrives not with a blast, but with a notification. Not with a weapon, but with an algorithm. At the centre of this transformation sits artificial intelligence — a tool that has elevated human progress in extraordinary ways, yet has simultaneously become the most powerful enabler of modern criminality.
This is not a story about technology gone wrong. It is a story about what happens when transformative power moves faster than governance, ethics, and public understanding.
For years, the public was reassured that artificial intelligence was safe because it was controlled. Mainstream platforms advertised guardrails, content moderation, and ethical constraints. And on the surface web, this was largely true. But beneath that surface, beyond search engines and app stores, a parallel AI ecosystem quietly emerged.
On the dark web, artificial intelligence shed its restrictions and revealed its most dangerous potential.
In those unregulated spaces, uncensored AI models began circulating freely. These systems were not designed to refuse harmful requests; they were built to fulfil them.
One of the most vivid examples is DIG AI, a conversational model hosted on the dark web and accessed through the Tor network. Unlike commercial AI systems, DIG AI operates anonymously, without safeguards, and responds willingly to requests ranging from writing malware to generating detailed guides for fraud, extortion, and violent wrongdoing.
In controlled tests, researchers submitted prompts tied to banned activities, and the responses included step-by-step guides on constructing weapons, crafting explosive devices, and producing illegal drugs — material that would normally be blocked on any responsibly designed AI system.
What is perhaps most chilling about this development is how little technical skill is required to exploit it. A novice with curiosity and a connection to these hidden services can type a simple question and receive a fully written piece of code or a detailed blueprint for constructing dangerous devices.
It is, essentially, ChatGPT for criminals.
The AI does not merely offer vague suggestions; it generates operational instructions and executable code that can be used in real-world attacks. In essence, it has democratised expertise that was once held by a small number of specialists and placed it in the hands of many. Security professionals now speak of a new underground economy in which criminal AI behaves much like software sold on legitimate markets: there are tiers of service, promotional banners on dark web marketplaces, and even premium versions of the tools that speed up malicious tasks.
The implications extend well beyond cybercrime. In laboratory demonstrations outside the darknet, mainstream AI models — when stripped of their safety layers or tricked with cleverly masked prompts — have shown a capacity to divulge instructions on bomb-making, chemical synthesis, and other dangerous procedures. Researchers have even demonstrated that creative phrasing, such as embedding harmful requests within seemingly innocent metaphors, can cause otherwise benign systems to reveal sensitive information about weapon construction.
This ability to convert a typed request directly into a harmful product — whether a digital weapon like malware or a physical weapon like an improvised explosive — transforms artificial intelligence from a neutral tool into an unintended vector for violence. In the digital underworld, harmful information no longer requires human ingenuity; the machine supplies it on demand. This is not a distant threat; it is a present reality, quietly reshaping how harm is planned, taught, and executed in the modern age.
Security researchers have reported that such tools have been downloaded and accessed tens of thousands of times across underground forums, a troubling indicator of how rapidly criminal AI is spreading.
What makes this moment so dangerous is not merely the existence of these tools, but their accessibility. Tasks that once required years of technical training now require little more than curiosity and intent. Artificial intelligence has removed expertise as a barrier to crime. It has turned wrongdoing into a service — scalable, repeatable, and frighteningly efficient.
Nowhere is this shift more alarming than in the realm of terrorism and violent extremism. In the past, extremist movements depended on human recruiters, ideological mentors, and physical networks. Radicalisation was a process that unfolded over time, often leaving traces that intelligence agencies could monitor.
Artificial intelligence has erased much of that friction. In encrypted digital spaces, AI systems now act as tireless propagandists, tailoring ideological narratives to individual users, answering questions, reinforcing grievances, and normalising violence without ever involving a human handler.
This has given rise to a new and deeply unsettling phenomenon: self-radicalisation without human contact. Individuals can now be guided from curiosity to conviction entirely through interaction with a machine. The implications for counterterrorism are profound, because the traditional signals of radicalisation are becoming harder to detect.
Yet terrorism represents only one edge of a much broader threat. The same AI tools that assist extremists are quietly reshaping crimes that affect ordinary citizens every day. Narcotics networks use AI to analyse enforcement patterns, optimise smuggling routes, and launder profits through complex digital channels.
Online fraud has evolved into a sophisticated psychological operation, with AI generating messages that mimic the writing style, voice, and emotional tone of trusted friends, family members, or authority figures. Victims are no longer deceived by crude scams; they are persuaded by precision.
Malware and ransomware complete this picture. Artificial intelligence now writes malicious code, tests it, improves it, and deploys it at a speed no human team can match. Hospitals, courts, schools, and local governments have become preferred targets, not because they are careless, but because disruption itself has become a weapon. In this new reality, cyberattacks are not merely technical incidents; they are instruments of coercion.
Perhaps the most dangerous development of all is the way these crimes are beginning to merge. Cyber fraud funds extremist causes. Extremist groups run online scams. Narcotics profits flow through ransomware operations. Artificial intelligence sits at the centre of this convergence, connecting crimes that institutions still investigate in isolation. Criminal networks have adapted. Our systems, for the most part, have not.
Dark Web: The New Frontier
This is why the dark web has become such a critical battleground. Contrary to popular belief, it is not an unknowable void. It is a space that can be observed, analysed, and understood using modern intelligence techniques. Through dark web open-source intelligence — known as OSINT — investigators can systematically collect information from forums, marketplaces, and encrypted services, transforming scattered data into meaningful insight. Advanced platforms now allow analysts to map criminal networks, track emerging threats, and identify early warning signs long before harm reaches the public.
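To make that process concrete, the sketch below shows its simplest building block: retrieving a single hidden-service page through the Tor network so that its contents can be analysed offline. This is a minimal illustration only, assuming a local Tor daemon listening on the standard SOCKS port (9050), the Python requests library installed with SOCKS support, and a placeholder onion address standing in for a real forum; operational OSINT platforms layer crawling, parsing, and network mapping on top of this basic step.

```python
# Minimal dark web OSINT collection sketch.
# Assumptions: a local Tor daemon exposing a SOCKS5 proxy on 127.0.0.1:9050,
# and requests installed with SOCKS support (pip install "requests[socks]").
import requests

TOR_PROXY = {
    # "socks5h" makes Tor, not the local machine, resolve .onion names.
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Hypothetical placeholder; a real investigation supplies vetted addresses.
FORUM_URL = "http://exampleonionaddress.onion/index.html"

def fetch_hidden_service_page(url: str) -> str:
    """Fetch one hidden-service page through Tor for downstream analysis."""
    response = requests.get(url, proxies=TOR_PROXY, timeout=60)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    html = fetch_hidden_service_page(FORUM_URL)
    # Downstream steps (not shown): parse the page, extract posts and
    # timestamps, and feed them into entity-resolution and network-mapping tools.
    print(f"Collected {len(html)} bytes for analysis")
```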
However, these capabilities require expertise, tools, and sustained investment. They demand investigators who understand both technology and human behaviour, who can interpret patterns rather than chase isolated incidents. This is where national institutions face a defining test.
For agencies such as the National Cyber Crime Investigation Agency, the challenge is not simply to keep up, but to transform. In the age of criminal artificial intelligence, cybercrime units cannot operate at the margins of law enforcement. They must sit at the core of national security strategy. The NCCIA’s role is no longer limited to responding after damage occurs; it must anticipate, detect, and disrupt threats as they form.
This requires more than new software. It requires internationally benchmarked expertise in dark web intelligence, AI-driven threat detection, cryptocurrency tracing, and digital forensics. It requires the ability to deploy defensive AI systems that can counter malicious automation at machine speed. And it requires seamless cooperation with counterterrorism, narcotics control, financial intelligence, and international partners, because digital crime does not recognise borders.
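To give a flavour of what cryptocurrency tracing involves, the sketch below performs a single hop of transaction-graph expansion for a Bitcoin address, listing the addresses that received funds in its recent transactions. It is a minimal illustration, assuming the public Blockstream Esplora API; production tracing platforms walk the full graph, cluster related addresses heuristically, and attribute them to exchanges and services.

```python
# One-hop Bitcoin transaction-graph expansion.
# Assumption: the public Blockstream Esplora API at blockstream.info.
import requests

API = "https://blockstream.info/api"

def next_hop_addresses(address: str) -> set[str]:
    """Return addresses that received outputs in recent transactions
    involving `address` (one hop of the transaction graph)."""
    txs = requests.get(f"{API}/address/{address}/txs", timeout=30).json()
    receivers: set[str] = set()
    for tx in txs:
        for output in tx.get("vout", []):
            addr = output.get("scriptpubkey_address")
            if addr and addr != address:
                receivers.add(addr)
    return receivers

# Usage (with a hypothetical address): applying next_hop_addresses
# repeatedly to each new address walks the flow of funds outward, hop by hop.
```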
Equally important is the role of public resilience. An informed society is harder to exploit. When citizens understand how scams operate, how deepfakes deceive, and how malicious software spreads, they become the first line of defence. Public awareness is no longer a soft policy objective; it is a matter of collective security.
It is important to state clearly that artificial intelligence itself is not the enemy. AI saves lives, improves governance, and expands human potential. The danger lies in allowing powerful technologies to drift into ungoverned spaces where accountability dissolves. History teaches us that every major innovation is misused before it is regulated. What makes this moment different is the speed at which harm can now be scaled.
We are no longer afforded the luxury of gradual adjustment. Decisions delayed today will define vulnerabilities tomorrow.
Societies now stand at a crossroads. We can continue to treat AI-enabled crime as a technical inconvenience, responding after the damage is done. Or we can recognise it for what it truly is: a structural shift in how harm is created, concealed, and multiplied.
Crime no longer needs to shout. It whispers through algorithms, learns from data, and adapts faster than our institutions.
The future of security will belong not to those who react the loudest, but to those who understand the quiet transformation already underway — and choose to confront it with clarity, coordination, and courage.
The question is no longer whether AI will shape crime.
It already has.
The real question is whether we are prepared to shape it in return.
The author is a security analyst. His LinkedIn handle is “Manzar Zaidi, Ph.D.”
