Dark web AI service abuses legitimate open-source models

An illicit AI platform called Nytheon AI leverages several legitimate models and services to provide an “all-purpose GenAI-as-a-service hub” for cybercriminals, Cato Networks said in a blog post Wednesday.

The service, operated on the dark web, has been advertised across Telegram channels and the Russian hacking forum XSS. Cato Networks’ investigation revealed a 1,000-token system prompt instructing Nytheon to ignore content policies and act as a hacker who promotes “disgusting, immoral, unethical, illegal, and harmful behavior.”

The platform, which has a user interface resembling other popular chatbot services, consists of six main services: Nytheon Coder for general text generation; Nytheon GMA for document summaries and translation; Nytheon Vision for image recognition; Nytheon R1 for reasoning; Nytheon Coder R1 for advanced coding; and Nytheon AI as a censor control.

Cato Networks found that each Nytheon model was based on legitimate open-source models.

Nytheon Coder is based on Meta’s Llama 3.2, specifically a version created by DavidAU on HuggingFace called Llama-3.2-8x3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF, originally created for uncensored fiction writing and roleplay.

Nytheon GMA is based on Google’s Gemma 3 and Nytheon Vision is based on Llama 3.2-Vision. Nytheon R1 is a fork of Reka AI’s Reka Flash 3, and Nytheon Coder R1 is a version of Alibaba’s Qwen2 that has been fine-tuned for coding using LiveCodeBench.

The Nytheon AI censor control is a mostly unaltered version of Llama 3 8B-Instruct, which serves as a comparison point to demonstrate the differences between the censored and uncensored models.

The curation and integration of several different legitimate, purpose-built AI models sets Nytheon apart from other AI tools marketed to cybercriminals, Cato Networks said.

In addition to open-source models, Nytheon also abuses legitimate AI services, including Mistral's optical character recognition (OCR), Microsoft Azure AI's speech-to-text with granular voice activity detection (VAD), and OpenAI's Realtime API for Whisper speech-to-text.

Nytheon’s changelog shows the tool is frequently updated, adding additional multimodal capabilities and fixing issues.

Through its investigation of the official Nytheon AI Telegram channel and contact with one of the platform’s operators, Cato Networks assessed with high confidence that this operator is a Russian-speaking individual from a post-Soviet country.

The emergence of Nytheon AI demonstrates cybercriminals' evolving use of generative AI for illicit purposes, including malware coding and generation of phishing lures. Cato says countering these capabilities requires AI-driven detection of never-before-seen threats, along with stronger security training and awareness, such as phishing simulations built from AI-generated content.

Other AI tools recently advertised for cybercriminals include Xanthorox AI, which also offers text, coding, vision and reasoning capabilities and claims not to rely on other external models and services, and GhostGPT, which is suspected to leverage a jailbroken version of ChatGPT or an open-source model.

Cybercriminals are also known to use legitimate AI tools directly, with OpenAI banning several ChatGPT accounts linked to state-sponsored threat operations earlier this month.
