Exposed AI Keys Fuel Cybercrime: Urgent Call for Enhanced Secret Management and Monitoring
February 12, 2026
Attackers monetize exposed AI keys by running high-volume inferences, generating scam content, supporting malware, bypassing quotas, and draining billing accounts.
GitHub acts as a high-fidelity discovery surface for secrets: keys persist in commit histories, forks, and archived repos, and attackers can find and begin abusing a leaked key within minutes to hours of exposure.
Cyble warns that AI API keys are production secrets and must be protected with the same rigor as other privileged credentials.
Many keys are hardcoded in source code or configuration files, or embedded in front-end assets, making them accessible to anyone who inspects the code or network traffic.
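The standard alternative to hardcoding is to inject the secret at runtime. A minimal Python sketch, assuming the conventional OPENAI_API_KEY environment variable name:

```python
import os

def load_api_key(env_var="OPENAI_API_KEY"):
    """Read the secret from the process environment at runtime.

    Nothing key-shaped appears in source, config files, or front-end
    assets; deployment tooling injects the value instead.
    """
    key = os.environ.get(env_var)
    if not key:
        # Fail fast rather than running with a missing secret.
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return key
```

In a real deployment the value would come from a secret manager rather than a developer's shell, but the property is the same: the key exists only in the server process, not in any artifact an attacker can browse.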
Most organizations lack monitoring for AI credential misuse, creating financial, operational, and reputational risks.
Cyble Research and Intelligence Labs (CRIL) found widespread exposure of ChatGPT API keys across more than 5,000 public GitHub repositories and about 3,000 live sites.
Recommendations include removing secrets from client-side code, enforcing GitHub hygiene with secret scanning and pre-commit checks, practicing least privilege and rotation, adopting secure secret management, and monitoring AI usage with anomaly detection akin to cloud monitoring.
Prefixes such as sk-proj- and sk-svcacct- signal project- or service-account scoped keys that grant privileged access to AI services and billing resources.
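Those predictable prefixes are what make leaked keys easy to spot mechanically, both for attackers and for pre-commit or secret-scanning checks. A hedged sketch of such a check; the token length and alphabet here are assumptions for illustration, not OpenAI's documented key format:

```python
import re

# Assumed shape: "sk-" optionally followed by "proj-" or "svcacct-",
# then a long token. Tune the length and alphabet to the real format.
KEY_PATTERN = re.compile(r"\bsk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}")

def scan_for_keys(text):
    """Return substrings that look like exposed AI API keys."""
    return KEY_PATTERN.findall(text)
```

Wired into a pre-commit hook, a scan like this blocks the commit before a key ever reaches a public repository, rather than relying on rotation after the fact.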
Public-facing sites exposing keys enable real-time abuse—high-volume inferences, phishing content, and automation—often without backend mediation.
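Backend mediation means the browser submits only the prompt and a server attaches the key after a policy check, so the credential never reaches the client. A minimal sketch of that pattern; the endpoint URL and request shape are illustrative assumptions:

```python
def build_mediated_request(user_prompt, api_key, quota_ok=True):
    """Assemble the upstream AI API call on the server side.

    The client never sees api_key; it is attached here, after a
    server-side quota/abuse check has passed.
    """
    if not quota_ok:
        # Refuse before any tokens are spent or billed.
        raise PermissionError("quota exceeded; request not forwarded")
    return {
        "url": "https://api.openai.com/v1/chat/completions",  # illustrative endpoint
        "headers": {"Authorization": f"Bearer {api_key}"},
        "body": {"messages": [{"role": "user", "content": user_prompt}]},
    }
```

Because every request passes through this chokepoint, the quota check doubles as the anomaly-detection hook the report recommends: high-volume or off-pattern usage is visible and stoppable before the bill arrives.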
Summary based on 1 source
Source

Cyble • Feb 12, 2026
The Rising Risk of Exposed ChatGPT API Keys