A lot of teams called this the "Axiom" attack in Slack threads, but the incident was the Axios npm compromise.
The short version is simple. Two Axios releases were poisoned on March 31, 2026: `1.14.1` and `0.30.4`. Both added a malicious dependency (`plain-crypto-js@4.2.1`) that executed during install and pulled a second-stage payload.
Because Axios is deeply transitive across JavaScript ecosystems, this was a meaningful supply-chain event even though the exposure window was short.
## Timeline that matters
| Time | Event |
|---|---|
| March 31, 2026 00:21:58 UTC | axios@1.14.1 published (npm registry timestamp) |
| March 31, 2026 01:00:57 UTC | axios@0.30.4 published (npm registry timestamp) |
| March 31, 2026 | Community reports compromise; issue #10604 opened in Axios repo |
| About 3 hours after the publication window | Malicious versions removed and tags reverted |
| April 1, 2026 | Microsoft publishes technical mitigation guidance |
The npm registry still preserves the publish timestamps in the `time` object of a package's metadata, even after compromised versions are removed.
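Those timestamps are easy to inspect. A minimal sketch using `jq`, run here against an illustrative local copy of the `time` object rather than the live registry (in practice you would fetch the real document with `curl -s https://registry.npmjs.org/axios`); the `1.14.0` timestamp below is a placeholder, while the other two match the incident:

```shell
# Illustrative, trimmed copy of a registry "time" object. The 1.14.1 and
# 0.30.4 timestamps are the incident values; the 1.14.0 one is a placeholder.
meta=$(mktemp)
cat > "$meta" <<'EOF'
{
  "time": {
    "1.14.0": "2026-01-01T00:00:00.000Z",
    "1.14.1": "2026-03-31T00:21:58.000Z",
    "0.30.4": "2026-03-31T01:00:57.000Z"
  }
}
EOF
# List versions in publish order (ISO timestamps sort lexicographically).
jq -r '.time | to_entries | sort_by(.value)[] | "\(.value)  \(.key)"' "$meta"
```

Sorting by the timestamp value rather than the version key matters here: the poisoned `0.30.x` release was published after the poisoned `1.14.x` release.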
## What made this attack effective
The Axios source code itself was not the main payload; the attacker performed a manifest-level dependency insertion. The malicious versions added `plain-crypto-js@^4.2.1`, which ran a postinstall script that fetched and executed platform-specific second-stage malware.
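For context, an install-time hook is just a `scripts` entry in `package.json`. A harmless illustration of the shape (the package name and the `setup.js` file name are invented here, not taken from the actual malware):

```shell
# Illustrative only: a package.json whose postinstall hook runs arbitrary
# code the moment the package is installed. This shows the mechanism, not
# the actual malicious manifest.
pkg=$(mktemp -d)
cat > "$pkg/package.json" <<'EOF'
{
  "name": "demo-hook",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
EOF
cat "$pkg/package.json"
```

Any `npm install` that resolves such a package executes the hook, regardless of whether the application ever imports it.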
This is exactly why dependency install hooks are dangerous in large ecosystems:
- compromise can happen before app code runs
- CI jobs can become infection paths
- runtime behavior may still look normal after install
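One blunt mitigation for the risks above is to stop lifecycle scripts from running at all. A sketch of the project-level npm setting, written here into an isolated demo directory (some packages legitimately rely on install scripts, so test before adopting this broadly):

```shell
# Create an isolated demo project and disable npm lifecycle scripts for it.
proj=$(mktemp -d)
echo "ignore-scripts=true" > "$proj/.npmrc"
# With this .npmrc in place, `npm install` / `npm ci` run inside $proj will
# skip preinstall/install/postinstall hooks. The per-invocation equivalent:
#   npm ci --ignore-scripts
cat "$proj/.npmrc"
```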
## Versions to treat as compromised

- `axios@1.14.1`
- `axios@0.30.4`
Safe downgrade targets called out by Microsoft were:

- `axios@1.14.0`
- `axios@0.30.3`
As of the post-incident registry state, `latest` reverted to `1.14.0`, and the compromised versions are no longer present in the public `versions` map.
## Immediate response checklist
If your org might have installed Axios during the compromise window, this is the minimum response:
- identify all hosts and CI jobs that resolved `axios@1.14.1` or `0.30.4`
- rotate secrets that were present on those environments
- remove and reinstall dependencies from a clean state
- pin to known-safe versions and avoid floating ranges temporarily
- hunt for IOCs tied to the known C2 path and second-stage artifacts
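The first checklist item can be partly automated by inspecting lockfiles. A sketch against a synthetic `package-lock.json` in npm's lockfileVersion 3 layout; in a real hunt you would run the same `jq` query over every checked-out repository and CI workspace:

```shell
repo=$(mktemp -d)
# Synthetic lockfile fragment (lockfileVersion 3 layout) for demonstration.
cat > "$repo/package-lock.json" <<'EOF'
{
  "lockfileVersion": 3,
  "packages": {
    "node_modules/axios": { "version": "1.14.1" }
  }
}
EOF
# Exit status 0 means this lockfile resolved a compromised axios version.
jq -e '.packages["node_modules/axios"].version as $v
       | ["1.14.1", "0.30.4"] | index($v) != null' "$repo/package-lock.json" \
  && echo "COMPROMISED: $repo"
```

Checking the lockfile rather than `package.json` is the point: a floating range like `^1.14.0` looks innocent in the manifest while the lockfile records what was actually installed.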
Treat this as possible endpoint and pipeline compromise, not just a package rollback exercise.
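To enforce the pinning step from the checklist across transitive dependencies, npm's `overrides` field (npm 8.3+) can force every resolution of axios to a known-safe version. An illustrative fragment (the package name `example-app` is invented):

```shell
proj=$(mktemp -d)
cat > "$proj/package.json" <<'EOF'
{
  "name": "example-app",
  "version": "1.0.0",
  "overrides": {
    "axios": "1.14.0"
  }
}
EOF
# After `npm install` regenerates the lockfile, every direct and transitive
# axios dependency in this project resolves to 1.14.0.
jq -r '.overrides.axios' "$proj/package.json"
```

Treat the override as a temporary clamp: remove or relax it once upstream publishes a verified clean release.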
## Strategic lessons
This incident reinforces a few patterns we keep seeing:
- maintainer account takeover remains a critical control point
- trusted publishing and provenance checks materially reduce risk
- dependency cooldown policies help absorb short-lived poison releases
- lockfile discipline is now a security boundary, not just reproducibility hygiene
If a package with this level of ecosystem reach can be poisoned for a few hours, your default assumption should be that short windows still matter.
## Final note
This was not a theoretical supply-chain scenario. It was a real install-time compromise path through one of the most downloaded JavaScript dependencies.
The practical takeaway is clear: harden publishing trust, reduce blind auto-update behavior, and make credential rotation and lockfile response fast enough to run the same day.