On January 20th, 2025, Modrinth was alerted that three users’ servers had been compromised, seemingly by the same person. Initially, it appeared their accounts had been hijacked; however, closer inspection revealed a deeper compromise of our infrastructure. In the following days, more holes were punched through Pyro’s systems, with the attacker mass-renaming backups and gaining access to our production servers and private GitHub repositories. We’ve hardened our security systems in response, and as of January 25th, 2025, we have no evidence that the attacker retains access to any part of our infrastructure.
The attacker had access to server IDs, IPs, ports, server metadata (server names, backup names, modloader information), server data, and our private code repositories. The attacker did NOT have access to billing information, Modrinth accounts, or Modrinth’s infrastructure.
With the goal of providing transparency after this incident, we’ll explain how our systems operate, detail the course of events, and outline the steps we have taken to mitigate the damage and reinforce our security.
How Our Infrastructure Operates
Pyro runs all servers on NixOS, a system designed for reproducible environments. Every server uses nearly identical configurations to simplify updates and maintain consistency. These configurations are stored in a private repository we call supercluster.
Historically, to manage deployments, we relied on a special user, “robot.” This user had privileged (root-level) access to our management server, allowing us to push new configurations rapidly across all machines. While useful for quick rollouts, this design meant anyone possessing the robot user’s credentials could gain full access to our infrastructure.
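As a rough illustration of how this works (this is not our actual supercluster code; the module paths and values here are simplified assumptions), a shared NixOS module imported by every server is what keeps the fleet on nearly identical configurations:

```nix
# Hypothetical sketch of a shared module imported by every server.
# Not taken from supercluster; file names and values are illustrative only.
{ config, pkgs, ... }:
{
  imports = [ ./common/base.nix ];   # hypothetical shared baseline config

  services.openssh.enable = true;    # management access over SSH
  networking.firewall.enable = true;

  # Per-node differences shrink to a handful of values such as the
  # hostname, which is what makes a fleet-wide update a single change
  # to the shared configuration.
  networking.hostName = "game-node-01";
}
```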
Early Clues of a Breach
The first sign of trouble appeared on January 20th, when a user reported that someone had taken over their server. Despite helping them secure their account by resetting passwords and enabling two-factor authentication, we noticed repeated break-ins. Even more concerning, the control panel showed the server’s owner had changed to an unknown Modrinth account, locking the rightful user out.
We initially believed this to be a one-off incident, perhaps caused by a compromised user password, but the facts soon suggested otherwise. By January 21st, investigation revealed no API route or typical user-level exploit that could account for such persistent ownership changes. Our suspicion of a larger infrastructure breach prompted a broad internal review of logs, endpoints, and code.
Initial Mitigations
Our database was previously self-managed on the same server that runs our backend API. Concerned the attacker had gained database access, we decided to migrate it to Neon DB in an attempt to shield our critical data. The assumption was that moving the database offsite, along with rotating and revoking key credentials, would isolate, or at least contain, any ongoing attack. We increased backup frequency, hardened database access through IP allowlisting, and ensured our Neon DB account was secured.
Nevertheless, more reports of hijacked servers came in, confirming that something far more serious was happening. By January 23rd, it had become clear we were dealing with a sophisticated intrusion rather than a simple user account compromise.
Further Evidence of a Large Scale Breach
On January 24th, all doubts were erased when we discovered that more than 3,000 server backups had been renamed to racist slurs within the span of a single minute. No minor exploit or user-end weakness could account for such an immediate and large-scale alteration. It was unmistakable: the attackers had access to our database.
At this point, we had already rotated our internal secrets several times, including the backend API’s master key. We locked down our APIs, specifically deletion endpoints, limiting the damage the attacker could deal, and we stopped pushing updated secrets to our repositories.
The Real Culprit: A Compromised SSH Key (and a Hidden Token)
After digging into our management server, we found clear evidence that the “robot” SSH key had been leaked.
We found a work-in-progress tool meant to automate NixOS deployments, an unfinished system not yet ready for production. To clone and update certain repositories, this tool stored a plain-text, unscoped GitHub Personal Access Token (PAT) locally, granting full read access to our private repositories, including supercluster, where many credentials and keys lived unencrypted. Every time we rotated database passwords or API keys, the attacker re-pulled supercluster with the stolen PAT to discover the latest values.
We concluded that the attacker used the leaked SSH key to gain access to our management server, found the insecure PAT, and used it to keep control over our infrastructure.
Containment and Cleanup
Realizing that the unscoped PAT was the real “open gate” into our infrastructure, we immediately revoked it, along with any other lingering personal access tokens. We severed the “robot” user’s SSH privileges, generated new keys, and removed the incomplete deployment tool from the management server altogether.
Once that pathway was closed, the attacker could no longer reacquire our secrets, effectively halting their infiltration. We then launched a thorough internal audit to ensure no other unexpected tokens or processes were left behind. We scrutinized each server’s logs, removed suspicious files, put new monitoring and alerting systems in place, and kept a close eye on SSH logs. Since these measures were enacted, there have been no further signs of unauthorized access.
On January 25th, the attacker reached out to us, demanding 1 ETH within the hour (later lowering the demand to 0.25 ETH and extending the deadline by multiple hours) in exchange for not leaking Pyro’s private codebases and our CEO’s private codebases, and for not doing further damage to our infrastructure. At that time, there was no evidence the attacker still had access to our infrastructure, despite their claims otherwise. We have not, and will not, pay the attacker.
Lessons Learned and Steps Forward
This incident has highlighted many points of failure in our security systems, such as:
- The “robot” SSH key granting root access to every node in our network.
- Improperly stored secrets, acquirable if any infrastructure team member had their GitHub account compromised.
- Poor internal diligence in handling critical secrets.
- The absence of a proper response plan for security incidents.
It also sparked a greater conversation about security at Pyro and Modrinth: we are working closely together to ensure such a breach never happens again and to detail a response plan for future security events, including improving our transparency and response effectiveness.
As of now, the following improvements are being implemented:
- Personal Security Overhaul: Every infrastructure and management team member reviewed their personal security and ensured they were protected against targeted attacks, including resetting their computers, rotating critical passwords, enabling 2FA everywhere, and securely storing their SSH keys. Future measures may include storing keys exclusively on hardware tokens, such as YubiKeys.
- Individualized Credentials: We’ve replaced the “robot” key with unique access for each trusted user, and we are working on solutions to further restrict and log SSH access.
- Vaulting Secrets: Keeping secrets in plaintext is unacceptable. We now store them only in encrypted form, ensure that only trusted people have access to them, and are working to further solidify secret access through IdP and SSO software. Furthermore, secrets are scoped exclusively to the nodes that require them, preventing a leak of management secrets on customer nodes.
- Hardened Monitoring: Our analysis during the incident revealed many areas where we lacked logging and monitoring; we should have noticed the malicious SSH logins and database modifications sooner. We’re evaluating options to integrate improved monitoring and runtime security tooling with our existing alerting systems, and implementing full auditing and centralized logging across all nodes.
- Cultural Changes at Pyro: This incident highlights our lack of focus on security and safety. We’re determined to conduct thorough security reviews within our company going forward, including:
  - Detailing a security emergency response plan alongside Modrinth, with steps to take in the case of an intrusion, such as revoking secrets and immediately notifying our customers.
  - Rotating secrets and ensuring former Pyro employees lose all infrastructure access, to avoid disastrous leaks.
  - Undergoing third-party security auditing the moment it is financially viable for us.
Ever since this incident, our primary focus has been on security. We’ll post a follow-up, including further steps we took to protect our customers, early next month.
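To make the individualized-credentials change concrete, here is a minimal sketch of the direction we’re moving in: per-engineer SSH keys declared in NixOS instead of one shared privileged key. The usernames and key material are hypothetical, and exact option paths may vary between NixOS versions:

```nix
{
  # Each trusted engineer gets their own account and key, so access can be
  # granted, revoked, and audited individually (hypothetical user).
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];   # sudo where needed, instead of root SSH
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAA... alice@laptop"
    ];
  };

  # The shared privileged login path is gone entirely.
  services.openssh.settings.PermitRootLogin = "no";
  services.openssh.settings.PasswordAuthentication = false;
}
```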
Supporting Our Community
We recognize that some users experienced a total loss of control over their servers, and others saw vile, hateful changes to their data. For this, we extend our deepest apologies. We have taken steps to restore every compromised server and remove any malicious alterations such as renamed backups, and we have fully refunded affected customers who requested it.
Modrinth and Pyro are extending everyone’s subscription by 2 weeks, free of charge. This means your next server bill will be delayed by 2 weeks. No action is required to receive this.
If you notice any irregularity with your server or simply have concerns about the security of your account, please reach out to our support team. We remain committed to transparency, and we promise to keep you informed if any further revelations come to light.
While this incident posed a serious challenge to our entire platform, it has reinforced our determination to maintain a resilient and secure environment for the Modrinth community. We are grateful for your patience and will continue to work diligently to protect our users, their servers, and their data.
— The Pyro Engineering Team