The cybersecurity world has been in a state of alarm since Anthropic’s April 7, 2026, announcement of Claude Mythos Preview. To many, that alarm does not look like hype. Anthropic says Mythos can uncover long-buried vulnerabilities in critical code, analyze software even when source code is unavailable, and connect flaws into attack chains at a speed no human team could match.
That is why Anthropic has not released the model publicly. Instead, it created Project Glasswing: a restricted defensive-security initiative that gives select technology companies, infrastructure providers, and open-source maintainers early access to Mythos so they can find and fix critical vulnerabilities before comparable capabilities reach attackers. As Anthropic describes it, Glasswing is not a product launch. It is an emergency hardening effort for the software the modern world depends on.
During our April 24, 2026, webinar, we were joined by Jordan Rae Kelly, Senior Managing Director at FTI Consulting and former Director for Cyber Incident Response at the White House National Security Council, to discuss Mythos and the practical steps companies should be considering now.
Anthropic’s April 7 blog post identifies a series of defensive measures organizations can take, and the Cloud Security Alliance has published an excellent white paper offering additional guidance. Drawing from those resources, our webinar discussion, and the public reporting to date, we summarize key preparedness steps below and examine the legal and regulatory implications of Mythos-class AI.
Governance
Senior Updates. Boards and C-Suites are asking the same questions everyone is asking: What is Mythos? Is it real? What are the risks? What do we need to do to prepare? Do we have the resources to do it? In informal and anonymous polling at two conferences of cyber in-house counsel last week, we learned that over 20% of Boards and C-Suites have been briefed on Mythos. This number is likely growing. Whether the briefing comes from the CISO, the CIO, cyber counsel, or an external speaker, we recommend addressing these questions and preparing to provide updates as the risk landscape evolves and your response matures.
Incident Response Plans. For companies that have not revisited their cyber incident response plans recently, now is a good time to dust them off to make sure that information and processes are current and that the outlines of the plan are top of mind for key parties under the plan.
Technical Steps
Fight AI with AI. You do not need Anthropic’s latest generation of AI to start incorporating AI into your cybersecurity workstreams. Even the current public releases, from Anthropic and other AI providers, can help identify vulnerabilities. Consider using these tools now so that you are positioned to incorporate more advanced AI as it is released.
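As one illustration of what "fighting AI with AI" can look like in practice, the sketch below assembles a code-review request for a general-purpose AI model. The helper name, default model name, and prompt wording are illustrative assumptions, not any particular vendor's API; adapt them to your provider's SDK.

```python
# Illustrative sketch: wrapping a code snippet in a vulnerability-review
# prompt for a general-purpose AI model. The helper name, model name, and
# prompt text are assumptions for illustration, not a vendor's actual API.

def build_vuln_review_request(source_code: str, model: str = "example-model") -> dict:
    """Return a chat-style request payload asking an AI model to flag
    likely vulnerabilities in the given source code."""
    prompt = (
        "Review the following code for security vulnerabilities "
        "(injection, memory safety, authentication bypass, hard-coded secrets). "
        "List each finding with a severity rating and the affected lines.\n\n"
        f"```\n{source_code}\n```"
    )
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# Example: a snippet with an obvious SQL-injection risk.
request = build_vuln_review_request(
    'query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"'
)
# The payload would then be sent through your provider's client library,
# e.g. response = client.chat(**request)  # hypothetical client
```

The point is less the plumbing than the workflow: routing code through an AI reviewer as a routine step, so that upgrading the underlying model later requires changing only the request, not the process.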
Change Your Patch Cycles. Up until recently, 30- / 60- / 90-day patch programs may have been sufficient. But the speed needed to patch vulnerabilities will change, and expectations around reasonable practices will change with it. Consider updating your patching framework.
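In practice, updating a patching framework often means replacing a uniform cycle with severity-based deadlines. The tiers and day counts below are hypothetical examples for illustration, not regulatory requirements; the right windows depend on your own risk assessment.

```python
from datetime import date, timedelta

# Hypothetical severity-based patch deadlines, in days. The specific
# windows are illustrative only; calibrate them to your risk assessment.
PATCH_SLA_DAYS = {
    "critical": 2,   # versus 30 days under a legacy 30/60/90 cycle
    "high": 7,
    "medium": 30,
    "low": 90,
}

def patch_deadline(severity: str, discovered: date) -> date:
    """Return the date by which a vulnerability of the given severity
    should be remediated under this sample policy."""
    return discovered + timedelta(days=PATCH_SLA_DAYS[severity])

# Example: a critical finding discovered on 2026-04-24 must be
# remediated by 2026-04-26 under this sample policy.
deadline = patch_deadline("critical", date(2026, 4, 24))
```

Encoding the policy as data rather than prose also makes it auditable: the deadlines actually applied can be compared against the written framework.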
Data Minimization. There is little credit given in an organization for data minimization. If you remove old data that nobody needs, very few people will realize it or applaud it. And if you accidentally remove data that somebody needs, you’ll be having a bad day. But we have learned from many data breaches that removing, or simply air-gapping, data that is no longer in use can save millions, or tens of millions, in damages and fines. It is worth the effort.
Incident Response. If you presume the threat actors can get in, then you need to ensure you can detect their actions and respond quickly. Re-examine your capabilities on both fronts.
Business Processes
Patch Priority. Patching a vulnerability introduces business disruption. That is why patches have often run on 30-day (or longer) cycles and why they may occur late at night or on a weekend. Because of that disruption, the decision on patching priority often rests with the affected business line. That may remain the case, but given the risks this capability introduces, consider who has authority to decide on patching and whether the information security team may need a few extra votes.
Vulnerability Disclosure Programs. Responsible disclosure programs and bug bounty programs were designed for an era in which humans found the bugs. AI now surfaces vulnerabilities at a pace those programs were never built to handle. Adapt your programs accordingly.
Audit and Risk Assessments. The April 7, 2026, blog post by Anthropic was a watershed moment in cybersecurity. Consider updating your 2026 audit and risk assessment plans to account for this new risk.
Vendor and Third Party Risk Management. You should put your oxygen mask on before helping others, but that does not mean you can wait long. Right after you triage your internal preparedness, start considering your vendors, partners and other third parties. Prepare to ask them how they are responding.
Legal and Regulatory Considerations
The regulations may not change as quickly as the AI is advancing, but regulatory expectations will change with this news.
Reasonable Security. Many companies are subject to the FTC’s authority and Section 5 of the FTC Act, which requires “reasonable security.” In Europe, the GDPR requires “appropriate technical and organisational measures” to secure personal data, and the EU AI Act imposes cybersecurity requirements on high-risk AI systems. In the financial services sector, SEC Regulation S-P § 248.30 requires covered institutions to adopt written policies and procedures reasonably designed to safeguard customer records and information. All of these obligations may need to be reassessed in light of vulnerabilities that Mythos-class tools are reported to be capable of uncovering. As companies respond and adapt to this new AI-charged vulnerability landscape, the definition of “reasonable security” will evolve. Legal and compliance teams should stay connected to their CISOs and information-sharing councils to understand how companies are adapting and what reasonable security looks like. Will it remain commercially reasonable to examine code without using AI?
Data Protection Obligations. Where Mythos-class tools reveal vulnerabilities in systems processing personal data, companies subject to the GDPR should consider whether they are required to conduct or update Data Protection Impact Assessments under Article 35. Depending on the severity of the vulnerability, notification to supervisory authorities may also be warranted. Companies subject to U.S. state privacy laws imposing reasonable security obligations should similarly reassess their compliance posture.
Risk Assessment. If Mythos is a game-changing moment in cybersecurity, is it a sufficient shift in each company’s cybersecurity posture such that under NYDFS Part 500 a new risk assessment is required?
Vulnerability Scanning. Many regulated entities, especially in the financial services sector, have an understanding with their regulator regarding the need to scan for vulnerabilities and patch. NYDFS Part 500 includes a requirement for automated scans and remediation in line with the company’s risk assessment. Consider whether AI must be a part of your vulnerability scanning program to meet regulatory expectations.
Critical Infrastructure. Companies operating in critical infrastructure sectors should evaluate whether vulnerabilities uncovered by Mythos-class tools implicate reporting obligations under the forthcoming Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) implementation or related sector-specific directives, particularly where such vulnerabilities affect operational technology or industrial control systems.
Third Parties. Regulations like NYDFS Part 500 that require companies to oversee their third parties also impose obligations to ensure those third parties are adapting to these changed circumstances. Consider accelerating the timelines within which third parties must notify you of security events, and consider including zero-day vulnerabilities among the events that trigger notification. Consider the changes needed to vendor contracts.
Hack Back. As your teams start using AI tools to scan for vulnerabilities, ensure they are not inadvertently scanning another party’s networks or, worse, attempting to exploit another network.
Sector-Specific Frameworks. Beyond NYDFS Part 500, companies should evaluate their obligations under other sector-specific regulatory regimes, including HIPAA for entities handling protected health information, PCI-DSS for organizations processing payment card data, and GLBA for financial institutions more broadly. Each of these frameworks imposes independent security requirements that may need to be reassessed in light of the vulnerabilities that Mythos-class tools reportedly can identify.
Litigation and Enforcement Risk. The discovery of vulnerabilities through AI-powered scanning tools may create heightened legal exposure. Once a company has knowledge of a vulnerability and fails to remediate it in a timely manner, it faces increased risk in both private litigation and regulatory enforcement actions. Companies should document their remediation prioritization decisions and the rationale supporting them to establish a defensible record.
Public Disclosures. As AI-related cybersecurity risks come to the fore, companies should consider whether the vulnerabilities they find, and the risk that known and unknown vulnerabilities could be exploited by threat actors, could be material. In particular, public companies should consider whether changes to cybersecurity and risk disclosures are appropriate, including to ensure that risks which have been realized are not characterized as hypothetical in public disclosures. At the same time, the SEC has recognized that it is not necessary to disclose technical information about vulnerabilities, and companies therefore needn’t make overly detailed disclosures about specific technical risks.
***
To subscribe to the Data Blog, please click here.
The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them responsibly fast-track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.
The cover art for this blog post was generated by Nano Banana Pro 2.