Where AI Meets Elections: What Officials Need to Demand

By Spencer Wood

Recently, Anthropic (the maker of Claude) announced that its latest AI model found thousands of previously unknown security flaws in every major operating system and web browser. Some of these bugs had been sitting there for decades. One had gone undetected for 16 years. Another had been hiding for 27 years in software that’s widely considered one of the most secure in the industry.

The headlines focused on AI. That’s the wrong takeaway.

These were not new vulnerabilities created by AI. They were old vulnerabilities that nobody caught. AI just made finding them faster, cheaper, and more accessible. For election officials and the vendors who build election technology, that distinction matters a lot.

The bottom line for election officials: AI is not a new category of threat. It is an amplifier. The problems that surfaced were already there. And the practices that protect against them (vendor oversight, layered security, and foundational hygiene) were already the right answer before this news broke.

The election security community is working through what Mythos, Anthropic’s new model, means in real time. A companion piece from the AI & Elections Clinic at Arizona State University’s Mechanics of Democracy Laboratory focuses on the operational posture of election offices: multi-factor authentication, phishing awareness, audits, and contingency planning. That guidance is sound and worth reading alongside this piece. What follows here is the other half of the equation: the software that election offices buy, and the vendors who build it.

What happened

Anthropic built a model called Claude Mythos Preview. It can scan large codebases and find serious security flaws without human guidance. It does not just find bugs; it writes working exploits for them. Anthropic considered this capability serious enough that they chose not to release the model to the public. Instead, they provided access to about 50 organizations (including AWS, Apple, Microsoft, Google, and the Linux Foundation) under a program called Project Glasswing, along with $100 million in usage credits for defensive security work.

But here is the part that should get your attention: this isn’t just about one model or one company. What Mythos demonstrates is part of a broader pattern. AI did not create new categories of security threats. It reduced the cost, skill, and time required to execute them. Finding a vulnerability that took a skilled researcher weeks of manual effort can now be done in minutes by a model that costs almost nothing to run. That’s not a theoretical concern. Independent security researchers have already shown that smaller, widely available AI models can detect many of the same types of flaws that headlined Anthropic’s announcement.

The same pattern applies across the threat landscape for elections. Phishing emails and other forms of social engineering used to be generic. Now they can be written to target your specific jurisdiction, reference real local officials by name, and mimic your vendor’s actual communication style. Voice cloning and synthetic images have lowered the cost and skill required for impersonation. Automated reconnaissance means threats no longer stay in their lanes: an actor probing your network can simultaneously identify staff, vendors, and community partners as additional targets, or find personal information about election workers to target them or their family members. Small jurisdictions can be targeted as easily as major cities. And because AI operates with speed and scale, the window to detect and respond before damage is done is compressed. AI did not invent these threats. It industrialized them, making them faster, cheaper, and more personal.

To be clear, there is good news here. The fundamentals of election security remain strong: 98% of jurisdictions use paper ballots, and audits, chain-of-custody controls, and bipartisan procedures still make large-scale vote manipulation extremely difficult. Most successful intrusions still begin with social engineering, not advanced system compromise. The good news is also this: the practices that protect against AI-enabled threats are the same foundational practices election officials already know. Strong vendor contracts, staff training, multi-factor authentication, and regular patching go a long way. AI raises the urgency of doing those things consistently, but it does not require an entirely new playbook.

What this means for election officials

Election technology sits in an unusual position. Much of it is custom-built or purpose-built by a small number of vendors. The systems that manage voter registration, electronic pollbooks, ballot tracking, and election night reporting are critical infrastructure, but they often don’t receive the same level of security scrutiny as the operating systems and browsers that Mythos targeted.

If a general-purpose AI model can find bugs that went unnoticed for decades in software reviewed by thousands of expert developers, what might it find in election technology that hasn’t been through rigorous, independent testing?

This is a practical question every election official should be asking their vendors. And it extends well beyond elections: any state or local government agency that relies on third-party software faces the same reality.

Know what’s in your software

One of the most practical tools available right now is the Software Bill of Materials (SBOM). Think of it as an ingredient list for software. It catalogs every component, library, and dependency that makes up a product, in a machine-readable format.

This matters because modern software isn’t written from scratch. It’s assembled from hundreds or thousands of open source and third-party components, and each one can carry its own vulnerabilities. When a major flaw was discovered in 2021 in Log4j, a common software component used by millions of systems worldwide, organizations without SBOMs spent days or weeks just trying to figure out whether their systems were affected. Organizations that maintained SBOMs could answer that question in minutes.

Now scale that up to AI speed. When a tool like Mythos can surface thousands of vulnerabilities in a short window, the only way to know whether your systems are exposed is to know what’s in them. An SBOM makes that possible. Without one, you’re guessing.
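To make the idea concrete, here is a minimal sketch, in Python, of the question a machine-readable SBOM lets you answer in minutes: "do we ship the affected component and version?" The SBOM fragment follows the general shape of the CycloneDX JSON format, and the component list and advisory are hypothetical examples, not real data.

```python
# Sketch: answering "are we affected?" from a machine-readable SBOM.
# The SBOM below mimics the CycloneDX JSON shape; all component names,
# versions, and the advisory itself are hypothetical examples.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "openssl", "version": "3.0.13"},
    ],
}

# A hypothetical advisory: a component name plus its affected versions.
advisory = {"name": "log4j-core", "affected_versions": {"2.14.0", "2.14.1"}}

def affected_components(sbom: dict, advisory: dict) -> list:
    """Return 'name version' strings for components hit by the advisory."""
    return [
        f"{c['name']} {c['version']}"
        for c in sbom["components"]
        if c["name"] == advisory["name"]
        and c["version"] in advisory["affected_versions"]
    ]

print(affected_components(sbom, advisory))  # → ['log4j-core 2.14.1']
```

With an SBOM on file, the check is a lookup. Without one, the same question means auditing every system by hand.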

RABET-V, the national verification program for non-voting election technology, already incorporates continuous SBOM monitoring for verified products. Vendors in the program are not just tested once and forgotten. Their software components are tracked over time, so when a new vulnerability shows up in a dependency, it gets flagged and can be addressed.

The bottom line: if your vendor can’t tell you what’s in their software, that’s a red flag.

What election officials can do now

You don’t need to start from scratch: the frameworks and principles to address this already exist. The challenge is adopting them and applying them consistently. Here are some concrete steps.

The operational hygiene practices that protect against AI-enabled threats (patching, MFA, phishing training, and regular audits) are covered by both the Election Security Exchange and the AI & Elections Clinic’s recent work. The steps below focus on the procurement and vendor-oversight layer, which is where Mythos has the clearest implications for election technology.

Require independent verification of election technology. When you buy software, you are also buying the vendor’s security practices. RABET-V, developed by the Center for Internet Security and administered by The Turnout, is currently the only national program for verifying non-voting election technology. It evaluates three things: the vendor’s development process, the product’s architecture, and the product itself. It then scales future testing based on the risk of changes. Critically, RABET-V assesses organizational maturity, not just whether a product passes a test on a given day. That’s a meaningful difference, because it can distinguish between a vendor with a strong security culture and one that simply managed to pass a single review.

Build Secure by Design principles into procurement. Don’t wait for a vendor to tell you they take security seriously. Put it in the contract. CISA’s Secure by Design initiative established a clear set of principles that shift the security burden from the buyer to the vendor: no default passwords, built-in multifactor authentication, transparent vulnerability disclosure, and regular security patching. These principles represent sound software development practice, and they belong in procurement contracts. If your vendor isn’t building security in from the start, you’re inheriting their risk.

Require SBOMs and keep them current. If a vulnerability is discovered in software your vendor uses, you need to be able to quickly determine whether your systems are affected. Ask your vendors to provide machine-readable SBOMs in a standard format and to update them with every release. An SBOM that gets generated once and filed away is just a checkbox. A living SBOM, monitored against known vulnerability databases, is an early warning system.
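As a rough illustration of the difference between a filed-away SBOM and a living one, the sketch below (Python, with entirely hypothetical component names and advisories) rechecks the current SBOM whenever the advisory feed updates and reports only what newly became risky since the last check:

```python
# Sketch: a "living SBOM" check. Each time the advisory feed updates,
# re-scan the current SBOM and report newly flagged components.
# All component names, versions, and advisories are hypothetical.

def scan(components: list, advisories: list) -> set:
    """Return the set of component names matched by any advisory."""
    flagged = set()
    for adv in advisories:
        for comp in components:
            if comp["name"] == adv["name"] and comp["version"] in adv["affected"]:
                flagged.add(comp["name"])
    return flagged

components = [
    {"name": "epollbook-sync", "version": "1.8.2"},  # hypothetical vendor library
    {"name": "sqlite", "version": "3.42.0"},
]

# Yesterday's feed had no matches; today's adds an advisory for sqlite 3.42.0.
feed_yesterday = []
feed_today = [{"name": "sqlite", "affected": {"3.41.0", "3.42.0"}}]

newly_flagged = scan(components, feed_today) - scan(components, feed_yesterday)
print(newly_flagged)  # → {'sqlite'}
```

The point of the sketch is the loop, not the code: monitoring is a recurring diff between what you run and what is known to be vulnerable, which is exactly what a one-time SBOM filed in a drawer cannot give you.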

Contracts are your first line of defense

None of this works without strong contract controls, and contract controls don’t work without monitoring and enforcement.

When AI can find exploitable bugs in minutes that humans missed for decades, your contract with your technology vendor may be your most important security tool. Procurement language should spell out clear expectations for secure development practices, vulnerability disclosure timelines, SBOM delivery and maintenance, and participation in verification programs like RABET-V.

But writing good contract language is only half the job. Election officials and local government leaders need to treat these requirements the way they would any other compliance obligation: with regular check-ins, documented evidence of vendor performance, and real consequences for non-compliance. A requirement that nobody monitors is the same as no requirement at all.

Here’s a practical starting point:

  1. Include organizational maturity assessments (such as RABET-V verification) in procurement criteria as a scored evaluation factor, not an optional extra.
  2. Require vendors to deliver and maintain current SBOMs for all deployed products.
  3. Require documented vulnerability disclosure and patching timelines and hold vendors to them.
  4. Build contract review checkpoints (quarterly or semi-annually) where vendors demonstrate ongoing compliance.
  5. Make contract renewal contingent on continued adherence, not just initial certification.

The real takeaway

Project Glasswing is not an AI story. It’s a software quality story. AI is accelerating the discovery of problems that were always there. The election offices and government agencies that will be best positioned are the ones that already demand rigorous, transparent, and continuously tested software from their vendors.

The vulnerabilities Mythos found were not created last week. They were created when the software was written, years or decades ago, by developers who lacked the tools, incentives, or oversight to catch them.

AI did not invent the threat. It industrialized it. The solution remains the same disciplined foundational practices that have always mattered: strong contract controls and vendor accountability, layered security, staff training, and clear public communication. These are not new ideas. They are ideas that work, and AI makes them all the more necessary. The tools and frameworks exist. What’s needed now is the will to put them in contracts, the resources to support them, and the discipline to enforce them.

Spencer Wood is a nationally recognized expert in cybersecurity and election security. He currently serves as an Election Security Consultant with the Election Security Exchange (SecuringElections.org) and advises additional organizations on election security and resilience. Previously, he served as an Election Security Advisor and Cybersecurity Advisor with the U.S. Cybersecurity and Infrastructure Security Agency, and as Chief Information Officer for the Ohio Secretary of State. Today, he consults nationwide as a subject matter expert on cybersecurity, election administration security, physical security, artificial intelligence, and emerging technologies.