Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.) have reintroduced the bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act.

The bill tasks the National Institute of Standards and Technology (NIST) with working alongside federal agencies, industry, academia, and civil society to create voluntary guidelines for evaluating AI systems. These guidelines would help independent third-party evaluators verify how AI systems are developed and tested.

Capito said the bill would give AI developers a voluntary framework to follow, helping ensure safe and responsible innovation. “I look forward to getting this bill passed out of the Commerce Committee soon,” she said.

Addressing Gaps in Oversight
Currently, AI companies self-report on their training processes, safety testing, and risk management without independent checks. The VET AI Act would establish a process for neutral third-party evaluators, similar to auditors in the financial sector, to confirm whether companies meet established safety and governance standards.

Key Provisions
The bill would:

  • Direct NIST, in coordination with the Department of Energy and the National Science Foundation, to create voluntary specifications for AI assurance, including internal checks and third-party verification.
  • Require these specifications to address data privacy, harm prevention, dataset quality, and governance practices throughout an AI system’s lifecycle.
  • Create an Advisory Committee to recommend certification criteria for individuals or organizations conducting AI assurance.
  • Mandate a NIST study on the current state of AI assurance, including capabilities, resources, and market needs.

Lawmakers say the framework will become increasingly important as Congress works to establish broader AI safety regulations.