Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.) have reintroduced the bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act.
The bill tasks the National Institute of Standards and Technology (NIST) with working alongside federal agencies, industry, academia, and civil society to create voluntary guidelines for evaluating AI systems. These guidelines would help independent third-party evaluators verify how AI systems are developed and tested.
Capito said the bill would give AI developers a voluntary framework to follow, helping ensure safe and responsible innovation. “I look forward to getting this bill passed out of the Commerce Committee soon,” she said.
Addressing Gaps in Oversight
Currently, AI companies self-report on their training processes, safety testing, and risk management without independent checks. The VET AI Act would establish a process for neutral evaluators, similar to independent auditors in the financial sector, to confirm whether companies meet established safety and governance standards.
Key Provisions
The bill would:
- Direct NIST to work with federal agencies, industry, academia, and civil society to develop voluntary guidelines and specifications for evaluating AI systems.
- Create a pathway for qualified, independent third-party evaluators to verify claims about how AI systems are developed, tested, and managed for risk.
- Give AI developers a voluntary framework for demonstrating that their systems meet established safety and governance standards.
Lawmakers say the voluntary framework will become increasingly important as Congress works to establish broader AI safety regulations.