CHARLESTON, W.Va. – U.S. Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.) recently reintroduced their bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act. The bill directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators, who would work with AI companies to provide robust, independent external assurance and verification of how their AI systems are developed and tested.
“The VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I was proud to join Senator Hickenlooper in reintroducing this legislation, and I look forward to getting this bill passed out of the Commerce Committee soon,” Senator Capito said.
BACKGROUND:
Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management without any independent verification. The VET AI Act would create a pathway for independent evaluators, serving a function similar to those in the financial industry and other sectors, to work with companies as neutral third parties and verify that their development, testing, and use of AI comply with established guardrails.
As Congress moves to establish AI guardrails, evidence-based benchmarks that independently validate AI companies’ claims about safety testing will only become more essential.
Specifically, the VET AI Act would:
A copy of the bill text can be found here.
# # #