CHARLESTON, W.Va. – U.S. Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.) recently reintroduced their bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act. The bill directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators, who would work with AI companies to provide robust, independent external assurance and verification of how their AI systems are developed and tested.

“The VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I was proud to join Senator Hickenlooper in reintroducing this legislation, and I look forward to getting this bill passed out of the Commerce Committee soon,” Senator Capito said.

BACKGROUND:

Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management without any independent verification. The VET AI Act would create a pathway for independent evaluators, serving a function similar to those in the financial industry and other sectors, to work with companies as neutral third parties to verify that their development, testing, and use of AI complies with established guardrails.

As Congress moves to establish AI guardrails, evidence-based benchmarks to independently validate AI companies’ claims on safety testing will only become more essential.

Specifically, the VET AI Act would:

  • Direct NIST, in coordination with the U.S. Department of Energy and National Science Foundation, to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.
    • Such specifications would require consideration of data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of a developer or deployer throughout the AI system’s development lifecycle.
  • Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
  • Require NIST to conduct a study examining various aspects of the ecosystem of AI assurance, including the current capabilities and methodologies used, facilities or resources needed, and overall market demand for internal and external AI assurance.

A copy of the bill text can be found here.

# # #