A bill backed by Sen. Shelley Moore Capito, R-W.Va., would establish evidence-based rules and standards for testing and evaluating AI systems.

The Validation and Evaluation for Trustworthy Artificial Intelligence Act, or the VET Artificial Intelligence Act, would require the National Institute of Standards and Technology to develop a set of voluntary technical guidelines and specifications to help increase public trust in, and adoption of, AI tools and platforms.

“The VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them,” Capito said.

Additionally, the bill would establish an advisory committee to review certification criteria for AI evaluators, and the guidelines would have to address data privacy protections and mitigations against potential harm to individuals from an AI system.

Capito is cosponsoring the legislation, along with Sen. John Hickenlooper, D-Colo., chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security.

“AI is moving faster than any of us thought it would two years ago,” Hickenlooper said. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”

AI companies largely police themselves when it comes to how they train their models, run safety tests and manage risks, and there are currently few safeguards in place to check developers’ claims, according to the senators.

The VET AI Act aims to set up a process for independent evaluators, similar to auditors in the financial industry, to serve as neutral third parties. These evaluators would verify whether companies are following agreed-upon rules for developing, testing and using AI responsibly.

The VET AI Act would:

  • Direct NIST, in coordination with the Department of Energy and National Science Foundation, to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.
  • Require those specifications to address data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of developers and deployers throughout an AI system’s development lifecycle.
  • Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
  • Require NIST to conduct a study examining various aspects of the ecosystem of AI assurance, including the current capabilities and methodologies used, facilities or resources needed, and overall market demand for internal and external AI assurance.

The measure has received support from a number of industry, policy, and research organizations, including Bipartisan Policy Center Action, the Federation of American Scientists, and the Information Technology and Innovation Foundation.

“The Validation and Evaluation for Trustworthy AI Act would bring much-needed certainty to AI developers, deployers, and third parties on external assurances on what processes such as verification, red teaming, and compliance should look like while we, as a country, figure out how we will engage with AI governance and regulation,” said Dan Correa, CEO of the Federation of American Scientists. “We commend Senator Hickenlooper and Senator Capito for working together on this global policy issue that will showcase America’s leadership in setting standards for AI Safety and Governance.”