A bill backed by Sen. Shelley Moore Capito, R-W.Va., would establish evidence-based rules and standards for testing and evaluating AI systems.
The Validation and Evaluation for Trustworthy Artificial Intelligence Act, or the VET Artificial Intelligence Act, would require the National Institute of Standards and Technology to develop a set of voluntary technical guidelines and specifications to help increase public trust in, and adoption of, AI tools and platforms.
“The VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them,” Capito said.
Additionally, the bill would establish an advisory committee to review the evaluation criteria, including considerations such as data privacy protections and safeguards against potential harm to individuals from an AI system.
Capito introduced the legislation with Sen. John Hickenlooper, D-Colo., chair of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security.
“AI is moving faster than any of us thought it would two years ago,” Hickenlooper said. “But we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”
AI companies largely police themselves on how they train their models, run safety tests and manage risks, and there are currently few safeguards to verify developers' claims, according to the senators.
The VET AI Act aims to set up a process for independent evaluators, similar to auditors in the financial industry, to serve as neutral third parties. These evaluators would verify whether companies are following agreed-upon rules for developing, testing and using AI responsibly.
The measure has received support from a number of industry, policy, and research organizations, including Bipartisan Policy Center Action, the Federation of American Scientists, and the Information Technology and Innovation Foundation.
“The Validation and Evaluation for Trustworthy AI Act would bring much-needed certainty to AI developers, deployers, and third parties on external assurances on what processes such as verification, red teaming, and compliance should look like while we, as a country, figure out how we will engage with AI governance and regulation,” said Dan Correa, CEO of the Federation of American Scientists. “We commend Senator Hickenlooper and Senator Capito for working together on this global policy issue that will showcase America’s leadership in setting standards for AI Safety and Governance.”