Experts Warn That OMB's AI Guidance Could Slow Federal Adoption Of The Emerging Tech


Several AI stakeholders and experts—many from tech industry associations and trade groups—warn that upcoming AI guidelines for federal agencies could hamper low-risk use of AI in government.

Shortly after the White House issued an executive order on the technology in late October, the Office of Management and Budget released draft guidance on the government's use of artificial intelligence. The final version is expected by the end of March, and comments from interested parties are to be taken into account.

The draft requires agencies to adopt minimum risk management practices for AI tools deemed "safety-impacting" or "rights-impacting," such as testing systems' performance in real-world conditions.

For example, systems that control or significantly influence activities such as electric grid operations or government benefits decisions appear on OMB's list of use cases that agencies should presume to fall into those high-impact categories.

"This draft guidance takes a risk-based approach to mitigating AI harms, and limits the heightened protections to contexts where the use of AI poses significant risks to people's rights and safety," an Office of Management and Budget spokesperson told Nextgov/FCW.

However, some stakeholders worry that the guidance could hinder AI adoption, because use cases swept into those definitions would be subject to a new set of processes and requirements.

"I'm very concerned that this will lead to a risk-averse view of AI at a time when we should be using the technology," Ross Nodurft, executive director of the Alliance for Digital Innovation, a coalition of government contractors, said at a Dec. 6 congressional hearing on AI accountability and oversight.

"There is a delta between the guidance as it exists now and how that guidance will be implemented across agencies," he continued. "That delta will leave individual authorizing officials who are trying to use this AI to decide whether an application is acceptable based on a definition that, frankly, could use more specificity."

"The definition of 'rights-impacting' may capture AI applications that do not pose a high risk," the U.S. Chamber of Commerce wrote in its comments. Under the draft guidance, uses deemed rights-impacting are subject to additional minimum risk management requirements beyond those for safety-impacting uses.

Several technology trade groups responded to the draft guidance with concerns about how OMB defines which AI systems are covered.

"As drafted, OMB's memo treats nearly all listed applications as high risk, which could make it difficult or impossible for federal agencies to adopt AI tools," the Information Technology Industry Council wrote.

The Software Alliance warned of ambiguity both in OMB's definition of what constitutes high risk and in the threshold that triggers the minimum practices, and the Software and Information Industry Association noted that low-risk activities could be swept into the high-risk category.

It is not just industry groups raising concerns.

"This welcome policy faces a persistent challenge: agencies failing to recognize high-risk AI use cases," the nonprofit Center for Democracy and Technology said in its comments, which suggested that OMB clarify the guidance and offer support, such as interagency working groups, to help agencies categorize their use cases.

The broader question of how to define AI at all is also a challenge.

"Neither the scientific community nor industry agrees on a common definition of AI capabilities," the Government Accountability Office noted in a recent report. "Even within government, definitions vary."

"While the draft OMB memo is impressive in many ways, key parts could have the unintended effect of limiting innovation and entrenching red tape," Daniel Ho, a law professor at Stanford University and director of its Regulation, Evaluation, and Governance Lab, told Nextgov/FCW.

"That doesn't mean outsourcing is the solution," he added. "What we need is to develop technologists within government who can lead the responsible implementation of AI."

He and a group of five academics and former government officials, including Code for America founder Jennifer Pahlka, cautioned in their joint comments that because computing particular government benefits or services falls under OMB's definition of rights-impacting AI, the minimum requirements could sweep in even uncontroversial, low-risk applications.

They asked the Office of Management and Budget to narrow and clarify the definition of rights-impacting AI and to distinguish among types of AI and types of benefit programs.

The minimum requirements for rights-impacting AI systems — including human review, opt-out options, and public consultation — combined with the broad scope of application could "jeopardize core operations across a wide range of government programs" and "hinder critical modernization efforts," they wrote, particularly in the context of already risk-averse government agencies.

"The OMB memo does an admirable job of articulating the opportunities and risks of AI. But process must be tailored to risk," Ho said in remarks prepared for the hearing. "For example, the memo's suggestion that agencies allow people to opt out of AI in favor of human review does not necessarily make sense given the wide variety of AI programs and applications. The U.S. Postal Service, for example, uses AI to read handwritten ZIP codes on envelopes. Opting out of that system would mean hiring thousands of people just to read numbers."
