This article is the fourth in a series about public finance issues that are likely to capture the attention of lawmakers this year.
Artificial intelligence has quickly become a buzz topic among state leaders, and lawmakers in 31 states considered nearly 200 bills related to AI last year. Recent leaps in generative AI have the potential to create budget-saving efficiencies, such as reducing application processing times and freeing up staff capacity for other work. At the same time, however, states must deal with the risks that AI could pose to vital systems, particularly public information and data security.
Across the country, auditors and public finance departments are exploring whether AI can lower the cost of monitoring and oversight, reduce risks, and streamline administrative processes. One recent report estimated that AI could boost productivity by $519 billion a year across all U.S. governments.
For instance, the Government Finance Officers Association (GFOA) is working with Rutgers University to pilot how AI can help governments comply with the federal Financial Data Transparency Act, which requires that financial disclosures filed for outstanding bond debt be machine readable starting in 2027. GFOA previously estimated that implementing reforms needed to comply with the law could cost governments at least $1.5 billion by the deadline. But if the GFOA-Rutgers project is successful, an AI-powered data extraction process could make ongoing compliance virtually cost-free while reducing the risk of error.
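The article does not detail the pilot's technical approach, but the general idea behind automated data extraction can be sketched simply: pull key figures out of unstructured disclosure text and publish them in a structured, machine-readable format. The sample text, field names, and patterns below are hypothetical illustrations only; they are not drawn from the GFOA-Rutgers project or from any federal reporting standard.

```python
import json
import re

# Hypothetical example: pull a few line items out of unstructured
# disclosure text and emit them as machine-readable JSON. The fields
# and patterns are invented for illustration.
SAMPLE_DISCLOSURE = """
Total outstanding general obligation bonds: $125,400,000
Net pension liability: $48,250,000
Fiscal year ending: June 30, 2023
"""

PATTERNS = {
    "outstanding_go_bonds": r"general obligation bonds:\s*\$([\d,]+)",
    "net_pension_liability": r"net pension liability:\s*\$([\d,]+)",
}

def extract_figures(text: str) -> dict:
    """Return a dict of dollar figures found in the disclosure text."""
    results = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            results[field] = int(match.group(1).replace(",", ""))
    return results

if __name__ == "__main__":
    print(json.dumps(extract_figures(SAMPLE_DISCLOSURE), indent=2))
```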
Yet AI’s potential for use in cyberwarfare, such as an AI chatbot spreading disinformation on a state website or hotline, is a major threat to state and local governments, which have historically been slow to modernize their technology systems. Although the pandemic sped up governments’ technology adoption, cybersecurity practices have not kept pace. By some estimates, modernizing state computer systems to be compatible with AI is a billion-dollar proposition.
“I think the scalability of attacks greatly increases with the use of generative AI,” said Casey Kopcho, a principal auditor for the Oregon secretary of state. Kopcho co-authored a report about the rise of domestic terrorism in the state and found that misinformation online was one of the primary drivers. “If we have technology that can increase the velocity of misinformation, that’s concerning.”
In light of the significant potential benefits and risks, state officials nationwide, starting with Vermont in 2020, have begun defining AI policy goals, acceptable uses, ethical guardrails, and data protection standards as agencies explore specific applications of the technology. For example, departments of transportation in several states now use AI-powered modeling and predictive analytics to forecast when bridges and roads will need maintenance, repair, or replacement, a strategy that could help avoid more costly repairs in the future.
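As a purely illustrative sketch of what predictive maintenance modeling can look like in code (not any state DOT's actual system), the example below trains a small classifier on invented inspection records and uses it to estimate the likelihood that a bridge will need major repair. All feature names and numbers are made up.

```python
# Illustrative predictive-maintenance sketch: fit a simple model on past
# inspection records and flag assets likely to need repair soon.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: bridge age (years), average daily traffic (thousands of vehicles),
# years since last major repair. Values are invented for illustration.
X = np.array([
    [12,  8.0,  3],
    [45, 22.5, 20],
    [30, 15.0, 10],
    [60, 30.0, 25],
    [ 8,  5.5,  2],
    [52, 18.0, 18],
])
# 1 = needed major repair within five years, 0 = did not.
y = np.array([0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a bridge that has not had a major repair in 15 years.
candidate = np.array([[40, 20.0, 15]])
prob = model.predict_proba(candidate)[0, 1]
print(f"Estimated probability of needing major repair: {prob:.2f}")
```

In practice, such a model would be trained on far larger inspection, traffic, and condition datasets, and its forecasts would inform capital planning rather than replace engineering judgment.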
Last year, 18 states passed AI-related legislation or resolutions, including a handful that established task forces to make AI policy recommendations or mandated assessments of the technology’s probable impacts—for good or ill—on government operations. Among those was Louisiana, which created a committee to study how AI might affect various government functions as well as legislative, regulatory, and fiscal decisions. In addition, several governors have announced new policy directives around exploring AI, most recently in Maryland, New York, and Ohio. Many other AI-related bills that were introduced last year are still pending in state legislatures.
As 2024 takes shape, perhaps nowhere will these concerns be debated more than in California, where the administration of Governor Gavin Newsom (D) recently released a report on the benefits and risks of generative AI that is intended to guide future policymaking. Lawmakers in the state have proposed at least a dozen bills targeting risks in election administration, government services, and mental health, among other areas.
Liz Farmer works on The Pew Charitable Trusts’ state fiscal policy project.