What Is Most Important When Evaluating Tax Incentive Programs?
Evaluation expert discusses the fundamentals of effective practice
In this essay, Jim Landers, associate professor of practice in public affairs and Enarson fellow, John Glenn College of Public Affairs, The Ohio State University, describes takeaways from the 2022 National Conference of State Legislatures/The Pew Charitable Trusts Roundtable on Evaluating Economic Development Tax Incentives. The event focused on the fundamentals of incentive evaluation: approaches; data collection, sources, and analysis; and reporting of results.
This piece was originally published by Pew in a February 2023 newsletter distributed to tax incentive evaluators and scholars.
Evaluation Perspectives:
Fundamentals First!
Jim Landers
Associate Professor of Practice in Public Affairs, Enarson Fellow
John Glenn College of Public Affairs
The Ohio State University
The eighth annual NCSL/Pew Incentive Evaluators Roundtable focused on fundamentals. While the sessions and group discussions were broad in scope, a common thread running throughout the roundtable was an emphasis on the essential building blocks of incentive evaluations:
- evaluation approaches,
- data collection, sources, and analysis, and
- reporting evaluation results.
Evaluation approaches. This year’s agenda covered the myriad angles from which evaluators have approached their work. Several evaluations focused on program design features and administrative processes; others assessed program activities and cost. Not surprisingly, numerous evaluations addressed program effectiveness and impacts, incorporating the “but for” question, novel ways of measuring progress toward programmatic goals, and distributional effects.
Key takeaways
- Evaluations of incentive design and administrative processes are important ways of uncovering problems that divert programs from their intended purposes or potentially diminish or negate program outcomes.
- The design of incentive programs with similar purposes or target populations should be analyzed together to determine whether they are complementary or working at cross-purposes.
- Evaluating incentive program effectiveness or outcomes should examine the usefulness and validity of performance metrics specified in statute or being used by program administrators.
Data collection, sources, and analysis. Obtaining valid and reliable data to measure the performance of incentive programs is one of the most challenging facets of evaluation work. The analytical methods that evaluators can employ to measure performance ultimately depend on these data. The Roundtable explored a variety of data sources: administrative data, survey data collected from program participants, and secondary data sources (such as the U.S. Census Bureau’s Service Annual Survey, Annual Business Survey, and the Longitudinal Employer-Household Dynamics dataset and the U.S. Bureau of Labor Statistics Quarterly Census of Employment and Wages). For the first time, the Roundtable included a session on regional economic models (RIMS II, IMPLAN, and REMI), the economic data they house, and demonstrations of how to use them for incentive evaluation.
Key takeaways
- Descriptive statistical analysis of administrative data can provide useful, and sometimes compelling, information about incentive program participation, administration, design flaws, and impacts.
- Surveys of incentive program participants or other stakeholders can be an important primary data source for evaluating administrative processes and effectiveness.
- Regional economic models can be employed to indirectly examine the “but for” question by comparing estimates of baseline economic conditions with those projected to result from an incentive.
- Merging administrative data with secondary sources of economic, demographic, and employment data can be time-consuming work. However, it can result in richer descriptive analysis and may permit evaluators to investigate the causal relationship between an incentive and economic outcomes.
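The data-merging takeaway above can be sketched concretely. The following is a minimal, hypothetical example in pandas: the program, county codes, and dollar figures are invented for illustration, and the employment table merely mimics the county-year layout of a source like the BLS Quarterly Census of Employment and Wages.

```python
# Hypothetical sketch: joining incentive program administrative records with
# county-level employment data. All values are illustrative, not from any
# real program or published dataset.
import pandas as pd

# Administrative data: one row per incentive award.
awards = pd.DataFrame({
    "county_fips": ["39041", "39049", "39041", "39159"],
    "award_year": [2020, 2020, 2021, 2021],
    "credit_amount": [150_000, 400_000, 90_000, 210_000],
})

# Secondary data: county-year average employment (QCEW-style layout).
employment = pd.DataFrame({
    "county_fips": ["39041", "39049", "39159"],
    "year": [2020, 2020, 2021],
    "avg_employment": [52_000, 610_000, 23_000],
})

# Aggregate awards to the county-year level, then left-join employment so
# award records survive even where employment data is missing.
by_county_year = (
    awards.groupby(["county_fips", "award_year"], as_index=False)["credit_amount"]
    .sum()
    .merge(
        employment,
        left_on=["county_fips", "award_year"],
        right_on=["county_fips", "year"],
        how="left",
        indicator=True,  # flags rows with no matching employment record
    )
)

# Credits per job: a simple descriptive measure for an evaluation report.
by_county_year["credit_per_job"] = (
    by_county_year["credit_amount"] / by_county_year["avg_employment"]
)
print(by_county_year[["county_fips", "award_year",
                      "credit_per_job", "_merge"]])
```

The `indicator=True` flag is worth the extra column: unmatched rows surface data gaps (here, a 2021 award in a county with no 2021 employment record) before they silently distort the descriptive analysis.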
Reporting evaluation results. The keynote speaker who kicked off the Roundtable, Maryland State Delegate Julie Palakovich-Carr, discussed her experiences using incentive program evaluations to drive policymaking in Maryland, specifically the state’s Enterprise Zone program. Delegate Palakovich-Carr’s discussion zeroed in on the important role of evaluators in providing timely information to various audiences, including policymakers, program administrators, the public, and the media. The target audience will determine the type, depth, and format of the information that evaluators should provide.
Key takeaways
- Consider other reporting formats to complement or substitute for the conventional report when transmitting evaluation results to policymakers. Evaluation teams in some states are trying alternative reporting methods such as written synopses or briefs, podcasts, and even short videos to report evaluation findings.
- Thoughtful data visualizations can distill complex information into a graphic that policymakers can quickly understand and use, even when technical experts are not on hand to explain it.
- Timing is everything. Consider the time of year along with the depth and format of information you provide to policymakers. When policymakers are normally very busy (e.g., during the legislative session), provide synopses or briefs of evaluation findings. Save the detailed reporting for legislative downtime.
- Evaluator recommendations can be an important and efficient means of informing policymakers about whether and how incentive program weaknesses could be mitigated. If evaluators are prohibited from providing policy recommendations, they might try alternative formats. For instance, some state reports include a section outlining program weaknesses and their implications for efficacy. Others suggest questions for policymakers to consider with respect to the operation and effectiveness of the incentive.
During this year’s Roundtable, we saw how far the field of tax incentive evaluation has progressed in less than 10 years. Evaluators continue to experiment with new approaches, build on their past work, and sometimes return to the basics. This is challenging work, to be sure, but it matters and is having an impact.