How Survey Methods Can Support Tax Incentive Evaluations

Expert shares examples from states

In this column, originally published by The Pew Charitable Trusts in October 2023, Jim Landers, associate professor of clinical public affairs and Enarson fellow, John Glenn College of Public Affairs, The Ohio State University, uses state examples to show how survey analysis can provide efficient and effective methods for collecting data that is not available via records or secondary sources.

Evaluation perspectives

“And the Survey Says”: Written Surveys and Interviews Can Yield Important Information About Incentive Program Operations and Impacts

Jim Landers
Associate Professor of Practice in Public Affairs, Enarson Fellow
John Glenn College of Public Affairs
The Ohio State University

Introduction

As evaluators, we often prize certain data sources over others. Administrative data (such as incentive program records and tax returns), secondary data (such as Census Bureau or Bureau of Labor Statistics economic, social, and demographic databases), and even economic data from proprietary models (such as IMPLAN and REMI) drive many of our evaluation discussions. As a result, we can overlook other sources that could help us gain insights into incentive program operations and impacts. Surveys and interviews are such examples.

Evaluators may be hesitant to put much stock in surveys or interviews of program participants (e.g., taxpayers or businesses receiving tax incentives) or program stakeholders (e.g., program personnel, economic developers, industry lobbyists), assuming that responses don’t reflect respondents’ actual motivations or behavior. Yet these tools provide an effective way of collecting information about program operations and results that may not be evident in our standard sources. Written surveys, for instance, are a means of efficiently collecting quantitative and qualitative data from program participants and stakeholders.

Interviews are another option, albeit not as efficient to administer as written surveys, especially over large samples. They can be conducted flexibly, in person or remotely, and they allow the interviewer to ask follow-up questions, potentially making for a richer, more informative dataset.

Surveys and interviews also provide stakeholders with an opportunity to voice their perspectives as a part of the evaluation process. Giving them this opportunity and building relationships early in the process can yield greater receptivity to the results once they are published.

Implementing Surveys and Interviews

While surveys and interviews can be efficient and effective data collection tools, they must, like any methodology, be designed with care and used for appropriate ends. These tools can yield detailed information about the administrative operations of an incentive program, the work activities of program personnel, or the behavior of program participants. However, evaluators do not collect this information by directly observing administrative activities or, in particular, participant behavior. Consequently, it can be difficult, if not impossible, to confirm that what survey or interview respondents say about a program, its operations, or the behavior of its participants is what actually occurred.

Still, surveys and interviews can yield valid, credible measures of incentive program activities or participant behaviors. Evaluators therefore need to consider validity, threats to validity, and the types of bias that can confound survey and interview results, especially if the goal is to use those results to evaluate the “true” effects of an incentive program. Other considerations include reliability, or consistency of measurement, and the administrative work necessary to implement informative surveys and interviews.

The following are excellent references on program evaluation that provide deep dives into survey and interview procedures.

  • Josselin, J.M., & Le Maux, B. (2017). Program Evaluation Theory and Practice: A Comprehensive Guide.
  • Linfield, K.J., & Posavac, E.J. (2018). Program Evaluation: Methods and Case Studies. Routledge.
  • Newcomer, K.E., Hatry, H.P., & Wholey, J.S. (Eds.). (2015). Handbook of Practical Program Evaluation. Jossey-Bass.

Examples From Evaluations

Evaluations conducted in Indiana, Maine, Minnesota, and Virginia in recent years provide several good examples of how evaluators have used surveys and interviews to produce robust descriptive information about an incentive program or inform conclusions about the impact of an incentive program.

Indiana evaluators from the Legislative Services Agency surveyed local tax officials in 2017 and 2018 to determine how intensively certain property tax incentives were being used. The 2017 survey revealed that the state’s Infrastructure Development Zone property tax exemption was not being awarded and that some local tax officials were not even aware of the incentive. The 2018 survey gathered information about four renewable energy-related incentives that were otherwise aggregated in county property tax reports, enabling separate multiyear county-level reporting for each incentive.

In 2021, Indiana evaluators again used surveys and interviews to inform evaluations of property tax incentives and two of the state’s place-based incentive programs. As in the earlier evaluations, analysts surveyed county tax officials to determine how intensively the property tax incentives for brownfield revitalization and infrastructure development were being used. In addition, they interviewed representatives of two of the state’s place-based development programs. The interviews yielded rich descriptive information about the state’s 22 certified technology parks. Finally, evaluators interviewed local economic development administrators about the challenges to redevelopment in the state’s 10 Community Redevelopment Enhancement Districts and their approaches to revitalizing these areas.

Evaluators in Virginia (staff of the Joint Legislative Audit and Review Commission and contractors from the University of Virginia Weldon Cooper Center) employed surveys and interviews quite extensively to assess workforce incentives in 2018, data center incentives in 2019, and infrastructure incentive programs in 2020. In 2018, evaluators surveyed 1,300 businesses and interviewed program personnel, industry stakeholders, and economic developers to assess the economic impacts of two grant programs, the Virginia Jobs Investment Program and the Small Business Jobs Grant Program. While an overwhelming percentage of incentive recipients indicated that they would have proceeded with their projects even without the incentives, large percentages also indicated that those projects would have proceeded on a smaller scale. Businesses also reported that the grants affected their decisions to train workers and that the training resulted in workforce improvements.

The 2019 evaluation of Virginia’s pollution control equipment and facilities sales tax exemption surveyed 280 businesses to estimate the fiscal impact of the exemption, since administrative data was limited. Survey respondents reported their exemption-eligible purchases, which evaluators used to estimate the total revenue loss attributable to the exemption. Evaluators also interviewed seven data center companies operating in Virginia about site selection and the impact of the sales tax exemption. The interviews proved a rich source of information, revealing the relative importance of various site selection factors, the importance of the sales tax exemption to site selection decisions, and the exemption’s impact on both initial capital investment costs and the additional investment needed over time to replace and upgrade equipment.

In 2020, Virginia’s evaluators again drew on interviews and surveys to assess the use and impact of infrastructure and regional development incentive programs, the relative importance of different business attraction incentives, and the prevalence and impact of business-ready sites around the state.

Evaluators in Minnesota and Maine also provide examples of how surveys can be useful tools for analyzing administrative processes or assessing incentive impacts. The 2017 evaluation of Minnesota’s research tax credit surveyed 1,431 companies that claimed the credit in 2012, 2013, or 2014. Survey questions assessed the importance of various economic factors, including the research tax credit, in firms’ decisions to conduct R&D in the state, as well as reasons for not claiming the credit and potential structural changes to improve its efficacy. Notably, 58% of respondents indicated that the tax credit encouraged their decisions to conduct R&D, although they also indicated that several other economic factors were stronger influences on R&D activity. And fewer than 16% of respondents indicated that the tax credit was an important factor when considering relocating business activities to Minnesota.

In 2019, Minnesota evaluators from the Office of the Legislative Auditor used surveys to assess the administrative processes of the state’s Economic Development and Housing Challenge Program. The program provides grants and loans to cities, Tribal housing corporations, nonprofit organizations, and private developers to develop affordable rental and owner-occupied housing. Evaluators surveyed past applicants to assess the efficiency and transparency of the program’s application process. Notably, respondents reported ways in which the application process was complicated and burdensome, and the survey revealed that applicants (including repeat applicants) spent an average of 100 hours completing a funding application.

In 2020, evaluators from Maine's Office of Program Evaluation and Government Accountability used interviews as part of their assessment of the state’s Business Equipment Tax Reimbursement Program and Business Equipment Tax Exemption. Evaluators interviewed program personnel, businesses, stakeholders representing the business community, and municipal officials. Key findings of the interviews included: (1) small businesses lack awareness of the two incentive programs; (2) required work of municipal assessors is time-consuming; and (3) equipment purchase decisions are influenced largely by business and market factors rather than personal property taxes on business equipment.

Conclusion

Surveys and interviews can be important components of the evaluator’s toolbox, providing efficient and effective methods of collecting data that isn’t available from administrative records or secondary data sources. Indeed, many examples exist of incentive evaluators using surveys and interviews to develop rich descriptive and analytical datasets. Evaluators must, however, conduct surveys and interviews with care to ensure that the data validly represents how an incentive program performs, and they must interpret and use the results with a critical eye, framing and presenting their findings reasonably.