How Indiana’s Process Can Inform Tax Incentive Evaluation Planning

An expert reviews strategy from a leading state

In an essay published in April 2022, Jim Landers, associate professor of clinical public affairs and Enarson fellow at the John Glenn College of Public Affairs, The Ohio State University, summarized key lessons learned from Indiana’s experience planning tax incentive evaluations. Landers explains that ensuring evaluations provide actionable evidence about incentive performance to policymakers, program administrators, and the public requires careful planning that balances rigor and feasibility. He presents a series of factors and questions to help lawmakers conduct effective evaluations.

This summary was originally published in a newsletter distributed to tax incentive evaluators and researchers by The Pew Charitable Trusts.

Evaluation perspectives

Best Practices for Planning Tax Incentive Evaluations: Lessons Learned From Indiana’s Evaluation Process

Jim Landers
Associate Professor of Clinical Public Affairs, Enarson Fellow
John Glenn College of Public Affairs
The Ohio State University

Tax incentive program evaluations should ultimately provide valid, reliable, and convincing evidence about incentive performance to policymakers, program administrators, and the public. Achieving this goal requires evaluators’ talent and expertise as well as various evaluation strategies and techniques, which are particularly important during the evaluation planning stage. This is when evaluators must balance how rigorous and exhaustive their research can be with the time and effort required to conduct such analysis. Whether initiating an incentive evaluation process or continuing to implement a well-established one, incentive evaluators can apply the range of practices outlined in this article to facilitate timely completion of robust incentive program evaluations.

Set a baseline

As evaluators begin planning, they should first ensure that the plan is consistent with the statutory provisions governing the evaluation process. For example, Indiana’s tax incentive evaluation statute, like many other states’ evaluation statutes:1

  • Identifies the agency responsible for conducting the evaluations;
  • Defines what qualifies as a “tax incentive” for purposes of evaluation or, in some cases, identifies specific tax provisions that must be evaluated;
  • Specifies multiyear scheduling requirements and annual deadlines for evaluations;
  • Describes various evaluation methods and practices that could be used for evaluations;
  • Specifies reporting requirements for evaluation findings and results;
  • Provides for policymaker involvement in the evaluation process.

Next, evaluators should become expert in the statutory provisions, rules, and program documents (such as promotional and application language, tax forms, and instructions) for the individual incentive programs being evaluated. Along with this information, evaluators should also consider several best practices that can help them produce useful, strategic, and impactful evaluations. These best practices include:

  • Set a strategic evaluation schedule aligned with policymaker priorities and evaluation team capacity.
  • Gather information about the incentive’s structure and administration.
  • Develop a framework of questions to inform evaluations.
  • Assess the strengths and weaknesses of administrative data.
  • Determine the feasibility of conducting an exhaustive evaluation.

As evaluators begin the hard work of planning evaluations, they should bear in mind that ideally, the information included in evaluations will inform policy decisions, guide program management practices, and improve incentive results. As evaluation teams deliberate over when and how to conduct incentive evaluations, they should be mindful of who will ultimately consume this information, and how it will be received and applied.

Set a strategic evaluation schedule aligned with policymaker priorities and evaluation team capacity

Evaluation teams should set a strategic multiyear evaluation schedule that meets statutory scheduling requirements and deadlines, but also aligns with policymaker priorities and the skill set of the evaluation team. This simple step can facilitate systematic planning of upcoming evaluations.

Statutory provisions establishing incentive evaluation processes vary from state to state. Some state statutes specify a schedule for incentive evaluations while others allow an evaluation oversight committee2 or the incentive evaluators themselves to set and adjust the multiyear incentive evaluation schedule.3 When setting up a multiyear schedule, evaluation oversight committees and evaluation teams need to be mindful of the incentives they select to evaluate each year. Setting a multiyear schedule isn’t as simple as dividing the number of incentives that must be evaluated by the number of years in the evaluation schedule. The annual schedule should be set strategically to facilitate timely and robust evaluation work each year.

Regardless of how a schedule is produced, it’s important to plan and develop the list of incentives to be evaluated in each year of the multiyear schedule and then stick to that schedule (so long as the schedule continues to meet statutory requirements and policymaker and evaluator priorities). This enables evaluators to conduct current evaluations in an orderly and effective manner while also planning for upcoming evaluations. The worst possible scenario is to have evaluation plans for the upcoming year disrupted by a schedule change that forces evaluators to plan and execute one or more evaluations on a short timeline.4 Legislative committees and citizen panels that set the evaluation schedule must be aware that schedule changes could significantly impact on-time evaluation completion and evaluation quality. When planning an evaluation schedule, evaluators should facilitate evaluator capacity and expertise and leverage complementary evaluation work.

Facilitate evaluator capacity and expertise

While the priority of the evaluation schedule is to facilitate timely and robust evaluation work, a well-thought-out evaluation schedule also can build evaluator capacity and expertise and ultimately improve the quality of the evaluation process. The learning curve for evaluators can be significant during the initial years of an evaluation process, so some thought should be given to the mix of incentives scheduled for evaluation when the program is initiated. Limiting the number of incentives or the complexity of the incentives evaluated in the initial year or two allows evaluation teams to develop and improve their expertise in planning evaluations, employing data and methods to carry out evaluations, and presenting evaluation results to policymakers and incentive program managers.

For example, evaluators in Indiana intentionally limited the scope in 2014 when initiating the evaluation process to include only a few incentives that were legally and programmatically uncomplicated and for which administrative data was readily available.5 This allowed evaluators to compute descriptive statistics, produce simple simulations, and conduct sensitivity analysis to evaluate the potential effects of the incentives. While the scope in 2015 was more expansive, the incentives (except for tax increment financing (TIF)) were largely straightforward with considerable administrative data that again allowed for descriptive analysis, simulations, and sensitivity analysis.

Leverage complementary evaluation work

A strategically planned evaluation schedule can also create efficiencies when data, methodology, or analysis and findings from one incentive evaluation inform and improve the efficiency and effectiveness of other incentive evaluations. This can be accomplished by grouping evaluations of incentives that share common or connected purposes, policy areas, or affected industries or demographic groups during the same year or in sequential years.

  • Purposes: The incentivized activity or behavior, such as property rehabilitation, job creation, or capital investment, or place-based incentive programs like TIF, enterprise zones, or other development/revitalization programs.
  • Policy areas: The motivation for the incentive, such as economic development, community development, environment, energy, income support, or workforce development.
  • Industry sector: The industry sectors or types of businesses using the incentive, such as manufacturing, logistics, pharma, information technology, or aerospace.
  • Demographic groups: The individuals, households, or taxpayers using the incentive, such as homeowners, property owners, the elderly, low-income taxpayers, investors, or charitable donors.

Evaluators in Indiana used this approach by evaluating common and even complementary incentives each year. As a result, the evaluation team capitalized on potential synergies in evaluator expertise and knowledge relating to specific taxes or incentive programs, evaluation approaches, data and data sources, and statistical methods. What’s more, the logical format of evaluations of similar or connected incentive programs has likely enhanced the ability of policymakers to grasp the design and operation of evaluated incentive programs as well as the evaluation findings.

For instance, in 2015 the evaluation team examined an array of commercial and residential property rehabilitation incentives – both state income tax credits and local property tax deductions. The property tax deductions had similar design features, administrative data from property tax records, and methodological limitations. Thus, evaluators could use the same approach for each incentive and produce informative descriptive statistics presentations, return on investment (ROI) simulations, and sensitivity analysis.

In 2016, the evaluation team focused on a group of incentives intended to encourage regional development and revitalization activities. The incentives comprised two place-based incentive programs (community revitalization enhancement districts and enterprise zones) and the state and local tax incentives complementing these programs.

The 2017 evaluation followed up by focusing on the state’s array of economic development incentive programs, principally state income tax credits and local property tax abatement programs focused on business attraction, business retention, and major economic development projects.6 These incentive programs share important common purposes and target similar businesses and industry sectors. In addition, some of these programs are awarded together for specific projects (e.g., the EDGE jobs tax credit, the Hoosier Business Investment tax credit, property tax abatement programs, and worker training grants) and potentially have significant interaction effects. As a result, the evaluation team examined this dynamic.

Current evaluation work can also impact future evaluation work when there is overlap between the design and operation of incentive programs or the data or methods used to evaluate the incentive programs. Indiana evaluators capitalized on this type of overlap several times. Data and methods used in 2015 to evaluate the state’s TIF program complemented the 2016 evaluation of the state’s enterprise zone program. GIS mapping methods used in 2016 to evaluate enterprise zones and community revitalization enhancement districts in the state were used again in the 2017 evaluations of the state’s certified technology parks and professional sports and convention development districts.

Gather information about the incentive’s structure and administration

Once there is an established evaluation schedule, evaluators should develop a framework to guide their evaluations. The framework should identify the important components of an evaluation, from the planning phase through conducting the evaluation to reporting evaluation findings.

Planning an incentive evaluation starts with an evaluator’s knowledge about the incentive program. Consequently, there must be a pre-planning period devoted to developing an extensive understanding of the statutes providing for the incentive, including the intended purposes, operating rules, program documents, the day-to-day implementation, and desired results of the incentive. Without this knowledge, it would be difficult for an evaluator to develop an evaluation plan that would generate valid and credible information about an incentive program or effectively focus on the incentive elements that require evaluation.

Evaluators in Indiana developed various questions and identified data sources to guide their efforts to gather the information they needed. Their questions focused on both the structure of an incentive program (purpose, eligibility, and incentive design, etc.) and program implementation.

Program structure questions:

  • Based on statutory reading, what is the purpose of the incentive? Any major legislative changes since the program’s inception?
  • Does the incentive include stated performance goals and objectives?
  • Is the incentive discretionary (e.g., an individual or business must apply and receive approval from a government agency prior to being awarded incentive dollars)?
  • Is the incentive nondiscretionary or an entitlement (e.g., a taxpayer meets statutory qualification criteria to receive the program benefits)?
  • What are the qualification criteria/requirements for the incentive?
  • Against which tax or taxes does the incentive apply?
  • Does the incentive impact the tax base (e.g., exemption or deduction) or tax liability (e.g., credit, deferral, abatement)?
  • Is the incentive a dollar amount, a percentage of the value of what is being incentivized, or some other value measure?
  • Are there established minimum and maximum incentive values allowed?
  • If the incentive is a credit, is it refundable or nonrefundable?
  • How is the incentive supposed to operate or be implemented under statute?

Program administration questions:

  • What agencies are involved in administering and implementing the incentive?
  • What does the application or qualification process look like?
  • How do entities claim awarded dollars?
  • What are the administrative processes and practices used to implement the incentive?
  • How do those implementation processes or practices impact incentive performance?

Program structure and administration information sources:

  • Authorizing statutes
  • Administrative rules and guidelines
  • Application materials and processing documents
  • Tax returns and tax return instructions
  • Annual reports
  • Consultation with program administrators, incentive applicants, incentive recipients, and other stakeholders

Develop a framework of questions to inform evaluations

There are various ways to understand, analyze, and evaluate the impacts of tax incentive programs from examining administrative processes and describing program outputs to causal modeling and economic impact estimates. It’s likely no incentive evaluation team has the wherewithal (e.g., time, data, technology, expertise) to employ all these methods in an incentive evaluation. So, during the planning process, it’s imperative for evaluators to examine potential approaches to evaluate an incentive program and honestly assess whether they have the capacity to employ those approaches. During Indiana’s first evaluation year, the evaluation team considered these factors and developed a basic framework to follow.

To develop their framework, Indiana evaluators referred to the statutes establishing the incentive evaluation process. The approach they developed was based, in part, on these statutory provisions which specified the goals and purposes of the incentive evaluation process as well as the descriptive and analytical methods and information that could achieve these goals.7 However, statute did not mandate that evaluators use any or all the methods and information specified. Ultimately, this allowed the evaluation team to tailor an evaluation approach that reflected the legislature’s intent but also aligned with their expertise and capacity in addition to data limitations. The framework leveraged evaluators’ extensive experience producing fiscal impact statements and other fiscal and economic research for the legislature and reflected the evaluation team’s judgment about which descriptive and analytical methods and information would be most informative to policymakers, program managers, economic development officials, and the public.

Indiana’s evaluation team adopted a framework approach that coupled methodological questions with potential data sources relating to incentive programs and incentive recipients. Evaluators also developed several standard components of incentive evaluation reports that would be efficient to produce, provide background and context about an incentive program, and effectively inform policymakers about incentive program utilization, effectiveness, and impacts.

Methodological questions:

  • Can the reduction in tax liability due to an incentive be estimated for incentive recipients?
  • Can the reduction in wage cost or other operational cost due to an incentive be estimated for incentive recipients?
  • Can the increase in investment return due to an incentive be estimated for incentive recipients?
  • Can the wage, spending, or investment response rate to an incentive be estimated or are there academic or professional studies that could provide such response rates?
  • Can the monetary impacts of an incentive be estimated?
  • Can the monetary response estimate be used with a regional economic model (e.g., IMPLAN or REMI) to simulate the direct, indirect, and induced impact of the incentive on wage income, employment, and output?
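The last question above, translating a monetary response into direct, indirect, and induced impacts, can be illustrated with a minimal sketch. Proprietary models like IMPLAN and REMI derive region-specific multipliers from detailed input-output tables; the multiplier values and dollar figures below are purely hypothetical placeholders for illustration.

```python
# Minimal sketch of propagating a direct monetary impact through
# assumed regional multipliers. Real regional economic models (e.g.,
# IMPLAN, REMI) estimate these multipliers from input-output tables;
# the values here are hypothetical.

def total_impact(direct, indirect_mult, induced_mult):
    """Return (direct, indirect, induced, total) impacts.

    indirect_mult: additional activity per dollar of direct impact
                   from business-to-business purchases.
    induced_mult:  additional activity per dollar of direct impact
                   from household spending of new wage income.
    """
    indirect = direct * indirect_mult
    induced = direct * induced_mult
    return direct, indirect, induced, direct + indirect + induced

# Hypothetical: $1M of incentive-induced direct spending with assumed
# multipliers of 0.4 (indirect) and 0.3 (induced).
d, ind, indu, tot = total_impact(1_000_000, 0.4, 0.3)
```

In practice, the hard part is defensibly estimating the direct monetary response in the first place; the multipliers only scale it.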

Potential incentive program/recipient data sources:

  • Return-level income tax data
  • Parcel-level property tax data
  • Business establishment-level Quarterly Census of Employment and Wages (QCEW) data
  • Data collected from incentive applications and processing documents
  • Interview/survey data
  • Government databases (e.g., Census, BLS, BEA)
  • Data, indicators, measures collected from academic and professional studies

Standard components to incentive evaluation reports:

  • Overview of program statutes, rules, and guidelines
  • Written or schematic “logic model” explaining the implementation and operation of the incentive program
  • Reporting annual incentive program activity such as incentive utilization, incentive dollars claimed, average incentive claimed, impact of incentive on recipient tax liability, and income distribution of incentive utilization and claims
  • Review/meta-analysis of pertinent academic and professional research literature evaluating aspects of the incentive program or similar incentives

Assess the strengths and weaknesses of administrative data

There are several dynamics to planning data collection efforts for an incentive evaluation. First, can administrative data be obtained? More importantly, can administrative data be used to evaluate the effectiveness or impact of an incentive? The key administrative data are tax return or tax record data and data collected from the applications and other administrative processes relating to discretionary tax incentives.

Obtaining administrative data: Administrative data can range from public information to data that is protected from disclosure under state statutes or federal law. Consequently, data sharing relationships between state agencies or local governments and evaluation teams may not exist when an evaluation process is initiated and may be challenging to establish. One key requirement for establishing and maintaining data sharing relationships is the trust factor. That is, whether economic development agencies and tax administrators trust evaluators’ capacity to use the data responsibly and to appropriately secure any protected or confidential data.

Responsible use requires due diligence by evaluators to fully understand the data, how data are collected, and what the data measure. This means that evaluators need to research data documentation and program documents, and correspond with incentive program managers about the data. This due diligence requires that evaluators review data, use procedures to identify data problems such as missing or erroneous data, and work with program managers to correct or mitigate identified problems with that data. Responsible use also has implications for reporting evaluative findings. First, when using protected or confidential data, evaluators should follow accepted procedures for aggregating and reporting statistical measures to align with confidentiality protections. Second, evaluators should avoid overstating findings and conclusions gleaned from administrative data and should only state findings and conclusions that are supported by the data. Nothing could damage an evaluation team’s credibility more than sloppy use of administrative data or unfounded claims based on misunderstanding or misuse of administrative data.
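As one illustration of the aggregation procedures mentioned above, many data-sharing agreements require suppressing any published statistic computed from too few taxpayers. The sketch below shows the idea; the threshold value and function name are hypothetical, not drawn from any particular Indiana agreement.

```python
# Sketch of a common small-cell suppression rule: publish an aggregate
# only when the underlying cell contains enough records to avoid
# disclosure risk. The threshold (often 3 or 10) is set by the
# data-sharing agreement; the value here is illustrative.

MIN_CELL_SIZE = 10  # hypothetical disclosure threshold

def safe_aggregate(claims):
    """Return (count, total) for a cell, or None if it must be suppressed."""
    if len(claims) < MIN_CELL_SIZE:
        return None  # too few records to publish safely
    return len(claims), sum(claims)

publishable = safe_aggregate([100] * 12)   # large enough to report
suppressed = safe_aggregate([100] * 5)     # must be withheld
```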

Indiana’s incentive evaluation statute requires the Legislative Services Agency (LSA) to conduct annual incentive evaluations. When this process was initiated in 2014, evaluators were fortunate that LSA had a history of obtaining and using datasets from state and local government agencies to produce fiscal impact statements and other fiscal and economic research. LSA also had a proven track record of having the technical capacity to appropriately warehouse, secure, and maintain data extracts including protected or confidential data. As a result, state and local policymakers and program managers had a high level of confidence in LSA’s capacity to use administrative data responsibly to generate relevant and effective research. LSA had a long-standing data sharing relationship with Indiana’s Department of State Revenue and state statute required county property tax officials to annually submit digital extracts of all parcel-level property tax data to LSA. However, after the evaluation process was initiated, LSA developed an important data sharing relationship centered on accessing data from several discretionary incentive programs administered by the Indiana Economic Development Corporation (IEDC). Ultimately, the relationship was initiated because of the trust factor – IEDC leadership was confident that LSA’s evaluation team would secure data and use it responsibly.

Tax return or tax record data: Annual income tax return data (individual and corporate) or property tax records are a robust source of information that can be used to describe the utilization and fiscal impact of an incentive but, on their face, provide no information about incentive effectiveness or economic impact. Despite these limitations, tax return or tax record data can be useful in producing informative descriptive statistics, constructing hypothetical scenarios, and conducting sensitivity analysis that can be quite revealing about the potential effects of an incentive.

In 2014 and 2015, Indiana evaluators employed descriptive statistics and hypothetical scenarios to evaluate and provide in-depth analysis of several income and property tax incentives for which only limited administrative data was available. While these evaluations did not employ statistically valid causal methods, the descriptive statistics and hypothetical scenarios created, in part, with tax return/record data provided a basis for policymakers to make reasonable judgments about the potential effectiveness of these incentives.

Income tax return data was used in 2014 to evaluate income tax deductions for installing home insulation products and solar-powered attic fans.8 The evaluations focused on the impact of these incentives on the recipient’s tax liability and scenarios relating their impact on installation costs. Evaluators concluded that the deductions were, at best, only minimally effective based on their tax liability impact, project cost, and spending response of taxpayers claiming the deduction. The evaluations of these deductions were straightforward and compelling and, consequently, the legislature repealed both deductions in 2015.9

  • The deduction for installing home insulation products totaled $26.9 million in 2012 with a revenue loss of about $916,000.10 The median tax liability reduction was about 1% and almost two-thirds of taxpayers claiming the deduction experienced a tax liability reduction of no more than 1.3%. Evaluators estimated that the deduction reduced project cost by an average of only 1.5%. Based on the price elasticity of installation projects derived from existing studies, the evaluators estimated that the deduction induced only about $738,000 in additional spending.
  • The deduction for installing solar-powered attic fans totaled only $153,000 in 2012 with a revenue loss of about $5,000.11 The deduction was relatively new, enacted in 2009, and almost two-thirds of those claiming the deduction reduced their tax liability by 1% or less. Evaluators estimated that the average discount on project costs due to the deduction was only 2.3% and that the induced spending on installation projects was insignificant.
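The spending-response logic behind these estimates can be sketched as simple arithmetic: a deduction discounts project cost by some percentage, and a price elasticity taken from existing studies converts that discount into an estimated spending response. The elasticity value below is an assumed placeholder, not the figure the Indiana evaluators used.

```python
# Back-of-the-envelope sketch of estimating induced spending from a
# price discount and an assumed price elasticity of demand. The
# elasticity is a hypothetical placeholder for illustration.

def induced_spending(base_spending, price_discount, elasticity):
    """Estimate additional spending induced by a price discount.

    base_spending:  total spending on the incentivized activity
    price_discount: fractional reduction in project cost (e.g., 0.015)
    elasticity:     price elasticity of demand (negative)
    """
    pct_change_quantity = -elasticity * price_discount
    return base_spending * pct_change_quantity

# Hypothetical: $26.9M of insulation spending, a 1.5% cost discount,
# and an assumed elasticity of -1.8.
extra = induced_spending(26_900_000, 0.015, -1.8)
```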

Likewise, data from property tax records was used in 2015 to evaluate two property rehabilitation deductions: (1) the rehabilitated residential property deduction for repair, replacement, or improvement of certain dilapidated residential dwellings and (2) the rehabilitated property deduction for repair, replacement, or improvement of structures at least 50 years old.12 Both deductions can be claimed for five years. Evaluators developed scenarios of typical properties and rehab investments for which the deductions were claimed and computed the tax savings and the increase in project profit and ROI due to the deductions. The scenarios suggested that the tax savings over five years and improvement in project ROI was relatively small and probably would not significantly increase the volume of rehabilitation projects undertaken in the state. Additionally, evaluators conducted sensitivity analysis by changing parameter values of the scenarios and found that under some limited circumstances, the impact of the deduction could potentially increase such that it would impact investment.
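A scenario-and-sensitivity exercise of this kind can be sketched as follows. All dollar figures, rates, and the deduction amount are hypothetical; the point is the mechanics of sweeping one parameter while holding the rest of the scenario fixed.

```python
# Sketch of the scenario-plus-sensitivity approach described above:
# compute cumulative tax savings from a multiyear rehabilitation
# deduction and the resulting change in project ROI, then sweep the
# effective tax rate to see when the effect becomes material.
# All inputs are hypothetical.

def rehab_roi(project_cost, value_added, tax_rate, deduction, years=5):
    """Return (ROI without deduction, ROI with deduction over `years`)."""
    savings = deduction * tax_rate * years          # cumulative tax savings
    base_roi = value_added / project_cost
    incented_roi = (value_added + savings) / project_cost
    return base_roi, incented_roi

# Sensitivity analysis: sweep the effective tax rate.
results = {rate: rehab_roi(100_000, 15_000, rate, 20_000)
           for rate in (0.01, 0.02, 0.03)}
```

Under these assumed parameters, the deduction shifts ROI by only a few percentage points, which mirrors the evaluators' finding that the effect is material only under limited circumstances.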

In some cases, merging tax return data with other administrative or secondary datasets enables evaluators to pursue more complex and rigorous statistical modeling. In Indiana’s 2015 TIF program evaluation, evaluators merged a statewide parcel-level property tax database with several other data sources, including a statewide database identifying TIF parcels, establishment-level data from the Quarterly Census of Employment and Wages (QCEW), and measures from U.S. Census datasets, to estimate the causal relationship between TIF and economic outcomes. The property tax database alone would not have permitted causal analysis. Evaluators used the merged database to successfully estimate the causal relationship between TIF programs and employment, wage, and property value changes. The TIF parcel database resulted from a new reporting requirement for local governments operating TIF areas. Upon conducting the TIF study, evaluators revealed additional data needs for future research and evaluation of TIF programs. As a result, legislation was enacted in 2016 requiring local governments to report these additional TIF data elements to the state.13
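The kind of data merge described above can be illustrated with a toy example: joining a parcel-level property tax extract to a list of TIF parcels so that TIF and non-TIF parcels can be compared. Column names and values are hypothetical and vastly simplified relative to the statewide databases used in Indiana.

```python
# Toy sketch of merging a parcel-level property tax extract with a
# TIF-parcel list, then comparing TIF and non-TIF parcels.
import pandas as pd

parcels = pd.DataFrame({
    "parcel_id": ["A1", "A2", "B1", "B2"],
    "assessed_value": [120_000, 95_000, 310_000, 47_000],
})
tif_parcels = pd.DataFrame({"parcel_id": ["A2", "B1"]})
tif_parcels["in_tif"] = True

# Left-merge keeps every parcel; unmatched parcels get in_tif = False.
merged = parcels.merge(tif_parcels, on="parcel_id", how="left")
merged["in_tif"] = merged["in_tif"].fillna(False)

# Simple descriptive comparison of the two groups.
by_group = merged.groupby("in_tif")["assessed_value"].mean()
```

A merge like this only supports descriptive comparisons; the causal estimates in the Indiana TIF study additionally required establishment-level QCEW data, Census measures, and appropriate econometric controls.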

Administrative data from discretionary incentive programs: Discretionary performance-based incentive programs may provide more robust administrative data than what can be gleaned from tax returns or tax records. These data are generated by the administrative processes of the government agency overseeing the incentive program, including applications, eligibility and incentive award determinations, and provision of incentive dollars once required economic activity occurs (e.g., investment, job creation, job retention). Such data can be a rich source of descriptive information about program operations and administrative processes, incentive recipients (e.g., firm information), incentive amounts awarded, and planned and realized investment or employment by incentive recipients, and could be used for descriptive presentations, simulations and sensitivity analysis, and causal econometric modeling.

In 2017, Indiana evaluators studied the state’s EDGE job creation tax credit, which is a discretionary incentive program administered by the Indiana Economic Development Corporation. Evaluators merged firm-level administrative data from the EDGE program with firm-level Quarterly Census of Employment and Wages (QCEW) data. The data merge augmented the administrative data by adding industry sector information and actual wage and employment totals from the QCEW, enabling evaluators to produce a variety of informative and compelling descriptive presentations and comparisons of firms that received EDGE credits to non-recipient firms.

The data merge, however, failed to produce a firm-level panel data set of sufficient size and length to permit evaluators to conduct more sophisticated causal econometric modeling. Evaluators faced significant technical challenges matching firms in the EDGE data set with firms and establishments in the QCEW due to discontinuities over time in firm structure, firm ownership, and federal employer ID numbers. And these technical challenges were without question exacerbated by time constraints and competing workload requirements. Ultimately, a sizeable number of matches could be achieved over short periods of time (e.g., two years or eight quarters of QCEW data), but a long-term panel of firm-level EDGE and QCEW data spanning eight to 10 years could not be constructed for a sufficiently large percentage of the EDGE credit recipients.
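The matching problem described here can be sketched as a tiered join: match on federal employer ID first, then fall back to a normalized firm name when the EIN has changed over time. The field names and records below are hypothetical, and real matching must also contend with multi-establishment firms and ownership changes.

```python
# Sketch of tiered firm matching: exact EIN match first, then a
# normalized-name fallback. Records and field names are hypothetical.

def normalize(name):
    """Strip punctuation, case, and spacing for crude name comparison."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def match_firms(edge_firms, qcew_firms):
    """Return {edge_id: qcew_id} using EIN first, then normalized name."""
    by_ein = {f["ein"]: f["id"] for f in qcew_firms}
    by_name = {normalize(f["name"]): f["id"] for f in qcew_firms}
    matches = {}
    for firm in edge_firms:
        if firm["ein"] in by_ein:
            matches[firm["id"]] = by_ein[firm["ein"]]
        elif normalize(firm["name"]) in by_name:
            matches[firm["id"]] = by_name[normalize(firm["name"])]
    return matches

edge = [{"id": "E1", "ein": "11-111", "name": "Acme Corp."},
        {"id": "E2", "ein": "99-999", "name": "Widget Co, Inc."}]
qcew = [{"id": "Q1", "ein": "11-111", "name": "ACME CORP"},
        {"id": "Q2", "ein": "22-222", "name": "Widget Co Inc"}]
result = match_firms(edge, qcew)
```

Even with fallback rules like this, sustaining matches across eight to ten years of quarterly data is what proved infeasible for a large share of EDGE recipients.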

Determine the feasibility of conducting an exhaustive evaluation

Evaluators must balance how extensively they will study a tax incentive with statutory evaluation deadlines, competing workload, data availability, database and data analysis tools, and methodological expertise. While planning, evaluators should consider the potential scope of the evaluation, what evaluative efforts are reasonable, and what research and analysis is most informative for policy and program operations.

To determine the feasibility of conducting an exhaustive evaluation – and what kinds of evaluations are feasible and useful – staff should consider the following factors:

  • Incentive evaluation laws: Often, incentive evaluation laws specify evaluation purposes, analytical methods for achieving these purposes, and, in some cases, measures of incentive performance. Sometimes an evaluation law may require certain components that are not feasible (e.g., due to data constraints or how the program is designed or implemented). Evaluators should recognize that statutory provisions are a starting point for planning the different elements of an evaluation, but some may not be addressed in a final evaluation.
  • Policymaker and program manager information needs: Evaluators should assess what information is most important to inform and guide policymakers and program managers about an incentive program. In the planning stage, evaluators should query policymakers, program managers, and other stakeholders about what information would be most useful and gauge evaluation scope and focal points based on these needs.
  • Program significance: In some cases, evaluators have flexibility to adjust the evaluation scope based on incentive program significance. Potential measures of significance could include utilization levels, fiscal impact, whether the incentive has a sunset date, or policy importance (e.g., the incentive serves an important industry sector).
  • Formative vs. summative evaluation: Formative evaluations examine the processes used to administer and implement programs and strive to assess how program implementation may affect program performance and how it can be improved. Summative evaluations study program outcomes, impacts, and effectiveness. While most incentive evaluators tend to focus on examining incentive effectiveness or the economic impacts, evaluators should not dismiss the importance of formative evaluation. Examining the administration and implementation processes of an incentive program may be the key to improving such policies and processes, extending the reach of the program, and improving program outcomes and impacts.
  • Data, data, data: It's not about the evaluative methods you want to employ but about the ones you can employ with the available data. During the planning process, evaluators should assess what data is reasonably accessible (administrative and otherwise), how much time it will take to assemble clean, usable databases, and what evaluation methods can be implemented with that data.

Deeper dives into the nuances of incentive evaluation

While the breadth of this initial planning work is extensive, it builds the foundation for conducting robust, rigorous, and informative incentive evaluations. It is also important to remember that evaluation is often an iterative process, subject to competing workload priorities, the policy climate, policymaker demands, staff resources, and improvements in evaluator expertise, among other factors. As such, conducting evaluations and managing the evaluation process will likely evolve over time. Separate written pieces will detail how to choose a methodology, manage an ongoing evaluation process, prepare reports, and present findings to policymakers and other stakeholders.

Endnotes

  1. Indiana Code 2-5-3.2-1.
  2. For example, incentive evaluation schedules administered by the Washington Citizen Commission for Performance Measurement of Tax Preferences; Minnesota’s Legislative Audit Commission; and Oklahoma’s Incentive Evaluation Commission.
  3. For example, incentive evaluators in the District of Columbia Office of the Chief Financial Officer; Indiana’s Legislative Services Agency; and Colorado’s Office of State Auditor schedule incentives for periodic evaluation.
  4. Indiana’s initial incentive evaluation law (enacted in 2014) gave the authority for scheduling incentive evaluations to the Commission on State Tax and Financing Policy. The Commission was an interim committee reappointed each year by legislative leadership and typically met only two to three times annually from August to October. The potential for frequent schedule changes by the Commission on very short notice before the October 1 deadline for completing annual evaluations led to legislation in 2015 that shifted authority over the evaluation schedule to evaluators in Indiana’s Legislative Services Agency.
  5. The 2014 evaluation analyzed three income tax deductions to encourage: (1) homeowners to install insulation products; (2) businesses to install solar-powered roof vent fans; and (3) individuals to purchase long-term care insurance.
  6. The evaluation defines economic development as policies and programs that attempt to improve the economic well-being and quality of life of a community by creating jobs, supporting or growing incomes, and creating wealth.
  7. Goals for annual evaluations: (1) to ensure tax incentives accomplish the purposes for which they were enacted; (2) to account for the cost of tax incentives in the biennial budgeting process; and (3) to provide information needed by the legislature to make policy choices about the efficacy of tax incentives. Descriptive and analytical methods and information to accomplish these goals: (1) the attributes and policy goals of the tax incentive; (2) the tax incentive’s equity, simplicity, competitiveness, public purpose, adequacy, and conformance with the purposes of the legislation enacting the incentive; (3) the activities the tax incentive is intended to promote and the effectiveness of the tax incentive in promoting those activities; (4) the number of taxpayers applying for, qualifying for, or claiming the tax incentive, and the tax incentive amounts (in dollars) claimed by taxpayers; (5) the tax incentive amounts (in dollars) claimed over time; (6) the tax incentive amounts (in dollars) claimed by industry sector; (7) the amount of income tax credits that could be carried forward for the ensuing five-year period; (8) an estimate of the economic impact of the tax incentive, including a return on investment calculation, cost-benefit analysis, and direct employment impact estimate; (9) the estimated state cost of administering the tax incentive; (10) the methodology and assumptions of the tax incentive review, analysis, and evaluation; (11) the estimated leakage of tax incentive benefits out of Indiana; (12) whether the tax incentive could be made more effective through legislative changes; (13) whether measuring the economic impact of the tax incentive is limited due to data constraints and whether legislative changes could facilitate data collection and improve the review, analysis, or evaluation.
  8. The income tax deduction for the cost of installing various home insulation products authorized under Indiana Code 6-3-2-5 and the income tax deduction for the cost of installing solar-powered roof vent fans in residential and commercial buildings authorized under Indiana Code 6-3-2-5.3.
  9. Section 1 of Public Law 36-2015 (HEA 1142-2015).
  10. Tax year 2012 comprised the latest year of claims data for the deduction.
  11. Tax year 2012 comprised the latest year of claims data for the deduction.
  12. Property tax deductions for residential and non-residential property rehabilitation expenses authorized under Indiana Code 6-1.1-12-18 and 6-1.1-12-22.
  13. Indiana Code 36-7-14-13(e)(7) enacted in Section 33 of P.L. 204-2016 (HEA 1290-2016).