The Infrastructure Investment and Jobs Act (IIJA) of 2021 allocated an unprecedented $65 billion for programs—including the Broadband Equity, Access, and Deployment (BEAD) Program and the Digital Equity Act (DEA)—aimed at expanding connectivity and closing the digital divide between those who have access to broadband internet and those who do not. States are deciding how to use BEAD and DEA funds to put in place affordable broadband infrastructure and expand access both to devices and to digital skills resources. This historic effort to reach universal broadband access presents an opportunity for states and researchers to monitor and evaluate broadband infrastructure and adoption programs and to develop a framework for sustaining these efforts. Evaluations of these programs can help build an understanding of how broadband adoption affects economic development, health care, education, and more.
The Quello Center for Media and Information Policy at Michigan State University, in partnership with The Pew Charitable Trusts, developed a comprehensive assessment framework that states and communities can use to evaluate IIJA’s impact and ensure that resources are allocated effectively. Johannes Bauer is the director of the Quello Center; Elizabeth Mack is a professor in the department of geography, the environment, and spatial sciences; and Megan Knittel was, when this interview was conducted, an assistant professor in the department of media and information (as of January, she is a digital opportunity project manager at Merit Network Inc.).
This interview has been edited for clarity and length.
Mack: We’re building on earlier research to develop a next-generation framework for evaluating federal and state policies targeted at broadband deployment, adoption, and access. We hope this framework will help states as they implement the Broadband Equity, Access, and Deployment Program and the Digital Equity Act and can help guide tracking progress toward outcomes such as availability and adoption of broadband as well as digital skills proficiency.
Bauer: Some previous broadband grant programs, like the Broadband Technology Opportunities Program that was created in 2009, were implemented quickly, and evaluation was often an afterthought. The programs funded by the Infrastructure Investment and Jobs Act, on the other hand, contain many provisions to guide monitoring and evaluation. We wanted to develop a comprehensive framework for evaluating broadband programs to ensure that the IIJA’s BEAD Program and the DEA have a lasting impact. Robust monitoring will help track the progress of these programs, and evaluation of this progress will help policymakers learn what works so they can adjust future broadband initiatives.
Mack: Our report provides practitioners and researchers with a high-level overview of approaches to broadband policy evaluation as well as of the IIJA’s opportunities and challenges. We’re offering a methodological framework that acknowledges the diversity, dynamic development, and complexities of the broadband ecosystem—and that can be adapted to local circumstances and customized to specific community needs.
Knittel: States will need to adopt a systematic approach to data collection, analysis, and reporting. Some practitioners and researchers have created surveys assessing barriers to digital use and adoption, but there’s still a need for a concerted effort to collect data on digital skills and digital literacy.
Digital equity programs can provide technical resources, training, and devices to help everyone thrive in the modern digital economy. To assess the impact of these programs and of broadband deployment on outcomes such as health care and economic development, the interventions need to be analyzed in a larger context that accounts for the variables affecting access and adoption. This will require a longitudinal approach, but establishing baseline measurements now will assist future evaluation.
Bauer: Our report outlines many data sources that practitioners and researchers can leverage. During the early program stages, the data reported through the programs themselves will provide important information, which can then be integrated with other sources, including state and local broadband data collection maps, the Federal Communications Commission’s (FCC) Urban Rate Survey, and the National Telecommunications and Information Administration’s Local Estimates of Internet Adoption. For infrastructure deployment, state broadband offices can use improved data from the revised FCC Broadband Data Collection, which provides information on service availability by location, to monitor changes to metrics over time. For digital equity, it will be challenging to establish a causal link between policy interventions and increased broadband uses and digital skills. For broader community outcomes, it’s critical to select appropriate statistical models to account for the effect of factors other than broadband programs, such as income and demographics.
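The point about statistical models can be made concrete with a small simulation. This is an illustrative sketch, not an analysis from the report: all data below is synthetic, the "true" program effect of 2.0 is an assumption, and the variable names are hypothetical. It shows why a naive comparison overstates a program's effect when rollout is correlated with a confounder such as income, and how including the confounder as a covariate recovers an estimate close to the true effect.

```python
# Illustrative only: simulated county-level data, not real broadband statistics.
# Shows confounding bias in a naive estimate vs. a covariate-adjusted OLS fit.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000  # hypothetical counties

income = rng.normal(50, 10, n)  # median household income ($1,000s), simulated
# Program participation correlated with income (e.g., local match requirements)
program = (income + rng.normal(0, 5, n) > 50).astype(float)
true_effect = 2.0  # assumed effect of the program on the outcome
# Outcome (e.g., adoption rate) depends on both income and the program
outcome = 10 + 0.5 * income + true_effect * program + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(program.reshape(-1, 1), outcome)[1]        # no controls
adjusted = ols(np.column_stack([program, income]), outcome)[1]

print(f"naive estimate:    {naive:.2f}")    # inflated by income confounding
print(f"adjusted estimate: {adjusted:.2f}") # near the assumed effect of 2.0
```

Real evaluations would use richer designs (panel data, difference-in-differences, additional demographic controls), but the mechanics are the same: without accounting for factors like income, the program's contribution cannot be separated from preexisting differences between served and unserved communities.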
Mack: Incomplete or missing data is one of the biggest obstacles to monitoring and evaluation. Secondary, publicly available data is missing in several areas, including digital skills, aspects of broadband service quality, and the price of broadband at various speeds. This year, the FCC began requiring internet service providers to publish labels that include clear information about cost and performance of service. These labels could help fill the gaps on service quality and price, but at present the data isn’t systematically curated and analyzed, and the labels document only a subset of prices, not the whole spectrum of available rates. We’re recommending that states partner with one another to share the burden of data collection efforts and to make publicly available any data that’s collected. That would facilitate monitoring and evaluation efforts.
Mack: Data from private sources can complement data from public sources and narrow some of the gaps. But because private companies are trying to make money from their data, some of it isn’t available in the public domain. What’s more, it can be difficult to ascertain if the data from private sources is statistically accurate.
It would be desirable to develop partnerships between private firms and researchers to make such data available publicly. Some companies have made initial steps in this direction by hosting selected data in public archives or by making portals available to access some of the data, but there is still often a delay before the public can access the information. Expanding such models would help build a community of researchers and experts on broadband; it’s that community that our project is intended to promote.
In general, public sector and private sector data—and the associated metadata that outlines the data collection process and/or methods used to derive the data—should be made available in the public domain to facilitate the monitoring and evaluation of funded programs, particularly for projects funded by federal, state, and local agencies.
Knittel: Monitoring and evaluation provide the evidence needed to assess the programs’ effectiveness in achieving the desired outcomes of digital equity and community economic development.
They also help identify what works and what doesn’t, which leads to continuous improvement and ensures that public funds are used efficiently. Both will be critical as the IIJA is implemented—to justify to state and federal policymakers both the ongoing push toward universal broadband access and future efforts to secure sustainable funding sources.
Knittel: Monitoring is an ongoing process that focuses on a project’s progress in real time. A regular review of project milestones and reporting requirements helps identify any risks that might jeopardize a project’s success and allows for corrective measures to be taken.
Evaluation, on the other hand, aims to assess the outcomes and overall impact of projects and programs. This includes measuring the effect of state policies on network infrastructure rollout, the effect of digital equity measures, and the joint effect of these programs on broader outcomes such as health care and economic development. Planning for evaluation needs to start early so that the data needed to do it well gets collected as part of the monitoring process.
Bauer: Our framework can provide a structured approach, offer guidance on best practices, and highlight the importance of transparency and accountability in program implementation.
Clear goals and baseline metrics need to be established upfront. Monitoring and evaluation frameworks should examine outcomes for unserved and underserved areas and populations, as well as the effects of BEAD and DEA beyond these targets. States must select appropriate indicators and metrics to ensure effective program design.
Knittel: It’s important to start monitoring and evaluation as soon as a program begins to ensure that programs are designed effectively—and that valuable information is preserved from the beginning. Programs should also be flexible enough to account for new broadband access and adoption issues.
There needs to be a standard set of principles for evaluation and data collection so that states can benchmark their accomplishments against peers and learn from each other.
Overall, appropriate steps need to be taken now to maintain current investments and plan for improvements in the future. The success of IIJA programs hinges on our ability to monitor and evaluate their impact. These programs are not just a moment in time; they’re a starting point for sustained work toward making sure that everyone is included in the digital world.