Customer-related Decisions
Product Scaling vs. Evaluation of Impact
How much effort should I focus on scaling and iterating my product versus evaluating how it impacts learning?
In the education technology market, platform and service providers regularly wrestle with the tension between delivering results quickly and conducting the rigorous evaluations that validate their claims.

The north star for the ecosystem is to establish a continuous improvement process that uses learning data to enhance product design and implementation while also scaling. Seasoned courseware providers attest to the challenge of designing and developing solutions and recruiting adopters while simultaneously collecting and analyzing data. Despite the tension inherent in “test-flying the plane while building it,” courseware providers understand that impact studies are a necessary condition for scale. Insights gained from rigorous evaluation inform scaling, which in turn creates more opportunities to engage customers, inviting subsequent cycles of evaluation.

RESOURCE

Tyton Partners’ Courseware in Context (CWiC) framework is a rubric that helps vendors and institutions assess the quality of courseware offerings.

Successfully navigating this tension requires a growth mindset and a commitment to ongoing improvement at your organization. The intentional use of robust data helps your communications and marketing teams elevate your offering. It also provides evidence for the continuous improvement of your product and enables changes to teaching practices at your partner institutions, all of which give your organization a competitive advantage. Ultimately, rigorous evaluation of your tools and products also demonstrates a genuine commitment to improving student learning.

The first step for any courseware provider in developing a data-intensive improvement strategy is to agree on a shared understanding of what efficacy means in education. Providers often find that researchers have an entirely different understanding of efficacy and evaluation than the courseware development teams do, a mismatch that can lead to crossed signals.

Courseware providers must then internalize the criteria for determining efficacy in their development processes. Provider-institution misalignment aside, courseware teams sometimes underestimate the importance of ongoing rigorous evaluation of educational outcomes. They may be unfamiliar with methods for collecting and analyzing data that yield credible results. One point of confusion is the use of the terms “impact study” and “implementation research” as interchangeable synonyms for evaluation. The two have vastly different meanings.

Impact Study

...is not simply an interpretation of data suggesting that something (e.g., learning) happened as a result of your product being implemented.

...is statistically significant evidence of your product’s impact on different dimensions of outcomes (e.g., improved grades, engagement).

...requires comparison classes not using your product that are as similar as possible to the classes using the courseware (e.g., same instructor, same curriculum content). Access to student-level data (a prior achievement measure, final grade, demographics) is essential; a sketch of what such an analysis can look like follows these definitions.

Implementation Research

...is not the study of how well your product is being used in the classroom.

...is a study of barriers to and facilitators for promoting the systematic application of your offering in practice. It takes into account context, institutional setting, stakeholders and unique conditions.

...requires small-scale pilots and access to relevant stakeholders (e.g., faculty, students, IT staff, provosts, deans) for interviews about their perspectives on and experiences with the courseware.
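
To make the impact-study requirements above concrete, here is a minimal sketch of the kind of analysis they imply: comparing final grades between courseware and comparison sections while adjusting for a prior achievement measure. The file name, column names, and use of Python’s pandas and statsmodels libraries are illustrative assumptions, not a prescribed pipeline.

    # Minimal sketch (illustrative assumptions): estimate the courseware's
    # effect on final grades using student-level data from courseware and
    # comparison sections. File and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Expected columns: final_grade (0-100), used_courseware (0/1),
    # prior_gpa (prior achievement measure), section_id
    students = pd.read_csv("student_level_data.csv")

    # Regress final grade on courseware use, controlling for prior
    # achievement; cluster standard errors by course section.
    model = smf.ols("final_grade ~ used_courseware + prior_gpa", data=students)
    result = model.fit(cov_type="cluster",
                       cov_kwds={"groups": students["section_id"]})
    print(result.summary())

    # The coefficient on used_courseware is the adjusted difference in final
    # grades between courseware and comparison sections; its p-value indicates
    # whether that difference is statistically significant.

A real study would also account for demographics, course-level differences and missing data, and would be designed in coordination with the institution’s research office.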

Most colleges that aren’t executing a digital learning strategy aren’t knowledgeable about improvement science. They’re unfamiliar with the requirements for collecting, managing and analyzing data to make evidence-based decisions that drive improvement. This lack of knowledge about improvement practices exposes other challenges:

  • Unclear leadership support for this type of recurring learning and evaluation (i.e., it is not an institutional priority)
  • Complex attitudes about sharing data and the perceived risks it poses for product marketing
  • The relative newness of data-informed instruction in postsecondary education
  • The expense and time required to collect and analyze data
  • Uneven capacity for institutional research (especially at under-resourced public two-year colleges)
  • Small sample sets for evaluation produced by small-scale implementations (see the sketch after this list)
  • Difficulty recruiting busy faculty to participate in research and evaluation
  • Difficulty choosing metrics that yield useful and credible insights
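
To illustrate the small-sample challenge noted above, the sketch below estimates how many students per group a simple two-group comparison needs in order to reliably detect a modest effect. The effect size, power and significance level are illustrative assumptions.

    # Minimal sketch (illustrative assumptions): per-group sample size needed
    # to detect a modest effect in a two-group comparison.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.3,  # modest standardized difference (Cohen's d)
        power=0.8,        # 80% chance of detecting a real effect
        alpha=0.05,       # 5% significance level
    )
    print(f"Students needed per group: {n_per_group:.0f}")
    # Roughly 175 students per group -- more than a single small pilot
    # section typically enrolls, which is why small-scale implementations
    # often yield inconclusive evaluations.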

NOTE TO INSTITUTIONS

Institutions can make the most of courseware platforms by proactively using the data these platforms generate to identify students who may need support and connect them to support services.

No matter how well prepared a vendor is, there’s no accounting for the natural chaos of scaling and evaluating courseware in real classrooms. Institutions that you recruit to pilot products might opt in and out without warning. Students in a tracked group may take and drop courses on a whim. Through such challenges, continue conducting rigorous impact studies and implementation research, starting early and repeating at regular intervals. You can still accumulate data that will begin to build brand awareness, faculty trust and student satisfaction. Going forward, it will be important for institutions and providers to be transparent about their capacities, capabilities and priorities in order to enter into a mutual continuous improvement relationship that works.

Ideally, your institutional partner has invested time and resources in an established Institutional Research Office that can coordinate with you on your data needs. In situations that require more groundwork to set up a proper study design, consider these strategies:

Best Practices
Product Scaling vs. Evaluation of Impact

Set expectations up front. Internally and with your institutional partner, carefully consider the expertise and conditions required to conduct a rigorous evaluation (e.g., a comparison course, access to institutional data).

Obtain a Data Use Agreement (DUA). In light of the difficulties in getting DUAs in place, consider joining forces with faculty at the institution. They can help make sure you have the necessary DUAs and Institutional Review Board (IRB) approvals in place.

Leverage incentives. If feasible, offer incentive payments to the institutions that will supply student-level institutional data.

Partner with an evaluation entity. An evaluation partner can add efficiency (e.g., in obtaining DUAs and IRB approval) and rigor to the research and evaluation process.

Define quality and scaling goals. State explicit goals for both the scaling and the quality of the courseware, along with how you’ll measure them.

Be careful when claiming evidence. To gain faculty trust, be candid about the level and quality of the evidence you have. Avoid reporting on outcomes for which you lack credible evidence. Transparency is best.

Share data from similar institutions. Practitioners are very eager to see evidence that courseware works specifically in their discipline and at a similar institution.

Consider the institutional context. If your research design requires a comparison course, how would you collect data from institutions that are using the product in a new course?