Niels Hoven

Income Share Agreement Incentives and Dynamics Part 3: Metrics and product teams

One overlooked challenge of ISAs is the interdependence between admissions and outcomes and how much harder that makes traditional metric-driven optimization.

This essay is the last of a three-part series exploring the impact of ISAs on customers, companies, and the day-to-day operations of product teams. The previous essays can be found here:

  1. The customer perspective – How demand for ISAs will evolve
  2. The company perspective – How ISAs shift company strategy
  3. The operator perspective – Key metrics for ISA product teams

Imagine a traditional online courses company like Udemy or Coursera. When revenue drops, diagnosing the problem is straightforward. If conversion is down, there’s a problem with the purchase funnel. If retention is down, there’s a problem with content quality.

But for a program like Lambda School whose revenue comes from ISAs, what happens if one of their cohorts has a low success rate? For example, what if employment of graduates drops from 90% to 60% for a cohort? Did admissions become less selective? Did the teaching become less effective? How do you diagnose the problem?

With an ISA payment model, the interplay of admissions selectiveness and teaching effectiveness makes troubleshooting significantly harder. Nearly every metric affected by the quality of admissions is also affected by the quality of education, so how can you isolate and measure each one's effect?


It turns out that we can think of this as a mix-shift problem. Mix-shift describes what happens when the makeup of your audience changes (e.g. a sudden influx of new users), resulting in major metric changes even without any changes in the product.
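A minimal sketch of mix-shift, with invented numbers: each segment's conversion rate stays exactly the same, yet the blended metric drops once the audience mix tilts toward the weaker segment.

```python
# Hypothetical per-segment conversion rates -- identical in both periods.
rates = {"experienced": 0.30, "novice": 0.10}

def overall_rate(mix):
    """Blend per-segment rates by each segment's share of traffic."""
    return sum(rates[seg] * share for seg, share in mix.items())

before = {"experienced": 0.50, "novice": 0.50}   # old traffic mix
after  = {"experienced": 0.20, "novice": 0.80}   # sudden influx of novices

print(overall_rate(before))  # 0.2
print(overall_rate(after))   # 0.14
```

Nothing about the "product" changed here; only the composition of the audience did, yet the topline metric fell by 30%.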

Ideally, most inbound traffic has attribution tracking – i.e. you can tell whether a user arrived via an Adwords ad or a particular Facebook lookalike audience. Thus it’s straightforward to see if there has been a significant change in the ratio of traffic coming from different channels/audiences.

But what if you didn’t have that information? What would you do? This is essentially the situation an ISA school runs into when trying to determine whether a root cause is a change in the product or a change in its class makeup.

The dual role of admissions

Any applicant pool will consist of a mix of potential students with wildly varying attributes – different levels of grit, motivation, ability, experience, etc. Essentially, this is like the ecommerce example where an audience is composed of a mix of different segments. But unlike in the ecommerce example, identifying the audience segments is a lot more difficult.

For an ISA, the admissions process has to do much of the heavy lifting for user segmentation – using a limited amount of data to understand as much as possible about applicants, and then making a prediction about whether or not the current program can help the applicant be successful.

Thus admissions has two jobs:

  • Characterizing applicants based on limited information (e.g. “how much motivation, commitment, ability, etc do we think this applicant has, based on their essays and homework?”)
  • Making a call about which groups are most likely to benefit from the current curriculum (e.g. “are people who can only commit one night a week likely to be successful?”)

So Admissions must use metrics to evaluate both their predictive validity and their admit/no-admit criteria. With that in place, Outcomes can evaluate how effective their educational programs are within various segments.
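The two jobs above can be sketched as a toy pipeline. Everything here is hypothetical: the trait names, weights, segment labels, and admit thresholds are invented for illustration, not drawn from any real admissions process.

```python
def characterize(applicant):
    """Job 1: map limited application data to a coarse segment label.
    Scores and cutoffs are invented for illustration."""
    score = (applicant["essay_score"] + applicant["homework_score"]) / 2
    if score >= 7 and applicant["weekly_hours"] >= 10:
        return "high_readiness"
    if score >= 7:
        return "limited_time"
    return "needs_prep"

# Job 2: the admit/no-admit call per segment, ideally informed by
# historical success rates within each segment.
ADMIT_SEGMENTS = {"high_readiness", "limited_time"}

applicant = {"essay_score": 8, "homework_score": 7, "weekly_hours": 5}
segment = characterize(applicant)
print(segment, segment in ADMIT_SEGMENTS)  # limited_time True
```

Splitting the process this way is what lets each half be evaluated separately: Job 1 against how well segment labels predict outcomes, Job 2 against the realized success rate of each admitted segment.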

Diagnosing problems

If applicants with similar traits in the admissions process perform wildly differently over the duration of the course, then the questions asked in the admissions process need to be re-examined, as their predictive validity is low.

If applicants with similar traits in the admissions process perform similarly over the duration of the course, but one particular segment is consistently struggling, then the admit/no-admit criteria are likely flawed.

And finally, if the curriculum is improving over time, then for any particular audience segment, success rates should increase over time as well. If a particular audience segment suddenly begins performing differently, then some issue with the effectiveness of teaching is a likely culprit.
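The three checks above can be expressed as simple segment-level statistics. This is a sketch with invented segment names and outcome data (1 = employed, 0 = not); a real analysis would of course use far more data and proper significance tests.

```python
from statistics import mean, pstdev

# Hypothetical outcomes per admissions segment, for two cohorts over time.
cohorts = [
    {"high_commitment": [1, 1, 1, 0], "low_commitment": [1, 0, 0, 0]},  # earlier
    {"high_commitment": [1, 1, 1, 1], "low_commitment": [1, 0, 0, 0]},  # later
]

def rate(outcomes):
    return mean(outcomes)

# Check 1: low predictive validity shows up as high outcome
# variance *within* a segment -- similar applicants, wild spread.
for seg, results in cohorts[-1].items():
    print(f"{seg}: within-segment spread {pstdev(results):.2f}")

# Check 2: flawed admit/no-admit criteria show up as one segment
# consistently underperforming across cohorts.
for seg in cohorts[0]:
    if all(rate(c[seg]) < 0.5 for c in cohorts):
        print(f"{seg}: consistently struggling -> revisit admit criteria")

# Check 3: teaching effectiveness shows up as a segment's success
# rate shifting over time, holding the segment definition fixed.
for seg in cohorts[0]:
    delta = rate(cohorts[-1][seg]) - rate(cohorts[0][seg])
    print(f"{seg}: change over time {delta:+.2f}")
```

The key move in all three checks is the same: never compare raw cohort-level numbers, always compare within a fixed admissions segment, so that admissions effects and teaching effects stop confounding each other.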

Industry challenges

I expect these challenges will cause existing institutions to struggle to adapt to an ISA-based education model. ISAs open up untapped markets and offer the advantages of personalized pricing, but companies that offer them will have to develop new core competencies. The clear winners here are students and consumers, who benefit from better-aligned incentives and the flexibility of new payment models.

We are at the very early stages of the ISA movement. Expect to see momentum behind ISAs grow significantly over the next few years, and I, for one, could not be more excited.