Every product team searches for their north star metric.
Facebook’s Chamath Palihapitiya famously found theirs in discovering that if they could get a user to “7 friends in 10 days”, that user was much more likely to become an active lifetime user. Dropbox discovered that 1 file upload per folder led to higher engagement, and Zynga focused on “day 1 retention”. This type of analysis has traditionally required sophisticated data engineering and machine learning infrastructure, and it can take months to find even a single north star metric for a particular product goal.
At ClearBrain, our mission is to enable companies to identify which of their users will convert or churn before they do so. We build and retarget look-alikes for your users in minutes, without a single line of code. But knowing who will purchase isn’t as powerful if you don’t understand why they will purchase. That’s why we’re excited to give you a peek under the hood of one of our newest features - Benchmarks.
The Benchmarks feature enables any company to discover, in minutes, the drivers of any conversion event. With the click of a button, you can now determine the threshold in user activity (i.e. the “benchmark”) needed to activate a user at each stage of their customer journey. Below, we walk through how this new feature works and how to use it to discover your own “7 friends in 10 days”.
Setting a Product Goal or North Star Metric
Identifying the benchmarks for your product in ClearBrain is as simple as point-and-click.
ClearBrain integrates with data layers like Segment or Heap and automatically ingests all of your historical user behavior. This enables you to select any attribute you’ve tracked as a goal for which you’d like to assess the important benchmarks.
ClearBrain’s automated machine learning infrastructure will then evaluate your users’ past attributes in relation to that goal via a decision tree methodology, and determine each attribute’s relative importance to the product goal you have selected. This works in two steps: a) determining the right benchmarks, and b) ranking those benchmarks.
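To make the two steps concrete, here is a minimal sketch of the general idea, not ClearBrain’s actual implementation. It assumes per-user event counts in a pandas DataFrame with a boolean “converted” goal column (hypothetical names), and fits a depth-1 scikit-learn decision tree per attribute to get both a candidate threshold and a score for how cleanly that threshold separates converters from non-converters:

```python
# A minimal sketch (not ClearBrain's implementation), assuming per-user event
# counts live in a pandas DataFrame with a boolean "converted" goal column.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def benchmark_for_attribute(users: pd.DataFrame, attribute: str, goal: str = "converted"):
    """Step a): fit a depth-1 tree ("stump") on one attribute. Its root split is
    the candidate benchmark; the split's weighted Gini impurity scores it."""
    stump = DecisionTreeClassifier(max_depth=1, criterion="gini")
    stump.fit(users[[attribute]], users[goal])
    tree = stump.tree_
    if tree.node_count == 1:                 # attribute never usefully splits the goal
        return None, float("inf")
    left, right = tree.children_left[0], tree.children_right[0]
    n = tree.weighted_n_node_samples
    impurity = (n[left] * tree.impurity[left] + n[right] * tree.impurity[right]) / n[0]
    return tree.threshold[0], impurity       # benchmark reads "attribute > threshold"

# Hypothetical example data: two tracked attributes and the selected goal.
users = pd.DataFrame({
    "upgrade_page_viewed": [0, 0, 1, 1, 2, 2, 3, 3],
    "docs_shared":         [0, 1, 0, 4, 3, 5, 6, 2],
    "converted":           [0, 0, 0, 1, 1, 1, 1, 1],
})

# Step b): rank the attributes by how cleanly their best benchmark splits users.
candidates = {col: benchmark_for_attribute(users, col)
              for col in users.columns if col != "converted"}
for attribute, (threshold, impurity) in sorted(candidates.items(), key=lambda kv: kv[1][1]):
    print(f"{attribute} > {threshold:.1f}  (weighted Gini impurity {impurity:.3f})")
```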
Determining the Benchmarks For Your Product Goal
The first step in the process is to determine the best benchmark for each of your user attributes. For Facebook’s attribute of “X friends in Y days”, this is the step that determines that 7 and 10 are the optimal values for X and Y, respectively.
In order to determine the best benchmark for a given attribute, we first calculate the Gini impurity for all possible benchmarks of the attribute. Gini impurity is a statistic that assesses how well the benchmark splits your users into those who have performed the goal and those who have not.
For instance, if all the users who reached the benchmark performed the goal and all the users who did not reach the benchmark did not perform the goal, then the benchmark perfectly splits your users and would have a Gini impurity of 0. Conversely, if the rate at which users perform the goal is the same whether or not they surpass the benchmark, then the benchmark would have a high Gini impurity. Thus, the lower the Gini impurity, the better the benchmark. (For more on Gini impurity scoring, you can follow this guide on decision trees.)
In the image above, you’ll see we have calculated the best benchmark for a hypothetical user action named “Upgrade Page Viewed”. No user performed this action more than 3 times, so the possible benchmarks are performing the action more than 0, 1, or 2 times. The best benchmark for the Upgrade Page Viewed action is “Upgrade Page Viewed > 1”, since that is the benchmark with the lowest Gini impurity.
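To make the arithmetic behind this example explicit, here is a toy calculation with made-up user counts for the hypothetical “Upgrade Page Viewed” action (the real feature computes this over your actual event data); with these counts, the winning benchmark comes out to “> 1”, as described above.

```python
# Toy arithmetic for the example above, with made-up counts for the hypothetical
# "Upgrade Page Viewed" action. Gini impurity of a group is 1 - p^2 - (1 - p)^2,
# where p is the share of the group that performed the goal; a candidate
# benchmark is scored by the size-weighted impurity of the two groups it creates.

def gini(converted: int, total: int) -> float:
    if total == 0:
        return 0.0
    p = converted / total
    return 1.0 - p ** 2 - (1.0 - p) ** 2

def split_impurity(users, threshold, action="upgrade_page_viewed", goal="converted"):
    """Size-weighted Gini impurity of splitting users at `action > threshold`."""
    above = [u for u in users if u[action] > threshold]
    below = [u for u in users if u[action] <= threshold]
    score = lambda group: len(group) * gini(sum(u[goal] for u in group), len(group))
    return (score(above) + score(below)) / len(users)

# Hypothetical users: how often they viewed the upgrade page, and whether they converted.
views     = [0, 0, 1, 1, 2, 2, 3, 3]
converted = [0, 0, 0, 1, 1, 1, 1, 1]
users = [{"upgrade_page_viewed": v, "converted": c} for v, c in zip(views, converted)]

# No one viewed the page more than 3 times, so the candidate benchmarks are "> 0", "> 1", "> 2".
for t in (0, 1, 2):
    print(f"Upgrade Page Viewed > {t}: Gini impurity {split_impurity(users, t):.3f}")
# The lowest impurity lands on "Upgrade Page Viewed > 1", the benchmark reported above.
```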
Ranking Benchmarks And Finding Your “7 Friends in 10 Days”
Now that we have a benchmark for each attribute, the second step is to display these benchmarks in order of importance. In the Facebook example, this is the step that compares “7 friends in 10 days” to the benchmarks of other attributes such as “3 posts in 5 days”, and decides which one is more important.
For each benchmark, ClearBrain calculates the probability, and a respective confidence interval, that a user will perform the goal if they surpass the benchmark. We use a 95% confidence interval to provide a statistical estimate of the range of possible probabilities. This means we can be 95% certain that the true probability that a user who surpasses the reported benchmark will perform the goal lies between the lower and upper bound of the benchmark’s confidence interval.
We in turn rank all the benchmarks by the lower bound of their confidence interval, to ensure only benchmarks with enough users to provide statistical significance end up near the top of the list.
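As an illustration of this ranking step: the post doesn’t specify which interval is computed, so the sketch below uses a Wilson score interval for a binomial proportion, with made-up benchmarks and counts, to show why ranking by the lower bound demotes benchmarks that only a handful of users have reached.

```python
# A sketch of the ranking step, assuming we know, for each benchmark, how many
# users surpassed it and how many of those went on to perform the goal. The
# interval here is a Wilson score interval (one common choice, not necessarily
# ClearBrain's); the benchmarks and counts below are made up.
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the ~95% Wilson confidence interval for successes / n."""
    if n == 0:
        return 0.0
    p = successes / n
    center = p + z ** 2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (center - margin) / (1 + z ** 2 / n)

# (benchmark, users who surpassed it, how many of those performed the goal)
benchmarks = [
    ("Friends Added > 7 in 10 days", 5000, 1500),  # 30% conversion, many users
    ("Posts Created > 3 in 5 days",    50,   20),  # 40% conversion, very few users
    ("Profile Photo Uploaded > 0",   8000, 1900),  # 24% conversion, many users
]

# Ranking by the interval's lower bound keeps thinly supported benchmarks
# (wide intervals) from floating to the top on a lucky sample.
for name, n, k in sorted(benchmarks, key=lambda b: wilson_lower_bound(b[2], b[1]), reverse=True):
    print(f"{name}: {k / n:.0%} conversion, 95% lower bound {wilson_lower_bound(k, n):.0%}")
```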
It should be noted that in the development of this feature, we also experimented with a few other metrics for ranking benchmarks, such as using Gini impurity again or computing the AUC of a holdout set. However, these metrics do not accomplish the purpose of the Benchmarks product - which is to help you understand why your users will convert or churn. For most of our customers, telling them that a benchmark has a Gini impurity of 0.2 or an AUC of 60% is not as intuitive as telling them that we are 95% confident that users who reach this benchmark will perform the specified goal with 20-30% probability.
The Many, Many Benchmarks For Your Product
Benchmarks is a powerful feature that has already helped companies discover their own analogue of Facebook’s “7 friends in 10 days”. The Benchmarks feature automates the data engineering and statistics necessary to identify the thresholds in product behavior that lead to the highest likelihood that your users will convert, upgrade, or churn.
When leveraging Benchmarks, note, however, that your product does not necessarily have a single benchmark to focus on. The metric that applies to one stage of your product journey does not necessarily apply to the next.
You may find that getting a user to activate takes 50 prior actions, but retaining them as a customer takes 100. It is therefore important to evaluate different benchmarks for every stage of your user funnel. Don’t settle for one north star product metric. Have 10, or 20, or 50.
Before ClearBrain, this process would have taken months. Now you can evaluate the benchmarks for every goal with the click of a button.
Interested in trying out Benchmarks yourself? Sign up for a 30-day free trial of ClearBrain today!