"In order to seek impact, you need to start with prioritized areas of opportunities, not just ideas or even prioritized ideas. Problems can be noisy, but they don’t necessarily correlate with being the biggest growth opportunity." - Brian Balfour
The role of a growth team, much like that of a data science team, is to learn as much about the customer as possible.
The added benefit of a growth team is that it can use experiments to test hypotheses that existing datasets cannot answer.
For example, your dataset might show you how many people are visiting your site from LinkedIn, but it might not tell you which product features they're most interested in. For that, the growth team might launch an experiment: create a landing page for each use case, run ads to each, and see which use case earns the highest click-through rate with the target audience.
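The analysis behind that kind of experiment can be very simple. A minimal sketch, assuming hypothetical landing pages and made-up ad numbers (none of these are real Qatalog data):

```python
# Hypothetical ad stats per use-case landing page: (clicks, impressions).
# The page names and numbers below are illustrative only.
ad_stats = {
    "project-management": (420, 18_000),
    "knowledge-hub": (310, 17_500),
    "team-directory": (275, 18_200),
}

def ctr(clicks, impressions):
    """Click-through rate as a fraction of impressions."""
    return clicks / impressions

# Rank the use cases by CTR to see which one resonates most.
ranked = sorted(ad_stats.items(), key=lambda kv: ctr(*kv[1]), reverse=True)
for name, (clicks, impressions) in ranked:
    print(f"{name}: {ctr(clicks, impressions):.2%}")
```

The winning page gives you a signal about which use case to lead with in marketing copy, even before anyone signs up.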
When I first started out at Qatalog, I set up a simple experiment template in the format of:
- North Star Metric - What is the metric that we're trying to improve?
- Problem - What is the customer problem that we're trying to fix?
- Solution - What is the solution that would fix the customer problem?
- Test Method - How are you going to test the solution against the control?
- Success Criteria - How much does the metric need to change in order to give us confidence in the result?
- Result & Decision - Did we move the metric significantly, and what did we learn?
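One way to make the "Success Criteria" step concrete is a basic significance check on the two conversion rates. A minimal sketch using a two-proportion z-test; the conversion numbers are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: control converts 200/5,000, variant 260/5,000.
z, p = two_proportion_z(200, 5_000, 260, 5_000)
# With these numbers the lift is significant at the 5% level.
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value clears your threshold, you ship the variant; if not, you either extend the test or record the null result as a learning.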
This template was useful in helping us create experiments focused solely on "hacking" or improving metrics such as conversion rate or unique visitor numbers.
But after running a few successful experiments, we realized that although these "hacks" did improve our target metrics, we failed to derive any new learnings beyond "the new redesign was an improvement over the old design".
This made us rethink how we approached experiments.
During a long weekend, I stumbled upon Hubspot's Growth Process.
Hubspot's focus was on forming quality experiments. The growth team at Hubspot spent time looking at data and formed hypotheses around what they were seeing.
They added learning outcomes to ensure that each experiment gave them new insights, and a business outcome so that stakeholders were also invested in the experiments.
Taking inspiration, I created a new template for experiments going forward:
- Background - Provide context on the research you’ve done and the thought process behind why this experiment is worth running.
- Hypothesis - Something that we believe to be true based on what we know from research.
- Learning Objective - What do we hope to learn by running this experiment?
- Business Objective - What do we hope to accomplish by running the experiment?
- Prediction - What do we believe will happen if our hypothesis is true?
- Experiment Design - How are we designing this experiment to test our hypothesis?
- Predicted Outcome - The expected results of the experiment.
- Experiment Post Mortem - An open discussion at the conclusion of the experiment, where you and your team identify and analyze everything related to it.
- Process Post Mortem - Is there anything we should do better next time?
- Observation - If the experiment was successful and deployed, what was the impact after two weeks?
Note: We share all of our experimental findings with the rest of the company using Qatalog's Post Feature.
Every company has a set of core metrics that are essential to its business. If you're Facebook, it's the average time spent on the site; if you're Groupon, it's the number of coupons redeemed.
To increase the core metrics that power your startup, you first have to understand the customer. Start by looking at your power users: the outliers in your database. For example, if people post on Facebook twice a day on average, then your power users are those who post ten to fifteen times a day.
Your aim is to grow the number of these users and find out what creates them. Start by asking: "Did these power users all come from a single channel? What do they have in common? What led them to become power users? How can we make it easier for them to post on Facebook?"
The opposite can also be helpful: "Why are these users not engaging with the platform? What's common among inactive users? What led them to become inactive?"
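Pulling these segments out of your data is straightforward. A minimal sketch using the standard library, with hypothetical posts-per-day numbers and an illustrative one-standard-deviation threshold for "power user":

```python
from statistics import mean, stdev

# Hypothetical posts-per-day per user; real numbers would come
# from your product analytics database.
posts_per_day = {
    "u1": 2, "u2": 1, "u3": 12, "u4": 0, "u5": 3,
    "u6": 15, "u7": 0, "u8": 2, "u9": 1, "u10": 4,
}

avg = mean(posts_per_day.values())
sd = stdev(posts_per_day.values())

# Power users: well above average activity (here, > mean + 1 std dev).
power_users = [u for u, n in posts_per_day.items() if n > avg + sd]

# Inactive users: no activity at all.
inactive_users = [u for u, n in posts_per_day.items() if n == 0]
```

Once you have the two segments, you can join them against acquisition channel, signup date, or feature usage to look for the common threads the questions above point at.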
These questions will lead you down a research rabbit hole that will, in turn, inspire experiments to test whether your hypotheses are correct.