CXL Week 5 Review: A/B Testing

Megha Pg
4 min read · Jul 6, 2021

Hi everyone, I am back with another experience from CXL Institute's Growth Marketing minidegree. I know I am late this week, but I assure you I will be on time next week.

It seems that if you want to run growth experiments, user research is THE path. To write last week's post, I dedicated extra time to the chapter called User-Centric Marketing, thinking that I would learn everything I needed and could then dive into growth experiments.

Little did I know that this week I would be learning about conversion research.

The good part is that now I understand why growth hackers get a bad reputation. When you use listicles to “generate experiment ideas”, the chances of getting relevant results are minimal. This is also why part of the optimization program is getting deeper customer insights, which can take up to three months.

The chapter on how to run experiments begins with a workshop by Peep Laja. It is a high-level overview of how to generate ideas, prioritize them, and decide when a test is winning. On the topic of A/B testing itself, I will write an article next week, because the amount of information is over 9000.

The first lesson of growth experiments is that you simply don't trust listicles. Those articles are written by SEO people to get traffic, and it is good practice not to build a business on them.

But where do you start?

  • Best practices? Yes, but following them is not optimization. When you build a website, it should follow best practices by default.
  • Design trends? Not necessarily something to follow in general.
  • Market leaders or competitors? Just copying Amazon is not a great path, and benchmarking is also not optimization.

Start your own optimization process following these steps:

  1. Where are the problems?
  2. What are the problems?
  3. Why is this a problem?
  4. Turn these issues into test hypotheses.
  5. Prioritize tests and instant fixes.

Discovering what matters is problem-solving 101. The foundation is research with good data that answers your business questions. For this you need a process, and the one Peep Laja presents is the ResearchXL framework.

Do you have enough data to run an A/B test?

· If you have fewer than 1,000 conversions (transactions, clicks, leads…) per month, do not A/B test: significance would be too low. At this stage, just focus purely on growing; optimization will come later. Above 1,000 conversions per month, you can start A/B testing (a rough sample-size calculation after this list shows why the threshold matters).

· If you have more than 10,000 conversions per month, you can run four A/B tests per week. 10,000 conversions is the “DNA border”: if you are below it, take more risks and grow the company past 10,000 conversions before creating a real, proper structure. Once you have reached that mark, you will need more optimization teams, and A/B testing will become part of the DNA of your company.
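To see why low conversion volume makes testing impractical, here is a minimal sample-size sketch in Python using statsmodels. The baseline conversion rate, the lift to detect, and the significance and power levels are numbers I picked for illustration, not figures from the course.

```python
# Rough sample-size check for an A/B test (illustrative numbers only).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.02   # assumed baseline conversion rate (2%)
target = 0.022    # assumed rate we hope to detect (a 10% relative lift)

# Cohen's h effect size for two proportions.
effect = proportion_effectsize(target, baseline)

# Visitors needed per variant for alpha = 0.05 and 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} visitors per variant")
# The smaller the lift you want to detect, the more traffic you need;
# a low-conversion site may not collect this within a reasonable test duration.
```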

Overall evaluation criterion (OEC): different teams may have different (and thus sometimes conflicting) goal metrics. It is really important to have a goal metric that fits each team's purpose in the company. That is what we call an overall evaluation criterion. The OEC should be defined by short-term metrics that predict long-term value and reflect the factors of success. Do NOT rely on vanity metrics.
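As a toy illustration of what an OEC could look like in code, here is a sketch that blends a few short-term metrics into one weighted score. The metric names and weights are entirely hypothetical; the course only says the OEC should use short-term metrics that predict long-term value.

```python
# Hypothetical OEC: a weighted blend of short-term metrics assumed
# to predict long-term customer value. Metrics and weights are made up.
def overall_evaluation_criterion(metrics: dict[str, float]) -> float:
    weights = {
        "conversion_rate": 0.5,       # purchases / sessions
        "repeat_visit_rate": 0.3,     # sessions with a return within 7 days
        "support_ticket_rate": -0.2,  # penalize friction signals
    }
    return sum(weights[name] * metrics[name] for name in weights)

# Compare variants on one shared goal metric instead of letting each
# team optimize its own (possibly conflicting) number.
variant_a = {"conversion_rate": 0.021, "repeat_visit_rate": 0.30, "support_ticket_rate": 0.04}
variant_b = {"conversion_rate": 0.023, "repeat_visit_rate": 0.27, "support_ticket_rate": 0.05}
print(overall_evaluation_criterion(variant_a), overall_evaluation_criterion(variant_b))
```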

Setting a hypothesis: formulating a hypothesis gets all stakeholders aligned on the same page. It frames what we are going to do and why, and it saves time on discussions during and after the experiment. The hypothesis shows everyone the direction to go.

Prioritizing your A/B tests: there are some well-known prioritization frameworks you can use, such as PIE or ICE, and no matter which one you choose, they are quite similar. Here we are introduced to the “PIPE” model, which puts the hypothesis at the center of the framework (a scoring sketch follows the list below):

· Potential: what is the chance of the hypothesis being true?

· Impact: where would this hypothesis have the bigger impact?

· Power: what are the chances of finding a significant outcome?

· Ease: how easy is it to test and implement?
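The course doesn't prescribe a scoring formula, so here is one possible sketch: score each hypothesis 1–5 on the four PIPE dimensions and rank by the average. The hypotheses and scores below are invented for illustration.

```python
# Hypothetical PIPE prioritization: score each hypothesis 1-5 on the
# four dimensions and rank by the mean. The formula is an assumption;
# the course only names the dimensions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    potential: int  # chance the hypothesis is true
    impact: int     # size of the effect if it is true
    power: int      # chance of detecting a significant outcome
    ease: int       # effort to test and implement

    def score(self) -> float:
        return (self.potential + self.impact + self.power + self.ease) / 4

backlog = [
    Hypothesis("Simplify checkout form", potential=4, impact=5, power=3, ease=2),
    Hypothesis("Add trust badges", potential=3, impact=2, power=4, ease=5),
]
for h in sorted(backlog, key=lambda h: h.score(), reverse=True):
    print(f"{h.score():.2f}  {h.name}")
```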

In the execution phase of your A/B test, you should also consider quality assurance. If some browsers or devices perform worse, you might want to check whether everything is right. Analyze the potential loss of income before deciding whether to do QA on those segments: it is a matter of time vs. money. What you surely need to do is check that the interaction with other pages still works fine. Make sure your A/B test didn't create any technical issue for other parts of your website.
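The time-vs-money trade-off can be made concrete with back-of-the-envelope arithmetic. All the numbers below (traffic share, conversion rates, order value) are hypothetical.

```python
# Back-of-the-envelope revenue at risk in a possibly-broken segment.
# Every number here is a made-up example, not course data.
monthly_visitors = 100_000
segment_share = 0.08     # e.g., share of traffic on an underperforming browser
baseline_conv = 0.020    # conversion rate elsewhere on the site
segment_conv = 0.014     # conversion rate observed in the segment
avg_order_value = 60.0

lost_orders = monthly_visitors * segment_share * (baseline_conv - segment_conv)
revenue_at_risk = lost_orders * avg_order_value
print(f"~{lost_orders:.0f} orders, ~${revenue_at_risk:,.0f}/month at risk")
# If this figure exceeds the cost of QA for the segment, the QA pays for itself.
```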

The last part of the execution phase is monitoring. Should you stop the experiment earlier than expected? There are three main things you should check:

· Monitor graphs and stats. If something looks abnormal, it's probably broken. If it is broken, stop the experiment.

· Check the main user segments and analytics. If traffic is split 50/50, you should have roughly the same number of users in A and B. If not, there is what we call a sample ratio mismatch (see the sketch after this list). Stop the experiment.

· If you lose too much money, stop the experiment. This is probably something abnormal. Talk to customer service, analyze chat logs… Try to find out what is wrong, and whether it is caused by the experiment or by an outside event.
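A common way to detect a sample ratio mismatch is a chi-square test of the observed user counts against the intended split. Here is a minimal sketch with made-up counts; the 50/50 expected split and the 0.001 alarm threshold are conventional choices, not course specifics.

```python
# Sample ratio mismatch (SRM) check: chi-square test of observed
# user counts against the intended 50/50 split. Counts are invented.
from scipy.stats import chisquare

users_a, users_b = 50_800, 49_200  # hypothetical assignment counts
stat, p_value = chisquare([users_a, users_b])  # default expectation: uniform split

if p_value < 0.001:  # a strict threshold is typical for SRM alarms
    print(f"SRM detected (p = {p_value:.6f}) — stop the experiment")
else:
    print(f"No SRM evidence (p = {p_value:.6f})")
```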

This is what I learned this week. I hope to get back to you with new and exciting learnings next week.
