CXL Conversion Optimization Minidegree — Week 10 review

Dhiya A Hani
6 min read · Mar 28, 2021

We’re on Week 10 of CXL’s Conversion Optimization course!

This week, we’re delving deep into testing — the thing that actually attracted me to the minidegree in the first place.

This week, we’ll be covering:

  • Google Analytics Audit with Fred Pike
  • A/B Testing Foundations with Peep Laja
  • How to Run Tests with Peep Laja
  • Testing Strategies (again) with Peep Laja

Since the three testing courses will be way too much to cover individually, I’ll put them in one section later on. Without being too detailed, of course.

If you’re interested in the nitty-gritty of testing but still not sure if you should take a course on it, you can read CXL’s blog instead. There is a lot of info about testing there, as I belatedly realized earlier this week.

So, let’s go to Part 3’s remaining module: GA Audit.

GA Audit

So, this course comes with a thorough checklist you should follow when you do a GA Audit for one of your clients. The checklist is also the core of the course, as the rest of it is Fred Pike walking us through it. However, I’m not sure if I’m allowed to share it publicly.

Instead, let me walk you through what you should check on a GA Audit:

Accuracy

This is basically a brief checkup to see whether the right data, in the right format, is entering your GA, and whether there’s something there that shouldn’t be (hello, filters).

Besides that, you’d also want to check whether your website is sending page views to GA correctly. To do this, you need the DevTools console in your browser, which you can usually open with Ctrl + Shift + I, and Tag Assistant, the Chrome extension that detects tags on a web page.
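
If you’d rather script this check than click around DevTools, here’s a rough sketch of my own (not from the course) using Playwright: load the page headlessly and log any requests going to Google Analytics’ collect endpoint. The URL is just a placeholder.

```python
# A rough, automated alternative to the manual DevTools check: load a page in
# a headless browser and log any hits sent to Google Analytics.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

PAGE_URL = "https://www.example.com/"  # placeholder: swap in your own page

ga_hits = []

def log_ga_hits(request):
    # Both Universal Analytics and GA4 send hits to a /collect endpoint
    if "google-analytics.com" in request.url and "collect" in request.url:
        ga_hits.append(request.url)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("request", log_ga_hits)
    page.goto(PAGE_URL)
    page.wait_for_timeout(3000)  # give the tags a moment to fire
    browser.close()

print(f"{len(ga_hits)} GA hit(s) detected")
for url in ga_hits:
    print(url)
```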

Channel group and site crawl

In this section, you’re making sure that all of your incoming traffic is allocated to the right channels. We also saw this in Mercer’s Intro to GA course a couple of weeks ago.

A site crawl, meanwhile, is about making sure that every page on your website has a GTM tag or a GA script embedded, using crawling tools like Screaming Frog. In this step, you also want to check for any on-site UTM parameters, since those break sessions.
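
Here’s a minimal sketch of what that crawl-style check could look like in Python, assuming you already have a list of URLs (from your sitemap, say). The URLs and the tag pattern below are just illustrative, not something the course prescribes.

```python
# Minimal crawl-style check: fetch a list of URLs, verify each page contains
# a GTM container or GA script, and flag internal links that carry UTM
# parameters (those break sessions).
# Requires: pip install requests beautifulsoup4
import re
import requests
from bs4 import BeautifulSoup

URLS = [  # placeholder: swap in your own URL list or sitemap export
    "https://www.example.com/",
    "https://www.example.com/pricing",
]

TAG_PATTERN = re.compile(
    r"googletagmanager\.com/gtm\.js|GTM-[A-Z0-9]+|gtag\(|analytics\.js"
)

for url in URLS:
    html = requests.get(url, timeout=10).text

    # 1. Is a GTM/GA snippet present at all?
    has_tag = bool(TAG_PATTERN.search(html))

    # 2. Do any internal links carry UTM parameters?
    soup = BeautifulSoup(html, "html.parser")
    utm_links = [
        a["href"] for a in soup.find_all("a", href=True)
        if "utm_" in a["href"]
        and ("example.com" in a["href"] or a["href"].startswith("/"))
    ]

    print(f"{url} | tag found: {has_tag} | on-site UTM links: {len(utm_links)}")
```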

User interactions

In the user interactions section, we’re making sure that events and goals are set up correctly and that the setup makes sense. This section is more freeform than the others, since each business has its own events and goals, so you have to use your brain a bit more here.

PII and EEC

PII stands for Personally Identifiable Information. For example, email addresses, names, addresses, etc.

Basically, PII = bad.

If Google notices that you have PII in your Google Analytics data, all of that data can be wiped out. So you’d want to check for PII throughout your entire GA setup by using regular expressions. Often, you’ll find PII in search terms, page content, events, and custom dimensions.
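
To make the regex idea concrete, here’s a tiny sketch with made-up values standing in for a real GA export: run an email pattern over the page paths, search terms, and event labels you’ve pulled out of GA.

```python
# A minimal PII scan: run a regex for email-shaped strings over values
# exported from GA (page paths, site search terms, event labels, etc.).
import re

# Placeholder data: in practice, export these from GA reports or the API.
values = [
    "/thank-you?email=jane.doe@example.com",
    "/pricing",
    "search term: blue widgets",
    "newsletter signup - bob@example.org",
]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

for value in values:
    match = EMAIL_RE.search(value)
    if match:
        print(f"PII found: {match.group()} in {value!r}")
```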

EEC, meanwhile, is about making sure that your enhanced e-commerce tracking is set up correctly, that the tracking makes sense, and that it actually correlates with reality: the money flowing into your wallet.
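
One rough way to sanity-check that correlation (my own sketch, not the course’s method): put GA’s daily EEC revenue next to your backend’s revenue for the same dates and look at the gap. The numbers and column names below are made up.

```python
# Compare GA-reported EEC revenue to backend revenue, day by day.
# A consistent gap of more than a few percent usually means the EEC tags
# are missing transactions (or double-counting them).
import pandas as pd

ga = pd.DataFrame({
    "date": ["2021-03-01", "2021-03-02", "2021-03-03"],
    "ga_revenue": [1180.0, 950.0, 1320.0],
})
backend = pd.DataFrame({
    "date": ["2021-03-01", "2021-03-02", "2021-03-03"],
    "real_revenue": [1225.0, 970.0, 1400.0],
})

merged = ga.merge(backend, on="date")
merged["discrepancy_pct"] = (
    (merged["ga_revenue"] - merged["real_revenue"]) / merged["real_revenue"] * 100
)
print(merged)
```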

Testing

Phew! Now we’re going into Part 4 of CXL’s Conversion Optimization minidegree: testing. Let me sum up the first three modules.

Let’s start with a mindset shift: you test to learn. To validate, yes, but also to learn.

By continuously learning about what your visitors respond to, you can build a customer theory that will guide future tests. A customer theory also makes it easier to predict which hypothesis has the most potential to improve your conversion rate.

With that out of the way, let’s get to our first step: coming up with a hypothesis.

A good hypothesis comes from a problem.

Find what the problems are → the root cause of each problem → possible solutions (also known as your hypotheses).

Now, most likely, you’ll have a list of hypotheses you want to test. However, you probably don’t have enough traffic (or time) to test all of them. This is where you prioritize.

There are three models you can use to prioritize which hypothesis to test:

  • the PIE model
    PIE stands for potential, importance, and ease. You rate each hypothesis on these three criteria, from 0 to 10. Score a hypothesis by averaging its PIE ratings, and you’ll find out which tests you should run first (there’s a small scoring sketch after this list).
  • the ICE model
    ICE stands for impact, confidence, and ease. There are two versions of the ICE model. The first one is almost identical to the PIE model. The second is a little different, with only 4 points in total and a weighted scale; the scoring system is laid out in the lesson’s worksheet.
  • the PXL model
    The PXL model was created by (and is used at) CXL. It uses a series of yes/no questions, which you answer with either 0 or 1. Sum up the yeses for a hypothesis and you’ll see which hypotheses to test first. This model is the most flexible of the three, since you can add criteria that match your business goals. It’s also more objective, since you don’t have to rate a hypothesis’s potential or impact.
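
To make the scoring concrete, here’s a tiny sketch of PIE-style prioritization with made-up hypotheses and ratings; PXL would swap the 0–10 ratings for binary answers and a straight sum.

```python
# PIE-style prioritization: average three 0-10 ratings per hypothesis and
# rank. (PXL would instead sum 0/1 answers to a fixed set of questions.)
hypotheses = {
    "Shorten the checkout form": {"potential": 8, "importance": 9, "ease": 6},
    "Rewrite the hero headline": {"potential": 6, "importance": 7, "ease": 9},
    "Add trust badges near CTA": {"potential": 5, "importance": 6, "ease": 8},
}

def pie_score(ratings):
    # PIE score = average of potential, importance, and ease
    return (ratings["potential"] + ratings["importance"] + ratings["ease"]) / 3

ranked = sorted(hypotheses.items(), key=lambda item: pie_score(item[1]), reverse=True)
for name, ratings in ranked:
    print(f"{pie_score(ratings):.1f}  {name}")
```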

Now, let’s talk about tests. There are testing methods other than A/B testing, such as multivariate testing, bandit tests, and existence tests. However, if you have limited traffic and standard conversion rates, you’ll most likely be doing A/B tests most of the time.

So let’s discuss what you need to be aware of when doing A/B tests.

  1. Sample size
    Before running a test, make sure that you have adequate traffic. Determine how long you’ll be running your tests by using your sample size and number of conversions as a guide. Try not to end tests that haven’t reached your desired sample size yet. You can’t really trust tests with inadequate sample size.
  2. Test duration
    Try to run a test for at least two business cycles, unless your sample size and conversion rate don’t fluctuate from day to day.
  3. Statistical significance
    Honestly, the definition of statistical significance is still a little bit abstract in my mind. So, I’ll borrow this definition from Optimizely:
    Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance.
    Check your statistical significance before ending a test. A statistical significance of 95% means it’s 95% likely that your test results aren’t caused by chance. The program talks more about this particular statistic in the next module, Statistics for A/B Testing. (There’s a rough sketch of the sample-size and significance calculations after this list.)
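
To make the first and third points concrete, here’s a rough sketch of both calculations using statsmodels and made-up numbers: the sample size you’d need per variation to detect a given lift, and a two-proportion z-test on finished results.

```python
# (1) Sample size per variation for a target lift; (2) significance check on
# finished results. All numbers below are made up for illustration.
# Requires: pip install statsmodels
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# 1. Sample size: baseline 3% conversion rate, hoping to detect a 15% relative lift
baseline, lift = 0.03, 0.15
effect = proportion_effectsize(baseline, baseline * (1 + lift))
n_per_variation = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"Visitors needed per variation: {n_per_variation:,.0f}")

# 2. Significance: did variation B beat A, or is the gap plausibly chance?
conversions = [310, 355]   # conversions in A, B
visitors = [10000, 10000]  # visitors in A, B
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}  (significant at 95% if p < 0.05)")
```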

Another thing you need to be aware of when analyzing tests is validity threats. As the name suggests, validity threats are factors that can skew your test results. There are four types of validity threats discussed in this module:

  1. History effect
    Did something happen outside of your control that could explain your test results? A pandemic, perhaps? Or maybe your competitor ran a clearance sale that lured all your loyal visitors to their website? Be aware of what’s happening in the outside world. When in doubt, you can always redo your test.
  2. Instrumentation effect
    This one is caused by bad quality assurance. An error when implementing your tests causes flawed data. It could be that you forgot to put in your A/B testing script or maybe the goal or metric you’re tracking isn’t being sent over properly. In any case, always check your setup when launching an A/B test!
  3. Selection effect
    When you mistakenly assume that a portion of your traffic represents the totality of traffic, your data will naturally skew. For example, if you’re using ads to send more traffic to your test page even though you designed the page for organic traffic.
  4. Broken code effect
    While the instrumentation effect happens when there’s an error when implementing your testing instruments, broken code effect happens when there are bugs in one of the variations. A bad user experience clearly isn’t what you intended, so it’ll affect your test results later on. Check on your test pages to make sure they’re performing just fine.

So, that’s the gist of this week’s lessons. There’s so much more info in these courses that, unfortunately, I can’t cover it all. My notes are already past the 20-page mark from just three modules!

Again, if you’re interested in learning more about testing, I find CXL’s blog very helpful.

I’m on the Statistics for A/B Testing module right now, but let’s not spoil the fun by covering half a lesson in this article.

Spoiler alert: Turns out, I’m bad at statistics.

Like, repeating-the-video-at-least-3x-before-I-can-move-on bad.

And I took an introductory statistics course at uni too! (It actually covers a lot of what Georgi Georgiev explains in this course.)

The shame.

Next week, we’ll try to do a “sprint” on the remaining courses, since I’m behind on my desired progress by a lot. (thanks, time management)
