
When and how to iterate on inbound personalized pages

Overview:

1. Launch lightweight tests fast

2. Measure results

3a. If it works, do more of it

3b. If it doesn’t work, hypothesize & iterate

Growth comes from experimentation + exploitation

Personalization can lead to huge lifts in conversion rate, but sometimes you need an iteration or two before you get that big win. If your experience does not produce a big lift right away, don’t fret, and don’t deactivate it. Iterate on your experience until you find what works, and then promote it. This approach leads to large lifts and compounding ROI.

Below are the 3 primary signs to look out for when deciding whether you should iterate on your experience.

Case 1: Negative lift + medium or high volume

TLDR: Iterate on your experience if your conversion rate lift is consistently negative, and your experience has at least 100 visitors in each variation.

When to iterate

You do not need to wait for your experience to reach a statistically significant result on a negative lift before deciding to iterate. If you see a consistent negative trend with enough visitor traffic, you should consider iterating on your experience.
Make sure you have enough volume in the experience before deciding to iterate. When visitor traffic is low, conversion rates are unstable and you can easily see negative trends reverse as visitor traffic increases.

For example, if you have 2 conversions from 10 visitors in the control and 1 conversion from 10 visitors in your personalization, you will see a -50% lift. As a benchmark, wait for at least 100 visitors in each experience before drawing conclusions.
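The lift arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not part of Mutiny):

```python
def conversion_lift(control_conv, control_visitors, variant_conv, variant_visitors):
    """Relative lift of the personalized variation over the control."""
    control_rate = control_conv / control_visitors
    variant_rate = variant_conv / variant_visitors
    return (variant_rate - control_rate) / control_rate

# 2/10 conversions in the control vs. 1/10 in the personalization:
print(f"{conversion_lift(2, 10, 1, 10):.0%}")  # -50%
```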

Example: too soon to consider this an underperforming experience


Example: negative lift, time to iterate

Also consider the difference in total conversions when deciding whether your experience is really underperforming. This check only applies when each variation has a similar number of visitors: look for a gap of at least 5 conversions between the personalized experience and the control before drawing any conclusions.

Another gut check on a negative lift is consistency of performance. If you saw a positive lift yesterday and a negative lift today, there is no need to iterate just yet. Give your experience some time to normalize before drawing conclusions.
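Taken together, the volume, conversion-gap, and consistency checks from this section can be expressed as a simple gate. This is a sketch using the article's thresholds (100 visitors per variation, a 5-conversion gap); the function itself is hypothetical, not a Mutiny feature:

```python
def should_iterate_on_negative_lift(control_conv, control_visitors,
                                    variant_conv, variant_visitors,
                                    recent_lifts):
    """recent_lifts: lift readings from the last few checks, e.g. [-0.2, -0.25]."""
    enough_volume = control_visitors >= 100 and variant_visitors >= 100
    big_conversion_gap = (control_conv - variant_conv) >= 5
    # Only act when every recent reading has been negative.
    consistently_negative = bool(recent_lifts) and all(l < 0 for l in recent_lifts)
    return enough_volume and big_conversion_gap and consistently_negative
```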

How to iterate

Hypothesize why the experience underperformed.

What did you change that is performing worse? Did you lose context? Is your content clear and concise? Are you using less effective company logos or social proof? Did you remove a CTA or change the text?

Revert the hypothesized issue or update to a more strategic version.

Relaunch the experience and reset results. When you update a live experience in Mutiny and hit “Launch”, Mutiny will ask whether you want to keep or reset results. Click “reset results” to create a new revision and track your changes separately. Mutiny stores previous revisions for review and historical context.

Example: track revisions from each iteration

Optional but recommended: write down insights by segment. Keep track of what works and what doesn’t to grow your program even more.

Case 2: Flat result + high volume

TLDR: Iterate on your experience if your conversion rate lift is flat and you have at least 300 visitors in each variation.

When to iterate

Personalization should generate a large lift (normally at least 20%). If your lift is smaller (between -20% and +20%) and the experience has not reached statistical significance, you should iterate on the experience to create a larger impact.

Make sure your experience has enough visitors to determine that the result is flat. Look for at least 300 visitors in each experience before drawing this conclusion.
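The flat-result test above (lift between -20% and +20%, at least 300 visitors per variation) might look like this as a Python sketch; the function name and signature are our own illustration:

```python
def is_flat_result(lift, control_visitors, variant_visitors, reached_significance=False):
    """lift is a fraction, e.g. 0.05 for +5%. Thresholds follow the article."""
    enough_volume = min(control_visitors, variant_visitors) >= 300
    small_lift = -0.20 <= lift <= 0.20
    return enough_volume and not reached_significance and small_lift
```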

Example: flat result

How to iterate

Hypothesize why the experience is flat and make a few changes. If you fall into the flat performance category, your personalizations were most likely too subtle.

How can you go bolder? Is your personalized content below the fold? Is it only slightly different from your control? If your personalizations are not subtle, are you perhaps counterbalancing a really positive change with a really negative one?

Revert the hypothesized issue or update to a more strategic version. Relaunch the experience and reset results. As we mentioned above, be sure to click 'reset results' so that you can track changes separately.

Optional but recommended: write down insights by segment. Keep track of strategies that didn’t drive a big impact for the segment so you can continue to grow your program.

Case 3: Low visitor traffic

TLDR: Expand your segment size or promote the experience if you get fewer than 100 visitors per variation per month.

When to iterate

If your experience has been running for more than 1 month and has fewer than 100 visitors in each experience, it will be difficult to measure performance differences. In these cases, try to expand your segment size (we recommend doing this whenever possible anyway).

If your segment size is still small, you should promote the experience and watch the conversion rate. In this case, it’s better to just show the personalized experience to all traffic to avoid drawing false conclusions.

Example: low traffic

How to iterate

Try to expand segment size.

Use “or” conditions in the segment creator to build a larger version of your segment. For instance, if your segment targets a certain industry, combine definitions from IP data (like “Industry”, “Industry Group”, “Sub-industry” and “Company Tags”) with UTM attributes, Behavioral Audiences (“Vertical”), and any relevant first-party attributes.

As an example, if your experience targets Financials, your expanded segment definition might look like this:

If the estimated segment size still looks small (like in the example above), you should opt to promote your experience vs testing in experiment mode.

If you are promoting your experience instead of experimenting, keep an eye on your conversion rate to get a sense of its impact. It’s a little harder to measure, but you should have a sense of what a “good” conversion rate is for your website.

If you think you can beat it, try something else! Compare before / after with Mutiny’s revision tracking.

Summary

Here’s a handy comparison guide you can use when determining when and how to iterate.

*If your experience has been running more than 1 month and visitor traffic is still low, either try to expand your segment size or promote the experience. At small visitor sizes, split testing will lead you to draw false conclusions.
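The guide above can be condensed into one decision sketch. The thresholds come from the three cases in this article, but the exact boundaries (such as treating lifts below -20% as the negative case) are our reading of it, and the function is illustrative, not a Mutiny API:

```python
def iteration_advice(lift, visitors_per_variation, months_running):
    """Map an experience's stats to this guide's recommendation."""
    if visitors_per_variation < 100:
        if months_running > 1:
            return "Low traffic: expand the segment or promote the experience"
        return "Too early: keep collecting traffic"
    if lift < -0.20:
        return "Negative lift: hypothesize why it underperformed and iterate"
    if lift <= 0.20:
        if visitors_per_variation >= 300:
            return "Flat result: make bolder changes and iterate"
        return "Keep running until each variation has 300+ visitors"
    return "Positive lift: promote it and do more of what worked"
```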

Author

Evan Burton
