
The problem of registering a conversion from a page containing multiple A/B tests

The A/B testing add-on is really well done, but I have a logical question.

You can A/B test at the page or block level. So, for any page view: (1) the page might be in an A/B test, (2) a block on the page might be in an A/B test, or (3) any combination of #1 and #2 might be true. You could view a page where the page itself and every block on it are being tested in some way. A single page view could "activate" a dozen different tests.

Let's assume that this is the case: Page X and Block Y are both in the middle of separate A/B tests. Block Y is embedded in Page X, so they both appear on the same page view. When a visitor views Page X, the tests for both Page X and Block Y activate and are "looking" for the conversion from that visitor.

Now, assume that Page X and Block Y both have Page Z selected as their conversion landing page.

If the visitor navigates to Page Z, which test logs the conversion?

Spoiler: they both do (I tested it).

So, how does an editor manage this? Logically, the conversion should really be logged against the combination of the two tests, but that is multivariate testing rather than simple A/B testing.
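To make the ambiguity concrete, here is a minimal, hypothetical Python sketch (not how the add-on actually works, and the test and variant names are made up) of two independent tests, one on Page X and one on Block Y, that share Page Z as their conversion goal. Each test only knows its own variant assignment, so each records the conversion on its own, while only the combination of variants describes what the visitor actually experienced:

import itertools
import random

# Two independent tests, keyed by test name, each with two variants.
tests = {
    "Page X test": ["Page X original", "Page X variant"],
    "Block Y test": ["Block Y original", "Block Y variant"],
}

# A single view of Page X exposes the visitor to one of the four
# possible combinations of variants.
combinations = list(itertools.product(*tests.values()))
print(len(combinations), "distinct experiences a visitor can see:")
for combo in combinations:
    print("  " + " + ".join(combo))

# The visitor gets a variant in each test, then converts on Page Z.
visitor = {name: random.choice(variants) for name, variants in tests.items()}
conversions = {name: 1 for name in tests}  # each test logs the conversion

print("Visitor saw:", visitor)
print("Conversions logged:", conversions)
# Neither test, viewed in isolation, can say whether the page change or
# the block change drove the conversion; that attribution would require
# analysing the combination, i.e. multivariate testing.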

Is this just a best-practices thing? Do we just advise editors to be careful here, so that they don't pollute their test results and make it hard to determine which change actually prompted the conversion? Editors would need to be aware of which tests were running so that they didn't inadvertently embed a block that is under test on a page that is already the subject of an active test (either on the page itself or on any block embedded in it), because that overlap would muddy the results.

Any thoughts on this? Is it just a training/knowledge issue, or is there something more specific we can do to help prevent issues here?

#179632
Edited, Jun 16, 2017 21:57

The overlap between block-level and page-level testing is something that we discussed when building the A/B Testing feature. You are right that it is something editors should be aware of when creating their tests, both so that their pages do not vary wildly from visitor to visitor and so that they can trust the results of the tests they are running.

The impact on the results of multiple tests running on the same page can be minimized by setting fairly conservative participation percentages, so that even if a visitor is included in one test, they are not guaranteed to be included in every test running on the page. When we researched default participation percentages, we found that 10% was the typically recommended value, which is why we use it as the default on the add test screen. So while there is definitely a chance that some visitors will end up in multiple tests at once, it should be fairly rare, and the impact on the overall results should be minimal given a large enough sample of test participants.
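For what it's worth, the arithmetic behind that is easy to check, assuming visitors are assigned to each test independently (my assumption for illustration, not documented behaviour): at 10% participation per test, only about 1% of visitors would land in two overlapping tests, and the share shrinks quickly as more tests overlap. A quick Python sketch:

# Assumes independent assignment per test (an assumption, not documented
# behaviour) with the default 10% participation rate.
participation = 0.10

for concurrent_tests in range(1, 5):
    overlap = participation ** concurrent_tests
    print(f"{concurrent_tests} test(s): {overlap:.4%} of visitors fall into all of them")

# 1 test(s): 10.0000% of visitors fall into all of them
# 2 test(s): 1.0000% of visitors fall into all of them
# 3 test(s): 0.1000% of visitors fall into all of them
# 4 test(s): 0.0100% of visitors fall into all of them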

#179747
Jun 20, 2017 16:47