<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><language>en</language><title>Blog posts by Lindsey Rogers</title> <link>https://world.optimizely.com/blogs/lindsey-rogers/</link><description></description><ttl>60</ttl><generator>Optimizely World</generator><item> <title>The A/A Test: What You Need to Know</title>            <link>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/4/the-aa-test-what-you-need-to-know/</link>            <description>&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Sure, we all know what an A/B test can do. But what is an A/A test? How is it different?&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;With an A/B test, we know that we can take a webpage (our &amp;ldquo;control&amp;rdquo;), create a variation of that page (the new &amp;ldquo;variant&amp;rdquo;), and deliver those two distinct experiences to our website visitors in order to determine the better performer. Also known as split-testing, the A/B test process compares the performance of the original versus the variant, according to a set list of metrics, to learn which of the two produced better results.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Thanks to experimentation platforms, such as Optimizely Web Experimentation, many of us also recognize the undeniable value of being able to test two variations of a web page without any alterations to back-end code. Talk about speed-to-value!&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;So then, what is an A/A test? And why do we need to incorporate this type of &amp;ldquo;test&amp;rdquo; into the planning and strategy phases of our experimentation program?&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;What is an A/A test?&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;The A/A test comes before all other tests on the experimentation platform to serve, essentially, as &lt;strong&gt;the practice run of your experimentation program&lt;/strong&gt;. Unlike an A/B test, the A/A test does not create a variation of the original webpage. Instead, the A/A model tests the original webpage against a copy of itself in order to surface any issues before an actual test is conducted on the platform.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;So, while an A/B test compares two different versions of a webpage (no matter how minor the difference between the two), &lt;strong&gt;an A/A test compares two versions of a webpage that are exactly the same.&lt;/strong&gt;&lt;/span&gt; &lt;span style=&quot;font-weight: 400;&quot;&gt;There is no difference between the control and the variant in an A/A test (Version A = Version A). You are basically testing a webpage against itself!&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
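The 50/50 split behind an A/A test can be pictured with a short sketch. This is a hypothetical illustration only, not Optimizely's actual bucketing code: it simply shows how a deterministic hash of a visitor ID can send half of your traffic to Control A and half to an identical "Variant" A.

```python
# Hypothetical sketch of A/A bucketing (not Optimizely's implementation):
# both buckets serve the identical page; only the assignment label differs.
import hashlib

def bucket(visitor_id: str, experiment_id: str = "aa_test_1") -> str:
    """Deterministically assign a visitor to 'control_a' or 'variant_a'."""
    digest = hashlib.md5(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    return "control_a" if int(digest, 16) % 2 == 0 else "variant_a"

# Over many visitors, the split should come out close to 50/50.
counts = {"control_a": 0, "variant_a": 0}
for i in range(10_000):
    counts[bucket(f"visitor-{i}")] += 1
print(counts)
```

Because the hash is deterministic, a returning visitor always lands in the same bucket; if the observed split drifts far from 50/50, the assignment mechanism, not the pages, is the first thing to inspect.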
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Of course, that raises the question...&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;Why is an A/A Test Useful?&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;The purpose of the A/A test is primarily two-fold:&lt;/span&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Platform-level: To test the functionality of the web experimentation platform&amp;nbsp;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Experiment-level: To test the validity of data capture and test setup&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;An A/A test, therefore, &lt;strong&gt;gives you assurance that your platform is ready to conduct your experimentation program &lt;/strong&gt;before it is put into widespread use.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;When to Use an A/A test&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;The A/A test should be the first test that you launch, immediately following the implementation of the Optimizely JavaScript tag on the website.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Note: If you have multiple websites and/or use single-page application (SPA) web pages, you should perform an A/A test whenever the tag has been newly implemented into the header code, even if using the same tag across all experiences.&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;&lt;strong&gt;How to Set Up an A/A Test&lt;/strong&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Once the tag has been installed, you can begin your A/A test setup. Similar to how you would build an A/B test, the creation of an A/A test starts by clicking the &lt;strong&gt;Create New Experiment&lt;/strong&gt; button in the top-right corner of Optimizely Experimentation and then selecting&amp;nbsp;&lt;strong&gt;A/B Test&lt;/strong&gt; from the dropdown menu. (Yes, A/B; but our B will remain our A for this test.)&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/dff558e04875444f88deb3cc44dff84e.aspx&quot; width=&quot;230&quot; alt=&quot;&quot; height=&quot;143&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;The setup of an A/A test is fairly quick and simple, yet it should take into account the primary and secondary metrics to include in this type of test.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-size: 14pt;&quot;&gt;&lt;strong&gt;A/A Test Metrics&lt;/strong&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;While it may be tempting to include as many metrics as possible, refrain from doing so! Remember that Optimizely recommends no more than five total metrics per test to avoid test lag: the primary metric plus 2-4 secondary metrics.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;The metrics in an A/A test are meant to &lt;strong&gt;look for any noticeable differences between the data captured&lt;/strong&gt; from the two webpages. In other words, the test lets you compare the Control A results against the Variant A results to confirm that the data is effectively the same.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Page Metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Page-level metrics, such as&amp;nbsp;&lt;strong&gt;pageview&lt;/strong&gt; and&amp;nbsp;&lt;strong&gt;bounce rate&lt;/strong&gt;, are generally included in an A/A test, as these metrics capture basic yet important data points. The data collected from these metrics can help you quickly check whether your website is sending data to Optimizely Web Experimentation via the JavaScript code.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;If no pageviews are recorded in the results of an A/A test, then we immediately know that there is an issue to address.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Note: To troubleshoot this issue, double-check the implementation of the Optimizely tag to ensure that it appears in the header code of your website.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Click &amp;amp; Custom metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Metrics that track specific user actions may also be helpful to add to your A/A test. These metrics might include a&amp;nbsp;&lt;strong&gt;click event&lt;/strong&gt; and/or a&amp;nbsp;&lt;strong&gt;transactional event&lt;/strong&gt; that can let you know if a specific user action has been captured correctly by the experimentation platform.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Depending on your website, some of these events may also be custom-developed, which is all the more reason to include them in your A/A test: you can confirm that these custom events provide the type of data that you expect to see.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;As a general rule of thumb, use metrics that would likely appear in most tests of your experimentation program. Add the metrics that make the most sense to include: those that help you validate the functionality of your experimentation platform and the accuracy of the metrics that will collect data for your planned tests.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;&lt;em&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Need help deciding? Read: &quot;&lt;a href=&quot;/link/34e9e123cc764ee2bbfa2056bbc1f4be.aspx&quot;&gt;How to Determine Which Test Metrics to Select&lt;/a&gt;&quot;&lt;/span&gt;&lt;/em&gt;&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;&lt;strong&gt;What Should I Look for in an A/A Test?&lt;/strong&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;The ideal outcome of an A/A test is to see similar results between the two identical versions of the webpage. Since an A/A test compares the metrics of two identical webpages against each other, the expectation is that there will be no difference in the metric values between them.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;&lt;strong&gt;A successful A/A test would show nearly the same results between the Control (A) and the &quot;Variant&quot; (A).&lt;/strong&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;While the collected results may not be&amp;nbsp;&lt;em&gt;exactly&lt;/em&gt; the same, any differences between the results of Control A and those of Variant A should be very slight.&amp;nbsp;&lt;strong&gt;If any major differences are seen in the A/A test results for any metric, then we know that we should take a closer look at the setup of that metric.&lt;/strong&gt; (Again, if the pageviews are simply not recording in the results of the platform, then our first step would be to check the installation of the tag on the website.)&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-size: 14pt;&quot;&gt;&lt;strong&gt;Zero Statistical Significance&lt;/strong&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Similar results between the two experiences should also prevent any metric from reaching statistical significance. Therefore, statistical significance serves as a helpful guide when we check the results of an A/A test.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;Ideally, we are looking for the statistical significance to &quot;flat line&quot; for all metrics throughout the duration of an A/A test. If statistical significance spikes, then we can flag an issue to address.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
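To see why a flat line is the expected outcome, here is a small, self-contained simulation. It applies a classical fixed-horizon two-proportion z-test to made-up A/A data; note that Optimizely's Stats Engine uses sequential statistics, so this is only an illustration of the principle: identical pages produce nothing but noise, and even then a small fraction of comparisons will "spike" past a 95% threshold by pure chance.

```python
# Illustrative only: classical z-test on simulated A/A data, not the
# sequential statistics that Optimizely's Stats Engine actually uses.
import math
import random

def z_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # Two-sided p-value via the normal CDF, built from math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
TRUE_RATE, N, ALPHA = 0.05, 5_000, 0.05
runs, false_positives = 200, 0
for _ in range(runs):
    # Both "pages" convert at the identical underlying rate.
    a = sum(random.random() < TRUE_RATE for _ in range(N))
    b = sum(random.random() < TRUE_RATE for _ in range(N))
    if z_test_p_value(a, N, b, N) < ALPHA:
        false_positives += 1
print(f"{false_positives}/{runs} A/A runs falsely reached significance")
```

Expect roughly 5% of runs to cross the threshold by chance, which is why a brief uptick in an A/A result is worth a second look rather than an automatic alarm.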
&lt;p&gt;&lt;span style=&quot;font-size: 10pt;&quot;&gt;Tip: Use the graph view below in Optimizely Web Experimentation to look for any upticks in statistical significance for any of your A/A test metrics.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/6eff89d84895489f89c4206f39187653.aspx&quot; width=&quot;700&quot; alt=&quot;&quot; height=&quot;314&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;span style=&quot;font-size: 18pt;&quot;&gt;&lt;strong&gt;A/A Test Use Cases&lt;/strong&gt;&lt;/span&gt;&lt;/h2&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;More a useful practice than a &amp;ldquo;test&amp;rdquo; in itself, the A/A test is a practical step to include in your experimentation program for several reasons. For example, you can use this test to:&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Confirm the functionality of your experimentation platform&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Determine the feasibility of performing a test on a specific web page&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Validate the accuracy of the software&amp;rsquo;s performance metrics&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Check the tracking and analytics of your primary and secondary metrics&amp;nbsp;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400; font-size: 12pt;&quot;&gt;Obtain baseline data for these metrics that can be used to forecast targets&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style=&quot;font-size: 12pt;&quot;&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Or, of course, all of the above! As you can see, the A/A test not only verifies the readiness of your software but lets you spot-check the setup of individual tests, as well.&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Use this type of test as your &amp;ldquo;dry run&amp;rdquo; before your first A/B or multivariate test to gain the peace of mind you need to successfully launch your experimentation program!&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/4/the-aa-test-what-you-need-to-know/</guid>            <pubDate>Mon, 15 Apr 2024 19:46:35 GMT</pubDate>           <category>Blog post</category></item><item> <title>How to Determine Which Test Metrics to Select</title>            <link>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/2/how-to-determine-which-test-metrics-to-select/</link>            <description>&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;When preparing to build a test in Optimizely Web Experimentation, we often know what we want to change and where we want to change it. Even knowing who we want to target may come to mind rather easily. However, knowing exactly which metrics we want to be able to track and measure can oftentimes give us pause during the planning stage.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;The Primary Metric&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Think of your &lt;/span&gt;primary metric&lt;span style=&quot;font-weight: 400;&quot;&gt; as the key piece of information that you need to determine whether or not your test will produce the desired effect of your hypothesis. This data point acts as your validation metric or &amp;ldquo;proof point&amp;rdquo; that will either prove or disprove your test hypothesis.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tip: Look at your test hypothesis for clues.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Ideally, your primary metric can be tracked on your Targeting Page to ensure that the success (or failure) of your test can be directly attributed to the results of this metric. In other words, the farther your primary metric is placed from the Targeting Page, the more other variables could influence user behavior before reaching the primary metric of your test.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;If distance is created between the page where the user sees the change (Targeting Page) and where the user is expected to take action (primary metric), there exists the possibility that additional factors could influence the user&amp;rsquo;s behavior between these two points.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Put simply, the user action that corresponds to your primary metric should exist on the page that you are testing, where it is least likely to be influenced by outside variables.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Let&amp;rsquo;s take, for example, a test in which we change the Add to Cart (ATC) button text in the variant from &amp;ldquo;Buy Now&amp;rdquo; to &amp;ldquo;Get It Now&amp;rdquo; to see if this wording would entice more users to click this button. (After some research into semantics, we are interested to learn if &amp;ldquo;get,&amp;rdquo; a word that alludes to the speed of receiving the product, would be more appealing to users than &amp;ldquo;buy,&amp;rdquo; a word more associated with the purchase of the product.)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Now, while we ultimately want users to complete their purchase journey, the main purpose of this A/B test is to see if more users will click the Add to Cart button on the variant versus the control. Therefore, our primary metric to track will be Add to Cart button clicks.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Notice how this primary metric also exists on the Targeting Page where the user action (click on the ATC button) takes place for direct attribution to test failure/success.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;As you can see through this example, the primary metric for tests on an e-commerce website is not always a revenue metric. Even though &lt;/span&gt;revenue per visitor&lt;span style=&quot;font-weight: 400;&quot;&gt; (RPV) may be a focus metric for an e-commerce website and supports business objectives, we cannot assume that this metric will always be the primary metric of tests performed on that website.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tip: Consider the desired result that the variant of this test is expected to produce in order to determine the primary metric for the test.&amp;nbsp;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Secondary Metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Revenue, however, does remain an important metric to keep in mind, as the changes made in the variant may ultimately impact how much revenue is generated by this test. So, while not the primary metric for this test, revenue generated by this test will still be interesting to compare between the control and variant.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;This is where &lt;/span&gt;secondary metrics&lt;span style=&quot;font-weight: 400;&quot;&gt;, or metrics of next importance, come into play. &lt;/span&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;For this particular e-commerce example, secondary metrics that would be relevant for this test might be revenue per visitor (RPV) and total revenue generated.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Additional metrics of consideration could include:&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Conversions&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Conversion Rate&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Average Order Value, or AOV&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Notice how the above metrics also provide insight into the different stages of the customer purchase journey. Several actions are taken from the moment a user lands on your website to the point at which they make a purchase. While this final destination, or conversion point, of the customer journey is worth our attention, so are the steps in between.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Secondary metrics give us the opportunity to take a closer look at certain user actions and behaviors that take place along the customer journey, so that we can observe progress and drop-off, step-by-step.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Prioritizing Metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Within Optimizely Web Experimentation, metrics are ranked according to priority. At the top of the metrics list, the primary metric is ranked highest and will therefore be prioritized first. Each (secondary) metric added thereafter will then be ranked in subsequent priority. So, once you have established your primary metric in your list, consider how important each additional metric will be to your test and then add these secondary metrics in that order.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;In Optimizely Web Experimentation, secondary metrics appear gray, in contrast to the primary metric, which is shown in blue. The prominence given to this &amp;ldquo;blue&amp;rdquo; metric can serve as a reminder to double-check your metrics setup and ensure that the most important metric for this test is placed at the top, as the primary metric.&lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Use the drag-and-drop functionality within the Metrics section to reposition your metrics if needed. Move the metrics around until you are satisfied with the order of priority given to each. Notice how the metrics automatically change color based on their placement: the metric placed at the top turns blue and becomes your primary metric, while metrics placed further down the list remain gray.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;&lt;img src=&quot;/link/ffcd38411a7d4e468a32c8dee566aa4c.aspx&quot; width=&quot;558&quot; alt=&quot;Optimizely Test Metrics sample list&quot; height=&quot;489&quot; /&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;While you might be eager to add a plethora of metrics to your test to diversify your insights, adding more metrics does not necessarily equal better results. More metrics may impede the speed at which your highest-ranked metrics can reach statistical significance, which would then require your test to run for a longer length of time.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;When more than five metrics are added to a test in the Optimizely Web Experimentation platform, you will notice a cautionary message appear. This message advises against selecting too many metrics for a single test. Therefore, as a general rule of thumb, try to stay within a range of one to four additional metrics in order to reach statistical significance for the metrics that matter the most for your test.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Monitoring Metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;Monitoring metrics&lt;span style=&quot;font-weight: 400;&quot;&gt; can also lend insight into other effects that may have resulted due to differences between the control and the variant(s). While the variant is created with the purpose and intention to produce a specific change from the original (or control) version of the target page, this change between versions may impact other factors within the user experience.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Some common monitoring metrics include bounce rate, exit rate, average engagement time, and scroll tracking percentages that let you see if the changes in the variant increased or decreased content consumption on the Targeted Page. These metrics can even be paired alongside other clickstream analytics and/or heatmapping tools for deeper insights into user behavior and engagement caused by the variant.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Together, with the primary and secondary metrics, monitoring metrics can help to provide you with a holistic view of the impact that your test had on the overall user experience.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;Edit, Review, and Finalize Your Metrics&lt;/span&gt;&lt;/h3&gt;
&lt;p&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt;With so many metrics at your fingertips, you may be tempted to include as many as possible in your test. However, remember to err on the side of concision so as not to dilute the results of your test. &lt;/span&gt;&lt;strong&gt;Choose metrics that matter&lt;/strong&gt;&lt;span style=&quot;font-weight: 400;&quot;&gt; and prioritize those that directly relate to the goal of your test. Use an editing eye to stay within a reasonable range (no more than five total) and include only those that will provide the data you need to determine the success of your test.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/2/how-to-determine-which-test-metrics-to-select/</guid>            <pubDate>Thu, 08 Feb 2024 15:07:43 GMT</pubDate>           <category>Blog post</category></item><item> <title>Opal and ODP, Driving Deeper Customer Insights!</title>            <link>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/2/opal-and-odp-driving-deeper-customer-insights/</link>            <description>&lt;p&gt;&lt;span&gt;Exciting news for Optimizely users! Opal integration is now seamlessly woven into the Optimizely Suite with its latest inclusion in Optimizely Data Platform (ODP).&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;This enhancement brings forth a time-saving and powerful new tool for understanding customer profiles, thanks to AI-generated summaries. These summaries provide invaluable insights into the actions and behaviors of your customers in an easy-to-read format!&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;As a pop-out feature on the customer profile, the customer summary extracts key pieces of information observed along the customer journey and brings these details forward for you:&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/link/10c1890ac65240548312cb2060ad79a3.aspx&quot; width=&quot;1124&quot; alt=&quot;&quot; height=&quot;608&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Understanding your customer becomes effortless with this new feature, saving you time and energy that otherwise may have been spent in deep analysis per customer. Now, with customer summaries in ODP, you can &lt;strong&gt;learn more about your customers in less time&lt;/strong&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Even beyond quick and efficient customer analysis, the possibilities of this new feature are endless. From streamlining segmentation to fine-tuning RFM (Recency-Frequency-Monetary Value) matrices, you can leverage the power of Opal and ODP to accelerate strategic planning across your Optimizely Suite.&lt;/span&gt;&lt;/p&gt;</description>            <guid>https://world.optimizely.com/blogs/lindsey-rogers/dates/2024/2/opal-and-odp-driving-deeper-customer-insights/</guid>            <pubDate>Wed, 07 Feb 2024 21:26:53 GMT</pubDate>           <category>Blog post</category></item></channel>
</rss>