Amp.ai vs A/B Testing: A Paradigm Shift

Ajay Bhoj

Nov 29, 2018

Introduction

In earlier posts on A/B Testing vs Autonomous Optimization, Growth Amplified via Autonomous Optimization, and An Overview of Amp.ai as an Optimization-as-a-Service, we described why Amp.ai is a good alternative to A/B testing for driving continuous growth. In this post, we'll delve further into how and why Amp.ai is a new paradigm compared to A/B testing.

In any form of user-based split testing, or A/B testing, you have a fixed set of variants of your product or service. To determine which variant is best for your metrics, you conduct an experiment in which each new visitor is exposed to a randomly chosen variant. The variant assigned to a user remains fixed for all future interactions for the duration of the experiment. This precludes any improvement in user experience after a user is assigned to a variant or cohort group. It also precludes leveraging learnable user behaviors to improve experiences for other, similar users during the experiment. Therefore, while A/B testing is good for validating a hypothesis, it is not the best methodology for driving continuous growth. This is evidenced by the fact that a majority of A/B tests in the software industry are inconclusive, and hence lead to no growth while wasting resources.
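To make the contrast concrete, here is a minimal sketch of the per-user assignment that classic A/B testing relies on. The function, experiment name, and variant labels are ours, for illustration only:

```python
import hashlib

VARIANTS = ["A", "B"]  # the fixed set of variants, chosen up front

def assign_variant(user_id: str, experiment: str) -> str:
    """Classic A/B assignment: hash the user id so the same user
    always lands in the same variant for the whole experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The assignment never changes, no matter how the user behaves:
assert assign_variant("user-42", "checkout-test") == \
       assign_variant("user-42", "checkout-test")
```

Nothing a user does during the experiment can change which variant they see; that rigidity is exactly what the rest of this post contrasts against.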

Amp.ai instead optimizes sessions (depicted in the figure below), which are well-defined periods of user activity. Users are intelligently exposed to different variants across sessions, and Amp.ai learns from past interactions to improve metrics in future sessions. This continuous learning is the basis for intelligent decisions, and it can be a game-changer for your business.

[Figure: Amp.ai optimizes per session, learning from completed sessions to improve future ones]


In the next few sections, we'll cover some key concepts related to Amp.ai and compare it with A/B testing.

Data Containers: Users and Sessions

[Figure: users and their sessions; A/B testing uses users as data containers, while Amp.ai uses sessions]


Amp.ai differs from A/B testing in that it uses sessions, rather than users, as the data containers for optimization: events and metrics are recorded and evaluated per session.
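A rough sketch of a session as a data container, under our own illustrative data model (these class and field names are assumptions, not the actual amp client types):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Event:
    name: str                   # e.g. "Checkout"
    properties: dict[str, Any]  # e.g. {"revenue": 49.99}
    timestamp: float            # epoch seconds

@dataclass
class Session:
    user_id: str
    start_time: float           # epoch seconds; used for charting below
    variant: str                # the decision served for this session
    events: list[Event] = field(default_factory=list)
```

The key point is that the variant lives on the session, not the user, so the same user can legitimately appear under different variants over time.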

Optimization Targets: Metrics

[Figure: UserMetrics in A/B testing vs. SessionMetrics in Amp.ai, with "Checkout" as an Outcome event]

Metrics are numeric and can be defined as arbitrary functions of events and their properties. In A/B testing, they are typically computed per user; we refer to them as UserMetrics in the figure above, to distinguish them from SessionMetrics, the per-session metrics defined in Amp.ai. Metrics in Amp.ai are defined over events designated as "Outcome" events. In the example above, the "Checkout" event is an Outcome event, and a metric of interest could be the total revenue from all checkouts in a session.
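Continuing the illustrative data model above, such a SessionMetric might be computed as follows (a sketch under our assumed Event and Session types, not Amp.ai's actual metric-definition syntax):

```python
def session_revenue(session: Session) -> float:
    """Illustrative SessionMetric: total revenue across all
    "Checkout" Outcome events observed in a single session."""
    return sum(
        e.properties.get("revenue", 0.0)
        for e in session.events
        if e.name == "Checkout"
    )
```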

Next, we outline some key aspects of metrics in Amp.ai:


[Figure: key aspects of metrics in Amp.ai, shown with an example session-metric time series]


With Amp.ai, metric values can change as the session progresses; for charting purposes, they are therefore attributed to the session's start time. This makes it easy to view a session-metric time series that updates continuously. In the example time series shown above, sessions that start between 6 and 7 pm are attributed to that time bucket, and average metric values are computed over them. In general, it is advantageous to have short sessions that complete quickly, so that metric bucket averages stabilize quickly. These completed sessions can then be used to learn policies that improve optimization performance for future sessions.
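A minimal sketch of this attribution rule, reusing the illustrative Session type and session_revenue metric from above (the hourly granularity and function name are our assumptions):

```python
from collections import defaultdict

def hourly_metric_averages(sessions: list[Session]) -> dict[int, float]:
    """Attribute each session's metric value to the hour in which
    the session started, then average within each hourly bucket."""
    buckets: dict[int, list[float]] = defaultdict(list)
    for s in sessions:
        hour = int(s.start_time // 3600)  # epoch hour of the session START
        buckets[hour].append(session_revenue(s))
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}
```

Because a session is attributed to its start time even if its metric settles later, short sessions mean each bucket's average stops moving sooner.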

Optimization Enablers: Decisions

[Figure: per-session decisions; UserP is served "VariantB" in the first session and "VariantA" in the second]

As explained earlier, with A/B testing a random variant (of the product or service) is chosen for each new user, and thereafter users are not allowed to switch between variant groups or cohorts. Amp.ai differs significantly here: it serves intelligent decisions at the session level. For example, in the figure above, UserP is shown "VariantB" for the first session and "VariantA" for the second. Over time, Amp.ai explores users' variant preferences and learns from past sessions to improve metrics in future sessions.
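To contrast with the per-user assignment sketch earlier, here is what a per-session decision could look like. We use a simple epsilon-greedy policy purely as a stand-in; Amp.ai's actual contextual learning algorithm is not public, and all names here are ours:

```python
import random

def decide_variant(history: list[Session], epsilon: float = 0.1) -> str:
    """Illustrative per-session decision: usually exploit the variant
    with the best average session metric so far, but keep exploring.
    A fresh decision is made for EVERY session, not once per user."""
    if not history or random.random() < epsilon:
        return random.choice(VARIANTS)  # explore
    by_variant: dict[str, list[float]] = {}
    for s in history:
        by_variant.setdefault(s.variant, []).append(session_revenue(s))
    # exploit: the variant with the highest average metric to date
    return max(by_variant, key=lambda v: sum(by_variant[v]) / len(by_variant[v]))
```

Unlike the hash-based assignment shown earlier, this decision is recomputed at the start of every session, so each completed session immediately informs the next one.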

Finally, when an A/B test is terminated, the aggregate statistics may point to a globally winning variant with high significance, or the test may be inconclusive. Amp.ai, on the other hand, continuously learns from past sessions to serve the best contextual policies to the amp client libraries, which execute them in future sessions.

Summary

In this post, we covered key abstractions related to Amp.ai that set it apart from the A/B testing methodology. The key takeaways are:

- Amp.ai uses sessions, rather than users, as the data containers for optimization.
- Metrics (SessionMetrics) are defined per session from Outcome events, and are attributed to the session start time for charting.
- Decisions are served per session: Amp.ai explores variants across a user's sessions and learns from completed sessions to improve metrics in future ones.

Overall, if your objective is to drive continuous growth in business metrics, Amp.ai is a powerful alternative to A/B testing.

For more information, contact us here or email us at support@amp.ai. Schedule a demo here.