Affirm Experiments on Android

Hector Monserrate
Published in Affirm Tech Blog · 5 min read · Mar 2, 2020

Affirm’s mobile app feature development is driven by data. Most changes introduced in our mobile apps are A/B tested, meaning that users are shown different variations of the app in order to measure each variant’s performance. A solid experimentation process is essential because our apps must figure out which variant the current user should see and change the app flow accordingly, all while being able to gradually roll the feature out to users.

In this blog post, I will explain how the mobile apps use our experimentation platform to help with developing and shipping new features.

Why we rely on experiments

Experiments are an effective tool that our engineers and product managers use to build confidence that the changes we make will deliver a better experience to our users and align with our company goals.

Another benefit of using experiments is that they allow us to ship new features with the assurance that we can turn them off if we detect issues in production. We also use our experiment system for feature flagging. As a result, any given production version of our app contains a number of features that users never get to see because they are behind an experiment.

Experiments Library Goals

  • API Simplicity: With the aim of running dozens or potentially hundreds of experiments, our library should be simple to use.
  • Type Safety: Different experiments have different variant groups, and we want to define an enum for each so that we account for all variants and minimize the chance of bugs in the form of typos.
  • Semi-blocking: We always want to have the freshest values for experiments. However, we also want the app to load as fast as possible. The middle ground is to either not block the UI (preferred) or block it for a very short period of time.
  • Trackability: The client’s experiment assignments might be out of sync with the server, but we want our analysis to be accurate. We need the client to communicate the variants it is actually operating under.
  • Non-volatility: We want a consistent user experience. This simply means that once the experiment library resolves the variant for a given experiment, that result continues to be served for the duration of a session (a full begin-to-exit run of the app process). This prevents situations like showing the user a feature only to hide it when they navigate back.

Library API

With simplicity and type safety in mind, we thought about what the ideal API would be and came up with the following:

interface ExperimentType<V : Enum<V>> {
    val name: String
    val description: String
    val defaultVariant: V
}

interface Experimentation {
    fun <V : Enum<V>> variantForExperiment(experiment: ExperimentType<V>): V
    fun <V : Enum<V>> trackImpression(experiment: ExperimentType<V>)
}

Let’s go over the two public functions the library provides:

  • variantForExperiment: This is a synchronous method to resolve an experiment variant.
  • trackImpression: This is the signal back to the server to communicate that an assignment was actually used to change product behavior. We initially sent impressions implicitly inside the variantForExperiment function. However, we realized that impressions don’t always occur at the same time variants are resolved. For an experiment to have an impact on user behavior, it has to be seen. For instance, you might need the variant of an experiment to pre-render a hidden view but not send the impression until the user actually sees it, as in the sketch after this list.
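
For illustration, here is a hedged sketch of that split: the variant is resolved up front to pre-render a view, but the impression is only tracked once the view becomes visible. The PromoBanner experiment, its variants, and the presenter are hypothetical names for this example, not part of Affirm’s library.

// Hypothetical experiment used only for this example.
enum class PromoBannerVariant { CONTROL, NEW_COPY }

object PromoBanner : ExperimentType<PromoBannerVariant> {
    override val name = "promo_banner"
    override val description = "Promo banner copy test"
    override val defaultVariant = PromoBannerVariant.CONTROL
}

class PromoBannerPresenter(private val experimentation: Experimentation) {

    // Resolved once, up front, so the banner can be pre-rendered before it is visible.
    private val variant = experimentation.variantForExperiment(PromoBanner)

    fun prerenderBanner(): String = when (variant) {
        PromoBannerVariant.CONTROL -> "classic banner copy"
        PromoBannerVariant.NEW_COPY -> "new banner copy"
    }

    // Called only when the banner actually scrolls into view; this is the moment
    // the experiment can influence the user, so this is when we record the impression.
    fun onBannerVisible() {
        experimentation.trackImpression(PromoBanner)
    }
}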

Best effort approach to provide the most up-to-date variant

The major challenge is to offer an easy-to-use, non-blocking, synchronous interface for something that is intrinsically asynchronous — at the end of the day, the variant assignment is controlled by the server.

The library is initialized with values from SQLite, which were previously stored from the last successful network request to the server. We want the clients to reflect server changes on experiments as soon as possible; therefore, we always perform the network request and use SQLite solely as a fallback, not to cache values.

Another aim was for experiments to be non-volatile. In practice, we keep a collection of already-queried experiments in memory. If an experiment is found there, we look no further and return that value; otherwise, we look up the latest known variant and insert it into the queried-experiments collection.

We identified the points where we need to fetch the experiments: on the splash screen and right after the user logs in. Because we want the app to load as fast as possible, we do it in a semi-blocking fashion — we block the UI for at most one second before continuing. Thanks to RxJava and the share operator, we actually keep the request going in the background until it finishes.
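
As a rough sketch of that semi-blocking behavior (the payload type and helper functions here are assumptions for illustration, not Affirm’s actual code), one long-lived subscriber keeps the shared request alive and persists the result, while the splash screen waits on the same stream for at most one second:

import io.reactivex.Observable
import io.reactivex.Single
import java.util.concurrent.TimeUnit

// Hypothetical assignment payload and network/persistence helpers.
data class ExperimentAssignment(val experimentName: String, val variant: String)

fun fetchAssignmentsFromServer(): Single<List<ExperimentAssignment>> = TODO("network call")
fun persistToSqlite(assignments: List<ExperimentAssignment>) { /* write fallback values */ }

fun fetchAssignmentsSemiBlocking(): List<ExperimentAssignment> {
    // share() lets the background persister and the splash screen observe the
    // same underlying network request.
    val sharedFetch: Observable<List<ExperimentAssignment>> =
        fetchAssignmentsFromServer().toObservable().share()

    // Long-lived subscriber: keeps the request running even after the UI stops
    // waiting, and stores the fresh values as the fallback for the next launch.
    sharedFetch.subscribe({ persistToSqlite(it) }, { /* keep the SQLite fallback */ })

    // The splash screen blocks for at most one second, then moves on with
    // whatever values were previously loaded from SQLite.
    return sharedFetch
        .firstOrError()
        .timeout(1, TimeUnit.SECONDS)
        .onErrorReturn { emptyList() }
        .blockingGet()
}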

If the variant of an experiment is still not known, we assign the client to Control, which is the variant that doesn’t introduce new changes to the app. There are several reasons why we might not have a value for it yet:

  • Simply because the experiment is defined in the app but not yet in the portal. We want to avoid strict config synchronization between server and clients; thus, experiments unknown to the app are ignored, and experiments unknown to the server resolve to Control in the app.
  • Another reason might be that the request failed or took too long, and the value was not in our fallback storage (SQLite) from a previous fetch.

Experiment Resolution Flow (diagram)
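
To make the flow above concrete, here is a minimal sketch of how an implementation of variantForExperiment could apply these rules. The field and helper names are illustrative assumptions, not Affirm’s actual implementation.

class ExperimentationImpl(
    // Latest known assignments, keyed by experiment name: fresh network values
    // when available, otherwise whatever was previously persisted to SQLite.
    private val latestKnownVariants: Map<String, String>
) : Experimentation {

    // Non-volatility: once an experiment has been resolved in this session, the
    // same variant is returned for the rest of the app process.
    private val queriedVariants = mutableMapOf<String, Enum<*>>()

    override fun <V : Enum<V>> variantForExperiment(experiment: ExperimentType<V>): V {
        @Suppress("UNCHECKED_CAST")
        return queriedVariants.getOrPut(experiment.name) { resolve(experiment) } as V
    }

    override fun <V : Enum<V>> trackImpression(experiment: ExperimentType<V>) {
        // Resolve the variant the client is actually operating under and report it;
        // the analytics call itself is omitted from this sketch.
        val variant = variantForExperiment(experiment)
        // analyticsTracker.track(experiment.name, variant.name)  <- hypothetical
    }

    private fun <V : Enum<V>> resolve(experiment: ExperimentType<V>): V {
        // No assignment yet (unknown experiment, failed request, empty fallback):
        // default to Control.
        val knownName = latestKnownVariants[experiment.name]
            ?: return experiment.defaultVariant
        // Map the server's variant name onto the enum; fall back to the default
        // if this app version doesn't recognize it.
        return experiment.defaultVariant.javaClass.enumConstants
            ?.firstOrNull { it.name == knownName }
            ?: experiment.defaultVariant
    }
}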

Defining a New Experiment

Let’s assume we want to introduce a new onboarding experience to our apps. We will first create a new experiment:

enum class AnimatedOnboardingVariant { CLASSIC, ANIMATED }

object AnimatedOnboarding : ExperimentType<AnimatedOnboardingVariant> {
    override val description = "🎬 Animated Onboarding"
    override val name = "animated_onboarding"
    override val defaultVariant = AnimatedOnboardingVariant.CLASSIC
}

We don’t have to worry about creating the experiment on the Portal yet: if the experiment is not recognized by the server, it will resolve to Control, which is ideal behavior until the feature is ready for rollout in a future release.

The next step would be to actually code the new onboarding screens. At some point we will need to wire everything up, and the experiment will act as a feature flag until everything is ready:

when (experimentation.variantForExperiment(AnimatedOnboarding)) {
    AnimatedOnboardingVariant.CLASSIC -> {
        // show classic onboarding
        experimentation.trackImpression(AnimatedOnboarding)
    }
    AnimatedOnboardingVariant.ANIMATED -> {
        // show new animated onboarding
        experimentation.trackImpression(AnimatedOnboarding)
    }
}

Assuming the feature is good to go by version v3.3.30, we can now go to the experiment portal and set up the experiment:

Affirm Experiments Portal (screenshot)

Conclusion

Our experiment library allows us to hide most of the experiment system’s complexity from developers. It is an integral part of our development process, and I believe it will help us scale as our engineering team grows.
