An Auto-Allocate activity in Adobe Target identifies a winner among two or more experiences and automatically reallocates more traffic to the winner to increase conversions while the test continues to run and learn.
While creating an A/B activity using the three-step guided workflow, choose the Auto-Allocate to best experience option on the Targeting page (step 2).
Standard A/B tests have an inherent cost: you must spend traffic to measure the performance of each experience and then analyze the results to identify the winning experience. Traffic distribution remains fixed even after you recognize that some experiences are outperforming others. In addition, it is complicated to determine the required sample size, the activity must run its entire course before you can act on a winner, and there is still a chance that the identified winner is not a true winner.
An Auto-Allocate activity reduces this cost and overhead of determining a winning experience. Auto-Allocate monitors the goal metric performance of all experiences and sends more new entrants to the high-performing experiences proportionately. Enough traffic is reserved to explore the other experiences. You can see the benefits of the test on your results, even while the activity is still running: optimization occurs in parallel with learning.
Auto-Allocate moves visitors toward winning experiences gradually, rather than requiring that you wait until an activity ends to determine a winner. You benefit from lift more quickly because activity entrants who would have been sent to less-successful experiences are shown potential winning experiences.
A normal A/B test in Target shows only pairwise comparisons of challengers with the control. For example, if an activity has four experiences (A, B, C, and D), where A is the control, a normal Target A/B test compares A versus B, A versus C, and A versus D.
In such tests, most products, including Target, use Welch’s t-test to produce a p-value-based confidence. This confidence value is then used to determine whether the challenger is sufficiently different from the control. However, Target doesn’t automatically perform the implicit comparisons (B versus C, B versus D, and C versus D) that are required to find the “best” experience. As a result, the marketer must manually analyze the results to determine the “best” experience.
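To make the pairwise comparison concrete, here is a minimal Python sketch that runs Welch’s t-test for each challenger against the control using scipy. The conversion data is made up, and the sketch illustrates the statistical test named above, not Target’s internal implementation:

```python
# Illustrative only: pairwise Welch's t-test of each challenger against the
# control, as a standard A/B report might compute it. The data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.binomial(1, 0.10, size=5000)      # experience A (control)
challengers = {
    "B": rng.binomial(1, 0.11, size=5000),
    "C": rng.binomial(1, 0.12, size=5000),
    "D": rng.binomial(1, 0.105, size=5000),
}

for name, data in challengers.items():
    # equal_var=False selects Welch's t-test (unequal variances)
    t_stat, p_value = stats.ttest_ind(data, control, equal_var=False)
    print(f"A vs {name}: t = {t_stat:.3f}, p = {p_value:.4f}")
```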
Auto-Allocate performs all implicit comparisons across experiences and produces a “true” winner. There is no notion of a “control” experience in the test.
Auto-Allocate intelligently allocates new visitors to experiences until the confidence interval of the best experience does not overlap with the confidence interval of any other experience. Normally this process could produce false positives, but Auto-Allocate uses confidence intervals based on the Bernstein inequality, which compensate for repeated evaluations. At this point, there is a true winner. When Auto-Allocate stops, provided there is no substantial time-dependence in the visitors who arrive at the page, there is at least a 95% chance that Auto-Allocate returns an experience whose true response is no worse than 1% (relative) below the true response of the winning experience.
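This article does not publish the exact bound Target uses, but the stopping idea can be sketched as follows: compute a confidence interval for each experience’s conversion rate and declare a winner only when the leader’s interval clears all others. The interval constants below are illustrative empirical-Bernstein-style values, not Adobe’s:

```python
# A minimal sketch of the stopping rule described above, not Adobe's algorithm:
# stop when the best experience's lower confidence bound exceeds every other
# experience's upper bound. Interval constants are illustrative.
import math

def bernstein_interval(conversions, visitors, delta=0.05):
    """Empirical-Bernstein-style interval for a conversion rate in [0, 1]."""
    p = conversions / visitors
    variance = p * (1 - p)
    radius = (math.sqrt(2 * variance * math.log(3 / delta) / visitors)
              + 3 * math.log(3 / delta) / visitors)
    return max(0.0, p - radius), min(1.0, p + radius)

def find_winner(counts, delta=0.05):
    """Return the winning experience name, or None if intervals still overlap."""
    intervals = {name: bernstein_interval(c, n, delta)
                 for name, (c, n) in counts.items()}
    best = max(counts, key=lambda name: counts[name][0] / counts[name][1])
    best_low = intervals[best][0]
    others_high = max(hi for name, (_, hi) in intervals.items() if name != best)
    return best if best_low > others_high else None

# (conversions, visitors) per experience -- made-up numbers; prints "D"
print(find_winner({"A": (520, 4000), "B": (610, 4000),
                   "C": (700, 4000), "D": (905, 4000)}))
```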
The following terms are useful when discussing Auto-Allocate:
Multi-armed bandit: A multi-armed bandit approach to optimization balances exploratory learning and exploitation of that learning.
The overall logic behind Auto-Allocate incorporates both measured performance (such as conversion rate) and confidence intervals of the cumulative data. Unlike a standard A/B test where traffic is split evenly between experiences, Auto-Allocate changes traffic allocation across experiences.
The multi-armed bandit approach keeps some experiences free for exploration while exploiting the experiences that are performing well. More new visitors are placed into better performing experiences while preserving the ability to react to changing conditions. These models update at least once an hour to ensure that the model reacts to the latest data.
As more visitors enter the activity, some experiences start to become more successful, and more traffic is sent to the successful experiences. 20% of traffic continues to be served randomly to explore all experiences. If one of the lower-performing experiences starts to perform better, more traffic is allocated to that experience. Conversely, if the success of a higher-performing experience decreases, less traffic is allocated to that experience; for example, when an event causes visitors to look for different information on your media site, or when weekend sales on your retail site produce different results.
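Based on the 80/20 behavior described above, one allocation step can be sketched roughly as follows. The function and numbers are illustrative, not Adobe’s code:

```python
# A rough sketch of the 80/20 allocation described above (not Adobe's code):
# 80% of new traffic is split equally between the two current best experiences,
# and 20% is split equally across all experiences for continued exploration.
def allocate(conversion_rates, exploit_share=0.8):
    """Map experience name -> fraction of new traffic."""
    names = list(conversion_rates)
    explore = (1 - exploit_share) / len(names)           # e.g. 5% each of 4
    top_two = sorted(names, key=conversion_rates.get, reverse=True)[:2]
    return {name: explore + (exploit_share / 2 if name in top_two else 0)
            for name in names}

# Example: C and D lead, so each gets 40% + 5%; A and B get 5% each.
print(allocate({"A": 0.10, "B": 0.11, "C": 0.14, "D": 0.15}))
```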
The following table represents how the algorithm might perform during a test with four experiences, showing how the traffic allocated to each experience progresses over several rounds of the activity lifetime until a clear winner is determined.
| Round | Description |
| --- | --- |
| Warm-up round (0) | During the warm-up round, each experience gets equal traffic allocation until every experience in the activity has a minimum of 1,000 visitors and 50 conversions. Only two experiences move forward into the next round: D and C. Moving forward means that the two experiences are allocated 80% of the traffic equally. The other two experiences continue to participate, but are served only as part of the 20% random traffic allocation as new visitors enter the activity. All allocations are updated every hour (one round per update), and after each round the cumulative data is compared. |
| Round 1 | 80% of traffic is allocated to experiences C and D (40% each). 20% of traffic is allocated randomly to experiences A, B, C, and D (5% each). During this round, experience A performs well. |
| Round 2 | 80% of traffic is allocated to experiences A and D (40% each). 20% of traffic is allocated randomly (5% each to A, B, C, and D). During this round, experience B performs well. |
| Round 3 | 80% of traffic is allocated to experiences B and D (40% each). 20% of traffic is allocated randomly (5% each). During this round, experience D continues to perform well and experience C performs well. |
| Round 4 | 80% of traffic is allocated to experiences C and D (40% each). 20% of traffic is allocated randomly (5% each). During this round, experience C performs well. |
| Round n | As the activity progresses, a high-performing experience starts to emerge, and the process continues until there is a winning experience. When the confidence interval of the experience with the highest conversion rate doesn’t overlap with any other experience’s confidence interval, that experience is labeled the winner. A badge displays on the winning activity’s page and in the Activity list. |

Important: If you manually choose a winner earlier in the process, you can easily choose the wrong experience. For this reason, it is a best practice to wait until the algorithm determines the winning experience.
If an activity has only two experiences, both experiences get equal traffic until Target finds a winning experience with 75% confidence. At that point, two-thirds of the traffic is allocated to the winner and one-third to the loser. When an experience later reaches 95% confidence, 90% of traffic is allocated to the winner and 10% to the loser. Target always sends some traffic to the “losing” experience to avoid false positives (that is, to maintain some exploration).
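The two-experience schedule described above can be summarized in a small sketch. The thresholds are taken from this article; everything else is illustrative:

```python
# Sketch of the two-experience schedule described above (thresholds from this
# article; not Adobe's implementation): allocation steps from an even split to
# 2/3-1/3 at 75% confidence, then to 90/10 at 95% confidence.
def two_experience_split(confidence):
    """Return (winner_share, loser_share) for the current leader."""
    if confidence >= 0.95:
        return 0.90, 0.10
    if confidence >= 0.75:
        return 2 / 3, 1 / 3
    return 0.5, 0.5

for c in (0.60, 0.80, 0.97):
    print(c, two_experience_split(c))
```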
After an Auto-Allocate activity is activated, the following operations from the Target UI are not allowed:
For more information, see Auto-Allocate can give you faster test results and higher revenue than a manual test.
Consider the following information as you work with Auto-Allocate:
The following advanced metric settings are not supported: Increment Count, Release User, Allow Reentry and Increment Count, and Release User and Bar from Reentry.
If a visitor who sees experience A returns frequently and converts several times, the conversion rate (CR) of experience A is artificially inflated. Compare this result to experience B, where visitors convert but do not return often: the CR of experience A looks better than the CR of experience B, so new visitors are more likely to be allocated to A than to B. If you count conversions only once per entrant, the CR of A and the CR of B might be identical.
If return visitors are randomly distributed, their effect on conversion rates is more likely to be evened out. To mitigate this effect, consider changing the counting method of the goal metric to count only once per entrant, as shown in the sketch below.
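A tiny worked example, with a made-up event log, shows how the counting method changes the measured conversion rate:

```python
# Illustration (made-up event log) of how the counting method changes the
# conversion rate: "once per entrant" deduplicates repeat conversions by the
# same visitor, which removes the advantage of frequent returners.
conversions = ["v1", "v1", "v1", "v2"]   # visitor v1 converted three times
entrants = ["v1", "v2", "v3", "v4"]

cr_per_conversion = len(conversions) / len(entrants)          # 4 / 4 = 1.00
cr_once_per_entrant = len(set(conversions)) / len(entrants)   # 2 / 4 = 0.50
print(cr_per_conversion, cr_once_per_entrant)
```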
Auto-Allocate is good at differentiating between high-performing experiences (and finding a winner), but it might not produce enough differentiation among the under-performing experiences.
If you want statistically significant differentiation between all experiences, consider using the manual traffic-allocation mode instead.
Some factors that can be ignored during a standard A/B test because they affect all experiences equally cannot be ignored in an Auto-Allocate activity. The algorithm is sensitive to the observed conversion rates.
Following are examples of factors that can affect experience performance unequally:
Experiences with varying contextual relevance (time, location, gender, and so on). For example, an experience promoting a morning-only offer might outperform the others early in the day but underperform later.
Using experiences with varying contextual relevance can skew the results in an Auto-Allocate test more than in an A/B test because the A/B test analyzes the results over a longer period.
Experiences with varying delays in conversion, possibly due to the urgency of the message.
For example, “30% sale ends today” signals the visitor to convert today, but “50% off first purchase” doesn’t create the same sense of urgency.
Consult the following FAQs and answers as you work with Auto-Allocate activities:
Yes. Auto-Allocate activities support Adobe Analytics as the reporting source (A4T). For more information, see A4T support for Auto-Allocate and Auto-Target activities.
No. Only new visitors are automatically allocated. Returning visitors continue to see their original experience to protect the validity of the A/B test.
The algorithm guarantees 95% confidence (that is, a 5% false-positive rate) if you wait until the winner badge appears.
The algorithm starts working after all experiences in the activity have a minimum of 1,000 visitors and 50 conversions.
80% of traffic is served using Auto-Allocate and 20% is served randomly. When a winner has been identified, 80% of traffic goes to the winner, while all experiences, including the winner, continue to receive a share of the remaining 20%.
Yes. The multi-armed bandit ensures that at least 20% of traffic is reserved to explore changing patterns or conversion rates across all experiences.
As long as all experiences being optimized face similar delays, the behavior is the same as an activity with a faster conversion cycle. However, it takes longer to reach the 50 conversion threshold before the traffic allocation process begins.
Automated Personalization uses each visitor’s profile attributes to determine the best experience. In doing so, it not only optimizes, but also personalizes the activity for that user.
Auto-Allocate, on the other hand, is an A/B test that produces an aggregate winner (the most popular experience, but not necessarily the most effective experience for each visitor).
Currently, the logic favors visitors who convert quickly or visit more often, because such visitors temporarily inflate the overall conversion rate of the experience they belong to. The algorithm updates frequently, so this increase in conversion rate is amplified at each snapshot. If the site gets numerous return visitors, their conversions can inflate the overall conversion rate for their experience. There is a good chance that return visitors are randomly distributed, in which case the aggregate effect (increased lift) evens out. To mitigate this effect, consider changing the counting method of the success metric to count only once per entrant.
You can use the existing Adobe Target Sample Size Calculator to estimate how long the test runs. This calculator is designed for traditional fixed-horizon A/B testing, so it provides an estimate only; as with traditional A/B testing, apply a Bonferroni correction if you are testing more than two offers or more than one conversion metric or hypothesis. Using the calculator for an Auto-Allocate activity is optional, because Auto-Allocate declares a winner for you: you don’t need to pick a fixed point in time to look at the test results, and the reported values are always statistically valid.
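As a rough illustration of how a Bonferroni correction feeds into a fixed-horizon sample-size estimate, here is a standard two-proportion calculation in Python. The Adobe Target Sample Size Calculator may use different inputs and formulas, so treat these numbers as indicative only:

```python
# Hedged illustration: a standard two-proportion sample-size estimate with a
# Bonferroni-adjusted significance level. Not the Adobe calculator's formula.
import math
from scipy.stats import norm

def sample_size_per_experience(p_base, lift, alpha=0.05, power=0.8, num_tests=1):
    alpha_adj = alpha / num_tests            # Bonferroni correction
    p_alt = p_base * (1 + lift)              # relative lift over the baseline
    z_a = norm.ppf(1 - alpha_adj / 2)
    z_b = norm.ppf(power)
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return math.ceil((z_a + z_b) ** 2 * variance / (p_base - p_alt) ** 2)

# Three challengers vs a 10% baseline, detecting a 10% relative lift:
print(sample_size_per_experience(0.10, 0.10, num_tests=3))
```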
Internal Adobe experiments have found the following:
There is really no reason to remove an under-performing experience. Auto-Allocate automatically serves high-performing experiences more often and under-performing experiences less often. Leaving an under-performing experience in the activity does not significantly affect the time needed to determine a winner.
20% of visitors are randomly assigned across all experiences. The amount of traffic served to an under-performing experience is minimal (20% divided by the number of experiences).
Adobe does not recommend that you change the goal metric midway through an activity. Although it is possible to change the goal metric during an activity using the Target UI, you should always start a new activity. Adobe does not guarantee what happens if you change the goal metric in an activity after it is running.
This recommendation applies to Auto-Allocate, Auto-Target, and Automated Personalization activities that use either Target or Analytics (A4T) as the reporting source.
Similarly, Adobe does not recommend that you change the reporting source midway through an activity. Although it is possible to change the reporting source (from Target to A4T or the reverse) during an activity using the Target UI, you should always start a new activity instead. Adobe does not guarantee what happens if you change the reporting source in an activity after it is running. The same recommendation applies to Auto-Allocate, Auto-Target, and Automated Personalization activities.
Adobe does not recommend using the Reset Report Data option for Auto-Allocate activities. Although this option removes the visible reporting data, it does not remove all training records from the Auto-Allocate model. Instead, create a new activity and deactivate the original activity. (This guidance also applies to Auto-Target and Automated Personalization activities.)
Auto-Allocate builds its models based on the traffic and conversion behavior recorded in the default environment only. Production is the default environment unless you change it in Target (Administration > Environments).
If a hit occurs in a non-default environment, traffic is distributed according to the conversion behavior observed in the default environment. The result of that hit (conversion or non-conversion) is recorded for reporting purposes but is not considered in the Auto-Allocate model.
When you select another environment, the report shows traffic and conversions for that environment. The environment selected by default in a report is the account-wide default; it cannot be set on a per-activity basis.
For example, can the activity consider the month of December for deciding how to allocate traffic, rather than looking at September visitor data (when the test began)?
No, Auto-Allocate considers performance of the entire activity.
Auto-Allocate uses sticky decisioning for the same reasons that A/B Test activities are sticky: returning visitors continue to see their original experience, so traffic reallocation applies to new visitors only.
The following videos contain more information about the concepts discussed in this article.
This video includes information about setting up traffic allocation.
This video demonstrates how to create an A/B test using the Target three-step guided workflow. Auto-Allocate is discussed beginning at 4:45.