Consult the following FAQs and answers as you work with Automated Personalization activities in Adobe Target.
You can select an experience to be used as a control while creating an Automated Personalization (AP) or Auto-Target (AT) activity.
This feature lets you route the entire control traffic to a specific experience, based on the traffic allocation percentage configured in the activity. You can then evaluate the performance reports of the personalized traffic against control traffic to that one experience.
For more information, see Use a specific experience as control.
There is no turnkey option for comparing Automated Personalization to a default experience. However, as a workaround, if a default offer or experience exists as part of the overall activity, you can gauge its baseline performance: click the “Control” segment in reports and locate that particular offer in the resulting offer-level report. The conversion rate recorded for this offer can then be compared with the conversion rate of the entire “Random Forest” segment to see how the model is performing relative to the default offer.
If you are looking to personalize a lower-traffic page, or you want to make structural changes to the experience you are personalizing, consider using an Auto-Target activity in place of Automated Personalization. See Auto-Target.
Consider completing an A/B Test activity between the offers and locations that you are planning to use in your Automated Personalization activity to ensure that the locations and offers have an impact on the optimization goal. If an A/B Test activity fails to demonstrate a significant difference, Automated Personalization is also unlikely to generate lift.
If an A/B…N test shows no statistically significant differences between experiences, one or more of the following situations is probably responsible:
Use the Traffic Estimator to get a sense of how long personalization models take to build in your Automated Personalization activity.
Decide on the allocation between control and targeted traffic before beginning the activity, based on your goals.
There are three scenarios to consider based on the goal of your activity and the type of control you’ve selected:
Targeting rules should be used as sparingly as possible because they can interfere with the model’s ability to optimize.
Reporting groups can limit the success of your Automated Personalization activity. Use reporting groups only if the following conditions are met:
There is no personalization between offers in a reporting group. The offers are all treated as the same by the personalization model.
Never put all offers in an activity into a single reporting group. Doing so causes all offers to be uniformly randomly served to all visitors in the activity.
Target has a hard limit of 30,000 experiences, but it functions at its best when fewer than 10,000 experiences are created.
This same limit applies even when the Disallow Duplicates option is enabled for the activity.
For more information about character limits and other limits (offer size, audiences, profiles, values, parameters, and so forth) that affect activities and other elements in Target, see Limits.
When each visitor arrives, the set of possible offers the visitor can see is determined by the offer-level targeting rules. Then, the algorithm chooses the offer that the model predicts has the best expected revenue or chance of conversion from among those offers. Offer targeting impacts the efficacy of Target machine learning algorithms and, as a result, should be used as sparingly as possible.
There are four factors required for an Automated Personalization activity to generate lift:
The best course of action is to first verify that the content and locations that make up the activity experiences truly make a difference to the overall response rates, using a simple, non-personalized A/B Test activity. Be sure to compute the sample sizes ahead of time to ensure there is enough power to detect a reasonable lift, and run the A/B test for a fixed duration without stopping it or making any changes. If the A/B test results show statistically significant lift on one or more experiences, a personalized activity is likely to succeed. Although personalization can work even when there are no differences in the overall response rates of the experiences, a failing activity typically stems from the offers or locations not having a large enough impact on the optimization goal to be detected with statistical significance.
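As a rough guide to the sample-size computation mentioned above, the following sketch applies the standard two-proportion z-test formula. The conversion rates, significance level, and power shown are illustrative assumptions, not values from this article:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_arm(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate visitors needed per experience to detect the
    difference between two conversion rates (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for power
    p_bar = (p_control + p_variant) / 2             # pooled rate
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_control - p_variant) ** 2)

# Example: detecting a lift from a 3% to a 3.6% conversion rate
# requires roughly 14,000 visitors per experience.
print(sample_size_per_arm(0.03, 0.036))
```

Small lifts on low baseline conversion rates require large samples, which is why computing this before launching the A/B test matters.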
For more information, see Troubleshooting Automated Personalization.
Automated Personalization routes visitors to the experience that has the highest forecasted success metric, based on the most recent Random Forest models built for each modeling group. This forecast is based on the visitor’s specific information and visit context.
For example, assume that an Automated Personalization activity had two locations with two offers each. In the first location, Offer A has a forecasted conversion rate of 3% for a specific visitor, and Offer B has a forecasted conversion rate of 1%. In the second location, Offer C has a forecasted conversion rate of 2% for the same visitor, and Offer D has a forecasted conversion rate of 5%. Therefore, Automated Personalization serves this visitor an experience with Offer A and Offer D.
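The selection rule in this example amounts to a per-location argmax over the forecasted conversion rates. The sketch below simply restates the hypothetical offer names and rates from the example:

```python
# Hypothetical forecasted conversion rates per location for one visitor,
# mirroring the example above (names and numbers are illustrative only).
forecasts = {
    "location_1": {"Offer A": 0.03, "Offer B": 0.01},
    "location_2": {"Offer C": 0.02, "Offer D": 0.05},
}

# For each location, serve the offer with the highest forecast.
served = {loc: max(offers, key=offers.get)
          for loc, offers in forecasts.items()}
print(served)  # {'location_1': 'Offer A', 'location_2': 'Offer D'}
```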
Automated Personalization can be used as “always on” personalization that constantly optimizes. Especially for evergreen content, there is no need to stop your Automated Personalization activity. If you want to make substantial changes to content that isn’t similar to the offers currently in your Automated Personalization activity, the best practice is to start a new activity. Starting a new activity helps other users reviewing reports avoid associating past results with the new content.
The time it takes for models to build in your activity typically depends on the traffic to your selected activity locations and your activity success metric. Use the Traffic Estimator to determine the expected length of time it takes for models to build in your activity.
No, there must be at least two models built within your activity for personalization to begin.
You can begin to look at the results of your Automated Personalization activity after at least two experiences have models built (indicated by a green checkmark).
Review your activity setup and see if there are any changes you are willing to make to improve the speed at which models build.
Automated Personalization activities are evaluated once per session. If new offers are added to an experience while sessions that qualified for that experience are still active, those visitors see the new content along with the previously shown offers. Because these visitors previously qualified for those experiences, they continue to see them for the rest of the session. To evaluate at every page visit, use the Experience Targeting (XT) activity type instead.
Adobe does not recommend changing the goal metric midway through an activity. Although it is possible to change the goal metric during an activity using the Target UI, you should always start a new activity instead. Adobe does not guarantee what happens if you change the goal metric while an activity is running.
This recommendation applies to Auto-Allocate, Auto-Target, and Automated Personalization activities that use either Target or Analytics (A4T) as the reporting source.
Adobe does not recommend using the Reset Report Data option for Automated Personalization activities. Although it removes the visible reporting data, this option does not remove all training records from the Automated Personalization model. Instead of using the Reset Report Data option for Automated Personalization activities, create a new activity and deactivate the original activity. This guidance also applies to Auto-Allocate and Auto-Target activities.
One model is built to identify the performance of the personalized strategy versus randomly served traffic versus sending all traffic to the overall winning experience. This model considers hits and conversions in the default environment only.
A second set of models is built, one for each modeling group (Automated Personalization) or experience (Auto-Target). For each of these models, hits and conversions across all environments are considered.
Requests are, therefore, served with the same model, regardless of environment. However, the plurality of traffic should come from the default environment to ensure that the identified overall winning experience is consistent with real-world behavior.