I have sung the praises of “big data” on this very blog. We have talked about optimization algorithms, and we have talked about insights, where we use our very big data for very big revelations. The questions I hear over and over are, first, “Can you put it into action?” and, second, “Does it actually work?”
Don’t buy look-alike models if…
In my view, the rubber meets the proverbial road with look-alike models. This is where the science comes into action in a measurable way. Good modeling should be at the foundation of successful digital media. Just to add another cliché to the mix, there are a lot of “black boxes” out there. Too often look-alike models are sold as if complexity were a key benefit rather than an obstacle. My advice to media planners looking to use look-alike models is twofold (though it could apply to most anyone buying anything):
- Don’t buy anything you don’t understand
- Don’t buy anything if you won’t be able to tell whether it worked or not
Care for a machine-learning-driven neural network, with maybe some discriminant analysis on the side?
What’s wrong with fit-based models?
None of these allegedly cutting-edge approaches is new. Most were developed for sociology or the natural sciences. The only things that have changed are the volume of data and the speed at which we can process it. Essentially, whether you are dealing with linear regression, a neural network or something else entirely, you are doing one thing: taking a group of people with diverse characteristics and trying to find the best fit to their commonalities. These are fit-based models. All things being equal, the more variables you include, the better the fit. Many trumpet the dozens, or even hundreds, of variables in their look-alike models. This does not make their models better. If anything, it makes them worse.
Here’s why: most look-alike models seek to describe the ideal user and then widen their targeting to make it fit more than the one user it actually describes. Tribal Fusion’s look-alike models don’t work that way. Instead of looking for the ideal user, we look for the ideal behaviour. These are the activities and interests that most indicate a person will convert, regardless of what their profile may look like as a whole.
Starting with the ideal user
Here are some pictures to help me explain it better. Assume that each dot on the chart below represents a user with different characteristics.
The red line is an equation, or rather a model, that describes those users. The closer a dot is to the line, the more apt the description. However, you’ll notice that the line passes directly through only one point, while a couple of others merely touch it. The line describes the ideal user, but the ideal user exists only in scarce quantities. In many cases this is simply because the model is over-specified in an effort to get a good fit. 200 variables don’t do you a lot of good in the real world if your typical user only has half a dozen in their profile.
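To make the sparse-profile problem concrete, here is a minimal sketch (my own toy numbers, not any vendor’s model): score users by their distance to a hypothetical “ideal user” defined over many attributes, and see how a typical sparse profile fares.

```python
import math

# A hypothetical fit-based "ideal user" over 20 binary attributes.
# All attribute slots and values here are invented for illustration.
ideal = [1.0] * 20  # the model's perfect converter has every attribute

def distance(user, ideal):
    """Euclidean distance from a user's profile to the ideal profile."""
    return math.sqrt(sum((u - i) ** 2 for u, i in zip(user, ideal)))

# A typical real-world user exhibits only a handful of the model's attributes.
sparse_user = [1.0] * 5 + [0.0] * 15   # matches 5 of 20 attributes
rich_user = [1.0] * 18 + [0.0] * 2     # matches 18 of 20

print(round(distance(sparse_user, ideal), 2))  # far from the ideal
print(round(distance(rich_user, ideal), 2))    # close to the ideal
```

The more attributes the ideal profile demands, the further almost every real user sits from it, so a model scored this way describes almost nobody well.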
So what does one do? Most media owners relax the fit on the model. They may set static thresholds of, say, 10-15% tolerance. Some of them will even adjust the fit on the fly, depending on how many users they want to reach. In other words, they are defining the model by how many ads they want to serve you.
In the illustration above, ads are seen by anyone within the dotted lines. This encompasses more users. However, some of them are quite close to the original model, while others are much more distant. As a result, performance may go up or down, depending on how near or far the ad viewer is to that best-fit line. When you build a model based on fit and then use it for targeting in the real world, your mileage may vary.
Starting with the ideal behaviours
Tribal Fusion takes a different approach. We acknowledge from the start that a user may have five or five hundred behaviours in their profile. As a result, we don’t try to describe the ideal user. We try to find the attributes that are most indicative of performance. Ours is a lift-based model.
We have over 15,000 user attributes in our system, but our goal is not to use as many of them as possible. We filter for the variables that contribute to performance at 90% confidence. Then it’s just a question of ranking them and de-duplicating. Behaviours that indicate a high propensity to convert are usually highly specific, and thus low-reach. For example, for a European airline, people in-market for flights to Luxembourg might be 40 times more likely to convert than the average internet user, but only a small number of those people exist. So we’d start our model with this one variable. We’d take the 0.001% of our user base that wanted a flight to Luxembourg, and we’d set them aside.
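A lift-based ranking of this kind can be sketched as follows. The segments, counts and the crude one-sided z-test standing in for “90% confidence” are all my own assumptions, not Tribal Fusion’s actual pipeline; lift is simply the segment’s conversion rate divided by the baseline rate.

```python
import math

# Synthetic data: behaviour -> (users_with_behaviour, converters_among_them)
segments = {
    "in-market: Luxembourg flights": (1_000, 40),
    "in-market: Luxembourg hotels":  (2_000, 50),
    "sports fans":                   (500_000, 520),
}
total_users, total_converters = 1_000_000, 1_000  # baseline rate = 0.1%
base_rate = total_converters / total_users

def lift(users, converters):
    """Segment conversion rate relative to the baseline rate."""
    return (converters / users) / base_rate

def significant(users, converters, z_threshold=1.2816):
    """Crude one-sided z-test that the segment's rate beats the baseline
    (z_threshold of ~1.28 corresponds to roughly 90% confidence)."""
    rate = converters / users
    se = math.sqrt(base_rate * (1 - base_rate) / users)
    return (rate - base_rate) / se > z_threshold

# Keep only significant behaviours, then rank by lift, highest first.
ranked = sorted(
    ((name, lift(u, c)) for name, (u, c) in segments.items()
     if significant(u, c)),
    key=lambda item: item[1], reverse=True,
)
for name, l in ranked:
    print(f"{name}: {l:.0f}x lift")
```

The large but barely-above-baseline “sports fans” segment fails the confidence filter, while the tiny, high-lift Luxembourg segments survive and rank at the top.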
Then we look at the next best behaviour. That’s probably something related, say people in-market for Luxembourg hotels. Since there’s a bit of an overlap in the user base, we’ll only get a small increase in reach, and aggregate lift will necessarily go down. We then look for the next best, and the next best after that. Graphically, the model looks something like this.
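The greedy build-out above can be sketched like this (audience sets and numbers invented for illustration): add behaviours in descending order of lift, track the union of reached users, and watch aggregate lift fall as reach grows.

```python
# Synthetic population: user ids 0..99_999, of whom ids 0..99 convert (0.1%).
population = set(range(100_000))
converters = set(range(100))

# behaviour -> set of user ids exhibiting it (note the overlapping audiences)
behaviours = {
    "Luxembourg flights": set(range(0, 40)) | set(range(100, 1_000)),
    "Luxembourg hotels":  set(range(20, 60)) | set(range(500, 2_500)),
    "travel content":     set(range(40, 90)) | set(range(1_000, 20_000)),
}

base_rate = len(converters) / len(population)

curve = []
reached = set()
for name, audience in behaviours.items():  # assumed already ranked by lift
    reached |= audience                    # union with everyone reached so far
    reach = len(reached) / len(population)
    agg_lift = (len(reached & converters) / len(reached)) / base_rate
    curve.append((name, reach, agg_lift))
    print(f"+ {name}: reach {reach:.1%}, aggregate lift {agg_lift:.1f}x")
```

Each added behaviour overlaps the audience already reached, so reach grows more slowly than the segments’ raw sizes suggest, and aggregate lift falls with every step. Choosing the campaign’s cut-off then amounts to picking the point on this curve that best matches the objective.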
Each point on the curve above represents a behaviour that we can target. The curve defines the trade-off between reach and performance, so every behaviour on it is an equally efficient choice for an advertiser. Working with the client, we would choose the right cut-off point for their campaign objectives. Typically we start the discussion around 5 times the average propensity to convert and work up or down from there. Depending on the steepness of the curve, you might, for example, reach 10% of our network at that lift level. That would allow for an agreeable compromise between performance and reach. Want to reach more people? We can increase the reach to 20%, but it could decrease lift to 3x.
What’s the difference?
Building models from behaviours rather than a fictional ideal user is better in three ways:
- We define our model in terms of performance, and tell you how many people you will reach with that as a constraint. As a result, you know what to expect. This contrasts with a fit-based model, where you describe the audience first and then hope for the best when it’s time to find it in the real world.
- Our model works independently of data depth. We are targeting individual high-lift behaviours, not trying to find users who have every behaviour in the model.
- We are transparent. We don’t just tell you the individual components in the model. We’ll give you the tools to see for yourself. Our look-alike models are exposed for all to see in Deep Dive, our audience diagnostics tool for advertisers and agencies.
So, in summary, media planners shouldn’t buy look-alike models they don’t understand and can’t prove worked. If you are a media planner-buyer and don’t understand any of the above, please get in touch.