One thread that binds my career is top-of-funnel content. I’ve co-written two books on the importance of early-stage content and how to identify and capitalize on those opportunities. I’ve also had a lot of success launching content for the early phases of the customer journey: what some people call the “awareness” or “learning” phases. And I’ve written extensively about what success looks like for top-of-funnel content. But I have rarely led sustainable upper-funnel content efforts for a simple reason: It’s difficult to track the first touches in a multi-touch customer journey to revenue.
As I said in the referenced blog post:
When I have been successful in educating executives on recognizing their successes, I have convinced them to focus on optimizing the experience to get a higher share of users to take the next steps in their customer journeys. That kind of growth is much more valuable, as you can attribute it to revenue.
I should have added, “in theory.” In practice, few digital marketers I talk to have attribution working well enough to give each touch point in a customer journey the credit it is due. Upper-funnel content is expensive. It’s an easy line item on a budget to cut if there is no way to track it to revenue. Most of my upper-funnel content efforts were ultimately redesigned out of existence because there was no way to prove they generated revenue.
This is the paradox of upper-funnel content. Without attribution, you can’t prove it was instrumental in generating leads. But redesign it out of existence and suddenly, your lead volume goes way down. Why? Because no one becomes a lead until they’re ready. And they only get ready by first learning the what, why, and how of their topics of interest. These are the basic building blocks of upper-funnel content. Attribution modeling is the solution to the paradox. When you develop an attribution model, you figure out how important the content is to the leads you generate, and give it the credit it is due.
We have developed some methods at IBM to give all the content in a journey the credit it is due, including upper-funnel content. I can share the basic methods with you now. Before I do that, however, I want to highlight a common dead end, so we can move on from it.
Most people I talk to at conferences say they have attribution. But after I grill them for information on how they do it, they ultimately acknowledge that they mostly use “last-touch” attribution. That means giving all the credit to the last thing a prospect did before becoming a lead.
For several reasons, last-touch attribution doesn’t work. The main reason is the last thing a prospect sees before becoming a lead might not be the most important. If it is treated as such, a disproportionate amount of resources go to developing it, and you end up with a very bottom-heavy experience.
Last-touch is also bad because you never fill the pipeline for bottom-of-funnel interactions without top-of-funnel experiences. For example, the last thing many of our high-quality leads did was to take a free trial. In last-touch attribution, you would give all the credit to the trial. But nobody takes a free trial without first learning about what they want to try. And nobody sets out to learn about what they want to try without first understanding what problem the tech solves. The more complex the product, the more touch points are necessary just to get to the point of wanting a free trial. In this example, top-of-funnel experiences contributed to the quality and quantity of the leads, but got no credit.
Last-touch attribution is really no attribution. So the question becomes, how do you move towards a true attribution model?
Start with response scoring
The first step is to score all your content in terms of how it contributed to leads and wins. If a white paper is downloaded often, and a high percentage of the respondents who register for the download go on to become customers, that white paper should be scored relatively highly. Say you have 100 white papers: you can rank-sort them by the number of high-quality leads they generate and give each a score on a scale from 0 to 100. Now you have a way to measure the relative value of those white papers.
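The rank-and-scale step above can be sketched in a few lines. This is a minimal illustration, not IBM's actual scoring system; the asset names and lead counts are hypothetical:

```python
# Hypothetical data: count of high-quality leads generated per white paper.
quality_leads = {
    "cloud-migration-guide": 42,
    "ai-strategy-overview": 7,
    "kubernetes-tactics": 85,
    "data-governance-basics": 3,
}

def response_scores(lead_counts):
    """Rank-sort assets by quality leads, then scale ranks to 0-100."""
    ranked = sorted(lead_counts, key=lead_counts.get)  # worst to best
    n = len(ranked)
    if n == 1:
        return {ranked[0]: 100.0}
    # Lowest performer scores 0, best scores 100, evenly spaced between.
    return {name: round(100 * i / (n - 1), 1) for i, name in enumerate(ranked)}

scores = response_scores(quality_leads)
```

Scoring on rank rather than raw counts keeps the scale stable as you add assets, though you could just as easily normalize on the counts themselves.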
But assets like white papers don’t generate leads by themselves. They have to be part of campaigns, which put them in front of the audience through paid, owned, or earned channels. If you use the same white paper in multiple channels, it is bound to generate more responses in some channels than others.
Let’s say a particular white paper about migrating to the cloud gets a lot of quality responses through organic search. For example, a lot of people query “how do I migrate to the cloud?” and visit the page where the paper can be downloaded. When they visit, they download the white paper and give quality information about themselves in exchange for the asset. This indicates that the paper does its job in early-stage education.
Now let’s say you try to use the same white paper in a paid search campaign that focuses on a product name like IBM Cloud Migration Services. When prospects click the ad, they get a single-offer landing page with the same white paper on it. Here, most of the users abandon the experience before filling out the form, and the paper does not generate a lot of quality responses.
How can the same paper do well in one context and poorly in another? In this case, the paper is useful for early-stage prospects but not for late-stage prospects. By the time someone searches for a brand name, chances are they have already learned all the basics and are ready for a deeper conversation. So a paper that tells them what they already know is no longer relevant.
This example illustrates why simple response scoring is not enough. You need an attribution model that helps you understand the value of assets when they are most useful. Once you have this more nuanced response scoring method, you can begin to pay attention to other variables in the mix.
Every time I have done studies like this as part of campaign optimization, I have also found that the same paper performs differently in two early-stage experiences. Perhaps in one, users have to scroll to get the link to the download, whereas the other experience is easier. You never learn how changing UX can change performance until you try to score your assets in context. Once you have attribution, these differences become markers that help you improve the whole experience, including the asset itself.
From response scoring to attribution modeling
The first step in moving to attribution modeling is to look for patterns in the responses you are getting for your assets. In the example above, the white paper performed well in the early stages of the customer journey and poorly in the late stages. The hypothesis is that white papers tend to do better at the top of the funnel. Test that by looking at all your white papers to see if the pattern is consistent. If so, you can tune your experience by moving your white papers to the top of the funnel and moving other things, like product demo videos, later.
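Testing that kind of hypothesis amounts to grouping response counts by asset type and funnel stage and comparing the averages. Here is a minimal sketch; the records and numbers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical response log: (asset_type, funnel_stage, quality_responses).
responses = [
    ("white_paper", "early", 120), ("white_paper", "late", 15),
    ("white_paper", "early", 90),  ("white_paper", "late", 8),
    ("demo_video", "early", 20),   ("demo_video", "late", 140),
]

def avg_by_type_and_stage(records):
    """Average quality responses for each (asset type, stage) pair."""
    totals, counts = defaultdict(int), defaultdict(int)
    for asset_type, stage, n in records:
        totals[(asset_type, stage)] += n
        counts[(asset_type, stage)] += 1
    return {key: totals[key] / counts[key] for key in totals}

averages = avg_by_type_and_stage(responses)
# If the hypothesis holds, white papers average higher early than late,
# and demo videos show the opposite pattern.
```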
This tuning is important prior to implementing attribution because the data can be very noisy if you don’t have a well-tuned experience design first. If you implement attribution prior to tuning, it’s not the end of the world, but you will need to cut through the noise to tune the attribution. And this is difficult because there are so many variables to control that it’s hard to draw valid conclusions from the data.
When we started doing this at IBM, we found that only five out of the 1,000 or so white papers we had in market generated any quality responses at all. The temptation was to say, “white papers don’t work.” But when we did a further analysis of the five that worked, we found that they were highly technical in nature, not just delivering strategic points of view but giving tactical guidance on how to implement a solution. Also, all our testing had been done on late-stage offers, where tactical information is relevant but strategy is not.
Instead of jumping to conclusions, we started looking at how strategic white papers performed in early-stage experiences, and found that they performed better. All we needed to do was wire up the tracking system to show that people who downloaded those white papers also did late-stage activities leading to quality responses. When those leads closed, we could attribute both leads and wins to the early-stage white papers.
This example illuminates how attribution modeling works best. It doesn’t work to make a pile of assumptions and wire something up based on them. That leads to conclusions like “white papers don’t work.” If you want to know how well a particular white paper is working, you have to take all these variables into account. Assuming the context in which each white paper is delivered to the audience conforms to best practices (landing page UX, right asset type, etc.), you can compare response scores on an apples-to-apples basis.
A note on gating
Another variable you will need to control is whether or not your assets are gated. In early stages, prospects are less likely to fill out a form with their correct information to download an asset. Also, gating can prevent them from finding the assets in the first place, because the gate prevents search engines from indexing the assets independently of the experiences where they live. So the best practice is not to gate assets in early-stage experiences.
But if you don’t gate, how do you know the asset contributed to a lead? The answer is tracking. You can cookie the user anonymously and track their activities up to the point where they fill out a form. When they do fill out the form, you can attach all those earlier touches to the client record, along with the name and email you capture. All those touch points contribute in some way to the lead. Your attribution model can then take them into account.
I have intentionally avoided the question of how much weight to give individual touches in a multi-touch client journey. Weighting can add bias to the algorithm, which obviously affects the results. It is up to you to weight touch points in the way you think most accurately reflects their contribution. I would start by giving equal weight to all the touch points, and then look for patterns in the data that suggest certain items should be weighted higher than others.
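The equal-weight starting point described above is what analytics tools usually call linear attribution, and moving to custom weights is a one-line change. A minimal sketch, with a made-up three-touch journey and revenue figure:

```python
def attribute_revenue(touches, revenue, weights=None):
    """Split revenue credit across the touch points in a journey.

    With no weights, every touch gets equal credit (linear attribution);
    otherwise credit is proportional to each touch's weight.
    """
    if weights is None:
        weights = [1.0] * len(touches)
    total = sum(weights)
    return {t: revenue * w / total for t, w in zip(touches, weights)}

journey = ["white_paper", "webinar", "free_trial"]

equal = attribute_revenue(journey, 9000)             # 3000 to each touch
tuned = attribute_revenue(journey, 9000, [2, 1, 1])  # upweight the first touch
```

Starting with equal weights and only departing from them when the data shows a consistent pattern keeps your own assumptions from biasing the model.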
Attribution modeling is as much art as science. I hope after reading this, you are not so intimidated about getting started. It’s not that hard. You make hypotheses (based on best practices) about what you think is working, and you test them. You learn a lot in this process, and eventually you are able to attribute any page or asset (or combination) you publish to the business results that matter. If you have a working attribution model, you can learn to focus on the things that matter more. Most importantly, you can get the funding you need to do more of the things that work. In particular, you can build sustainable top-of-funnel content marketing programs.