Taming Facebook CBO: 5 lessons learned

This guest post is written by Shamanth Rao, a seasoned mobile user acquisition executive and a popular contributor to the Mobile Dev Memo Slack community.

One of the bigger changes coming to the Facebook platform is mandatory Campaign Budget Optimization (CBO). CBO is supposed to seamlessly distribute a campaign's budget between its ad sets: higher-performing ad sets get more budget, and lower-performing ones get less.

That’s the theory, at least. In practice, performance under CBO has been a lot trickier to manage and maintain consistently: ROAS and CPA can fluctuate wildly, and the algorithm can still behave unpredictably. Indeed, most of our accounts (as well as those of many people we know) haven’t yet embraced CBO entirely.

Yet ignoring CBO is not an option, because it will become mandatory for all advertisers very soon. Now, if anything, is the time to begin adapting, and to get learnings under your belt before CBO is the only way to roll.

How are we adapting to CBO? Here are a few strategies that we, and other marketers we know, are finding effective. Do bear in mind that these are our subjective experiences, and Facebook is a black box that is very much in flux. Additionally, while results and performance can vary by vertical, geo and scale, our intent is to present broad principles here that we’re seeing across accounts we work on.

1. Start slow.

For every one of our accounts, we started testing CBO with small budgets (typically a couple of hundred dollars a day) on our strongest audiences and creatives, ideally keeping 3-4 ad sets per campaign.

Once these show some stability in performance, we begin to increase our spends. 

2. Feed enough data to the algorithms — and ramp this gradually. 

This is the age of what many are calling algorithmic marketing — and we believe that one key goal of any campaign structure is to feed algorithms enough data so they can ‘learn’ and optimize. Facebook, at least at this point in time, requires roughly 50 events per ad set per week (be they installs, purchases, or whatever event you’re optimizing for), with no significant edits in the meantime, for an ad set to exit the learning phase and perform truly optimally.

Yet it can be financially risky to set up multiple ad sets to hit 50 events per week — especially if you’re a smaller advertiser. If you have 5 ad sets each doing 50 events per week at a CPA of, say, $40, you’re setting a weekly budget of $10,000 for this campaign, which can be prohibitively high for many advertisers.
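To make that arithmetic concrete, here’s a quick back-of-the-envelope helper — a minimal sketch using the numbers from the example above, nothing more:

```python
def weekly_budget_needed(ad_sets: int, events_per_ad_set: int, cpa: float) -> float:
    """Weekly spend required for every ad set to hit its event target."""
    return ad_sets * events_per_ad_set * cpa

# The example above: 5 ad sets x 50 events/week x $40 CPA
print(weekly_budget_needed(ad_sets=5, events_per_ad_set=50, cpa=40))  # 10000.0
```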

How do you deal with this?

Set campaign budgets at whatever fraction of the 50-events-per-week requirement you’re comfortable spending toward. In the above calculation, if all you’re comfortable with is $2,500 per week for a campaign with 5 ad sets, start there.

As you see results, eliminate or consolidate underperforming ad sets so that the ones that remain do accrue enough conversions (in the above calculation, if you eliminate one underperforming ad set, you end up with 4 ad sets — which accrue learnings faster).

Gradually increase campaign budgets so you get to a point where each ad set can hit 50 conversions per week.
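Here’s the same logic sketched in reverse: given the budget you’re actually comfortable with, estimate how many events each ad set accrues per week, and watch what happens as you consolidate. The 50-event target and the dollar figures are from the discussion above; the even budget split is a simplifying assumption of ours, since CBO does not guarantee one.

```python
EVENT_TARGET = 50  # Facebook's per-ad-set weekly event threshold

def weekly_events_per_ad_set(weekly_budget: float, cpa: float, ad_sets: int) -> float:
    """Rough events each ad set accrues per week, assuming an even budget split."""
    return weekly_budget / cpa / ad_sets

# $2,500/week at a $40 CPA, split across 5 ad sets:
print(weekly_events_per_ad_set(2500, 40, ad_sets=5))  # 12.5 events per ad set
# Eliminate one underperformer and the survivors learn faster:
print(weekly_events_per_ad_set(2500, 40, ad_sets=4))  # 15.625 events per ad set
# Budget needed for those 4 ad sets to each hit the 50-event target:
print(4 * EVENT_TARGET * 40)  # 8000
```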

3. Test creatives without disrupting core ad sets’ learning phases.

One of the considerations that puzzled us once we realized the importance of feeding data to the algorithms was this: our ‘proven’ creatives were bound to get saturated, and their performance was bound to decline. We have always been proponents of aggressive creative testing: how would we test new creatives without disrupting a ‘proven’ ad set’s learning phase as it sped toward 50 conversions per week?

The best answer we’ve come up with is this: we typically separate ‘core’ campaigns from ‘test’ campaigns. ‘Core’ campaigns contain ad sets with proven creative-audience combinations, with the goal of getting these to 50 conversions per week. ‘Test’ campaigns contain ad sets with new, unproven creatives, typically run at lower budgets to start with.

As we gather learnings from our ‘test’ ad sets, we either graduate them into new ‘core’ ad sets, or add the winning ‘test’ ads into one of the core ad sets (assuming we’re ok with a reset to that core ad set’s learning phase). For more perspective on creative testing, see this Quantmar answer.
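For illustration, here’s one way the ‘graduate or fold in’ decision could be encoded. The core/test split is from the approach above, but everything else — the conversion floor, the CPA tolerance — is a hypothetical heuristic of ours, not a rule from Facebook, and you’d tune it to your account:

```python
from dataclasses import dataclass

@dataclass
class AdSetStats:
    name: str
    conversions_per_week: float
    cpa: float

# Hypothetical thresholds -- tune to your account, not Facebook-prescribed.
MIN_CONVERSIONS = 25   # enough signal to trust the test
CPA_TOLERANCE = 1.10   # within 10% of the core campaign's average CPA

def next_step(test: AdSetStats, core_avg_cpa: float) -> str:
    """Decide what to do with a 'test' ad set once it has some history."""
    if test.conversions_per_week < MIN_CONVERSIONS:
        return "keep testing"                       # not enough data yet
    if test.cpa <= core_avg_cpa * CPA_TOLERANCE:
        return "graduate into its own 'core' ad set"  # proven on its own
    return "retire, or fold winning ads into a core ad set (accepting a learning-phase reset)"

print(next_step(AdSetStats("UGC video v3", 30, 38.0), core_avg_cpa=40.0))
```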

4. Ensure ad sets within each CBO campaign have similar audience sizes and optimization events, and the same OS.

We try to keep ad sets similar within each CBO campaign so they can compete on an equal footing. One way to think about this is that CBO is a way to test audiences against each other: just as creatives within any ad set compete to accrue the highest spend, multiple audiences (and the creatives targeting them) within a CBO campaign compete to accrue the highest spend.

That means within each CBO campaign, we strive to achieve the following (a quick sanity-check sketch follows the list):

  • Similar audience sizes: if an audience of 1 million and an audience of 10 million sit in the same campaign, the algorithm will likely skew delivery toward the bigger audience.
  • Similar optimization events: similarly, if you have an ad set optimizing for installs and another optimizing for purchases, putting them in the same campaign can result in skewed delivery.
  • Same OS: iOS and Android apps generally monetize differently, so we recommend keeping them in separate campaigns.
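Here’s a minimal sketch of that sanity check, assuming you have audience size, optimization event, and OS on hand for each ad set. The 10x audience-size threshold is our own illustrative choice, echoing the 1-million-vs-10-million example above:

```python
from dataclasses import dataclass

@dataclass
class AdSetConfig:
    name: str
    audience_size: int
    optimization_event: str  # e.g. "install", "purchase"
    os: str                  # "ios" or "android"

def cbo_campaign_warnings(ad_sets: list[AdSetConfig]) -> list[str]:
    """Flag ad set mixes that tend to skew CBO delivery."""
    warnings = []
    sizes = [a.audience_size for a in ad_sets]
    if max(sizes) / min(sizes) >= 10:  # illustrative threshold, not a Facebook rule
        warnings.append("audience sizes differ by 10x or more")
    if len({a.optimization_event for a in ad_sets}) > 1:
        warnings.append("mixed optimization events")
    if len({a.os for a in ad_sets}) > 1:
        warnings.append("mixed iOS/Android")
    return warnings

print(cbo_campaign_warnings([
    AdSetConfig("lookalike 1%", 1_000_000, "purchase", "ios"),
    AdSetConfig("broad", 10_000_000, "install", "android"),
]))  # all three warnings fire
```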

5. Control the metrics & variables that you can.

Yes, CBO can be interpreted as Facebook taking control away from advertisers. Yet there are levers within CBO that can be very powerful ways to control performance. It isn’t a ‘set it and forget it’ setup yet.

Bids, minimum / maximum ad set level budgets, and campaign level budgets can all enable control over performance metrics. And yes, these can be managed without disrupting learning phases too much.
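As a sketch of what those levers look like in practice, here’s roughly how they’d be set via the facebook_business Python SDK. The IDs and token are placeholders, money amounts are in the account currency’s minor units (cents here), and exact field availability can vary by API version:

```python
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adset import AdSet
from facebook_business.adobjects.campaign import Campaign

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")  # placeholder

# Campaign-level budget: the lever CBO actually spends against.
Campaign("<CAMPAIGN_ID>").api_update(params={
    Campaign.Field.daily_budget: 50000,  # $500.00/day, in cents
})

# Ad set-level guardrails within a CBO campaign.
AdSet("<AD_SET_ID>").api_update(params={
    AdSet.Field.daily_min_spend_target: 5000,  # spend at least $50/day here
    AdSet.Field.daily_spend_cap: 20000,        # but no more than $200/day
    AdSet.Field.bid_amount: 4000,              # $40 bid cap; assumes a bid-cap strategy on the campaign
})
```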

The brave new world of algorithmic marketing is here to stay — and many things are changing with Facebook. The above are our best attempts at decoding what is working on Facebook right now, and hopefully they’ll be helpful to others as well.

Shamanth Rao is the founder of the mobile UA agency RocketShip HQ, host of the How Things Grow podcast, and regular contributor to the Mobile Dev Memo Slack community. This article includes inputs and feedback from Sharath Kowligi, Head of Ad Monetization at GameHouse and advisor to RocketShip HQ.

Photo by NeONBRAND on Unsplash