We have a simple three-step process:
- You fill out a campaign setup form
- We kick off a meeting with you to confirm campaign setup and collect assets (KPIs, benchmarks, tracking links, creative assets, device IDs)
- We launch your campaign
The entire setup-to-launch process takes anywhere from a few days to 1-2 weeks, depending on how quickly we can confirm your campaign setup and receive your campaign assets.
CPI (Cost per Install): find users who install your app
CPA (Cost per Action): find users who take action in your app
CPR (Cost per Revenue): find users who generate the most revenue for your app (ROAS)
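These three goal types reduce to simple ratios. A minimal sketch with made-up campaign numbers (all values are illustrative, not benchmarks):

```python
# Hypothetical campaign numbers, for illustration only.
spend = 1000.00      # total ad spend ($)
installs = 400       # attributed installs
purchases = 50       # target actions (e.g., purchases)
revenue = 1500.00    # revenue from attributed users ($)

cpi = spend / installs    # Cost per Install
cpa = spend / purchases   # Cost per Action
roas = revenue / spend    # Return on Ad Spend

print(f"CPI:  ${cpi:.2f}")   # $2.50
print(f"CPA:  ${cpa:.2f}")   # $20.00
print(f"ROAS: {roas:.0%}")   # 150%
```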
No. Our ML can only optimize for one goal. That’s because CPI and CPA/ROAS are inversely correlated.
No. We need ML to gather data across different supply to find who your best users are. Capping the budget limits ML's ability to scale your performance long term.
Tip: ML is only as smart as the data it's learning from. Capping restricts ML's learning to low-quality inventory, which leads to user saturation and limits scale.
We recommend starting with a daily cap in the $250-350 range. Anything above $350 can cause ML to learn too quickly over a very short period, which may not reflect true market conditions.
It’s the user flow from install to the target A-event that we’re optimizing for.
Example:
Install > Registration > Add to Cart > Purchase
Setting up the campaign with the right sequence is key to success.
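The flow above can be sketched as an ordered funnel. The event names and helper below are illustrative, not a Liftoff API:

```python
# Hypothetical event funnel; step names are for illustration only.
FUNNEL = ["install", "registration", "add_to_cart", "purchase"]

def funnel_depth(user_events):
    """Return how many funnel steps a user completed, in order."""
    depth = 0
    for step in FUNNEL:
        if step in user_events:
            depth += 1
        else:
            break  # stop at the first missing step
    return depth

users = {
    "user_a": ["install", "registration", "add_to_cart", "purchase"],
    "user_b": ["install", "registration"],
    "user_c": ["install", "purchase"],  # skipped the middle steps
}

for name, events in users.items():
    print(name, "->", FUNNEL[:funnel_depth(events)])
```

Counting depth this way makes it easy to see where users drop out of the sequence.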
It’s the target A-event tied to your goal KPI (ROAS or CPA).
Example:
Goal: CPA
Target A-event: Purchase
It’s an event that is highly correlated with your target A-event and likely to lead to more target A-events.
Example:
If your target A-event is a “purchase”, then your intermediary event can be “add to cart”. The more “add to cart” events occur, the more “purchase” events are likely to follow.
Tip: Optimizing for intermediary events can expedite your performance, especially for apps with long user flows.
Repeat events are when a user completes the same event multiple times.
Example:
User A makes a purchase 3 times.
It’s a time window that you can use to evaluate performance.
Example:
Install occurs on day 1, purchase occurs on day 3
Uncohorted view: purchase will be attributed to day 3
7-day post-install cohorted view: purchase will be attributed to the day 1 install
Tip: Cohort view can help you identify user behavior patterns that you can use to inform your in-app marketing and re-engagement strategies.
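The difference between the two views can be sketched as follows; the `attribute` helper and its window semantics (events counted up to and including day 7) are assumptions for illustration:

```python
from datetime import date

def attribute(install_day, event_day, window_days=7):
    """Return (uncohorted_day, cohorted_day) for a single event.

    Uncohorted: the event is credited to the calendar day it happened.
    Cohorted: the event is credited back to the install day, if it falls
    inside the post-install window; otherwise it gets no cohorted credit.
    """
    days_since_install = (event_day - install_day).days
    cohorted = install_day if days_since_install <= window_days else None
    return event_day, cohorted

# Install on Jan 1, purchase on Jan 3: the cohorted view credits Jan 1.
print(attribute(date(2024, 1, 1), date(2024, 1, 3)))
# A purchase on Jan 10 falls outside the 7-day window: no cohorted credit.
print(attribute(date(2024, 1, 1), date(2024, 1, 10)))
```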
It’s important because that’s the window that we’ll be optimizing for.
Example:
If a customer’s CPA goal is purchase with a 7 day window, we will optimize for purchases within 7 days of install
Uploading a suppression list enables ML to focus your ad spend on users who have not installed your app.
Example:
App with suppression list: ML “knows” which users already have the app. So, the App’s budget will only be spent to show ads to users who do not already have the app.
App without suppression list: ML does not “know” which users have the app, so it will spend on all users it thinks will probably download the app. Since some will have already downloaded it, spend will be wasted on these users.
Did you know? Liftoff customers with suppression lists uploaded see up to 39% better performance than those that do not.
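The effect of a suppression list amounts to a simple filter over the candidate audience. A minimal sketch with made-up device IDs:

```python
# Hypothetical candidate audience; device IDs are made up for illustration.
candidates = ["id-001", "id-002", "id-003", "id-004"]

# Suppression list: users who already have the app installed.
suppression_list = {"id-002", "id-004"}

# With a suppression list, spend goes only to users who don't have the app.
eligible = [user for user in candidates if user not in suppression_list]
print(eligible)  # ['id-001', 'id-003']
```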
Enabling all postbacks can boost performance by up to 39%. This is because it enables ML to focus spend on users who did not already install your app.
App without all postbacks:
ML does not “know” which users have the app, so it will spend on all users it thinks will probably download the app. Since some will have already downloaded it, spend will be wasted on these users.
ML will not “know” user behavior for the app outside of Liftoff-driven conversions. If ML spends on a user who has already downloaded the app and they don’t convert, it will assume that user was not a good fit for the app. Not only is spend wasted, but ML will become pessimistic toward user characteristics that actually are a good fit!
App with all postbacks:
ML “knows” which users already have the app. So, the App’s budget will only be spent to show ads to users who do not already have the app.
ML will “know” user behavior for the app outside of Liftoff-driven conversions. If ML spends on a user who does not convert, it will rightly assume that user was not a good fit for the app.
Enabling revenue postbacks helps ML optimize for users who generate the most revenue. Without it, we are blind to the revenue amount of every transaction and are less likely to achieve your ROAS goals.
Enabling view-through attribution gives you a holistic view of your ad’s performance from both clicks and views.
Example:
User A sees ad and clicks to install at that moment
User B sees ad and doesn’t click, but installs the app later that day
With view-through attribution enabled, both installs are credited to the ad; without it, only User A’s click-through install is.
Tip: Google and Facebook have view-through attribution turned on by default. For a true apples-to-apples comparison, we recommend making sure that all channels use the same view-through attribution setting in your partner app so that performance comparisons are accurate.