How can we help you?

We have a simple 3 step process:

  • You fill out a campaign setup form
  • We kick off a meeting with you to confirm campaign setup and collect assets (KPIs, benchmarks, tracking links, creative assets, device IDs)
  • We launch your campaign

The entire setup-to-launch process can take anywhere from a few days to 1-2 weeks, depending on how quickly we can confirm your campaign setup and receive your campaign assets.

CPI (Cost per Install): find users who install your app
CPA (Cost per Action): find users who take action in your app
CPR (Cost per Revenue): find users who generate the most revenue for your app (ROAS)

No. Our ML can only optimize for one goal. That’s because CPI and CPA/ROAS are inversely correlated.

No. We need ML to gather data across different supply to find who your best users are. Capping ML will limit ML’s ability to scale your performance long term.

Tip: ML is only as smart as the data it learns from. Capping ML limits its learnings to low-quality inventory, which leads to user saturation and scale limitations.

We recommend starting with a daily cap range of $250-350. Anything more than $350 can cause ML to learn too quickly over a very short time period, which may not reflect true market conditions.

It’s the user flow from install to the target A-event that we’re optimizing for.

Example:
Install > Registration > Add to Cart > Purchase

Setting up the campaign with the right sequence is key to success.

It’s the target A-event tied to your goal KPI (ROAS or CPA).

Example:
Goal: CPA
Target A-event: Purchase

It’s an event that is highly correlated with your target A-event and that makes more target A-events likely to happen.

Example:
If your target A-event is a “purchase”, then your intermediary event can be “add to cart”. The more “add to cart” events happen, the more likely “purchase” events will happen as well.

Tip: Optimizing for intermediary events can expedite your performance, especially for apps with long user flows.

Repeat events are when a user completes the same event multiple times.

Example:
User A makes a purchase 3 times.

It’s a time window that you can use to evaluate performance.

Example:
Install occurs on day 1, purchase occurs on day 3
Uncohorted view: purchase will be attributed to day 3
7-day post-install cohorted view: purchase will be attributed to the day 1 install
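
As a minimal sketch of the same example (the dates and variable names below are illustrative, not tied to any specific MMP report):

  # Minimal sketch: the same purchase, viewed uncohorted vs. in a 7-day
  # post-install cohort. Dates are illustrative.
  from datetime import date

  install_date = date(2024, 1, 1)   # day 1
  purchase_date = date(2024, 1, 3)  # day 3

  # Uncohorted view: the purchase is reported on the day it happened.
  uncohorted_report_date = purchase_date

  # 7-day post-install cohorted view: the purchase is attributed back to the
  # install date, because it falls within the 7-day window.
  window_days = 7
  cohorted_report_date = install_date if (purchase_date - install_date).days <= window_days else None

  print(uncohorted_report_date, cohorted_report_date)  # 2024-01-03 2024-01-01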

Tip: Cohort view can help you identify user behavior patterns that you can use to inform your in-app marketing and re-engagement strategies.

It’s important because that’s the window that we’ll be optimizing for.

Example:
If a customer’s CPA goal is purchase with a 7 day window, we will optimize for purchases within 7 days of install

Uploading a suppression list enables ML to focus your ad spend on users who have not installed your app.

Example:
App with suppression list: ML “knows” which users already have the app. So, the app’s budget will only be spent to show ads to users who do not already have the app.

App without suppression list: ML does not “know” which users have the app, so it will spend on all users it thinks will probably download the app. Since some will have already downloaded it, spend will be wasted on these users.
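
As a minimal sketch of how a suppression list is applied at bid time (the function and device IDs below are hypothetical, not Liftoff’s actual bidder code):

  # Minimal sketch: a suppression list lets bidding skip devices that
  # already have the app installed. IDs and function name are hypothetical.
  suppression_list = {"idfa-123", "idfa-456"}  # device IDs of users who already have the app

  def should_bid(device_id: str) -> bool:
      # With a suppression list, spend is reserved for users who do not have the app yet.
      return device_id not in suppression_list

  print(should_bid("idfa-123"))  # False: user already has the app
  print(should_bid("idfa-789"))  # True: new user, OK to spend on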

Did you know? Liftoff customers with suppression lists uploaded see up to 39% better performance than those that do not.

Enabling all postbacks can boost performance by up to 39%. This is because it enables ML to focus spend on users who did not already install your app.

App without all postbacks:
ML does not “know” which users have the app, so it will spend on all users it thinks will probably download the app. Since some will have already downloaded it, spend will be wasted on these users.
ML will not “know” user behavior for the app outside of Liftoff-driven conversions. If ML spends on a user who has already downloaded the app and they don’t convert, it will assume that user was not a good fit for the app. Not only is spend wasted, but ML will become pessimistic toward user characteristics that actually are a good fit!

App with all postbacks:
ML “knows” which users already have the app. So, the app’s budget will only be spent to show ads to users who do not already have the app.
ML will “know” user behavior for the app outside of Liftoff-driven conversions. If ML spends on a user who does not convert, it will rightly assume that user was not a good fit for the app.

Enabling revenue postbacks helps ML optimize for users who generate the most revenue. Without it, we are blind to the revenue amount of every transaction and are less likely to achieve your ROAS goals.

Enabling view-through attribution gives you a holistic view of your ad’s performance, both from clicks and views.

Example:
User A sees ad and clicks to install at that moment
User B sees ad and doesn’t click but installs app later that day

Tip: Google and Facebook have view-through attribution automatically turned on. If you want a true apples-to-apples comparison, we recommend making sure that all channels have the same view-through attribution setting in your partner app so that performance comparison is accurate.

There is a ramp up period to get your campaign up and running.

  1. Learn – ML gathers data across different ad inventory to see what kind of users convert in your app
  2. Optimize – ML starts to optimize for the most valuable users

Once in the optimization phase, we can begin to scale your performance.

We recommend launching with 2 geos maximum per operating system. This enables ML to gather enough data without spreading your budget too thin.

Ramp up time depends on geo, app, event type and supply.

CPI: Up to 100 unique installs
CPA: Up to 300 unique events
ROAS: Up to 150 unique revenue events

CPA and ROAS campaigns optimize for high value users who take action or generate revenue. High value users are more expensive.

Tip: CPI and CPA/ROAS are inversely correlated. Capping CPIs limits ML to bidding on low-quality users who don’t drive value.

Yes, you can reach out to your customer success manager to discuss changes you want to make.

Tip: Changing campaign goals or making drastic daily budget changes will force our ML to spend most of your budget learning instead of optimizing on learnings. High ML performance requires a stable learning process. When ML can proceed to optimization, you’ll see it reflected in steady performance lift.

Yes, but we recommend starting with a daily cap of no more than $350 to allow ML to gather enough data over a reasonable time period. Anything more than $350 can cause ML to learn too quickly over a very short time period, which may not reflect true market conditions. Once we’ve completed the learning phase, we can increase your daily caps to focus spend on driving performance.

No. Our ML is built to target users on the user level, not the publisher level. What that means in practice is that we will optimize for users who are most likely to convert regardless of their source app. Applying allowlists or blocklists limits ML’s reach and ability to scale your performance.

Example:
Allowlist – ML bids only on users from a limited group of source apps, leading to saturation and scale issues
Blocklist – ML misses out on potential quality users in blocklisted apps

Tip: Concerned about fraud? ML’s built-in Fraud Check tool screens every bid request at the device level to filter out suspicious or fraudulent bid requests.

You’ll need to have a retargetable audience of at least 250k users (100k for ecommerce, 100k retargetable players for casual gaming).

We’ll need the following to set up your Re-Engagement campaign:

  • CPA/ROAS goal
  • App User Flow (e.g. install > registration > add to cart > purchase)
  • Target Segment (e.g. purchasers who haven’t purchased in past 14 days)
  • Intermediary event (e.g. add to cart)
  • Target A-events (e.g. purchase)
  • Blackout window (e.g. target users starting 3 days after install)
  • Device ID List
  • All Postbacks (required for more than 2 weeks if 250k Device IDs aren’t available)
  • MMP Tracking Deep Links

Yes. There are many benefits of running UA and Re-Engagement with Liftoff.

Better performance – leverage ML holistically to re-engage users at the right time
Better user experience – create a seamless ad experience with consistent creatives
Cost efficiency – eliminate risk of double paying and streamline operations with one point of contact

Machine learning is the engine that powers programmatic ad buying. Think of it as an algorithm that makes smart buying decisions. Machine Learning uses test and measurement to discover and reliably target users who are most likely to respond to your ad.

This is how it works:
Training data > Model > Predictions
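
As a minimal, hypothetical sketch of that flow (the features, data, and model below are illustrative only, not Liftoff’s actual system):

  # Minimal sketch of the training data > model > predictions flow.
  # Features, data, and model choice are illustrative, not Liftoff's actual system.
  from sklearn.linear_model import LogisticRegression

  # Training data: past impressions described by a few features, labeled by
  # whether the user converted (1) or not (0).
  X_train = [[0, 1, 3], [1, 0, 7], [1, 1, 2], [0, 0, 9]]
  y_train = [0, 1, 1, 0]

  # Model: learns which feature patterns tend to convert.
  model = LogisticRegression().fit(X_train, y_train)

  # Predictions: estimated conversion probability for a new impression,
  # which can then inform how much to bid for it.
  print(model.predict_proba([[1, 1, 4]])[0][1])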

We train our ML to learn only from comprehensive, clean and accurate data. We’ve built the following tools into our ML:

  • Smart Pacer – paces your ad spend optimally to reach users throughout the day (bids more when there are more valuable users, less when there are fewer)
  • Fraud Check – screens bid requests for missing or suspicious information (e.g. missing Device ID, too many publisher apps, anonymous IP address)
  • Dead Zones – blocks accidental clicks that occur too close to the ad’s close button

This enables ML to learn adaptively and scale your performance long term.

Our ML’s built in Fraud Check tool automatically screens every bid request at the device level, rejecting bids that are:

Suspicious

  • Too many publisher apps
  • Too many bid requests per day (hyper-active user)
  • Suspiciously high geo movement
  • Anonymous IP (e.g. exit nodes, VPNs, servers)

Invalid

  • Missing device ID
  • Missing app store ID
  • Malformed or truncated

Fraud Check rejects 13% of total bid requests, ensuring that your spend is protected.
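
As a minimal, hypothetical sketch of the kinds of checks listed above (field names and thresholds are illustrative, not Liftoff’s actual Fraud Check implementation):

  # Minimal sketch of the kinds of checks listed above. Field names and
  # thresholds are illustrative only, not Liftoff's actual Fraud Check rules.
  def screen_bid_request(req: dict) -> bool:
      """Return True if the bid request passes screening, False if rejected."""
      # Invalid: required fields are missing.
      if not req.get("device_id") or not req.get("app_store_id"):
          return False
      # Suspicious: signals consistent with fraud or hyper-active devices.
      if req.get("publisher_app_count", 0) > 50:      # too many publisher apps
          return False
      if req.get("bid_requests_today", 0) > 10_000:   # hyper-active user
          return False
      if req.get("is_anonymous_ip", False):           # exit node, VPN, server
          return False
      return True

  print(screen_bid_request({"device_id": "abc", "app_store_id": "123"}))  # True
  print(screen_bid_request({"app_store_id": "123"}))                      # False (missing device ID)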

We work closely with exchange partners to adhere to their brand guidelines. Additionally, we implement a Liftoff-wide standard on the inventory that we buy.

Example:
We do not buy from apps that may contain sensitive or inappropriate content (e.g. political, firearms, alcohol, sexual)

In the event that you see an ad appear in an app that is deemed unsafe, just let us know. We’ll pause the publisher and investigate within 24 hours of your request.

We’ll need the following to design high performing creatives for your campaign:

  • App name
  • App title/caption
  • App icon
  • App logo
  • App short copy (max: 65 characters)
  • App description (max: 90 characters)
  • Product images
  • In-app screenshots
  • Marketing collateral
  • Native images
  • Video endcard
  • Video portrait (1-15s, 16-30s)
  • Video landscape (1-15s, 16-30s)
  • Branding guidelines

Please refer to your campaign setup form for more details.

We accept the following asset types:

Images: PSDs (strongly preferred), JPGs, PNGs
Videos: MP4

We run the following ad formats:

320×50 (Banner – Mobile)
300×250 (Banner – Mobile)
728×90 (Banner – Tablet)
480×320 (Interstitial – Mobile)
320×480 (Interstitial – Mobile)
768×1024 (Interstitial – Tablet)
1024×768 (Interstitial – Tablet)
1200×627 (Native)

You can view your creatives on your Liftoff dashboard after launch.

Generally, we do not run seasonal creatives. This is because of the way we test and iterate creatives – long-term or “evergreen” assets consistently perform better.

On top of that, seasonal creatives are often:

  1. Not as polished as evergreen assets
  2. Weaker at leveraging layout and branding, because they introduce many new elements, causing lower performance for the same amount of spend
  3. A drag on campaigns, because they require spend that could be used for more meaningful tests and learnings

We A/B test creatives using the following methodology:

  1. Choose an existing creative to improve on (usually your top spending creative)
  2. Identify a key characteristic to test (e.g. visual or text)
  3. Build 1-2 creatives to test against the existing creative
  4. Conduct a round-robin test with the creatives so each has the same likelihood of being seen (same exchanges, source apps, countries, etc.)
  5. Reach a minimum threshold of 100k total impressions, 80 total installs, and a p-value of <0.01, meaning that one creative will continue to perform better than the other with 99% certainty
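
As an illustration of the threshold in step 5, here is a minimal sketch using a generic two-proportion z-test with made-up numbers (not necessarily the exact test we run):

  # Minimal sketch (illustrative numbers) of the significance check in step 5:
  # a two-proportion z-test on install rate, existing creative vs. challenger.
  from math import erfc, sqrt

  impressions_a, installs_a = 60_000, 55   # existing creative
  impressions_b, installs_b = 60_000, 90   # challenger creative

  rate_a = installs_a / impressions_a
  rate_b = installs_b / impressions_b
  pooled = (installs_a + installs_b) / (impressions_a + impressions_b)
  z = (rate_b - rate_a) / sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
  p_value = erfc(abs(z) / sqrt(2))  # two-tailed p-value

  enough_data = (impressions_a + impressions_b >= 100_000) and (installs_a + installs_b >= 80)
  print(enough_data and p_value < 0.01)  # True for these illustrative numbers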

We measure success using the following metrics:

  1. User acquisition: impression-to-install (ITI)
  2. Re-engagement: impression-to-app-event (ITA)
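
As a quick illustration with made-up numbers:

  # Made-up numbers to show how the two metrics are computed.
  impressions = 100_000
  installs = 85        # user acquisition outcome
  app_events = 40      # re-engagement outcome (target A-events)

  iti = installs / impressions    # impression-to-install (ITI)
  ita = app_events / impressions  # impression-to-app-event (ITA)
  print(f"ITI = {iti:.3%}, ITA = {ita:.3%}")  # ITI = 0.085%, ITA = 0.040%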

We use A/B testing to find out which creatives will optimize campaign performance. It also allows us to identify new trends and best practices that can be used across all formats. When we discover a new learning in your vertical, we apply those learnings to your campaign.

Liftoff will run a minimum of 2 A/B tests per quarter per account.

Additional frequency depends on the level of creative flexibility, spend, and number of assets provided.

All of Liftoff’s ads are responsive which means that they are automatically configured to fit the user’s device and screen size, creating a seamless user experience.

We’ve generally seen responsive ads perform 200% better than static ads.

Yes, as long as those videos do not feature interaction prompts that are specific to those formats. For example, many Instagram/Snapchat videos end with a prompt to “swipe up” in order to learn more. That functionality does not apply when videos run outside of those apps, which can lead to a frustrating user experience.

With the iOS14 update, users will by default be opted out of IDFA. Without IDFA, advertisers cannot track users via device ID.

  1. We’re testing and iterating our ML on non-personalized traffic.
  2. We’re working with MMPs to support a blend of attribution approaches, including SKAdNetwork.

Liftoff will not collect the IDFA for opted-out users, and will instead rely on coarse, non-user-specific identifiers, like device model and language, to train our ML models. Learn more about our privacy policy: https://liftoff.io/privacy-policy

Yes, Liftoff has secured the SKAdNetwork supply path and is partnering with exchanges and publishers to ensure we can all share the required information. You can read more about what publishers need to do to prepare for iOS14 here.

Non-personalized traffic is defined as traffic that does not include device IDs (e.g. a zero IDFA because the user has not opted in on iOS14, or a blank device ID for reasons such as Limit Ad Tracking), whereas personalized traffic does include device IDs. The use of device IDs allows us to target specific users based on past behavior.

We receive non-personalized traffic from our exchange partners. We access this traffic the same way as we access our current traffic.

  • iOS campaigns will be a mix of personalized (IDFA) and non-personalized (zero IDFA) traffic. Expect non-personalized traffic to make up a greater portion.
  • KPIs will need to be adjusted to account for non-personalized traffic performance.
  • Re-Engagement scale will drop for iOS campaigns. Most re-engagement users will exist in the UA channel as it will no longer be possible to distinguish between new and existing users in non-personalized traffic. Android campaigns will continue as normal.

  1. Enable probabilistic matching through your MMP. Granular postback data is key to performance optimization.
  2. Adjust your KPIs to measure aggregate campaign performance. KPIs will depend on the attribution window and should include down-funnel events to measure UA and RE impact.
  3. Adjust your creatives to speak to new and existing users. For example, consider changing Install CTA to Play or Shop.

We recommend not splitting campaigns for 2 reasons:

  1. The mix of personalized and non-personalized traffic will shift significantly as iOS14 adoption ramps up
  2. Keeping campaigns together allows ML to optimize for aggregate performance

Yes, Liftoff supports re-engagement campaigns for both iOS and Android. However, since re-engagement relies on device IDs, scale will decrease on iOS when users have the option to upgrade to iOS14.