Mobile Fraud Prevention

By Morgan Friberg | August 6, 2019

With over a decade of experience, Andreas Naumann, the Head of Fraud at Adjust, is one of the leading authorities in mobile advertising fraud prevention.

No stranger to Liftoff, Andreas has been featured as a Mobile Hero (and comic character), participated in a podcast and webinar, as well as penned numerous guest blog posts including: “Mobile Ad Fraud Q&A with Adjust,” “Mobile Ad Fraud Expert Answers Three Important Questions,” “Mobile Ad Fraud: 4 Best Practices to Fighting Click Spamming,” and “3 Common Types of Mobile Ad Fraud and How To Spot Them.”

With his vast mobile fraud knowledge, Andreas monitors the #fraud channel on the Mobile Heroes Slack Community. We recently held a Fraud AMA (Ask Me Anything) with Andreas, which you can read below. 


What are the newest types of mobile ad fraud you are starting to see?

We haven’t seen any completely new fraud schemes in 2019, so SDK Spoofing and Click Injection are still the newest contenders. We have, however, experienced massive developments in the efficiency and accuracy of these known fraud schemes.

For instance, Click Injection now mostly happens via the harder-to-detect “content provider” exploit, so both the decision and the data transfer are managed server-side. This means the exploit can be turned off at any time.

SDK Spoofing has also been refined considerably. Successful spoofers no longer rely on curated data; instead, they use live apps to maliciously collect device data in order to spoof installs and post-install engagement for real devices that never installed the target app.

What should marketers do to protect themselves from Click Injection and SDK Spoofing? 

That depends on what you are already doing. At Adjust, we employ strong deterministic solutions, so both Click Injection and SDK Spoofing evolutions are still fully covered. Spoofing, unfortunately, is nearly undetectable, but not unstoppable. Making sure it is not possible to spoof the SDK is the best protection.
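To make the idea of an unspoofable SDK concrete, here is a minimal sketch of one common anti-spoofing pattern: signing each payload with a shared secret so the backend can reject requests whose signature doesn't match. This is an illustration, not Adjust's actual mechanism; the secret, field names, and key handling (rotation, replay protection) are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret compiled into the app; real-world key
# handling, rotation, and replay protection are vendor-specific.
APP_SECRET = b"example-secret"

def sign_payload(payload: dict) -> str:
    """Return an HMAC-SHA256 signature over a canonical JSON encoding."""
    message = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(APP_SECRET, message, hashlib.sha256).hexdigest()

def verify_payload(payload: dict, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_payload(payload), signature)

install = {"device_id": "abc-123", "event": "install", "ts": 1565000000}
sig = sign_payload(install)
print(verify_payload(install, sig))                                # True
print(verify_payload({**install, "device_id": "spoofed"}, sig))    # False
```

A spoofer who fabricates or alters a payload without the secret fails verification, which is why hardening the SDK itself is the best protection against spoofed installs.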

Are you familiar with any of the third party fraud tools (Machine, Scalarr, Interceptd)? Do you have opinions on them and what value they can add?

I have limited exposure to third-party vendors, but my understanding is that many third-party fraud tools do not disclose their methods of detection and mitigation. They only explain which fraud types they engage with and claim to detect them. Without knowing the actual methodology, marketers cannot judge their results.

When comparing third-party fraud vendors, I recommend marketers consider the following points:

  1. Avoid false positives – be careful with vendors that simply claim to find more fraud. Flagging more isn’t difficult; flagging the correct amount is the hard part.
  2. Work with transparent vendors – working with a vendor that is not transparent makes fraud claims opinion instead of fact. It also removes your chances of judging if a new or persistent fraud scheme will be sufficiently covered by that solution.
  3. Be careful with customization – the fraud tools mentioned above can be customized. In particular, they offer post-install funnel analytics and compliance checks that can be tailored to fit each individual app. However, in this customization lies the inherent risk of misconfiguration, so use it sparingly and don’t create false positives for yourself.

Can you give some advice on how to spot post-install fraud (e.g. event fraud such as purchase)?

This question is not easily answered in general terms. It depends on the app and its user conversion and event funnels.

If you have a clear hierarchy of events that can only happen in chronological order, then you would start looking at outliers from that structure first.
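As a rough sketch of that idea, the check below flags users whose events skip a mandatory step in a chronological funnel. The event names and hierarchy levels are made up for the example; a real app would substitute its own funnel.

```python
# Hypothetical event hierarchy: each event may only occur after its
# predecessor. Levels and names are illustrative, not from any real app.
FUNNEL_LEVEL = {"install": 0, "registration": 1,
                "tutorial_complete": 2, "purchase": 3}

def out_of_order_users(events):
    """Flag users whose events skip a step in the funnel hierarchy.

    `events` is a list of (user_id, event_name) tuples in timestamp order.
    """
    seen_level = {}
    flagged = set()
    for user, name in events:
        level = FUNNEL_LEVEL[name]
        last = seen_level.get(user, -1)
        if level > last + 1:  # skipped a mandatory earlier step
            flagged.add(user)
        seen_level[user] = max(last, level)
    return flagged

events = [
    ("u1", "install"), ("u1", "registration"), ("u1", "tutorial_complete"),
    ("u2", "install"), ("u2", "purchase"),  # purchase with no registration
]
print(out_of_order_users(events))  # {'u2'}
```

Users flagged this way are the outliers worth investigating first, since legitimate traffic should move through the funnel in order.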

If you have the chance to check external systems for the validity of an event, e.g. purchases or registrations, you should tie that data in and look for discrepancies.
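A minimal version of that cross-check is simply a set difference between purchases reported by the attribution SDK and orders your payment backend (or store receipt validation) has actually confirmed. The transaction IDs below are invented for the example.

```python
# Hypothetical data: purchase events reported via the attribution SDK
# vs. orders confirmed by the payment backend / receipt validation.
reported_purchases = {"txn-1", "txn-2", "txn-3", "txn-9"}
verified_orders = {"txn-1", "txn-2", "txn-3"}

# Reported events with no matching verified order are candidates for
# spoofed post-install engagement and warrant a closer look.
unverified = reported_purchases - verified_orders
print(sorted(unverified))  # ['txn-9']
```

The discrepancies, not the matches, are the signal: a source that consistently delivers purchases your backend never saw is worth a fraud review.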

If you are worried about spoofing, the first step should be to secure the install. A spoofer can’t make up events for installs they do not know about, and they will also not get paid for events on installs not attributed to them.

What are your thoughts on fingerprint vs identifier attribution and their correlation to potential fraud?

I’m a little biased when it comes to this question — in my opinion, fingerprinting as an attribution method is outdated.

At Adjust, fingerprinting is off by default during campaign creation and needs to be opted-into by the advertiser.

On Android it should not be relevant at all, as the Google Play referrer is available. On iOS there is currently no clean way of attributing in lieu of an IDFA, as device and OS fragmentation is pretty much non-existent. Combined with carrier-grade Network Address Translation (NAT), the uncertainty of fingerprinting on iOS is below acceptable levels, and it should only ever be used in edge cases.

What is the best way to detect fraud coming from the web on iOS and Android?

Web attribution, on iOS and Android for that matter, is currently based on fingerprinting. But it doesn’t have to be: the native browser should pass through the respective advertising ID for advertising purposes.

Currently there is no way around fingerprinting for mobile web, especially on iOS. On Android, I would still argue that the referrer is the best fit.

What would your ideal “anti-fraud stack” consist of?

A good anti-fraud stack on the advertiser side starts with at least one employee dedicated to fighting ad fraud. That person should be involved in all marketing-related processes and decisions: from selecting an MMP and deciding which SDKs go into the app, to agreeing on BI tools and performance measurement. This designated person should stay on top of the topic as best they can and employ internal checks, as well as utilize external tools or suites.

One of the focuses should be achieving transparency in supply and making sure that the full user conversion funnel is available for scrutiny. This will allow for personalized KPIs and sanity checks which will identify most fraud schemes right away. From there, defensive measures can be designed or bought externally as needed.

In my book, handing the anti-fraud stack off to an external party is something an advertiser can do, but they should not do it blindly. An advertiser needs to have at least one person on staff that can scrutinize and control different tools and solutions in order to continually add value.

In your opinion, why do publishers like Cheetah remain on the Google Play Store after being called out for fraud, whereas apps by DO Global or DU Apps were banned?

I do not have sufficient insight into the anti-fraud efforts of the Self-Attributing Networks (SANs) or the Play Store, so my answer would be pure speculation.

I would imagine the removal and subsequent re-acceptance of fraud-generating apps are tied to the degree of the offense committed. Attack vectors like compromised SDKs or compromised app development environments count against the developer whose apps get removed. However, there are also cases where the app developers or publishers actively committed fraud, and in these cases, I’m sure Google Play is a lot more stringent when it comes to allowing apps, developers, or publishers back into the store. We of course welcome policing by the Google Play Store but don’t rely or focus on it for our efforts. Calling out or suing people doesn’t solve the problem at hand, as company names are interchangeable in today’s world.

How does the incentive structure need to change in ad tech to have a positive impact on reducing mobile ad fraud?

This is a very interesting and important question from my point of view. The incentive structure on the advertisers’ side should make sure that marketers have a decent chance to do their best work — and be rewarded for it — while actually working in favor of their company and creating additional value.

It might be easier to start with a negative example. A setup I see very often is one where UA managers are measured on spending their budget fully and acquiring the highest possible number of new users. In turn, marketers are incentivised to spend the most money on the sources with the cheapest traffic at the highest volume. Naturally, the supply-side partners will optimize towards the same goals, as this increases their revenue and earnings.

Unfortunately this structure breaks when it comes to publishers. A publisher can’t offer their supply at the same low price, since it is limited and they have to make a living. Enter the fraudster. The fraudulent publisher has convinced the networks (and therefore the advertisers) that there is an unlimited supply of cheap and high volume traffic, which is mostly click spam and click injection traffic. This cannibalizes the organic channels, paid channels and other non-digital marketing channels an advertiser is running. In essence, the advertiser buys users they already convinced to use their app for a discounted price through mobile performance channels.

Now, the “perfect incentive scheme” is hard to pinpoint in general, but tying traffic-quality metrics into those schemes seems a sane approach: the highest volume, at the best price, while still achieving decent click-through and conversion rates.
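One way to operationalize that combination is to rank sources on volume and price only after they clear minimum quality floors. The sketch below is illustrative only; every number, threshold, and source name is invented, and real floors would be calibrated per app and channel.

```python
# Illustrative source ranking: reward volume and price only for sources
# that clear minimum click-through (CTR) and conversion-rate (CVR) floors.
# All values and thresholds are made up for the example.
MIN_CTR, MIN_CVR = 0.005, 0.01

sources = [
    {"name": "net_a", "installs": 9000,  "cpi": 0.80, "ctr": 0.0120, "cvr": 0.020},
    {"name": "net_b", "installs": 20000, "cpi": 0.30, "ctr": 0.0004, "cvr": 0.002},  # click-spam profile
    {"name": "net_c", "installs": 5000,  "cpi": 1.10, "ctr": 0.0150, "cvr": 0.035},
]

def eligible(s):
    """A source qualifies for budget only if its quality metrics clear the floors."""
    return s["ctr"] >= MIN_CTR and s["cvr"] >= MIN_CVR

# Among eligible sources, prefer higher volume at a lower CPI.
ranked = sorted((s for s in sources if eligible(s)),
                key=lambda s: (-s["installs"], s["cpi"]))
print([s["name"] for s in ranked])  # ['net_a', 'net_c']
```

Note how the cheapest, highest-volume source is excluded entirely: its tiny click-through rate is exactly the profile that click spam produces, which is the failure mode the pure volume-and-price incentive rewards.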

Where do you get your fraud news? Any blogs, newsletters, Twitter users, etc. to follow?

I have several Google Alerts running. I check LinkedIn daily and follow experts such as Dr. Augustine Fou. I also get articles sent my way from every direction, eliminating the chance of me ever missing one. I would also hope everybody in the industry takes the time to read Craig Silverman’s articles on ad fraud.

Any good tools out there to monitor malicious SDKs in apps? 

Rogue SDKs are usually found out by security companies through weeks of research and reverse engineering. I do not know of any tool that would be able to replicate that work reliably.

If the idea is just to check internet communication that might be unwanted or malicious, Charles Proxy is a great tool to get an overview of the communication a mobile device sends off to the internet. Ad requests that execute without an ad being seen on the screen could be recorded, however it’s not possible to pinpoint the actual software doing this.

How large would a blacklist need to be for it to be effective? 10K apps? 100K apps? Tips on which app categories or keywords to avoid (e.g., flashlight)?

I would argue a blacklist gets less and less effective the bigger it gets. The problem is that most fraudsters are pretty successful career criminals with a deep understanding of the industry. Therefore they have little trouble getting around blacklists, since they can usually drop their domain, company name, address, server, bank account, etc. Then they will be back with new data, a new name, and new faces. Legitimate companies that get put on a blacklist, accidentally or on purpose, have little to no chance of ever getting off it. As a result, you end up with a bigger and bigger blacklist carrying a higher and higher share of useless, outdated, and incorrect data.

Fraud is not about categories, industries or countries. It is about exploiting low security and low awareness. The biggest fraudsters do not have flashlight apps; they have very popular apps that most likely already monetize through advertisements. It’s just that they maximise revenues by not playing by the rules.

What is the difference between fraud on iOS and Android?

The only big difference is that Click Injection fraud schemes are tied to the exploits available in the Android operating system, so Click Injections are not found on iOS. All other fraud schemes are applicable to all platforms.

We also do not see a significant difference in the amount of fraud on the different platforms. However, iOS has more click spam than Android, while Android has click injections as well.