Ahead of ATT’s rollout, which reached a critical mass of iPhone devices via iOS 14.6 in June, the general consensus I observed among advertisers was that fingerprinting would not be a viable means of attributing installs on iOS in the post-IDFA environment, although whispers of a “grace period” circulated. But months after ATT’s introduction in iOS 14.5, fingerprinting remains rampant on mobile, and Apple has given no indication that it plans to police its use.
Why is Apple allowing this?
First, it’s important to understand what fingerprinting facilitates. Fingerprinting is the process of aggregating hardware and network parameters from a device into a combination that is likely to be unique, or unique enough to provide a sense of identity, within some period of time. The more parameters that are combined, the less common the combination, but the primary components of a device fingerprint for mobile advertising are the device IP address, OS version, and model code. A fingerprint is not persistent, and it can expire rapidly, so fingerprinting can really only be used for install attribution: the time between a click and an app install tends to be short enough that a fingerprint match between an ad click and an app install is considered reliable. So while a fingerprint can be credibly used to attribute app installs, the same is not true for in-app events that happen hours or days later. I speak to this limitation in the ATT, one month in: privacy thresholds podcast episode with Rich Jones of Dataseat.
So while fingerprinting can help an ad network claim attribution on an install, because it is not persistent, fingerprinting can’t be used to build the kinds of behavior-based profiles that large ad platforms use to target ads. Those profiles are built through event attribution: the event stream that exists between a product and ad platform is only helpful if those events can be attributed to users and aggregated as inputs into click and conversion probability models. I describe this attribution-to-targeting dynamic in this article.
The ad platforms that sell owned-and-operated advertising inventory operate the apps in which that inventory is sold. If these apps were to fingerprint, Apple would see that happening in the App Store review process. Apple could police fingerprinting from these companies by rejecting updates for their apps until the unnecessary device parameters used in fingerprinting were relinquished. This would be a point-blank means of policing fingerprinting that would punish companies directly for that violation of ATT.
But when ad tech companies conduct fingerprinting through SDKs, they do so within the apps of their customers. Apple can similarly see this happening in app review, but in order to police it, it would need to reject updates from app developers that aren’t themselves doing anything wrong. This would be a messy solution, especially since nearly every app on the App Store that is run as anything resembling a business contains at least one SDK that is currently fingerprinting. App developers would be punished by Apple for violations committed by the ad tech companies that they are paying.
Ad networks unquestionably benefit from this dynamic:
- Because these networks mostly use contextual targeting, their efficiency wasn’t dependent on IDFA-indexed behavioral profiles and is mostly unaffected by ATT. Through fingerprinting, these companies can measure attributed installs in the same way that they did prior to ATT, without yoking a measurement methodology to SKAdNetwork. This makes these networks an attractive alternative to ad platforms selling owned-and-operated traffic;
- Fingerprinting is an imperfect and imprecise means of attributing an install. As was discussed in this podcast episode, a fingerprint match merely establishes that some conversion is consistent with a click, and so it tends to overattribute. Anecdotally, many mobile advertisers are seeing the proportion of organic installs shrink meaningfully after ATT’s introduction, potentially because those organic installs are being overzealously attributed to ad networks through fingerprinting.
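The overattribution problem falls out of the arithmetic of the fingerprint itself. A hypothetical sketch, assuming the parameter set named earlier in this article: two entirely different users behind the same carrier NAT, on a popular device model and the current OS version, produce an identical fingerprint, so one user’s organic install can be matched to the other user’s ad click.

```python
import hashlib

def fingerprint(ip: str, os_version: str, model_code: str) -> str:
    """Same non-persistent identifier as before: IP + OS version + model code."""
    return hashlib.sha256(f"{ip}|{os_version}|{model_code}".encode()).hexdigest()

# Two distinct users sharing a carrier NAT IP, a common model, and the
# current OS version (all values illustrative).
user_who_clicked_an_ad = {"ip": "203.0.113.7", "os_version": "14.6", "model_code": "iPhone12,1"}
user_who_installed_organically = {"ip": "203.0.113.7", "os_version": "14.6", "model_code": "iPhone12,1"}

# The parameter combinations collide, so a fingerprint-based matcher would
# credit the organic install to the ad network that served the other
# user's click.
assert fingerprint(**user_who_clicked_an_ad) == fingerprint(**user_who_installed_organically)
```

This is why the organic share shrinks under aggressive fingerprinting: any organic install that happens to collide with a recent click gets claimed by an ad network.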
Apple introduced Private Relay, a feature that obfuscates IP traffic in the Safari browser, with iOS 15. Private Relay also processes unencrypted traffic within apps, but that accounts for very little in-app traffic: most in-app traffic is encrypted and thus sits outside of the purview of Private Relay.
If that changes — if Apple routes all in-app traffic through its Private Relay filter — then in-app fingerprinting would become much less reliable. Given that Apple’s only other option for policing fingerprinting is to reject app updates, putting an end to fingerprinting through Private Relay is likely the best choice. But the timeline for that remains to be seen.