
My guest on this episode of the Mobile Dev Memo podcast is Mikołaj Barczentewicz, an expert on European data privacy law. Mikołaj is a law professor at, and the research director of, the Law and Technology Hub at the University of Surrey in the United Kingdom. He joined me just a few weeks ago for A deep dive on European data privacy law, in which we discussed recent developments in Europe related to data privacy and GDPR enforcement.
I invited Mikołaj back onto the podcast because we didn’t get a chance in our last conversation to cover two pieces of legislation that will perhaps catalyze the most important changes to the digital landscape in Europe since the GDPR: the Digital Markets Act (DMA) and the Digital Services Act (DSA), both of which were codified into law last year and go into effect soon.
In this episode, Mikołaj outlines the legal impact of both of these pieces of legislation in detail, with specific attention paid to the digital advertising market. We also discuss the latest news related to Meta’s recent fine by the Irish DPC for using first-party data without consent to power personalized advertising, as well as the temporary ban that the Italian DPA imposed on OpenAI’s ChatGPT.
The sound quality in this episode was unfortunately worse than usual, so a lightly edited, machine-generated transcript of our conversation can be found below. As always, the Mobile Dev Memo podcast is available on:
Podcast Transcript
Eric Seufert:
Mikolaj, thank you very much for joining me again on the Mobile Dev Memo podcast. Our last episode was very well received. Got some really positive feedback on that.
In that episode we went so deep into a lot of GDPR and ePrivacy Directive stuff that we didn’t get to the DMA and the DSA, which are two very, very large, important looming topics related to European digital privacy. So I’m very happy to have you back on to discuss those topics.
Mikolaj Barczentewicz:
Thank you for having me back. Yeah, I think there is a lot to say about the novel issues in EU law.
Eric Seufert:
Great. I want to get to the DMA and the DSA, and that’ll be the bulk of the episode. But before we do, there have been some updates on the topics that we discussed last time. There are two big ones, and they both erupted, I guess, in the latter half of last week.
The first is a little bit more clarity with respect to how Meta plans to adapt its services to this ruling from the Irish DPC, which said that it can no longer use the contractual necessity legal basis for processing user data for the purposes of advertising targeting. First of all, did I get that right? And then if I did, what actually is this? What did they announce?
Mikolaj Barczentewicz:
You’re right that according to that Irish decision, Meta cannot rely on contractual necessity. And we speculated a bit about what Meta would try to do now about their legal basis for data processing. We now know that they want to replace contractual necessity. They announced that, as of, I think, Wednesday the 5th of April, they will start using legitimate interest. They said, unsurprisingly, that they believe contractual necessity was fine, but because of that decision they are switching to legitimate interest, which they also think is fine. But as we said during our previous conversation, there are some risks with doing that, and it’s not a strategy that many other data processors choose to use these days.
Eric Seufert:
And we talked about the experience with TikTok along those lines. Just as a brief reminder: TikTok tried to shift the legal basis in Europe away from contractual necessity to legitimate interest. They had an opt-in consent pop-up.
When we’re discussing this, we’re only talking about the use of first-party data: the usage of data that is derived from the direct interaction with the product. We’re not talking about third-party data at all. This is entirely scoped to first party.
But TikTok had a consent prompt that asked the user whether they would allow their first-party data to be used for ads targeting, and I guess they wanted to get rid of it. They attempted, or rather they announced, that they would change their terms to rely on legitimate interest and not contractual necessity. And they were nudged by some DPAs to recognize that if they made that change, it would be challenged. Is that roughly correct?
Mikolaj Barczentewicz:
Yes. It was primarily the Italian authority, the Italian DPA, GPDP, who officially objected to TikTok’s plan.
Eric Seufert:
Which is a nice segue into the next topic that we’ll touch on briefly, but I want to hang out there for a second because I think it poses an interesting question. TikTok announced that they would do this, and then they ultimately abandoned that change, the idea of trying to use legitimate interest as the legal basis for processing this data. Why could Meta succeed here where TikTok couldn’t? Is there a very clear reason why?
Mikolaj Barczentewicz:
TikTok’s problem with the Italian authority was not just under the GDPR. There is also the ePrivacy Directive angle, and that Directive doesn’t contemplate a lawful basis like the legitimate interest of the data processor. So under the ePrivacy Directive, if you store information, or gain access to information stored in the terminal equipment of the user, so for example on the user’s phone or computer, then you need to ask for consent. Perhaps Meta thinks they found a way around that, that they don’t store or gain access to that information, and so they don’t have the ePrivacy problem.
But under the GDPR, what the Italian authority noted was that legitimate interest has two limitations. One is that you cannot rely on your legitimate interest as a data processor if that interest is overridden by more important interests of the data subject. There is that tension, and there seems to be some hostility; the [inaudible] organization already published a press release saying, unsurprisingly, that they believe these uses of personal data for targeted advertising just cannot be justified by legitimate interest under this balancing test.
And then there is another problem, which was also an issue for TikTok, which is that you cannot process special category data, so data revealing racial or ethnic origin, sexual orientation, and so on, based on legitimate interest. So if someone could argue that Meta processes, or just cannot avoid processing, this special category data, or that there is this issue of balancing, and the balancing of their interests versus user rights doesn’t come out in their favor, then legitimate interest wouldn’t work here.
I’m half expecting, and I think it’s very likely, that some national authorities in Europe, perhaps the Italian DPA, will try to use that interpretation against Meta. We may see some litigation on this point as well.
Eric Seufert:
Okay. Let me see if I can read that back to you. Meta may have found a… We don’t know; this is just pure speculation. But the reason Meta may be choosing this path, where they saw TikTok be unsuccessful on it, is that they may have developed some kind of mechanism for not actually reading or writing any data on the user’s terminal. They might not actually have to read or write data directly from the user’s phone, and that may relieve them of the ePrivacy Directive’s applicability here, because the ePrivacy Directive only allows for consent; there is no legitimate interest. And so they may have relieved that applicability through some novel application of technology. We’ll just wait and see.
If that’s true, well then, okay, that’s one difference. That’s potentially one difference, but then they still face the specter of having the personalized advertising use case be interrogated under the auspices of legitimate interest. And we just have no idea how that would be resolved. There’s no real precedent for that yet. Is that correct?
Mikolaj Barczentewicz:
It looks like it. What we do know is that there seems to be clear hostility against using legitimate interest from some national DPAs. We’ll probably hear more about this soon.
Eric Seufert:
All right, so segueing into the next brief update here before we get into the meat: from the Italian DPA objecting to TikTok’s use of legitimate interest to, now, news last week that the Italian DPA, the Garante… I don’t know how to pronounce it, has basically intervened in the case of OpenAI’s ChatGPT, and they’ve said that this system, this program, may not use Italian residents’ data.
Talk to me a little bit about that, because this is still a little bit unclear. This just pertains to the data that they use to train the models. They published a press release, and they said, “Look, you have to come into compliance. Otherwise, there’s a whole fine schedule.” But can you just talk to me briefly about that? Because it’s very new, but I think it would be good to clear up some of the confusion.
Mikolaj Barczentewicz:
What we can read in that decision from the Italian DPA is that, as you said, the focus is on the collection of personal data and its processing for the purpose of training the model used by ChatGPT. It’s not specifically about the processing being done by ChatGPT operating now; it’s about the training process. That’s reflected in the reasons they give. For example, they say that there’s a violation of the principle of lawfulness because OpenAI didn’t state the lawful basis for processing personal data for it, and that there was a violation of the principle of accuracy.
Although that’s interesting, because here the authority seems to be looking at ChatGPT now giving inaccurate answers and using that as a reason to say that there is a violation of the principle of accuracy regarding personal data. They also talk about the right to be informed, but again, this is the right to be informed regarding this processing for training purposes. So they think that, at least prima facie, this is a violation, and that’s why they used their power under Article 58 of the GDPR to impose a temporary limitation on data processing. But yes, that’s meant to be related to the model-training exercise.
Eric Seufert:
I mean, obviously this will be litigated, and we’ll get a sharper sense of how these new technologies will be regulated. At a high level, my belief is that we are kind of entering a new world here. There are probably ways to completely foreclose these types of technologies from operating using existing privacy law. And the question is, basically, how does it get applied?
And to your point earlier, there are some DPAs that seem to have a stricter interpretation of the GDPR than other DPAs. I guess, what is your sense for how this plays out? Because it seems like this could get very chaotic.
Mikolaj Barczentewicz:
There are many questions here, because you could theoretically imagine a training process, perhaps even for a very large language model, where you try to filter out personal data so that you don’t ingest it. But whether you can do that really depends on how you understand the definition of personal data. Because if you understand it very broadly, then it might not actually be possible to have a training process for a large language model that avoids using personal data, and then, at least it seems, under the GDPR you have all those GDPR requirements. That’s one aspect.
But there is also the other aspect: once you train your model, can you train it in such a way that even if you start with personal data, you do not end up with a model that itself constitutes personal data, because you have broken the link, de-identified the data in such a way that it cannot realistically be re-identified? That’s also a technical question about the development of those models, because if you could do it, then that should address most if not all of the GDPR concerns.
But again, whether you can do it is only partly a technical question. To a large extent it’s a legal question, because it depends not just on the technicalities, but also on how you define personal data, and how you define the anonymization or de-identification that makes something that used to include personal data not include personal data anymore.
And here, I think we touched on this last time, some national DPAs do seem to have an interpretation of the GDPR which is based not on the standard of what realistically can be re-identified, but on a standard of almost theoretical possibility: if you throw billions of dollars at it, or someone else, not even you, has another dataset that, combined with the dataset you have, lets you re-identify things, well, then that’s all personal data, and then the GDPR applies. That’s the problem: it’s a negotiation both in terms of technology and in terms of law and its application.
Eric Seufert:
Right. Yeah, so buckle up.
Okay, I want to move into the headline topics here today, which are the Digital Markets Act and the Digital Services Act. Both of those were passed into law last year. They will go into effect this year, but I think the restrictions start applying in 2024. Let’s just start there. What is the Digital Markets Act and what is the Digital Services Act?
Mikolaj Barczentewicz:
They both started as one general legislative idea. It was clear that the European Commission and the European Parliament wanted to do something about tech. And after passing the DSA and the DMA, there was a victory announcement from the European Commission. They said, “There will be a before and an after for the DSA and the DMA. Many thought that regulation would take years, would be impossible, too complicated, the lobbying too strong.” That’s a quote from the European Commission.
What do they think they achieved with those pieces of legislation? The original set of ideas was divided into two separate pieces of legislation. For the DSA, the headline version is that it ensures a safe and accountable online environment, whereas the DMA is meant to ensure fair and open digital markets. So the DMA sounds more like competition-focused regulation, whereas the DSA is more about online content, illegal content online, content moderation, and some related issues.
Eric Seufert:
Got it. Can you just talk briefly… I found this fascinating, understanding this when these two pieces of legislation were being negotiated. But can you just talk to me briefly about the trilogue negotiation process? I am just amazed that anything can ever make it out of that. That seems like a crucible. But just could you talk to me? How does a bill become a law in the EU?
Mikolaj Barczentewicz:
In the case of those two, and generally, it’s the European Commission, which is the executive government of the European Union, that has the technical capacity and the political mandate to propose new laws. That often happens after a resolution, a general political resolution, from the European Parliament, and we had such resolutions in this case. Usually the parliament, or the national governments, have some ideas.
The Commission has some ideas, and then, say after a consultation process, they prepare a draft. This draft is then considered by the European Parliament, where we have directly elected members of the European Parliament from each of the EU countries. And every draft proposed by… Oh, not every draft, but often those drafts are also considered by the Council of the European Union, or just the Council. The Council is not directly elected. It’s a representation of the national governments of each of the member states.
So we have three parts of that process. There’s the European Commission; they are the ones who draft proposals. Then there is the European Parliament, the elected representatives. And then there are the governments of the member states; it’s almost as if you had representatives of state governors deciding on legislation in Congress. The process of trilogues involves partly open, but mostly behind-closed-doors, negotiations among the representatives of those three institutions. They go through several rounds.
Much of this is really hidden. Sometimes we have leaks, and when there is a piece of legislation that receives so much media attention, like for example the DMA or the DSA [inaudible], then we have more leaks, but often EU legislation is really made in obscurity.
But not all pieces of legislation make it through this way, because sometimes the disagreements between institutions are too large, or there’s just not enough political will or legislative time to deal with them. In this case, it was a success from the perspective of just getting it done. After several rounds of negotiations, we ended up with a final text. The final text was adopted by the European Parliament and by the Council, and that’s how we ended up with the law; two laws in this case, two regulations.
Eric Seufert:
Got it. So the DMA is related to competitive issues. I think some of the very specific takeaways from it, if you look at specific instances in the digital economy where there has been acrimony, or where there has been a claim of unfairness, are cases like a platform operator also competing with the companies that sell products in its app store. That’s one example.
There are also a lot of issues there around forcing interoperability across services. If the platform operator runs some kind of service, then they have to make the APIs and the underlying machinery available to companies that also make a similar service on the platform. And then the DSA was, as you said, related to things like data transparency and content moderation transparency. That’s roughly how I think about them. Is that correct?
Mikolaj Barczentewicz:
I think that’s roughly correct.
Eric Seufert:
Okay. I had written an article right after the DSA became law, and I talked about how the DSA would apply to digital ads. You sent me some helpful information today about how the DMA applies to ads. I want to get to that later, but first I want to talk about the interoperability mandate because I think that certainly was what got the most purchase on Twitter around this legislation, about how impossible that would be to implement, and also about what it could mean for the prospects of end-to-end encryption.
What kind of trade-offs with respect to security will need to be made in order to make these messaging services interoperable? The core case there was messaging. On the iPhone, for instance, you have iMessage, and the issue with the DMA is that iMessage has to be interoperable with Facebook chat, or with WhatsApp, or Signal, or Telegram. Talk to me about that, because there are real questions there about what security sacrifices you’d have to make in order to allow for that.
Mikolaj Barczentewicz:
One thing that we should mention about both the DMA and the DSA is that at least some of the rules only apply to certain kinds of entities.
Eric Seufert:
Sure.
Mikolaj Barczentewicz:
Those special entities are called very large online platforms in the DSA, and in the DMA they are called gatekeepers. We don’t know which companies are going to be designated as gatekeepers. There are several presumptions; one is based on having 45 million monthly active end users in the EU. That’s one issue.
Of course, not every service can fall under this; it has to be a so-called core platform service, and those include social media, web browsers, and online advertising services. That’s the general thing that I think needs to be understood about the DMA.
When it comes to interoperability, that was one of the most heated debates during the legislative process for the DMA. What we have in the end is not the maximalist version of this provision, because there were ideas of having just a general interoperability requirement for all sorts of services, which ended up being limited.
And now we have Article 7 of the DMA, which only provides for interoperability of, and the name is very clunky, number-independent interpersonal communication services. Those number-independent services are not telephony services where you have a phone number, but WhatsApp, iMessage, and so on. But, of course, the question is whether any of those services will reach the threshold of being a gatekeeper. I’m not aware of the exact numbers, but I’m guessing it will probably happen to iMessage or WhatsApp. I don’t know that for sure.
But let’s assume, just for this conversation, that we’re talking about iMessage and WhatsApp. So we have two different services which are not interoperable right now. The way I interoperate with people, because I’m using both, is that I multi-home: I have both the WhatsApp and iMessage apps, and it works for me. I don’t mind it.
But the idea here is that something not good is happening to me because I have to multi-home, and that I should have one app to rule them all, one that would connect me to everyone who uses the WhatsApp network or the iMessage network.
The problem with that, and if someone’s interested in it, there is a great new paper by a very respected Cambridge University computer security specialist who shows this very well, is that this idea that you can do this while protecting user security and privacy is a bit of wishful thinking, given the current operational and technological reality. We have this Article 7 which says, on one hand, well, you have to get this done, but on the other hand, and this is my favorite provision, that the level of security, including end-to-end encryption, that the gatekeeper provides to its own end users should be preserved across the interoperable services. The idea is that this is meant to be done, but in a way that doesn’t lower the current level of security.
That’s pretty much impossible right now, and it seems like it will remain impossible in the timeframe in which those laws are meant to come into force. So either this safeguard provision will be watered down and just not treated seriously, or there will be some sort of delay. Or perhaps somehow, magically, the problems will be resolved, but that’s probably the least likely scenario in the timeframe, which is next year.
Eric Seufert:
Got it. Just to clarify there: the idea is that the interoperability requirement applies to gatekeepers, but companies that don’t qualify as gatekeepers, because they’re too small, still get to participate in it. So the two messaging services that this applies to are iMessage and WhatsApp. I was briefly trying to find Facebook chat numbers, but I couldn’t find anything; they don’t break that out.
So let’s say it’s just those two. That means they have to make their services interoperable with anyone that wants to operate on their services, but the opposite is not true. So Signal, which I’m assuming doesn’t qualify, doesn’t have to make its service interoperable for anybody. It can just exist as a standalone service, but it can integrate into iMessage if it so chooses. So if you don’t qualify, you still get to participate in the interoperability of the main services.
Mikolaj Barczentewicz:
Yes. That is a very important clarification: interoperability is not meant to be a benefit for the gatekeepers, but for those who are not gatekeepers. But the problem with that, and maybe this is not obvious, is that I think it’s at least arguable that the current or likely gatekeepers are the ones who are in a better position to actually provide this level of security than, for example, some sort of startup.
That’s leaving aside Signal, because part of the idea here was also to spur innovation, to allow especially European startups to compete with the gatekeepers. But the problem is that a two-guys-in-a-basement startup will not have the information security infrastructure that Meta or Google have. That’s not even in the realm of possibility.
Then the question is, do we treat seriously this requirement that the level of security has to be the same, or do we water it down? And if we water it down, where is the limit of watering it down? Do we really care about security or not? I mean, it may sound nice on paper, but it will be very difficult to do.
Eric Seufert:
Right. Yeah. Okay. Moving on to the DSA: the DSA has a number of implications for online advertising, although my personal assessment is that it is less restrictive and severe than legislation that was proposed here in the US last session, the Banning Surveillance Advertising Act.
Can you talk to me about how the DSA will impact the online advertising market, and why? So just that, first, what will the impact be? And then second, why do you think that is? Why would the European legislation be more toned down? Is that just because that’s what made it into law, and so that’s what happens through that negotiation process by definition?
Or is there a more radical element in the United States? And keep in mind the Banning Surveillance Advertising Act didn’t go anywhere, so it didn’t get codified into law. It would just be interesting to hear your opinion there because it does feel like that’s not what you would expect.
Mikolaj Barczentewicz:
The effect is the effect we have because of those trilogue negotiations, and it is the case that some participants in those negotiations tried to push for things like a prohibition on targeted advertising, on what’s called profiling. If I’m not mistaken, we have that, but only for minors.
Eric Seufert:
Right. Yeah.
Mikolaj Barczentewicz:
This was a big debate that ended up scaled back to just the issue of minors. We don’t have a broad prohibition of targeted advertising. That’s true. I think that’s, in a sense, a testament to some pragmatism even in the European political process, that everyone thought… oh, not everyone, but the majority there thought this would go [inaudible]. That’s the reason.
Eric Seufert:
I think it’s probably worth noting that the DMA and the DSA, probably more the DMA, were the most aggressively lobbied pieces of legislation in EU history, maybe after the GDPR. So yeah, there’s probably some influence in that respect, but I mean, I guess the negotiation process is, by its very nature, moderating, right, and so-
Mikolaj Barczentewicz:
Yes.
Eric Seufert:
… you do get some of the more extreme edges shaved down a little bit.
But can you talk to me about what those impacts are? One is that there is a full ban on targeted advertising to minors. You may not do it. I think, for the most part, that’s uncontroversial; I think most reasonable people would agree with that. The question, going into this, was how this prohibition would be determined. So the question was, well, do you have to know, with full credibility, that this person is not a minor before you can target ads to them? Or, if you know they’re a minor, then you may not target ads to them any longer, on the basis of them being a minor.
So the former would be very restrictive. The former would, in effect, be a total ban on targeted advertising because you’d have to know, with full confidence, that someone’s not a minor before you could target ads to them, and that’s very difficult to do. How could you know that? You could put up…
Mikolaj Barczentewicz:
[inaudible] on internet.
Eric Seufert:
Right. And then the latter is more loose, and I think that’s much more common sense. If you know that someone is a minor, then you may not target ads to them. I don’t know anybody that would push back on that. That’s one restriction but talk to me about some of the other restrictions.
Mikolaj Barczentewicz:
By the way, on this point of minors and targeted advertising, the language in the DSA is “aware with reasonable certainty.” Of course, I think it will still be debated what that means exactly.
But moving onward from advertising, we do have a generic prohibition on dark patterns in Article 25. Although, at the same time, the DSA states that legitimate practices, for example in advertising, that are otherwise in compliance with EU law are not to be considered dark patterns. That’s just one example of what we’ll see over and over again, which is that we have somewhat vague terms, and it will really be up to the authorities and the courts to determine what they are meant to mean in practice.
So we do have a prohibition on dark patterns, and we have provisions on algorithmic transparency; the relevant provision is entitled “recommender system transparency,” which we can discuss later. We have provisions on the labeling of advertising, that commercial communication should be labeled as commercial communication. This is not new in EU law. And then we have additional online advertising transparency for very large online platforms, where we’ll have open databases of information about ads currently running or run in the past year.
And then probably the last thing we could discuss is the data access regime, where researchers will be able to get access to perhaps internal databases, maybe not live production databases, but some copies of the databases or code bases of very large online platforms. That may also be used to scrutinize the advertising ecosystem. It’s not clear exactly how it will be used, but it may have some impact.
Eric Seufert:
Right. I think my takeaway from these requirements and stipulations is that they mostly apply to the relationship between the ads platform and the consumer. A lot of this is who targeted me? What parameters can be used to target me? What parameters were used to target me for this specific ad? What ads are being shown by this advertiser right now? Can I go look through that?
By the way, Google just announced that they’re going to introduce that soon, most likely in preparation for becoming compliant. And then the other piece is the relationship between the platforms and regulators. So, algorithmic transparency: Twitter’s algorithmic transparency was very interesting, seeing all the privileges for Elon literally hard-coded, not even using a user ID. But anyway, I guess we’ll get that for…
Mikolaj Barczentewicz:
[inaudible].
Eric Seufert:
… just isElon. I guess we’ll get that for Facebook, and maybe there’s a similar isZuck trigger there. But nonetheless, that’s more the relationship between regulators and the ad platforms. There wasn’t a whole lot, in my mind, in the DSA that applied to the relationship between advertisers and platforms, but we’ll talk about that in a second.
Mikolaj Barczentewicz:
Can I ask you a question? I’m curious what you think about Article 39, on additional online advertising transparency, where we’ll have those compulsory open databases of information about advertising: the content of the advertisement, who paid for it, and all the targeting criteria. Not the price, that’s not here, but the information on who paid, and whether that’s a different entity from the one being advertised. But this is meant to be available through APIs, so it’s not just for users. Right?
Eric Seufert:
Right.
Mikolaj Barczentewicz:
If it’s available through APIs, then it seems like… I think the idea was that this is going to be used by researchers, but my first intuition was that this is going to be used primarily by the industry. So you will probably very quickly have products that tell you what your competition is doing, whether you’re an ad agency or just a client. I can imagine those products being developed very quickly. I wonder if you think this will have any impact?
Eric Seufert:
You’re right. I agree that the main consumer of this information will be practitioners. They will be operators.
Mikolaj Barczentewicz:
Yeah.
Eric Seufert:
I think the use case is intended for researchers and regulators, but I think the primary consumers will be the operators, and I think that’s demonstrably true now because Facebook has the Facebook Ad Library.
It doesn’t provide a whole lot of data. What it’s primarily used for now is just looking at what ad creatives your competitors are running. The problem with it is that it reduced the time to ubiquity: if I have an ad, and I’ve been running it for a while, by proxy, that’s a signal that it’s a performing ad. As soon as that is the case, all my competitors will copy it, almost pixel for pixel. That’s one downside. I think the upside is much more substantial: it’s just having a lot of transparency in what ads are being run.
But no, you’re totally right. The API will be ingested, probably by tools that companies subscribe to, to get instant alerts when their competitors are running ads, along with all the new data that is mandated to be made available. Because with the Facebook Ad Library, you get to see spend levels, but you don’t get to see spend amounts and stuff like that. You have to make a lot of assumptions about how much money has been spent on these ads.
Mikolaj Barczentewicz:
But we don’t have spend amounts here. We have some basic stats, the number of users reached, and aggregate numbers broken down by member states, but I don’t think we have spend here.
Eric Seufert:
Yes. The view counts are also bucketed on Facebook now, so you don’t get to know exact view counts. That could be used as a proxy for spend, right?
Mikolaj Barczentewicz:
Oh, okay. I see.
Eric Seufert:
It’ll be used tactically by operators, and it probably will be used, to some degree, by regulators and researchers.
Mikolaj Barczentewicz:
Yeah. Well, and it will not be just Facebook, but all of those-
Eric Seufert:
Yeah, right, exactly.
Mikolaj Barczentewicz:
… providers that get classified as VLOPs.
Eric Seufert:
Okay, so we talked about the DSA’s application to online advertising. You sent me a bunch of interesting points in the DMA that also apply to online advertising. Can we walk through those too? You sent me four, and the point you made in the email when you sent them was that this is very much going to be a question of enforcement and interpretation, because you could make the case on all of these that this could be the end of the world, or that it’s no big deal. So let’s just walk through those.
Mikolaj Barczentewicz:
So the first one actually goes back to our first topic today, Meta’s move to legitimate interest, because of Article 5(2). Of course, again, I’m just assuming for the sake of argument that, for example, Facebook will be covered as a core service, as a gatekeeper. Which may also be litigated, but let’s assume. What Article 5(2) says is that the gatekeeper should not process personal data for the purpose of advertising relying on anything other than user consent. So it excludes the possibility of using contractual necessity or legitimate interest.
So in this instance, this whole big debate and the Irish investigation and so on is being made moot by the DMA. So yes, I’m guessing that Facebook will have to adjust to it, assuming that it’s designated a gatekeeper. But that’s an interesting resolution, and kind of a callback, perhaps, to our first conversation. You have to use consent. It’s almost like the ePrivacy Directive. That’s the first one.
The second one is in the same article: Article 5(9). There we have a provision that’s meant to regulate gatekeepers who offer online advertising services. It deals with information that the gatekeeper shall provide to advertisers, or to third parties authorized by advertisers, so I guess ad agencies. Here, there are requirements to provide information on a daily basis, free of charge, concerning each advertisement.
The question is, not being a practitioner, it’s not easy for me to judge whether those letters (a), (b), (c) here introduce much of a novelty. So what do we have here? The price and fees paid by the advertiser, including any deductions and surcharges, for each of the relevant online advertising services; then the remuneration received by the publisher, including any deductions and surcharges; and the metrics on which each of the prices, fees, and remunerations are calculated.
I know that the European Commission justified those provisions by saying that this kind of transparency does not yet exist in the ad ecosystem, that that’s not the current situation. But I’ll be curious what you think of it, and what other experts in the field think: is this really new, or is it just something that’s already provided?
Eric Seufert:
Well, no. This is the crux of the DOJ suit against Google. This is all opaque, and especially so… I mean, this could be a totally separate topic, but you’ve got the publisher payouts. I don’t know if you read the DOJ case, but a lot of that had to do with Google adjusting the bid on the advertiser’s behalf. And if that bid was going to an external service, they could adjust it to make the bid on its own service more competitive.
And all of that happened without any real ability for an outsider to know with certainty that it was happening. I mean, there was an understanding that it was happening; that’s why publishers reacted by changing the bid floors for different networks, to try to move more of their impressions to be served by non-Google entities. There was an understanding that that was happening under the hood, but the DOJ has shone a spotlight on it.
But no, there is no law that requires that. And I think this will just put a lot of price pressure on ad tech middlemen. I think it’ll be apparent just how much of a rake they’re taking, and then it’ll just become competitively advantageous to charge less: I charge less, I get more business. So I think this will have an impact, but yeah, this is the heart of the DOJ case.
Mikolaj Barczentewicz:
And we have it from both sides, by the way. I just mentioned Article 5(9), where the gatekeeper provides advertisers with information, but Article 5(10) does the same for publishers. It talks about gatekeepers providing information to publishers, including information on the price paid by the advertiser. So you get transparency from both sides of this relationship. That’s what Article 5 says specifically about advertising.
Eric Seufert:
Yeah, so it’s just the other side of the coin there.
Mikolaj Barczentewicz:
And then we have Article 6. I’m not going to go too much into the differences between Articles 5 and 6, but Article 6 also has some interesting potential duties that may be imposed on the gatekeepers. Beginning with Article 6(10): here we have a provision that doesn’t speak about advertising directly, but it seemed to me that it could be relevant because it talks about gatekeepers and their business users. If you advertise through a gatekeeper’s service, then you’re a business user of a gatekeeper service.
What is this meant to do? It is meant to give those business users a right to get, free of charge, effective, high-quality, continuous, and real-time access to, and use of, all data, including personal data, provided for or generated in the context of the use of the services. The point is that any data that’s generated by you, or by the users with whom you’re interacting through that platform, is meant to be made available to you for free, through an API, continuously, and so on.
This is one example of a provision that, as you said, could be a revolution or could be a bit of a nothingburger. It will really depend on how it is interpreted in practice, but it seems to me that at least there’s a possibility that this could change something in terms of, yeah, data access.
Eric Seufert:
Right. Let me just run all those back to make sure that I’m clear, and hopefully to clarify for the audience too. So we’ve got four articles/sub-articles here that are relevant. The first is Article 5(2), which basically says… And here is where I may be off in my interpretation, but my read on this is that it essentially codifies ATT into law. It says you must receive consent for using third-party data for the provision of advertising services. That doesn’t apply to first-party data. It’s only third parties that…
Mikolaj Barczentewicz:
I should have said that. You’re absolutely correct that this is different from our first topic today, in the sense that it’s third party versus first party.
Eric Seufert:
Right, so 5(2) essentially is legal ATT. ATT is law now. You must get consent if you’re going to collect that third-party data for ads targeting.
The second article/sub-article is 5(9), which just mandates that these platforms offer up some minimal level of transparency. It sets the minimum standard for transparency that platforms must offer to advertisers around pricing and fees paid. 5(10) does the same for publishers, right?
Mikolaj Barczentewicz:
Yeah.
Eric Seufert:
So it establishes a minimum standard of transparency. And then 6(10) through (12) say that data generated by the advertiser, or by the people who interact with the advertiser’s ads (it might apply to other use cases, but here we’re talking about the advertising use case), must be made available to the advertisers, so that they can understand, with more transparency, what the effect of their advertising was.
Mikolaj Barczentewicz:
It seems like that’s what the effect of Article 6(10) would be. But by the way, we still have Articles 6(11) and 6(12), which are slightly different. They are not just extensions of Article 6(10), because Article 6(11) talks about providing information to other online search engines. If you’re running an online search engine that’s a gatekeeper, and it’s obvious which one is meant here, then you have a duty to provide ranking, query, click, and view data from that search engine.
I wonder if that’s going to affect the ad business, at least in the sense that it will be an interesting source of information for ad researchers and marketing researchers. Right?
Eric Seufert:
Mm-hmm.
Mikolaj Barczentewicz:
This will be query click and view data.
The problem is, of course, that it says personal data should be anonymized, and it’s very difficult to see how queries, which very often betray personal data, can be anonymized. But, well, that’s just one of those contradictions in the DMA. That’s 6(11).
And then 6(12): that’s just a general FRAND requirement, fair, reasonable, and non-discriminatory conditions of access, for whenever a gatekeeper is providing its services to business users. And it’s not just limited to app stores; it also applies to search engines. I wonder if that could also extend to advertising, but perhaps not. That I am not sure about, but these two are also separate from 6(10).
Eric Seufert:
Got it. Okay, so these are the four articles from the DMA that-
Mikolaj Barczentewicz:
[inaudible]. Yeah.
Eric Seufert:
So those are the four points from the DMA that impact the advertising space, and we talked about the different aspects of the DSA that impact the advertising space, but these bills were not targeted at advertising. Advertising is one behavior that these laws are designed to regulate, but there are a whole bunch of other use cases that these laws will regulate. Obviously, there’s the provision of an app store: the DMA will have a tremendous impact on the app economy with respect to who can run a store and how the store can be operated.
We’ve already seen some companies prepare for that eventuality. We heard Microsoft say two weeks ago that when the DMA goes into effect, they will launch a game store on iOS and Android. They are going to do that. I speculated two weeks ago about what would happen if Meta did that. If Meta did that and they ran the store, and your ads clicked through to their store, they would have a full chain of custody throughout that user journey. And then they’d be able to use that data for ads.
I think there are myriad ways that the DMA will upset the status quo as it stands now. And then, obviously, that’s just in Europe. The applicability is just for Europe, but we’ll see what American legislation or regulation follows in the DMA’s and DSA’s footsteps.
Okay, so we talked about the different ways in which these laws will apply to different use cases, and we talked about what these laws are. Let’s talk about how these laws get enforced, because that, to me, is potentially the biggest question here. We know what the laws say now, and we can probably guess at how they’ll be interpreted, or with what level of vigor.
How do they get enforced? How does the EU staff up a team of sufficient size, with sufficient domain expertise, to police this? Especially when you talk about algorithmic transparency, and about some of the elements of the DSA that pertain to what is essentially IP. It’s IP for those companies. It has been developed over years and years and years, and these companies recruit extensively from Ph.D. programs for their marketing science divisions and their ad platforms. How does the EU enforce this?
Mikolaj Barczentewicz:
Depending on your perspective… If you are in the European Commission, I think the official line is that they are ready for it, that they are now hiring. I think they announced that they will hire 100 full-time staff in one of the directorates-general, DG Connect. Those people will be involved in enforcing and studying issues related to the DSA and the DMA; there will be one DSA/DMA task force. So the official story is that this will be sufficient, that this will allow the Commission to achieve its goals.
Of course, there are critics who think that this is not enough, that even 100 staff will leave the enforcers at a very significant imbalance vis-à-vis the companies that they deal with. And possibly this is not such a large number, given that there are so many different nuances to all those obligations, that instead of sensible guidance we may end up with more litigation. So there is that risk. Depending on your position, you could see the Commission as being ready, or as being understaffed and unprepared for the task. There are definitely two views on that.
In terms of what the Commission is meant to do: under the DSA, the role will be a bit like under the GDPR, although with a difference. There will be national authorities, so-called digital services coordinators, so not DPAs but DSCs, and each country will designate a DSC authority for itself. But the Commission, unlike under the GDPR, will have a bit more direct investigatory authority, at least over the very large online platforms.
And that is a bit of a compromise, because one argument was that the Commission should have much broader investigatory authority to avoid the problems with the GDPR, according to some. So that’s the compromise: there is a somewhat split competence, and the Commission gets those very large platforms. And then fines: here we have up to 6% of total worldwide turnover. So that’s the DSA.
And the DMA is fully enforced by the Commission, with fines for non-compliance of, in the first instance, up to 10% of total worldwide turnover, and up to 20% for repeated offenses. There is even a provision in Article 18 that, if there is systematic non-compliance, the Commission may, by implementing acts, order behavioral or structural remedies, so something like divestiture. The fines are very high, which is certainly something we’re used to under the GDPR, and then under the DMA we have this additional tool of behavioral and structural remedies.
Eric Seufert:
Yeah, it’s interesting because if you look at the case of Twitter open-sourcing the algorithm, I mean that was combed through in a matter of hours by tens of thousands of people. Right?
Mikolaj Barczentewicz:
Yeah.
Eric Seufert:
And all of that insight was surfaced very, very quickly, because essentially you syndicated the job of combing through the algorithm to tens of thousands of people who were very interested in it. Little pieces were trickling through my timeline within minutes; people were finding really interesting stuff.
I wonder why they didn’t take that approach, because I mean, obviously, it’s difficult to do that with some things that are truly trade secrets, right, that are-
Mikolaj Barczentewicz:
Intellectual property.
Eric Seufert:
… yeah, the actual intellectual property, but some of this… I mean, the algorithm, you could argue that it needs to be public. You can make the argument that it needs to be public so that people understand how their feed is curated.
I mean, I suppose you could consider it to be a trade secret, but you could test the different combinations of these parameters so readily that I don’t know that, in effect, it is. Right?
Mikolaj Barczentewicz:
Yeah.
Eric Seufert:
So if I wanted my algorithm to behave like TikTok’s, I could just test a bunch of different… I could test the sensitivity of content to various pieces of feedback from users. Now, to my mind, when you look at the Twitter algorithm, the real trade secret is the ability to make those predictions. Because, okay, if we predict that this is going to happen, then we apply these rules. The ranking wasn’t based on observed outcomes; it was based on predicted outcomes. So my sense is that that is the real IP. That’s the product. The algorithm is just a thin layer of logic on top of it.
So if you have a DSA, or sorry, a European Commission, that is just woefully understaffed to actually interrogate what’s in these systems, and the DSA says, “We mandate that you make the data and the algorithms accessible to vetted researchers,” that seems to be the stuff of conspiracy theories. You’re going to get a whole bunch of people saying, “Oh, what do they know?” Why not just make it all open? If the algorithm is all open, everyone can just peek into it.
I was curious about that because… I don’t know. It just feels like, with vetted researchers, there’s this shadowy association of people that get the access and other people don’t. And what do they do with it? I would just assume you’d want to sidestep that question completely, but they didn’t do that.
Mikolaj Barczentewicz:
Yes. That’s true, but aside from trade secret issues and intellectual property issues, there are also questions because the algorithm is one thing, but what if, at least partially, the algorithm is not really what Twitter just published, but a machine learning model? Which, perhaps, I don’t know, but theoretically could include some personal data, for example, and then you could have issues of uncovering personal data if you publish that.
I mean, of course, that was also an effect of the negotiations; this is also a compromise that we ended up with. But I see what you’re saying, that perhaps, at least for some algorithms, it would have been more effective just to have sunlight. But yes, there are those problems.
Eric Seufert:
We can finish on this. The FTC just announced a program where they are going to hire some number of technologists. I think it was a pretty substantial number of people they want to hire. Now, the EC is doing the same thing to enforce these laws that are going into effect very soon. Will they be able to accomplish that? Do you feel like they’re…
I was talking to someone the other day, and they said, “Look, I think a lot of people would love to go and work for the FTC for a couple of years because, after they do, they’ll have a deep understanding of how that organization functions and could be very useful inside of big tech.” You go and you’ve got this very marketable skillset. You take it to the FTC. You probably take a cut in pay, but after that, these big tech companies would be tripping over themselves to hire you, because you’d have an insight that they don’t really have, which is how these organizations function.
Do you see that happening in Europe? Will there be a lot of people that say, “You know what? I’m totally willing”? I understand some people would do it just because they feel like it’s a duty or obligation; I’m not discounting that. But for the people that are purely motivated by money, this actually could be a pathway to making more money.
Mikolaj Barczentewicz:
That’s a possibility. I don’t know how the recruitment process is going for the European Commission, but I did see some announcements, and I think all of them related to officials from national authorities, so high-ranking officials from national authorities or national governments joining the Commission. So people who worked on those files in the negotiations are now being scooped up by the Commission. But we’re talking about officials, so bureaucrats or politicians or political operatives, rather than technologists.
So yes, it is an interesting question whether the Commission will be able to capitalize on that effect. I’m sure what you just suggested is real, so I wouldn’t be surprised if they managed to get some good technologists as well. But for now, at least from public announcements, it seems to be mostly former officials.
Eric Seufert:
Got it. Mikolaj, this was again a very fascinating discussion. I’m sure it will be as well received as the first was. Please tell the audience where they can find you, how they can engage with you, how they can reach you.
Mikolaj Barczentewicz:
You can follow me on Twitter @MBarczentewicz, like my name. I’m sure there will be a link, so that’s probably easier than me pronouncing it.
Eric Seufert:
That’s right. I’ll include a link in the show notes. Mikolaj, thank you very much for your time. I appreciate you taking the time to chat with me today.
Mikolaj Barczentewicz:
Thank you.