
Wednesday, July 30, 2025

Meta Sudans Redux: AGCM at the Gates


Fresh off the press release: the Italian antitrust authority has knocked — quite literally — on Meta’s door. A dawn raid hit the company’s Italian offices just yesterday. Perhaps it’s just a coincidence, but there’s been a notable uptick in enforcement on this side of the pond since that so-called 'windmills' trade deal struck on the golf course. 
The opening decision spans some fifteen pages, and it's apt to delight both die-hard antitrust nerds and those somewhat disillusioned by competition law, who now see more promise in a concerted regulatory front combining multiple toolkits at once. As you, Wavesblog Reader, will have gathered by now, I firmly belong to the latter camp — and it's from that vantage point that I'll offer a few brief reflections on this decision.

For many of us in the latter camp, this 2014 Venn diagram from the European Data Protection Supervisor remains, to this day, a quietly persistent source of inspiration. The attentive Reader will have noticed that this diagram makes no mention of the Digital Markets Act (DMA) — unsurprisingly, as it entered into force a full decade after the diagram first appeared. Nor does it include more recent hybrid instruments, such as Section 19a of the German Competition Act. I would place the DMA precisely at the intersection of those three circles — where data protection, competition, and consumer protection converge. Technically speaking, the DMA operationalises elements from each of these domains. From competition law, it borrows, for example, the focus on market power (albeit reinterpreted) and on barriers to entry; from data protection, it incorporates core principles such as consent, data minimisation and the 'by design' principle; from consumer protection, it reflects concerns such as user manipulation. But this is far from a flattening or plain subsuming of these established legal domains: something original has emerged, with a distinct regulatory logic.

Back to the AGCM's decision. If you’re anything like me, you did everything you could to avoid clicking on the shimmering little button that popped up on your WhatsApp screen — but you couldn’t exactly miss it either. As the Italian authority put it, the "Meta AI service was made autonomously available to users without them having taken any action to that effect. In other words, Meta chose to pre-install Meta AI and place it in a prominent position on the screen, making the service immediately available to all its WhatsApp users.” While neither the icon nor the search bar can be removed by the user, the Italian authority explained that users could, if they wished, initiate an independent chat with other AI services — such as ChatGPT — by manually adding them as a contact. I tried it out immediately (ah, the things we do in the name of research: I now have ChatGPT sending me unsolicited messages). And yet, the Meta AI icon stays firmly in place, as does the search bar, helpfully prompting: “Ask Meta AI or search."

The Italian authority then embarks on what can only be described as a Kafkaesque journey through Meta’s “policies.” At first, Meta appears to say they don’t use your interactions with Meta AI to train the model — and then, a few brave clicks later, they say they do. That is, except for private messages, unless you or someone in the chat chooses to share those private messages "with our AIs" too. And in any case, as the AGCM immediately notes, it was Meta itself that stated WhatsApp user interactions would be used to train and improve their models. But it doesn’t stop there. As the Italian antitrust authority points out, Meta’s Privacy Policy suggests that WhatsApp users may gradually receive a personalised version of the service — based on the information they themselves have provided to Meta. In other words, each interaction with Meta AI feeds the system with information, some of which is then used to deliver increasingly personalised outputs. The aim? As Meta puts it, to make the responses from Meta AI more “useful and relevant.”

Already at this point, there would be plenty to say from a data protection, consumer protection, and DMA perspective. But for now, let’s stick to plain antitrust logic — though, as we know, it’s never that plain, and at times not that logical. This is an investigation under Article 102 TFEU, aimed at determining whether there has been an abuse of a dominant position. The focus is, in essence, on a potential case of leveraging market power from one market — where the company holds dominance — into another, adjacent but distinct, market where it does not. All of this is well covered by EU competition law precedents [I still remember a panel at ASCOLA 2018 NYU, chaired by Tim Wu, where the question was put to us panellists about what more antitrust law should be doing to tackle Big Tech. My answer then? Pay closer attention to these very forms of market leveraging 😇].

And this, indeed, appears to be the core of the authority’s allegation: a classic case of tying by a dominant firm. Meta, it is argued, offers its main service, WhatsApp, together with a bundled, secondary service, Meta AI. WhatsApp is an app designed for communication through advanced messaging technologies, while Meta AI is a separate service altogether — a generative AI tool meant to answer users’ general-purpose queries. Two distinct services. No question about it. But these aren’t just core platform services, as one might frame them under a DMA lens. From an antitrust perspective, they also correspond to distinct and separate markets. When it comes to Meta’s dominance in the market for consumer communications via app — whether at national or EU level — there is little room for reasonable doubt. The market definition and Meta’s entrenched position therein, as sketched in the decision, offer few revelations. More intriguing — and potentially more consequential — is the discussion around the market for AI chatbot or AI assistant services. The authority ventures into some possible delineations and goes on to consider potential competitors and their relative positions, but ultimately finds that, at this stage, the definition of the relevant markets for AI services can be left blissfully open (more on this later).

With the question of market definition set aside, we reach what ought to be the real crux of the case: the identification of the alleged abuse. As previously indicated, the alleged abuse, according to the authority, lies in Meta tying two distinct services — WhatsApp and Meta AI — by pre-installing Meta AI within the WhatsApp app. Following the pre-installation, the AGCM argues, Meta — already in a dominant position with WhatsApp — would secure a competitive advantage in the market for AI chatbot or assistant services. The reason is straightforward: with minimal friction, WhatsApp’s more than 120 million users across Europe are instantly turned into potential users of Meta AI.

And here comes the discussion I personally find most compelling — not least for its broader implications. In paragraph 44, the authority goes a step further and notes that "it is also possible that Meta trains its AI model on the data and/or interactions with the user base of the service in which it holds a dominant position.” The authority thus argues that users are not only effectively *drawn* into using Meta AI through its pre-installation in the WhatsApp app — they also have their data and interactions used to train Meta’s AI model. These two intertwined dynamics may well produce exclusionary effects for rival providers of AI chatbot or assistant services, especially in light of the increased risk of user lock-in and functional dependency. In fact, argues the AGCM, Meta AI appears to store user-provided data over time — meaning its outputs become progressively more tailored and “relevant” as interaction increases, thereby reinforcing user reliance and raising the barriers for competitors.

This, dear Wavesblog Reader, is merely the opening decision. How it unfolds remains to be seen. But the heart of the matter seems clear enough: AI services offered by Big Tech — the very same platforms we use to chat with relatives, search online, or draft blog posts like this one — are being quietly shoved down our throats. Not only are we given no meaningful choice as to whether we want to use these services — they come bundled, pre-installed, ready or not — but once we do, we effectively feed them. Our interactions are harvested to train their models and, in Meta AI's case, apparently, to personalise and refine the AI’s responses, making them feel ever more “useful” while subtly reinforcing our reliance. The endgame from a purely antitrust perspective? A state of lock-in and functional dependency that all but shuts the door to alternative providers.

That said, one can’t help but notice once again how narrow this type of antitrust lens is. The current logic of Article 102 TFEU, with its insistence on delineating separate relevant markets, ends up artificially carving WhatsApp and Meta AI out of their broader context. And yet, we know we are dealing with Meta — a particularly intricate ecosystem, where the relationships and intersections between services are structurally orchestrated through malleable code and the seamless flow of data across the platform’s various environments, combined with third-party 'tracking' data, all to fuel its gargantuan advertising machine. That this market-delineation logic is utterly inadequate for dealing with Big Tech in particular is evidenced by the very existence of the DMA and Germany’s Section 19a competition law provision — both of which, rightly, go beyond it.

One might well reply that Article 102 TFEU is no broad analytical lens for grasping the wider problems posed by this kind of economic power, let alone its slide into political influence. It is a sophisticated scalpel, not designed for sweeping diagnoses, but reliable and technically sound when used with skill and precision. And that, it seems, is how the Italian competition authority intends to wield it. The goal is to preserve competition in the emerging market for 'Her'-like personal AI assistants, which the authority sees taking shape as a distinct category of digital services fuelled by generative AI. The protagonist of that uncannily prescient film falls hopelessly in love with his AI assistant — and that is the stickiest of 'lock-in' and 'functional dependency' scenarios for consumers more generally. It’s precisely what keeps him from even contemplating alternative AI companions, even when given the choice. One can easily imagine him letting her book his table at that little trattoria in Turin [where he went to visit the amazing Egyptian and Cinema Museums] that perfectly matches his taste: all about tradition and authenticity, but with just the right touch of modern flair — because she knows him. Intimately. However, don't expect, dear Wavesblog Reader, to find any mention of Her in the AGCM’s opening decision. Had it been penned by one of those US judges with a flair for pop references, you might have found one. What that judge wouldn't have ignored, possibly, is that Her was not just an artistically brilliant film — it was a pink/scarlet-red painted warning.


And so we circle back to where we began: the 2014 EDPS Venn diagram, as 'updated' to include the DMA. It’s clear the Italian antitrust authority knows it’s not operating in a vacuum. It is, rather, doing what it can with the tools at its disposal, under no optimistic assumption that the practices Meta engages in, and the services it rolls out, are otherwise fully compliant with the complementarily applicable legal regimes (those on the updated diagram and, additionally and eventually — tomorrow — the AI Act). Already the brief encounter with Meta’s privacy policy, as recounted in the decision, must have been enough to dispel any such illusion.

Set against our diagram, the potential concern the AGCM has chosen to investigate, albeit subject to the still inadequate pace of traditional antitrust proceedings [but what about interim measures? Do we really need to go to the Autorité for them?], is clearly narrow, as noted above, but also hardly open to serious dispute. With a mere system update, millions of WhatsApp users — without having asked for it — suddenly found themselves turned into potential Meta AI users, with a rollout laced with unmistakable nudges towards engagement. All this in a context where no one had prevented Meta’s prior appropriation of Facebook and Instagram users’ data to train its models (but, rest assured, data protection authorities are still discussing whether this was ok or not). From the point of view of the DMA, too, that move was at least questionable. Now, let’s assume for a moment that there truly is an exclusionary effect — that rival “Hers” can’t quite compete on the merits with Meta AI, simply because they have access to far less data, not only to train AI models generally, but also about us individually as users and consumers. Is that something we should worry about? On one level, certainly yes.


TBC