Fresh off the press release: the Italian antitrust authority has knocked — quite literally — on Meta’s door. A dawn raid hit the company’s Italian offices just yesterday. Perhaps it’s just a coincidence, but there’s been a notable uptick in enforcement on this side of the pond since that so-called 'windmills' trade deal struck on the golf course.
The opening decision spans some fifteen pages, and it's apt to delight both die-hard antitrust nerds and those somewhat disillusioned by competition law, who now see more promise in a concerted regulatory front that combines multiple toolkits at once. As you, Wavesblog Reader, will have gathered by now, I firmly belong to the latter camp — and it's from that vantage point that I'll offer a few brief reflections on this decision.
For many of us in the latter camp, this 2014 Venn diagram from the European Data Protection Supervisor remains, to this day, a quietly persistent source of inspiration. The attentive Reader will have noticed that the diagram makes no mention of the Digital Markets Act (DMA) — unsurprisingly, as it entered into force nearly a decade after the diagram first appeared. Nor does it include more recent hybrid instruments, such as Section 19a of the German Competition Act. I would place the DMA precisely at the intersection of those three circles — where data protection, competition, and consumer protection converge. Technically speaking, the DMA operationalises elements from each of these domains: from competition law, it borrows the focus on market power and barriers to entry; from data protection, it incorporates core principles such as consent; from consumer protection, it reflects concerns such as user manipulation. But this is far from a flattening or plain subsuming of these established legal domains. Something original has emerged: a distinct regulatory logic.
Back to the AGCM's decision. If you're anything like me, you did everything you could to avoid clicking on the shimmering little button that popped up on your WhatsApp screen — but you couldn't exactly miss it either. As the Italian authority put it, the "Meta AI service was made autonomously available to users without them having taken any action to that effect. In other words, Meta chose to pre-install Meta AI and place it in a prominent position on the screen, making the service immediately available to all its WhatsApp users." While neither the icon nor the search bar can be removed by the user, the Italian authority explained that users could, if they wished, initiate an independent chat with other AI services — such as ChatGPT — by manually adding them as a contact. I tried it out immediately (ah, the things we do in the name of research: now I have ChatGPT sending me unsolicited messages). And yet, the Meta AI icon stays firmly in place, as does the search bar, helpfully prompting: "Ask Meta AI or search."
The Italian authority then embarks on what can only be described as a Kafkaesque journey through Meta's "policies." At first, Meta appears to say it does not use your interactions with Meta AI to train the model — and then, a few brave clicks later, it says it does. That is, except for private messages, unless you or someone in the chat chooses to share those private messages "with our AIs" too. And in any case, as the AGCM immediately notes, it was Meta itself that stated WhatsApp user interactions would be used to train and improve its models. But it doesn't stop there. As the Italian antitrust authority points out, Meta's Privacy Policy suggests that WhatsApp users may gradually receive a personalised version of the service — based on the information they themselves have provided to Meta. In other words, each interaction with Meta AI feeds the system with information, some of which is then used to deliver increasingly personalised outputs. The aim? As Meta puts it, to make Meta AI's responses more "useful and relevant."
Already at this point, there would be plenty to say from a data protection, consumer protection, and DMA perspective. But for now, let's stick to plain antitrust logic — though, as we know, it's never that plain, and at times not that logical. This is an investigation under Article 102 TFEU, aimed at determining whether there has been an abuse of a dominant position. The focus is, in essence, on a potential case of leveraging market power from one market — where the company holds dominance — into another, adjacent but distinct, market where it does not. All of this is well covered by EU competition law precedents [I still remember a panel at ASCOLA 2018 at NYU, chaired by Tim Wu, where we panellists were asked what more antitrust law should be doing to tackle Big Tech. My answer then? Pay closer attention to these very forms of market leveraging 😇].

And this, indeed, appears to be the core of the authority's allegation: a classic case of tying by a dominant firm. Meta, it is argued, offers its main service, WhatsApp, together with a bundled secondary service, Meta AI. WhatsApp is an app designed for communication through advanced messaging technologies, while Meta AI is a separate service altogether — a generative AI tool meant to answer users' general-purpose queries. Two distinct services, no question about it. But these aren't just core platform services, as one might frame them under a DMA lens. From an antitrust perspective, they also correspond to distinct and separate markets.

When it comes to Meta's dominance in the market for consumer communications apps — whether at national or EU level — there is little room for reasonable doubt. The market definition and Meta's entrenched position therein, as sketched in the decision, offer few revelations. More intriguing — and potentially more consequential — is the discussion of the market for AI chatbot or AI assistant services. The authority ventures into some possible delineations and goes on to consider potential competitors and their relative positions, but ultimately finds that, at this stage, the definition of the relevant markets for AI services can be left blissfully open.

With the question of market definition set aside, we reach what ought to be the real crux of the case: the identification of the alleged abuse. As previously indicated, the alleged abuse, according to the authority, lies in Meta tying two distinct services — WhatsApp and Meta AI — by pre-installing Meta AI within the WhatsApp app. Through this pre-installation, the AGCM argues, Meta — already dominant with WhatsApp — would secure a competitive advantage in the market for AI chatbot or assistant services. The reason is straightforward: with minimal friction, WhatsApp's more than 120 million users across Europe are instantly turned into potential users of Meta AI.
And here comes the discussion I personally find most compelling — not least for its broader implications. In paragraph 44, the authority goes a step further and notes that "it is also possible that Meta trains its AI model on the data and/or interactions with the user base of the service in which it holds a dominant position.” The authority thus argues that users are not only effectively *drawn* into using Meta AI through its pre-installation in the WhatsApp app — they also have their data and interactions used to train Meta’s AI model. These two intertwined dynamics may well produce exclusionary effects for rival providers of AI chatbot or assistant services, especially in light of the increased risk of user lock-in and functional dependency. In fact, argues the AGCM, Meta AI appears to store user-provided data over time — meaning its outputs become progressively more tailored and “relevant” as interaction increases, thereby reinforcing user reliance and raising the barriers for competitors.
This, dear Wavesblog Reader, is merely the opening decision. How it unfolds remains to be seen. But the heart of the matter seems clear enough: AI services offered by Big Tech — the very same platforms we use to chat with relatives, search online, or draft blog posts like this one — are being quietly shoved down our throats. Not only are we given no meaningful choice as to whether we want to use these services — they come bundled, pre-installed, ready or not — but once we do, we effectively feed them. Our interactions are harvested to train their models and, in Meta AI's case, apparently also to personalise and refine the AI's responses, making them feel ever more "useful" while subtly reinforcing our reliance. The endgame, from a purely antitrust perspective? A state of lock-in and functional dependency that all but shuts the door on alternative providers.
That said, one can’t help but notice once again how narrow this type of antitrust lens is.
TBC before the end of the week ;-).