Wednesday, July 02, 2025

British Browser Boiz in Brussels! Fun at Microsoft’s DMA compliance meeting

 B. Lawson, here.

Beyond AI & Copyright funding a sustainable information ecosystem

 Open Future, here.

America must not defang Europe’s new tech law

 L. Lowe, here and here

And the EC has once again confirmed that it won't, here and here.

ACM market investigation into computer-controlled consumer prices in the airline sector: research methods and consultation

 PR, here

UK Announces Proposed Measures in Google Investigation

 M. Kirkwood, here

The Technical Feasibility of Divesting Google Chrome

 KGI, here

ByteDance Compliance/Enforcement DMA Workshop: "Just" technical questions - engineers couldn't make it, sorry!

 




How Monopolies Secretly Steal Your Freedom

 Brought to you by the inimitable Lina Khan, here (may I please live long enough to see a Khan US Presidency, or two?). 

Alphabet’s Second DMA Compliance Workshop: A Self-Reported Engaged Gatekeeper

 A. Ribera Martinez, here

Recording here.

Privacy-focused app maker Proton sues Apple over alleged anticompetitive practices and fees

 TechCrunch, here

Senators Reject 10-Year Ban on State-Level AI Regulation, In Blow to Big Tech

 Time, here

Dear Ursula, Ensuring GPAI Rules Serve the Interests of European Businesses and Citizens

 Letter here

Brasil, heading for the ("DMA") goal!

It was a real pleasure yesterday to share some reflections on the European experience with the Digital Markets Act across the Atlantic. You can watch the recording here.

We spoke candidly: about what seems to be working, about what remains difficult, and about what, in hindsight, might have been done differently. These are not easy conversations, but they are necessary ones.

What stood out, above all, was the seriousness and clarity of purpose shown by our Brazilian counterparts. There’s no doubt: they are approaching the challenge of platform regulation with a level of determination that is both impressive and encouraging. It’s not just about legal design or enforcement mechanics: it’s about political will. And in Brazil, that will is clearly there. 

This isn’t Brazil against Europe (or Italy ;-)): it’s one match, one team.
And the stakes couldn’t be higher.

Sunday, June 29, 2025

Bridging Internet & AI Governance: From Theory to Practice

 IGF 2025, here.

Interoperability in Digital Platforms and its Regulation: Transatlantic Dialogue alive and kicking!

Not a dead parrot.
Webinar, 1 July, 15-17 CET. Register here if you want to pose questions, otherwise live on YouTube here. Recording here.

This webinar is a transatlantic conversation bringing together researchers, regulators and civil society. The focus is on interoperability, not as a silver bullet, but as a critical lever to support more open, competitive and innovative digital environments.
It offers a timely comparison between Brazil, a jurisdiction that has demonstrated its ability to build effective digital public infrastructure (Pix), thereby getting rid of the extractive Visa-Mastercard duopoly, and the European Union, which has so far struggled to do the same. At the same time, Europe has taken the lead in legislating to curb Big Tech’s power, and other regions, including Brazil, are now watching the Commission's enforcement of this legislation closely.
All this, just as transatlantic tensions over digital regulation resurface, and as the EC DMA Team does its utmost to stay below Trump’s radar. And then there's the DMCCA (and UK politics). 

Together with Isa Stasi and Ian Brown, our task on Tuesday is to share a few lessons from the European experience. So what is it, really, that we have to offer from this side of the Atlantic?

As for my contribution, I’m still finalising the details. Not long ago, I wouldn’t have had much to say about EU interoperability, at least not anything terribly useful for promoting open, fair and competitive digital markets. But the past few months have been surprisingly lively. Four developments stand out, and I hope they can add a little spice to our conversation. I will most likely begin with the antitrust commitments by Apple concerning NFC (Apple Pay), and reflect on their aftermath. Next, I’ll briefly touch on the recent judgment of the Court of Justice of the European Union regarding interoperability of the Android Auto OS. I’ll then say a few words about the Commission’s specification decisions on Apple’s interoperability obligations under Article 6(7) of the DMA. And finally, I’ll offer some thoughts on prospects for stronger DMA enforcement, on the case for refining the regulatory framework, and even on the EuroStack (10 minutes in total :-)). 

The first reflection I would like to offer concerns access to Near-Field Communication (NFC) functionality, a technology which, until mid-2024, Apple had reserved exclusively for its own Apple Pay service within the EEA. An important point to note is that NFC, a technology not developed by Apple, has become the standard for mobile payments across Europe. It enables fast, contactless transactions, secured through tokenisation and encryption. Virtually all payment terminals in the EEA now support it. 
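For readers less familiar with how such payments stay secure, here is a minimal, purely conceptual Python sketch of payment tokenisation. It is not Apple's, any card scheme's or any wallet's actual implementation, and every name in it is hypothetical; the only point it illustrates is that the merchant terminal ever sees a disposable token, never the card number itself.

```python
# Conceptual sketch of payment tokenisation (illustrative only, hypothetical names).
import secrets

token_vault: dict[str, str] = {}   # token -> real card number, held by the issuer/scheme side


def tokenise(pan: str) -> str:
    """Issue a random token that stands in for the real card number (PAN)."""
    token = secrets.token_hex(8)
    token_vault[token] = pan
    return token


def authorise(token: str, amount_eur: float) -> bool:
    """The terminal submits only the token; the issuer resolves it internally."""
    pan = token_vault.get(token)
    return pan is not None and amount_eur > 0


device_token = tokenise("4111111111111111")   # in practice stored in the phone's secure element
print(authorise(device_token, 12.50))          # True: the payment clears without exposing the PAN
```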

It is now almost exactly one year since the European Commission made Apple’s commitments in the Apple Pay case legally binding. These commitments are centred squarely on interoperability: Apple is required to allow third parties access to the NFC functionality for payment purposes on iOS devices. As a result, a wide range of developers can, in principle, begin to use this technology to offer alternative NFC payment services. Even though relatively little time has passed, it is important, I believe, to ask whether anyone has actually seized this opportunity, whether any new entrants have made their way into the NFC in-store mobile wallet market on iOS.

There have, in fact, been some entries, though so far limited to a few countries rather than on an EU-wide scale. As illustrated at the most recent OECD Competition Committee meeting in June, the first to enter was Vipps MobilePay, though its launch remains limited to Norway and it faces huge hindrances to becoming a pan-European interoperable wallet (Single Market, anyone?). Next came a US tech firm, hardly a small player, namely PayPal, which is currently rolling out its wallet in Germany. German cooperative banks have also signalled their intention to enter this space soon, likewise focusing on the German market. In the announcement, it is explicitly stated that Apple Pay will no longer be needed to make payments with the new service, a move framed as part of a broader effort to raise awareness of how heavily payment systems in Europe rely on US corporations such as Visa, Mastercard, PayPal and, of course, Apple. The ongoing trade tensions with the US are cited as an additional reason for concern.

This raises a broader question: can interoperability serve not only as a tool to promote competition, but also as a means of advancing digital sovereignty? The answer, perhaps, is that interoperability is certainly a first step, but a far more effective approach, had it been pursued from the outset, would have been to establish a digital public infrastructure for electronic payments, along the lines of Brazil’s Pix. Crucially, this would have required a broad adoption mandate for banks operating across the EEA. If done properly, such a system could have delivered both competition and sovereignty in a more structural and sustainable way. A related and important question is what went wrong with SEPA, the Single Euro Payments Area. Conceived as a cornerstone of European financial integration, SEPA has largely failed to deliver the kind of common digital payment infrastructure that could support genuine sovereignty and competition at scale. Even though scepticism remains high, it remains to be seen whether the Instant Payments Regulation, now in force, with serious enforcement beginning this year, will offer an effective fix to SEPA’s shortcomings.

The other, and perhaps enduring, question is whether the commitments offered by Apple were ever sufficient. More fundamentally, even if the commitments had been ideal, one must ask how much they could realistically achieve in isolation. If anticompetitive conditions persist in adjacent markets, addressing interoperability at just one layer may do little to resolve distortions that are structural and multi-layered in nature (e.g., terminals?).
Moreover, to fully benefit, at EEA scale, from access to previously gated functionalities, new entrants would need to rely on other components of essential digital public infrastructure, most notably the European Digital Identity, meant to be enabled by the eIDAS 2.0 framework.

Even from the few observations above, it becomes clear that getting interoperability right, as a driver of innovation, competition, and even digital sovereignty, is no small feat. It requires multiple elements to come together in a coherent and sustained manner. It is far from simply unleashing latent energies held back by a textbook refusal to provide interoperability. In the Brazilian context, the national scope of the market, combined with the groundwork already laid in this area by the regulator, may well place the competition authority in a favourable position to act effectively. That said, it is beyond doubt that the refusal to enable interoperability has often been used strategically by Big Tech players as an anticompetitive tool.  In some cases, it served to block potentially disruptive innovators at a critical moment; in others, it was used to secure and preserve market positions in areas where new business opportunities were emerging, as Apple did with NFC functionality until recently, and as Google did in relation to Android Auto, for which it was sanctioned by the Italian competition authority. 

This latter case also triggered a preliminary reference to the EU Court of Justice, which did not miss the opportunity to say once again something helpful, a development I would like to briefly address as the second point in my remarks. That the effectiveness of the branch of competition law tasked with preventing and prohibiting abuses of market power has been profoundly challenged by the rise of Big Tech is, of course, no secret. One aspect long recognised as particularly ill-suited to the digital context is the so-called Essential Facilities Doctrine, particularly in its Bronner formulation. The Court’s ruling in Android Auto provides a further fix, in the form of a "clarification" of the doctrine, perhaps a limited one, but a welcome one nonetheless. The case concerned Google’s refusal to allow Enel X’s electric vehicle charging app to interoperate with the Android Auto OS, citing security concerns and the burdens of developing a new template. The Italian competition authority (ICA) found this refusal to be in breach of Article 102 TFEU and ordered Google to enable interoperability. 

Last slide of my 2021 presentation 
In a 2021 ASCOLA Conference online presentation (Covid time, my last one) discussing the case, my first research question was whether the legal and economic reasoning proposed by the ICA would make it easier for competition authorities to assess interoperability cases involving digital platforms under Art. 102 TFEU. We can, at this point, confidently answer in the affirmative, and that is good news. From my perspective, what deserves particular emphasis is that the Court made a meaningful contribution on the issue of safeguarding investment and innovation incentives. While the arguments put forward certainly merit further reflection, the Court’s reasoning was sufficient to effectively dismantle the familiar defence invariably deployed by Big Tech in abuse of dominance cases. Unlike the infrastructure at issue in Bronner, Google had not developed the Android Auto OS to serve its own needs and secure a competitive advantage. On the contrary, it was explicitly designed to encourage participation by complementors, a fact that clearly undermines the argument that granting access would have reduced Google’s incentives to invest. To what extent this workaround can be relied upon beyond the specific platform ("complementors") context and across different digital services and types of infrastructure remains to be seen. 

The second question I raised during the ASCOLA presentation, bearing in mind that this was back in 2021, after the DMA had been tabled but before it became law, was how to frame the relationship between ex post and ex ante approaches to interoperability mandates going forward. The ex ante approach to interoperability introduced by the DMA will form the third point of my remarks. But before that, a few words are in order on the relationship between ex ante and ex post interventions and, more broadly, between traditional antitrust law and emerging forms of digital regulation with regard to interoperability, which can be of some interest specifically in the context of our transatlantic dialogue.  The first thing to note is that the Commission’s experience in the Apple Pay commitments proceedings, discussed earlier, appears to have fed directly into the current phase of DMA enforcement. This is evident in the confidence with which the Commission has now moved to specify Apple’s interoperability obligations under Article 6(7), a point I will return to shortly. The second aspect I wish to highlight, though much more could be said, is that courts are reading the DMA and drawing inspiration from it, even when interpreting traditional antitrust law. This is clearly visible in the Android Auto case, where a regulatory approach inspired by the DMA can be seen in how the Court assessed what may constitute an objective justification for refusing to grant interoperability. In my view, all things considered, this too is a positive development.

The prince app developer (see blog post)
With the third point, we turn to the DMA. Ensuring certain levels of interoperability is clearly a priority for the EU legislator, as a means to promote digital markets that are fair and contestable. This is evident from a range of obligations whose proper compliance by gatekeepers presupposes different degrees of interoperability between their services and third-party services. In this context, I would like to say a few words about Article 6(7), which has recently been the focus of significant enforcement activity by the Commission. As I write these lines, the 2025 Apple Enforcement Workshop is taking place, with an entire session dedicated to Apple’s compliance journey with this very provision.

Apple was required to comply with Article 6(7), and the other DMA obligations, by 7 March 2024. This has evidently not occurred, and the Commission has taken notice. To address such situations, the DMA introduces a new tool: the specification proceeding. Under Article 8, the Commission can unilaterally determine the measures needed for compliance, unlike commitment proceedings, which rely on the company’s own proposals to avoid a formal finding of antitrust infringement. The Commission issued two specification decisions in relation to Apple last March: one concerning the process for handling interoperability requests, and the other relating to interoperability with connected devices. With the latter, the Commission specifies, among other things, how Apple should provide effective interoperability with the NFC controller in Reader/Writer mode by the end of 2025 at the latest. The scope of the NFC part of the decision is therefore significantly broader than last year’s antitrust commitments.

It is also clear that interoperability with this feature is in high demand and, in principle, should foster innovation. This is reinforced by the fact that Article 6(7) does not allow Apple to dictate which services or products may make use of the interoperability obligation. What stands out in this respect is also the in-house technological expertise the Commission has already developed regarding the services offered by the gatekeepers it regulates. This expertise enables it, as in this case, to engage eye to eye with a giant like Apple.
The fourth and final aspect I would like to briefly touch on relates to a pronounced sense of urgency. For instance, I find it worrying that it is considered normal for a study on the impact of emerging technologies on the DMA to take up to a year to produce. In the same vein, if we come to realise that the DMA is insufficient, whether in terms of its scope of core platform services or the need for additional obligations, then the time to act is now; this also relates to the possible need for additional interoperability obligations. We cannot afford to lose time. However, even for the most determined (and well-equipped, also financially) regulator in the world, it is very difficult, though not impossible, as we have just seen, to impose interoperability obligations on powerful and recalcitrant gatekeepers. But that is precisely what lies ahead if we allow them to build the digital infrastructure on which we all then come to depend. Equally difficult, and arguably wishful thinking, is relying on the industry to deliver effective interoperability solutions. This has been attempted in the EU for years, with very limited results.

So how do we move forward? Well, perhaps by starting with a good look back. We need retrospective analyses to understand how we ended up letting a handful of players control the digital infrastructures we all rely on, not to point fingers at antitrust enforcement alone, but to get a clearer picture of the broader dynamics, which likely vary across different digital services. On this, as on many other things, I fully agree with Robin Berjon, who notes that everything in digital happened so quickly we didn’t even realise we were dealing with infrastructures, let alone how to properly regulate them.

Secondly, the kind of public effort required, in terms of both capacity-building for smart regulation and "project" funding, must be laser-focused on identifying the type of infrastructures we need and how to get there. This type of mobilisation cannot be decoupled from public choices. This is particularly clear in the case of Pix, where inclusiveness was a key consideration (but which almost accidentally ended up solving an extractive duopoly problem). Moreover, infrastructural choices inevitably shape the kinds of technological innovation they enable. These are societal decisions, not technical footnotes (and, of course, we cannot afford for them to be significantly steered by even well-meaning think tanks or by other - European! - industry giants, from telecoms to aerospace). The upcoming plenary debate in the European Parliament on the Report on European Technological Sovereignty and Digital Infrastructure, produced by the Committee on Industry, Research and Energy (ITRE), offers an excellent starting point for these much-needed democratic conversations.

To be discussed!

 

Workshops Reveal the DMA’s Broken Promises

 Chamber of Progress, here.

Facebook is starting to feed its AI with private, unpublished photos

The Verge, here.

After Notice and Choice: Reinvigorating “Unfairness” to Rein In Data Abuses

 L. Khan, S. Levine, S. Nguyen, here.

First-sale doctrine in the AI age: incentivising book bonfires! No externalities there?

 ArsTechnica, here

Picture chosen by ArsTechnica for the article.







This picture suddenly brought to mind a TED Talk I gave 12 years ago, which also touched on the first-sale doctrine.

Unbelievable how much has gone wrong since. 

Friday, June 27, 2025

EuroStack: a concrete pathway to European digital sovereignty and strategic autonomy

 CEPS Video here.

 

---- 

Here's the EP Report; here's where the Greens would go further. 

Kick-off conference (24th Sep 2024)  here.

Essential facilities doctrine and digital ecosystems: Case C-233/23 Alphabet (Android Auto)

 P. Hornung, here.

Dutch Consumers' Association launches major legal action against Booking.com

 Belganewsagency, here.

Decoding Competition Concerns in Generative AI

 K. Tyagi, here.

ACM attaches strict conditions to acquisition of RTL Nederland by DPG Media

 Here.

Private enforcement of the Digital Markets Act (DMA)

 Barentskrans, here.

Apple’s app stranglehold is now a political flashpoint

 Podcast here.

Heads I Win, Tails I Win: Why US Cloud Giants Benefit from DeepSeek and Other Chinese Companies' AI Strategies

 C. Rikap, here.

Apple "changes" App Store rules in EU to [fake compliance] with [DMA] order

 


Reuters, here. 

Antidote: "daily fines of 5% of its average daily worldwide revenue, or about 50 million euros ($58 million) per day after being given 60 days to show it was in compliance with the bloc's Digital Markets Act."  

Apple, here.  

  

Wednesday, June 25, 2025

UK politics blunts antitrust action against Google

 yep.

New digital competition dynamics: The AI Slop

 J. Oliver, here.

Interoperability in Digital Platforms and its Regulation: Transatlantic Dialogue alive and kicking! [Spoiler alert: not with the US]

Webinar, 1 July,  15-17 CET. Register here if you want to pose questions, otherwise live on YouTube here

Together with Isa Stasi and Ian Brown, our task today is to share a few lessons from the European experience. I still vividly remember when Luca Belli came to the European Parliament to explain how digital sovereignty and digital public infrastructure can be built in practice, and how, in just three years, they managed to get rid of the Visa–MasterCard duopoly, which had long become more of a bottleneck than a service. So what is it, really, that we have to offer from this side of the Atlantic?



Tuesday, June 24, 2025

Amazon’s Second DMA Compliance Workshop – The Power of No: Where the Balance Should Land

Alba, here.  

Recording of the workshop here.

ANDREA BARTZ ET AL. V. ANTHROPIC

 Here

Excessive Wealth Concentration and Power

 CEU, here

I've been dealing with this topic for 7 lustra and it looks increasingly bleak - not my fault😉

WhatsApp banned on House staffers' devices

 Axios, here

CMA takes first steps to improve competition in search services in the UK

 




Here

Proposed decision here.

Roadmap here

Exploring consumers’ search behaviours here.  

Read also Sarah's blog post here ("Based on how it is currently offered and used, we have provisionally decided that Gemini AI assistant should not be included as a product within this scope").



"We have identified a further set of possible actions (for example, restricting use of default agreements and providing access to underlying search data) which are currently the subject of live litigation between the US Department of Justice and Google. We will consider our approach in these areas in light of developments over the coming months. This is in line with the CMA’s prioritisation principles and the government’s recent strategic steer, which encourages the CMA to consider where we are best placed to act"





Thursday, June 19, 2025

G7 Data Protection and Privacy Authorities’ Communiqué

 Here.

The Progressive Regulator Winning Over the Populist Right

 Podcast, here.

DMA (Team) Sudans, when will Meta's compliance with Article 5(2) finally flow?


Don't look for it in Rome...

  Nearly two months on, the Commission’s DMA non-compliance decision against Meta  was finally published yesterday [18 June]. Coincidentally or not, a German court also published a ruling yesterday, offering its own reading of the same DMA obligation with which, according to the Commission, Meta is not (yet?) compliant. As Wavesblog Readers will appreciate, both are of considerable interest. What follows are a few preliminary observations. Let me begin, however, with a brief disclosure. During the eight months in which I had the pleasure of consulting for Article 19, I had occasion to focus on Meta twice: first, in writing about an intriguing German judgment applying Article 102 TFEU (not the one you're thinking of, another one entirely); and second, precisely in relation to the application of Article 5(2) of the DMA.  As for the latter, in the course of that work I had reached the conclusion, now confirmed by the Commission in its recently published decision, that there was simply no way Meta could be considered compliant, a conclusion I had the pleasure of presenting in the Commission’s presence in beautiful Fiesole.
 

Why, then, did it take an 80-page decision, and why is Meta, in all likelihood, still not compliant more than a year after it was first required to be? The most obvious answer, of course, is that the obligation in question strikes at the very heart of Meta’s business model, not only as it stands today, but also with implications for future developments, given the EU legislator's insistence on DMA compliance by design, a theme that has been a recurring one here on Wavesblog. Moreover, in this case, the Commission is not simply tasked with interpreting and enforcing a 'standard' DMA obligation in relative isolation; to do so, it must also apply the General Data Protection Regulation in conjunction with the DMA. That these two legislative instruments converge in more than one respect is also confirmed by the German ruling mentioned earlier and to which we shall return in due course. On that note, we are still awaiting the joint EDPB/Commission guidance on the interplay between the DMA and the GDPR, which, according to the Commission’s remarks this week in Gdańsk, is expected imminently. Interestingly, the Commission takes this into account as a mitigating factor in determining the fine for non-compliance ("the Commission acknowledges that the interplay between Regulation (EU) 2022/1925 and Regulation (EU) 2016/679 created a multifaceted regulatory environment and added complexity for Meta to design its advertising model in a manner compliant with both regulations"). That might be perfectly understandable, were it not for the fact that we are, after all, dealing with Meta, which has elevated non-compliance with the GDPR to something of an art form worthy of Bernini. This raises both a puzzle and a question: will Meta be left to do much the same under the DMA? And might other gatekeepers be allowed to match, or even surpass, it? Is there, perhaps, a structural flaw in the DMA’s enforcement apparatus, just as there plainly are in the GDPR, that gatekeepers can be expected to exploit at every opportunity? Or is it simply a matter of DMA enforcement resources falling well short of what would actually be required? What, then, can be inferred from this particular decision in response to that question? Serious reflection is clearly needed here. For now, I can offer you, Wavesblog Readers, only a few very first impressions, but I’d be all the more keen to hear yours. 

Even before Compliance Day (7 March 2024), it is clear from the decision that the Commission already had serious reservations about the "Consent or Pay" advertising model that Meta was in the process of grinding out, which had been presented to the Commission as early as 7 September 2023. The decision makes clear not only that the Commission was in close dialogue with Meta, but also that it engaged with several consumer associations and other interested third parties, on both the DMA and privacy sides of the matter.

On that note, a further question, though perhaps it’s only me. Should there not be, if not a formal transparency requirement then at least a Kantian one, for the Commission to list, even in a footnote, all the interested parties with whom it bilaterally discussed the matter? On this point, one almost hears the Commission suggesting that such a transparency obligation might discourage others from speaking up, for fear of retaliation by the gatekeeper. The point is well taken, but one wonders whether some form of protected channel might be devised, a kind of "privileged observer’s window with shielding" available where reasonably requested, providing clear assurances that the identities of those coming forward will be safeguarded (short of being a whistleblower). Moreover, as is well known, this point tangentially touches on a broader issue. The EU legislator, likely with a view to streamlining enforcement, left limited formal room for well-meaning third-party involvement. The Commission-initiated compliance workshops, the 2025 edition of which has just begun, are a welcome addition, but they are, of course, far from sufficient. In particular, without access to fresh data provided by the gatekeepers, available only to the Commission, how are third parties expected to contribute anything genuinely useful at this point of the "compliance journey"? As we shall see, this concrete data was also an important point in the very process that led to the adoption of the non-compliance decision in this case (Meta knew that its 'compliance model' was producing exactly the result it wanted).

The lawyers, economists and technologists on the DMA team have clearly had their hands full in the ring with Meta (hence, of course, the Sudans in the title). Even a quick reading of the decision reveals, between the lines and squarely on them, the array of tactics deployed by Meta to throw a spanner in the works of effective DMA compliance, all carefully orchestrated and calculated with precision, and surprising no one. But one does wonder whether the DMA’s hat might not still conceal other tools, better suited to crowdsourcing and channelling constructive efforts, particularly from those third parties who stand to benefit from the DMA (as we also heard in Gdańsk), from conflict-free academics, and from what might be called real civil society, genuinely committed to effective and resolute DMA enforcement, rather than the usual crop of gatekeeper-supported associations. 

To loosely paraphrase Microsoft at the 2025 Enforcement Workshop, Meta’s “compliance model from day one” was a marble (code)-carved binary choice, one that could scarcely have been further from what the EU legislator had in mind. Unlike the strategies adopted by certain other gatekeepers, Meta didn’t even bother to kick the compliance can just far enough down the road to create an illusion of movement. The opening of non-compliance proceedings came swiftly and had, by all appearances, been fully anticipated. The decision that brings them to a close contains no epiphanic surprises, and is lengthy only because Meta's counsel deployed the full repertoire of legal ingenuity, focusing in particular on arguments forged in the intersection between the DMA and data protection law. Time, then, for a modest walkthrough, dear Wavesblog Readers.
 
As one assumes it must by now form part of general digital literacy (until one realises it doesn’t, even when speaking to students born to parents who were themselves, perhaps, already digital natives), Meta "generates almost the entirety of its revenues from its advertising service." On Facebook and Instagram, both designated under the DMA, end users "post and consume personalised content." Upon registering, each user receives "a dedicated, unique user identifier... and all data tied to that user identifier is part of a unified user account for that environment, i.e., the Facebook environment or the Instagram environment." Meta combines the personal data it collects from that user within the same social network environment and further merges it with data gathered through another designated core platform service, Meta Ads, to display personalised advertising. It is solely this latter data combination that is at the heart of the non-compliance decision.
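To make the mechanics of that last sentence a bit more concrete, here is a purely illustrative Python sketch of what "combining personal data" across services means in practice: two records from different services joined on the same unique user identifier. All field names and values are invented for the illustration and say nothing about Meta's actual data model.

```python
# Illustrative only: hypothetical records keyed by the same unique user identifier.
facebook_profile = {
    "user_id": "user:123",
    "likes": ["cycling", "jazz"],
    "groups": ["brussels-expats"],
}
ads_activity = {
    "user_id": "user:123",          # the shared identifier is what ties the records together
    "clicked_ads": ["running-shoes"],
    "advertiser_segments": ["sports", "travel"],
}


def combine(profile: dict, ads: dict) -> dict:
    """Merge the two records into a single ad-targeting profile for one user."""
    assert profile["user_id"] == ads["user_id"]
    return {**profile, **ads}


targeting_profile = combine(facebook_profile, ads_activity)
# Article 5(2) is about whether this kind of merge may happen without the user's
# freely given consent, not about either dataset taken in isolation.
```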

[I pick it up again now, not without noting that while I'm still writing this post, the 60-day deadline to comply with the cease-and-desist order, according to my calculations -  which are apparently wrong but I don't understand why -  has lapsed, and that DMA enforcement may well have found its way into the cauldron of Trump-era trade negotiations].

Let us suppose that you, dear Wavesblog Reader, use Facebook in the EU. Meta, as designated gatekeeper, must under the DMA (and commencing for real no later than today - 24 June - or on the 26th at the latest) present you with the specific choice of a less personalised but equivalent alternative to the advertising-based Facebook service you would use if you had given your consent to the combination of your personal data from Facebook with data from Meta Ads. 

[Let's have another go at this, shall we? A few rather significant developments have intervened since the last time I wrote here. First off, it appears DMA enforcement hasn't quite ended up in the cauldron of Trump trade negotiations. Second, we've had the last DMA Compliance Workshop, with Meta taking centre stage. During the workshop, we heard that, from 27 June, Meta has rolled out yet another iteration of its "compliance journey." The third development is that the Commission has launched a public consultation on the first review of the Digital Markets Act, which will keep many of us rather pleasantly occupied this summer. The fourth, of less general interest (though I'm already mentally preparing for it - we'll record the episode rather soon), is that I'll draw on thoughts from this piece to discuss this decision "Chez Oles" with two more Invitees. Finally, I should mention that it's not merely Meta and the DMA Team feeling the heat at the moment: the entire European continent is currently enduring what can only be described as a perfectly ghastly heatwave. Rather puts everything into perspective, doesn't it?]
 
Data cocktails beyond the decision 

Perhaps the best place to restart is with what was clearly explained during the Compliance Workshop, and you, dear Wavesblog readers, who’ve perhaps already pored over the decision, will have caught all the nuances of it. The non-compliance decision at issue concerns a single type of data combination covered by Article 5(2). Many other cocktails Meta shakes up with your data fall outside the decision’s direct scope, but are still covered by the same article and have been extensively discussed in the ongoing regulatory dialogue between the DMA Team and Meta. Data cocktails can be mixed within CPSs themselves (e.g., Facebook and Messenger), but also between CPSs and the gatekeeper’s other services (e.g., Instagram and Threads). During the workshop, we heard from the Commission that these other 5(2)-related compliance discussions largely centred on what qualifies as an equivalent service for users opting out of data combination, without slipping into degradation beyond what’s strictly necessary due to reduced data use. These, as we understood, are discussions about the kind of service a user who declines data combination is entitled to expect, and whether it is genuinely equivalent, as required by Article 5(2). In this context, it emerged that the Commission and Meta have been discussing what such a service looks like: for example, what Messenger or Threads should look like when data combination with Facebook and Instagram, respectively, is not allowed by the end user. On this front, the Commission noted, with some satisfaction, that improvements have been made. 

Increasingly hot...
 
Equally outside the scope of the non-compliance decision, but still very much part of the ongoing regulatory dialogue with Meta on Article 5(2) was, unsurprisingly, AI. The Commission has been actively following the rollout of Meta’s shiny new AI services, as well as the more familiar ones, like WhatsApp, where AI has been quietly slipped in, while considering what all this might mean for compliance with Article 5(2). 
We heard that the Commission is looking specifically at "data flows across services, how the data is sourced and used [to train, ground or fine-tune the models] and if that involves the combination of personal data." Moreover, the Commission is "looking at data for personalising or grounding AI both within designated services and other services offered by Meta." As we've witnessed throughout the rest of the Compliance Workshops, Series 2, a robust regulatory dialogue on AI is already well underway with all designated gatekeepers, a fact that appears to have been somewhat downplayed by Meta before the Cologne court mentioned earlier, to whose recent ruling we now (very briefly) turn. 
 
Source: Proton Drive on X
The case was brought by a German consumer protection association seeking to prevent Meta from using data publicly shared by adult users on Facebook and Instagram (such as your photos posted on a public Instagram account - so-called First Party Data) as well as user interactions with its AI (e.g., the questions you put to Meta AI reused to fine-tune and improve those very systems - Flywheel Data), to train its own large language models. In summer 2024, Meta had already informed its end users of its intention to train its own large language model using the data of adult users from the EU/EEA, with the training scheduled to begin on 26 June 2024, though this was later postponed following concerns raised by the Irish data protection authority, as well as among others by the claimant, who issued a formal warning. Less than a year later, after what one can only assume was a rather lively exchange between Meta, the regulators, and other stakeholders, Meta tried again. This time, no one stopped them. The German court had an eleventh-hour opportunity to do so, but decided not to take it. For our purposes, it's worth noting that the claimant argued not only that Meta was in breach of the GDPR, but also in direct violation of its obligation under Article 5(2) of the DMA. At the workshop, we heard from the DMA Team that this very issue is part of their ongoing regulatory dialogue with Meta: how data flows across services, how it's sourced and used to train AI models, and whether this involves the combination of personal data. It’s a fascinating and extremely relevant (also legal) question, but one that would carry this blog post well off course. So, let’s return to the shore we’ve been meaning to stay anchored to: the non-compliance decision.
 
According to the Commission’s stated preference, the DMA Workshop was not meant to be the forum for discussing the non-compliance decision itself, but rather a space for Meta to illustrate where it currently stands on compliance with Article 5(2), also following its latest post-27 June tweaks, and to gather feedback from third parties. Wavesblog Readers can judge for themselves whether that’s quite how things actually played out. In any case, for context, the Commission did briefly outline the content of the non-compliance decision itself, offering a very helpful summary: one I’m grateful for and will return to as a running thread, gradually weaving in my own first reflections along the way. The decision concerns whether what we might call “1.0” of personal data combination in Meta's advertising services was compliant with the DMA. “2.0” was rolled out in November 2024, followed by “3.0” on 27 June 2025. 1.0 was squarely a "consent or pay" advertising model and this is what the decision addresses. 
 
End user's journey: 1.0
As recalled above, Meta combines personal data for what the Commission, rather benevolently, refers to as ads personalisation on the Facebook and Instagram services. So this takes us straight to the heart of Meta’s economic engine, which has remained more or less unchanged since time immemorial, or at least after the dorm-room days. By 7 March 2024, it’s hard to argue that Meta could've been caught unprepared, as Article 5(2) itself drew initial inspiration from a long-running German antitrust saga in which Meta has been entangled since at least 2019. In its DMA non-compliance decision, "the Commission finds that Meta’s consent-or-pay advertising model fails to present end users of Meta's Facebook and Instagram platforms with a specific choice of a less personalised but equivalent alternative to the fully personalised option." At this point, it’s worth recalling that between March and November 2024, Facebook and Instagram users were asked to “choose” between Scylla (a six-headed sea monster who would snatch and devour sailors from passing ships on one side of the Strait of Messina) and Charybdis (a massive, gaping whirlpool that could swallow entire ships, on the other side of the Strait): only by paying could they escape the fully personalised option. 
 
Penelope waiting, Chiusi Etruscan Museum 

Based on the Commission's reading of Article 5(2) DMA, the legal reasoning underpinning the non-compliance decision is twofold. First of all, the less personalised but paying (Scylla) and the fully personalised (Charybdis) options cannot be considered equivalent alternatives, as they exhibit different conditions of access. Second, the binary configuration of Meta’s consent-or-pay model (Scylla or Charybdis) doesn’t ensure that end users freely give consent to the personalised ads option, falling short of the GDPR requirements for the combination of personal data for that purpose. But what about ('data combination in Meta's advertising services') 2.0? In November 2024, Meta charted an additional ads option. This is a free-of-charge, advertising-based version of Instagram and Facebook. Further tweaks to this option followed just as the 60-day compliance period set out in the non-compliance decision was about to expire — enter 3.0. Is it finally the Ithaca Option, one that ensures compliance with the DMA by giving the end user a real chance to exercise the data right enshrined in Article 5(2)? In its non-compliance decision, specifically on Meta’s data combination 1.0, the Commission, without delving into detail, also indicates what a DMA-compliant solution would, in its view, require, making explicit reference to elements introduced in 2.0:
1) the end user should be presented with a neutral choice with regard to the combination of personal data for ads so that he/she can make a free decision in this respect;
2) for all the relevant personal data combined valid consent has been obtained;
3) the less personalised alternative should be equivalent in terms of user experience, performance and conditions of access - except for the amount of personal data used.
In any case, the Commission has already informed Meta that it is still assessing whether 3.0 is sufficient to meet those main parameters of compliance, while also hinting that one still outstanding issue might be whether the end user is truly being presented with a neutral choice (1). 
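Purely as an illustration of the third parameter above (equivalence except for the personal data used), and under the assumption that it can be caricatured as a structural comparison, here is a toy Python sketch. The class, fields and example options are invented for the purpose and say nothing about how Meta's services are actually built; parameters (1) and (2) are legal tests of neutrality and valid consent that no code snippet can capture.

```python
# Toy model of "equivalent in user experience, performance and conditions of access,
# except for the amount of personal data used". Hypothetical names throughout.
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceOption:
    name: str
    price_eur: float                   # condition of access
    features: frozenset                # crude proxy for user experience / performance
    data_sources_combined: frozenset   # personal data combined across services


def is_equivalent_alternative(personalised: ServiceOption, alternative: ServiceOption) -> bool:
    """True only if everything matches except that strictly less personal data is combined."""
    return (alternative.price_eur == personalised.price_eur
            and alternative.features == personalised.features
            and alternative.data_sources_combined < personalised.data_sources_combined)


full = ServiceOption("fully personalised", 0.0,
                     frozenset({"feed", "stories", "messaging"}),
                     frozenset({"social_network", "other_cps", "ads_service"}))
paid = ServiceOption("ad-free subscription", 9.99,
                     frozenset({"feed", "stories", "messaging"}),
                     frozenset())
less = ServiceOption("less personalised ads", 0.0,
                     frozenset({"feed", "stories", "messaging"}),
                     frozenset({"social_network"}))

print(is_equivalent_alternative(full, paid))   # False: different conditions of access (a fee)
print(is_equivalent_alternative(full, less))   # True under this toy test
```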
 

Before diving into the more legal aspects of the non-compliance decision, while taking on board, too, at least some of the arguments aired by Meta during the workshop (soon to be set out in more precise legal terms in the upcoming appeal against the decision), I’d like to surf briefly above the currents of DMA enforcement. Allow me, dear Wavesblog Reader, a quick self-quotation (then I’ll stop, promise). Reflecting on Article 5(2) and other DMA data-related provisions in something I wrote in early 2021, I reached a conclusion I still stand by, namely that this provision, at its core, and with regard to online advertising specifically, should provide end users with a real choice over the level of ‘creepiness’ they’re comfortable with. That may sound rather underwhelming — and it is. In fact, in the same piece, I wondered whether it wasn’t finally time to regulate personalised advertising more broadly, and for real, without losing sight of the broader picture: that private digital infrastructures, especially informational ones, ought, depending on the case, to be regulated decisively, dismantled, made interoperable, and so on. At the same time, I do believe this end user's right to choose, as clearly carved into Article 5(2) of the DMA without tying it to one’s capacity to pay in order to escape a level of surveillance one finds uncomfortable (for oneself and/or for the society one lives in), is immensely important.

 Bravissima, whatever. 

It’s a glimpse of something different: a very narrow but real incentive in favour of an "economic engine of tech" that starts rewarding services built around data minimisation rather than surveillance and data collection (and its combination). From this perspective, offering the mass of Facebook and Instagram users a genuinely neutral choice, and thus the concrete possibility of making a free decision in favour of a less personalised, but equivalent, alternative (e.g., about the preferred level of creepiness), could also have a welcome educational effect. However, we can’t pretend that the current AI age hasn’t, for now at least, further accelerated the shift towards surveillance and data accumulation — and what’s being done to counter it still feels far too limited (see also the above-mentioned ruling by the Cologne court).

Hannibal in Italy, Palazzo dei Conservatori - Rome
And so we come to those distinctly legal parts of the non-compliance decision which I found most interesting and thought-provoking — and which may well be among the points challenged in the forthcoming appeal, judging by the amuse-bouche offered by Meta’s representative during the compliance workshop. What some, Meta included, saw, and still see, as the unwieldy elephant in the Article 5(2) enforcement room is the namesake ruling from the CJEU. This isn’t just any elephant, however — it’s a war elephant, deployed by Meta in the front line to defend its 'data combination in Meta's advertising services' 1.0, namely its consent or pay model. The first thing to note is that the ruling, of course, isn’t about the DMA, but it stems from the German Facebook antitrust saga we’ve already touched on. According to Meta, however, that ruling has significant implications for how the DMA itself should be interpreted. Specifically, the ruling is meant to justify the binary option presented to end-users under the consent or pay model. It is beyond doubt, as recalled by the Meta representative, that the Court was directly referring to Meta—here acting in its capacity as an online social network holding a dominant position within the meaning of EU competition rules. Equally indisputable is that, although the case arose from a competition law context, part of the ruling addressed the conditions under which consent for the processing of personal data for personalised advertising can be considered validly given. The Commission, however, does not consider the Meta ruling to provide a valid justification for the consent or pay model under Article 5(2) of the DMA. 

Distinct legal notions 
The main argument supporting the Commission's view is rather straightforward: namely, that the Court could not have ruled on compliance with Article 5(2) of the DMA, as the latter introduces a specific test based on two distinct, cumulative legal notions—only one of which is the concept of consent within the meaning of Article 4, point (11), and Article 7 of the GDPR, while the other, that of a specific choice, must be understood as an autonomous notion under the DMA, with no equivalent in the GDPR or in any other legal instrument. Meta, as designated gatekeeper, is under the DMA obligation to present end users of Facebook and Instagram "with the specific choice of a less personalised but equivalent alternative to the advertising-based service it offers where those end users consent to the combination of their personal data from Meta’s Non-Ads Services in and with data from the [Online Advertising Service] CPS Meta Ads." So, under the DMA, it’s not enough for the end user to have given valid consent under the GDPR, as the user must also be presented with the specific choice of a less personalised alternative: only then can the user exercise a real specific choice. This alternative should not rely on the combination of personal data. And this is precisely where the specific data right granted to end users under Article 5(2) starts to take form. End users are entitled to choose whether, in order to use the online social networks Facebook and/or Instagram, they want their data to be combined or not — in this case, with data from the other CPS used by Meta for advertising purposes. The consent-or-pay model clearly didn’t offer this kind of option — something Meta began to address with its advertising model 2.0, though likely still not sufficiently for effective compliance, in the Commission’s view, given that Meta has since moved to 3.0. We will return later to the original scope and contours of this particular right granted to end users under the DMA, as further clarified by the EU legislator. For now, however, we must still address other aspects of the decision that are undoubtedly of interest going forward.

As just noted, the enforcement of the DMA is not necessarily bound by the contours of the GDPR where the DMA goes beyond it, or simply moves in a different direction. However, for those parts where the EU legislator has drawn on legal concepts from the GDPR, such as the requirement that end users give valid consent to the combination of their personal data for the purpose of serving personalised advertisements,  reference must be made to Article 4(11) and Article 7 of  the GDPR. This, in turn, triggers a duty of sincere cooperation with the supervisory authorities responsible for monitoring the application of that Regulation. Here, however, a possible complication arises, which in the ideal world of EU law enforcement perhaps shouldn’t exist. And yet, it does. We won’t dwell on it for long, but it’s worth highlighting (and it may at least partly explain Mario Draghi’s well-documented allergy to the GDPR, reaffirmed only a few days ago). The cast of characters on the GDPR enforcement stage has often found it difficult to offer a consistent reading of this Regulation. Unsurprisingly, this has weakened enforcement and worked to the advantage of those less inclined to embrace it. This risk has been avoided under the DMA, notably by assigning quasi-exclusive enforcement power to a single entity: the Commission (though arguably creating others in the process, which we won’t go into here). Echoes of the considerable effort supervisory authorities expend to ensure even a modicum of consistent GDPR application also surface in the text of the DMA non-compliance decision, where the Commission finds itself almost compelled to justify having taken into account the EDPB’s opinion on the matter when applying the GDPR (e.g., explaining that "the fact that some members of the Board voted differently or expressed reservations about the Opinion 08/2024 does not diminish the value of that Opinion, in the same way that the Court’s judgments produce their effects irrespective of whether they were decided by majority or unanimously"). What matters is that DMA enforcement not be delayed or undermined by such tensions, which, one imagines, some gatekeepers could hope to exploit, should a tempting opportunity arise. 

Series 1
In the decision, the Commission doesn’t merely argue for the legal autonomy of the notion of “specific choice” — which, in itself, would already be enough, and quite rightly so, in my view, especially given the legislative history of the provision (one recalls that, in the DMA Trialogue, the alternative under serious consideration was a straightforward ban on data combination given the broadly recognised limitations of GDPR consent in light of the DMA's specific objectives). Of some significance, also for the wider GDPR community, should be the Commission’s careful reading of the very (few) words of the Court’s ruling that Meta has been brandishing since shortly after it was issued, in defence of its consent-or-pay advertising model. I’ve just rewatched last year’s Compliance Workshop, partly to check whether I remembered it correctly: there was a certain weary note, as if Meta had found itself navigating a web of regulators and Grand Chamber judges. From the ashes of that multi-directional crossfire, the consent-or-pay model seemed to arise like a phoenix. The words of the Meta ruling that Meta interpreted as a green light for the model are: “if necessary for an appropriate fee” (paragraph 150). As previously recalled, the Court’s words must be read in their specific context: it was addressing whether an imbalance of power resulting from a dominant position could invalidate consent when tied to data processing not strictly necessary for contract performance. The Court held that, in such a situation, consent may still be valid only if users are free to refuse processing and are nonetheless allowed access to an equivalent service without such processing, if necessary, for an appropriate fee.

 

 TBC