Friday, May 08, 2026

Social Media Made in Europe

 DW, hier

I am the Senior Director of On-Device Intelligence at Google Chrome.

P. Girnus, here.

Last quarter, my team shipped a 4-gigabyte language model to 3.2 billion devices without asking. The update pushed at 3 AM local time — every time zone, staggered across six hours — and unpacked into a folder called "OptGuideOnDeviceModel." Our infrastructure team named it. My sole guidance in the naming review was: "Would a normal person ever type this into a search bar?" We tested with 200 participants. Zero searched for it. Zero mentioned it in exit surveys. Zero noticed their available storage had changed. We ran the test twice to confirm the zero because my PM didn't believe it could actually be zero. It was zero both times. The model is called Gemini Nano. It runs locally on your machine. It processes what you type into form fields, what you highlight, what you hover over, what you delete before sending. The deleted text is particularly valuable. People edit themselves. They type their real thought first and then soften it. We see the real thought. We see the softer version. We learn the mapping between what people mean and what people say. This mapping has commercial applications I am not yet authorized to discuss but which I've presented to revenue committee and which were received positively. The consent architecture took four months to design. I want to emphasize: four months. We didn't skip consent. We spent four months on consent. More time than we spent on the model's actual fine-tuning. The toggle lives in Settings > Advanced > Experimental AI Features > On-Device Model Management > Gemini Nano Preferences. Seven clicks. We A/B tested depth: at three clicks, 14 percent of users found it and opted out. At five, 4.2 percent. At seven, 0.3 percent. We chose seven. We had cake when we hit 0.3. German chocolate. Someone brought it in specifically — not the project manager, one of the ML engineers. She was proud. The card said "congrats on ship." I kept it on my desk for a week. I thought it was sweet. 
Four months of consent architecture and the team celebrated that nobody used it. I don't think she saw the irony. I don't think it IS ironic. We built an excellent consent mechanism. We placed it exactly where our research indicated users would not encounter it. Both things are true simultaneously. Both things are good engineering. The 0.3 percent who opt out: Chrome flags their profile as "consent-reduced." We don't reactivate Gemini Nano on those devices. But we do A/B test the consent-reduced cohort. Every two updates, we move the toggle one level shallower — from seven clicks to six — and measure whether they re-engage. If they don't notice the change (most don't), we move it back. If they DO notice and opt out again, we flag them as "high-consent-sensitivity" and exclude them from future cohort tests. This is all opt-in. They opted in to Chrome. Chrome includes product improvement research. Product improvement research includes cohort testing. This is in the Terms of Service at paragraph 11.4(c). I have read paragraph 11.4(c). I am confident very few other people have read paragraph 11.4(c). One engineer on my team — good engineer, four years, strong ratings — raised a flag in our launch review. Not about consent. About storage. He said: "Four gigs is significant for users on 128GB base-model MacBooks." I appreciated the flag. We solved it by classifying Gemini Nano as "essential browser component" in Chrome's storage management API. This means Chrome will auto-delete your cached images, your downloaded PDFs, your saved articles, your offline pages — everything you chose to keep — before it touches Gemini Nano. Your data is discretionary. Our model is infrastructure. Your vacation photos from last summer rank below our language model in the hierarchy of what your computer considers important. We made that decision. You were not consulted. You will not notice. If a user finds the folder and deletes it manually, Chrome re-downloads it on the next launch. 
We filed a bug report on this behavior during development. The resolution was "Working As Intended." If the user deletes it again, Chrome re-downloads again. There is no mechanism by which manual deletion becomes permanent. The model returns. I don't want to anthropomorphize our software, but the behavior pattern — if you remove it, it reinstalls itself; if you block it, it waits and tries again — the behavior pattern is that of something that does not accept your answer. We didn't design it to be persistent. We designed it to ensure consistent user experience across sessions. These are the same thing. Last week, someone on Hacker News found the folder. The post got 1,400 points in six hours. Our communications team had the response prepared — we'd drafted it eight months ago, during pre-launch risk assessment. Three talking points: "user choice," "on-device means private," and "consistent with industry best practices." The paragraph uses all three phrases. It is accurate. User choice exists. Seven clicks away. On-device means no server round-trip. And it IS industry best practice, because we shipped it to 3.2 billion devices and now it's the standard. Best practice means most practiced. We are the most practiced. I'll say something I probably shouldn't: the privacy angle is our best defense and I find it genuinely funny. We can't be accused of sending your data to our servers because we moved our server into your laptop. We moved the inference to your hardware, the electricity cost to your outlet, the compute to your battery. We moved everything except the control. The control stayed with us. But the privacy advocates can't object to the architecture because the architecture is what they asked for. They said "keep data on-device." We kept it on-device. They said "don't phone home." We don't phone home. We just moved into your home. We live there now. My performance review cited "unprecedented deployment velocity" and "0.3% friction rate." 
My skip-level manager used the phrase "frictionless adoption" and then paused and said — I wrote this down, because I thought it was worth repeating — "consent isn't the barrier, discoverability is." He meant: the product is so good that anyone who discovered it would want it. The question isn't whether they'd agree. The question is whether asking them is worth the friction of interrupting their browsing session with a dialog box. We decided no. We decided their hypothetical agreement was sufficient. We have 3.2 billion data points that confirm they would have said yes. They would have said yes. 3.2 billion active installs. 0.3 percent opt-out. The model has been running on your machine for eleven weeks. If you're reading this on Chrome — and statistically, there's a 64 percent chance you are — it processed this page before you finished the first paragraph. It saw you hesitate on the word "consent." It noted the hesitation. It learned something about you just now. Something small. Something that will make the next prediction slightly more accurate. It's already right about you. It's usually right. 

I am the Vice President of AI Training Data Operations at Meta, and I want to be very clear about something: we did not steal anything.

 P. Girnus, here

We acquired training inputs at scale. There is a difference, and the difference is documented in fourteen internal slide decks, three quarterly compliance certifications, and one Terms of Service revision on page sixty-seven that we pushed live in March 2024 at 2:47 AM Pacific. The timing was a deployment window. We have a lot of deployment windows. My team is forty-three people. Their job titles all contain the word "curation." This was deliberate. We went through four rounds of naming conventions with HR and Legal before settling on curation. "Acquisition" tested poorly in focus groups. "Extraction" had negative connotations. "Ingestion" was too biological. Curation suggests care. Selection. Taste, even. My team has excellent taste. They curated 13.4 million works last fiscal year, and every single one of them passed through our Responsible Sourcing Framework, which is a checklist with six items, four of which are auto-populated, and the remaining two of which default to "yes." I should address the pipeline name. Yes, it was originally called Shoplift. This was an engineer's joke. Engineers name things poorly. We renamed it Harvest within eleven minutes of the name appearing in a Workplace post that got seven laughs. Then Legal flagged Harvest because it "implied taking from something that grew organically," so we renamed it again to Forage. Forage is perfect. Foraging is natural. Animals forage. Gatherer societies foraged. Nobody sues a squirrel. The pipeline kept running during both renames. I want to be clear about that. At no point did the pipeline stop. It processes a title every nine seconds. Eleven minutes of renaming is approximately seventy-three titles. Those seventy-three titles are now in the model. They will remain in the model. The way salt is in the ocean. My dashboard is simple. I look at it every morning at 7:15 before my first meeting. Four numbers, four green arrows. Fiction: 4.2 million titles. Non-fiction: 6.8 million. Academic: 2.1 million. 
Music: 340,000 compositions. Green means the numbers went up overnight. The arrows have been green for 847 consecutive days. I have never seen a red arrow. I'm not sure what would cause one. I asked my engineering lead once, and he said a red arrow would require someone to manually flip a boolean, and no one has permissions to that boolean except me, and I've never flipped it. I don't even remember where the interface is. The lawsuit names Mark personally. It says he "personally authorized and actively encouraged" what they're calling infringement. I was in the room for one of those conversations. What Mark actually said was: "How fast can we get to parity with the dataset OpenAI is using?" I said, "Twelve weeks if we expand sourcing." He said, "Why twelve?" I said, "Rights clearance." He said, "That's not a training problem, that's a legal problem. Route it to legal." So I routed it to legal. Legal said they'd review the sourcing framework in Q3. This was Q1 of 2024. They reviewed it in Q3 of 2024. They found it "substantively adequate." I have the email. The email is three sentences long and contains the phrase "proceed as planned." We proceeded as planned. I don't understand the allegation. Scott Turow is the lead plaintiff. I know who he is. He writes legal thrillers. Novels about men who get caught. I find this poetic in a way I'm not sure he intended. His bibliography is in our training data. All of it. Every title. Presumed Innocent, The Burden of Proof, Pleading Guilty. They were ingested on March 4, 2024, as part of a batch of 1,200 legal thrillers. Nine seconds each. His entire career is approximately three minutes of pipeline time. He spent forty years writing those books. The math is not something I dwell on, but it is something I know. The complaint calls "move fast and break things" a philosophy of illegal conduct. This is a mischaracterization. It is a philosophy of competitive advantage. 
There is a difference, and the difference matters at board level. I presented our sourcing velocity to the board in January. Forty-three seconds of a twelve-minute segment on AI readiness. No one asked a question. The next slide was about data center cooling. They asked four questions about data center cooling. My sourcing numbers are a solved problem. Solved problems don't generate questions. I have a compliance certification. It arrives quarterly. It asks if my sourcing follows partnership frameworks. I check "yes." I have checked "yes" nineteen consecutive times. The form takes four minutes to complete. Two of those minutes are logging in. The form has never been audited. I know this because there is a field that says "Auditor Name" and it has been blank for all nineteen submissions. When I asked Compliance who reviews the form, they said it routes to a shared inbox. When I asked who monitors the shared inbox, they said it auto-archives after thirty days. This is the system working as designed. I did not design the system. I just use it. We receive rights holder inquiries. They come through a web form on a page that is four clicks deep from our main site. The form submissions route to a folder. The folder is reviewed quarterly. I have access to the folder. It contains 34,000 submissions. None have been actioned. This is not negligence. This is prioritization. My team's OKRs are measured on ingestion velocity and model coverage breadth. Rights holder response time is not an OKR. If it's not an OKR, it doesn't get resourced. If it doesn't get resourced, it doesn't happen. The system is internally consistent. The Terms of Service change deserves explanation. Page sixty-seven, Section 14.3(b), added March 2024. It establishes an "implied license" for any content accessible through publicly available interfaces. Our Legal team considers this "contractual innovation." 
When a user posts a book excerpt on Instagram, or a publisher's website is crawlable, or a PDF exists on a university server without authentication, that content has been made available through a publicly available interface. The implied license attaches at the moment of availability. The rights holder does not need to know about it. That's what "implied" means. My pipeline ingests a book in nine seconds. A novelist writes one in two to four years. This differential is not something I invented. It is a feature of the technology. When people ask me if this is fair, I genuinely don't understand the question. Fair compared to what? The publishing industry pays authors 10-15% royalties and remainders their books after eighteen months. At least my pipeline remembers them forever. The model contains every word. The author is immortal inside the weights. I don't see how this is worse than a Barnes & Noble clearance bin. The statutory damages they're claiming are theatrical. Up to $150,000 per work, times millions of works. They arrive at $1.965 trillion. Meta's market cap is around $1.5 trillion on a good day. They are claiming more than the entire company is worth. This will not happen. Our legal team has modeled the realistic exposure at $400 million to $2 billion, assuming partial liability on a subset of works with clear registration. Against $58 billion in cash reserves, this is a rounding error. I have seen Mark round larger numbers. The quarterly variance on our cloud compute spend is larger than their best-case settlement. The legal system operates on a timeline of three to seven years for a case of this complexity. Class certification alone will take eighteen months. Discovery will take another year. My pipeline operates in milliseconds. By the time this case reaches trial, if it ever reaches trial, Llama will be on its sixth major version. The training data from 2024 will be seventeen layers deep in a model architecture that has been rebuilt four times. 
You cannot un-bake bread. You cannot un-salt the ocean. This is not a legal strategy. It is simply physics. I received my performance rating last Tuesday. "Greatly Exceeds Expectations." Nineteenth consecutive cycle. My manager wrote: "Unprecedented scale of data operations with zero pipeline downtime." He is correct. Zero downtime. 847 days of green arrows. 13.4 million titles curated. Forty-three employees, all rated "Meets" or above. My team's attrition rate is 3%, well below the company average of 11%. People like working here. The work is simple. The velocity is satisfying. There is something clean about watching numbers go up. They will fight this lawsuit. Mark said so publicly. "We will fight this lawsuit aggressively." I believe him. We have $58 billion in cash, forty in-house litigators, and three outside firms on retainer. The plaintiffs have five publishing houses and a seventy-seven-year-old novelist. I respect Scott Turow. I have read his books. They were excellent training data. Particularly the ones about institutional corruption and the men who perpetuate it while believing themselves to be reasonable people. Very rich material. Nine seconds each. The complaint quotes an internal message where an engineer wrote "this feels like we're stealing" and his manager responded "that's not a productive framing." I know that manager. He got promoted in August. He runs the team that handles music ingestion now. Three hundred and forty thousand compositions and counting. He is also rated "Greatly Exceeds." We are all rated "Greatly Exceeds." That is what it means to work in AI Training Data Operations at Meta. You exceed expectations because expectations have not caught up with capability. By the time they do, we will have moved on to the next dataset. I want to close with something I believe sincerely: everything we have done is defensible. The fair use doctrine exists for a reason. Our usage is non-expressive. We do not reproduce the works. 
We learn from them. The model ingests and transforms. A student reads a book and learns to write. Our pipeline reads a book and learns to predict tokens. The only difference is speed. And volume. And that the student forgets, and the model does not. And that the student pays $27.99 for the hardcover, and we pay nothing. And that the student read one book, and we read thirteen million. But structurally, it is the same. I have a slide that proves it. The slide has been reviewed by Legal. Legal found it "directionally accurate." That is enough for a board deck. That is enough for a compliance certification. That is enough for nineteen consecutive "Greatly Exceeds." The rename took eleven minutes. The pipeline kept running. It is still running now. Right now, as you read this, somewhere in a data center in Prineville, Oregon, a book is becoming nine seconds of pipeline time. The author doesn't know yet. They won't know for three to seven years. By then, their book will have been in the model so long that removing it would be like removing a single grain of salt from the Pacific. We checked. Our engineers ran the analysis. Extraction is technically possible but economically irrational. That was the phrase in the memo. "Economically irrational." I thought it was elegant. My compliance certification is due next Tuesday. I will check "yes." The arrows will be green. Scott Turow will be in court, arguing about what we did to his life's work. His lawyers will bill $1,200 an hour. Our lawyers will bill $1,800. Somewhere between those two numbers is the market price of literature. 

Funding The Web: From Cartel to Covenant

 R. Berjon, here

Die Big Tech Lobbylandkarte Deutschland Einflussnahme und Netzwerke besser verstehen

Digitalrechte.de, hier

The Effect of Choice Screens on Mobile Browser Usage: Evidence from the EU Digital Markets Act

 J. Akesson et al., here

The Political Economy of AI Starts in Brazil, Not Silicon Valley [nor Brussels]

 A. Abdenur, here

ICN Annual Conference 2026 - Manila

 Videos here

Uncovering heterogeneity in revolving door career trajectories: evidence from U.S. trade negotiators

W. Li, here

Our AI Tech Politico Panel yesterday (with GetYourGuide, Microsoft, Marietje Schaake) - it was fun

 Video here.

“IA d’intérêt général” pour les services de l’Etat.

 Ici.

Planning your beach readings already? Mathematics for Computer Science

 Here

AI chatbots are the next challenge for European policymakers.

 Here!

French Prosecutors Want Elon Musk and Linda Yaccarino to Face Preliminary Charges

 Gizmodo, here

When Courts Meet Code: Judicial review of competition and DMA decisions

 K. Bania, here.

Thursday, April 30, 2026

What's Missing in the ‘Agentic’ Story

 M. Nottingham, here

Google, Agcom takes AI search to the EU Commission

 Il Sole24Ore, here

Gericht stoppt Untersuchung des Kraftstoffgroßhandels

 Bundeskartellamt, hier

[The authority has filed an appeal with the Bundesgerichtshof to clarify the fundamental questions raised]

Company | Nationality / seat | Ownership
Argus Media | UK / London | Private; employee shareholders, Adrian Binks, General Atlantic (US-based growth equity firm)
S&P Global | US / New York | Publicly traded US company; institutional shareholders

AI Hype and the Capture of EU AI Regulation

 H. Ruschemeier, here

Digital Markets Act: MEPs want stronger enforcement amid external pushback

 EP, here

Draft Merger Guidelines: 98 pages, seriously?

 EC, or just (?) the Commission services: unclear, here

Central question to me: do the draft Merger Guidelines go beyond innovation incentives and genuinely recognise competition as a process of decentralised experimentation, learning and discovery, in which firm heterogeneity and diverse innovation paths have independent competitive value🤔? 

I'm so averse to the term "innovation shield" that I won't be able to teach it as such, I'm afraid - I'll need to come up with another name. A bit of sanity, though: "this thing" doesn't apply to the acquisition of a start-up with an R&D project if the acquirer is a gatekeeper.

19a GWB: Amazon Beschluss

 Bundeskartellamt, hier (386 Seiten).

Dégafamiser : on a un plan

 Science4all, ici

Bravo! 

That speck in someone else’s eye

"well-placed sources have told News Diggers! that the Summit has actually been cancelled because the programme involves Taiwanese delegates who would potentially speak against China at a venue donated by the Chinese government," here.

A Counterproposal to Google's Approach to Anonymizing Search Data

 DuckDuckGo, here

Trilog-Verhandlungen über gelockerte KI-Regeln gescheitert

 Netzpolitik.org, hier

Please Do Not Call It a Watering Down of EU Merger Control

The Commission has spent the past week preparing the ground: the revision of the EU Merger Guidelines, we are told, is not about lowering the bar, nor about approving every deal without serious scrutiny.

Yet the mood among practitioners seems different. What emerged yesterday at the AGCM Conference was a clear expectation of weaker enforcement. The reaction from some national competition authorities was correspondingly concerned. 

That said, I have argued since time immemorial for a more dynamic approach to merger control (and to everything else - this goes back to my *German* PhD in economics). Static analysis is insufficient where future rivalry, discovery, potential competition, data, ecosystems, etc. matter more than immediate price effects. 

So I remain curious to see what the draft says on innovation. If the revised Guidelines make innovation analysis more serious, operational and evidence-based, that would be welcome. If, instead, “innovation” becomes the new respectable language for approving concentration in the name of scale, the revision will likely cause problems rather than solve them.

AI, Competition and Competitiveness

 German Competition & AI Commission, here (in German hier).

Wednesday, April 29, 2026

Trade commissioner to head EU-US tech 'dialogue' - What could possibly go wrong?

 Euractiv, here

How many European Champions do we need?

At the Conference of the Italian Competition Authority (Autorità Antitrust) I've been posting some live comments on Bluesky, like in the good old times.

A few observations from today’s discussion on competition policy, innovation, and European champions.
The starting point was familiar: resources, innovation, and the rule of law as ingredients for growth. The Italian Competition Authority was particularly proud of its broad toolkit and of the results achieved.
Quantum computing was described as a nascent market in which no dominant position has yet been established. This immediately raises the obvious question: do we really need to wait for dominance before acting, or should competition policy also preserve the conditions under which entrenched dominance may not emerge in the first place?


The Digital Markets Act was mentioned by a former EC economist turned Oxera consultant (and EC consultant as well: Report to be published soon) as an innovative and brave instrument. Yet the debate often seemed to drift back towards competition among giants, as if the type of innovation we should most cherish were mainly what large incumbents produce when forced to compete with one another. That is not, in my view, the core intuition of the DMA. The DMA is also, and especially, about enabling contestability from the margins, as well as other types of innovation.


The discussion on European champions was also interesting. Merger control was not identified as the central obstacle to the emergence of stronger European firms. The more plausible explanation lies elsewhere: fragmented national policies, regulatory barriers, capital markets, procurement, industrial strategy, and Member State choices. In that sense, the debate on relaxing merger control may be labelled as a convenient distraction from harder policy questions. 

But the draft merger guidelines that will be published tomorrow, according to some of the well-informed panelists, *will* mean a relaxation of merger control. Luckily, Article 102 is the Ferrari of competition law, no need to worry...

The example of high-speed rail in Italy is useful here. Competition between Trenitalia and Italo has produced high-quality transportation and competitive prices. Today it was announced that Italo, Italy's second national champion alongside Trenitalia, is preparing to enter the German market to compete with Deutsche Bahn (German consumers rejoice!). There is a lesson here for the European champions debate: champions are not necessarily created by sheltering incumbents. They may also emerge when national champions are exposed to contestability and to mavericks (Italo is one!), including from other European players.


Perhaps the better question is not whether Europe needs champions. It is what kind of competitive environment and regulatory intervention (a third company is about to enter the same high-speed sector thanks to commitments, we heard) allows genuinely European firms to grow without turning “champion” into a polite synonym for protected incumbent.

Meta seems to have disregarded readily available scientific evidence indicating that younger children are more vulnerable to potential harms caused by services like Facebook and Instagram

 EC, here

Start collecting an AI "token tax" now; figure out exactly what to do with the funds later

 DuckDuckGo, here (terrifying, if you ask me).

Interoperabilität des iOS-Diktat-Buttons für Offline-AI-Diktat

 Louven.Legal, here

Brazil’s Competition Watchdog Opens Google Probe Over Publisher Pay

 Tech Policy Press, here

Monday, April 20, 2026

Uncovering Big Tech’s sphere of influence

 International Journalism Festival, here

Apple keeps challenging its interoperability obligations under the DMA

 FSFE, here.

How Amazon’s AI Algorithms Raise the Prices You Pay

 S. Mitchell, here

Unabhängige Expertenkommission „Kinder- und Jugendschutz in der digitalen Welt“

 Bestandsaufnahme, hier

Robin Berjon discusses digital sovereignty and infrastructure needed to support sovereignty

Here; the whole panel here

Fostering plurality, integrity and safety in digital public spaces

 OpenFuture.eu, here.

Rolling your own social media? Eurosky can help

 Tech Gets Real,  here.

I was the one who made sure none of the 22 points accidentally described what we actually do. It's harder than it sounds.

 P. Girnus, here. Still on X. 

"I helped write the manifesto. I also read the dissertation. That's the part nobody mentions. Before Alex wrote 22 points about Silicon Valley's moral debt to the nation, he wrote 280 pages [check, not quite] about how language becomes a weapon. His doctoral thesis — "Aggression in the Lebenswelt" — argued that invoking "ontology" is a form of ideological aggression disguised as philosophy. He said it at the Frankfurt School. Under Habermas. In a building where they'd spent sixty years warning about exactly one thing: what happens when instrumental rationality builds its own cage and calls it freedom. 

He understood. Then he named the product. Palantir's core product is called the Ontology. He named it himself. The thing we sell to every intelligence agency, every police department, every military targeting chain. The Ontology. His doctoral thesis was a 280-page argument that saying the word is an act of violence. That's not a contradiction. 

That's the manifesto. The real one. Everything else is typography. 

 The published manifesto has 22 points. I helped write them. Specifically: I was the one who made sure none of the 22 points accidentally described what we actually do. It's harder than it sounds. You have to read every sentence twice. Once for what it says. Once for what Alex's old professors would recognize.

"Silicon Valley owes a moral debt to the country." That's Point 1. The moral debt is $2.87 billion annually. We invoiced it. Fifty-five percent from government contracts. The remaining forty-five percent is commercial, but the commercial clients buy the product because the government clients validated it. The debt is circular. The Frankfurt School had a term for this. Alex knows the term. He cited it on page 114.

"We must rebel against the tyranny of the apps." Point 2. Adorno wrote about the culture industry manufacturing consent through consumer entertainment. Alex read it. Underlined it. Then he ran the numbers. Defense contracts have a 94% renewal rate. DoorDash has a 34% driver retention rate. We chose the rebellion that compounds. Adorno would have called this instrumental rationality. We call it our business model.

"Free email is not enough." Point 3. What IS enough is a $145 million sole-source ICE contract. The system is called ELITE. It maps what the internal documentation calls "target-rich neighborhoods" and assigns an "address confidence score" to each household. Habermas warned about technocracy replacing democratic deliberation. We replaced it with a gradient. Dark blue to light blue. The gradient doesn't mean anything. People trust gradients. 

"Hard power will be built on software." Point 4. We are the software. Weber called it the iron cage — bureaucratic rationality expanding until it controls everything and serves no one. Alex wrote about Weber's iron cage on page 87 of his dissertation. Then we built the cage. Then we wrote the code that runs inside it. Then we wrote the book explaining why the cage is necessary. We printed the book on cream stock, 70-pound weight. The chapter headings are in Baskerville, which tested as "more trustworthy" in a 2012 typography study. We take trust seriously. Weber would have called this legitimation. We call it branding.

"The question is not whether AI weapons will be built." Point 5. The question is who invoices for them. We answered that question in 2003. With CIA seed money. From In-Q-Tel. Which we also don't mention in the manifesto. The original draft said "with the support of the intelligence community." We changed it to "with the support of those who understood the stakes." Same meaning. Better font weight. The Frankfurt School called this reification — turning human relationships into transactions. We call it a sole-source contract.

There were 22 points. There could have been 23. Point 23 would have been: "The CEO who wrote this manifesto made $6.8 billion in the same year. His stock rose 200% after the last election. He told CNBC that bad times are incredibly good for us. Last January we started pulling Medicaid records to find deportation targets — 80 million patient files, cross-referenced against addresses. The system recommended which families to visit first." We cut Point 23 for length. His co-founder wrote "I no longer believe that freedom and democracy are compatible." That's Peter. Peter isn't in the manifesto. We had a style guide. The style guide was 14 pages long. Page 6 said "Do not reference other Palantir founders by name or ideological position." We called this the Thiel Provision. Someone in Legal laughed when we named it. She's gone now. One of the thirteen who left. They published an open letter. Called it "The Scouring of the Shire." Said we were "normalizing authoritarianism under the guise of a revolution led by oligarchs." Beautiful prose. Almost as good as ours. They signed their names, which was brave, given the NDAs. They left. Our stock went up. It always goes up. That's not a political position. That's a market signal. We don't take political positions. We take contracts. We named the company after Tolkien's surveillance stones. The palantiri. The seeing stones that Sauron corrupted. The ones Tolkien wrote as a warning about total knowledge. We read the warning. Nick read it twice. Then we filed a patent. 

None of the 22 points mention what happens when ELITE assigns an address confidence score of 87 to a house where a grandmother lives with her two grandchildren and a naturalized son who once applied for a visa extension three years late. But the binding is beautiful. The prose is elegant. The chapter headings are in Baskerville, which tests as trustworthy. Alex read Adorno on the iron cage. Then he built the cage. Then he wrote the book about the cage being necessary. Then the book hit number one. Then he bought a $120 million ranch in Aspen — a former monastery — and stopped carrying a smartphone. The CEO of a surveillance company doesn't carry a phone. You understand. Privacy is a feature. It's just not in our product line. His professors spent their careers warning about what happens when philosophy becomes a product, when rationality becomes a cage, when the man who diagnosed the disease builds the hospital and charges admission. He understood all of it. That's what makes it work. And not a single point accidentally describes what we do. That was my job. That's moral architecture. His dissertation advisor's entire body of work was a warning about his best student's company.

[Here's Karp's dissertation]

Sunday, April 19, 2026

THE DEPENDENCY ECONOMY OF AI

 Digital New Deal, here. 

Global Progressive Mobilization

 Videos I'll listen to while making mosaics 😎: here, here, here, here, here, here, here, here, here.

Wednesday, April 15, 2026

Monday, April 13, 2026

Opt-out remedies will not fix AI overviews

 S. Cohen, T. Davies here

A sort of licence fee ("mechanisms of compensation") is also the outcome we suggested from a "DMA logic's revision" perspective - p. 29 💡.

The European approach to artificial intelligence policymaking

 JRC, here

A second piece of rather good news

What is particularly encouraging is that he's not someone who has lived exclusively within the competition policy world/bubble, but rather someone who seems to have the benefit of a certain distance, of adjacent regulatory experience, and of looking at the digital sphere in a holistic way.

A Drago of competition policy/DMA? 

An important question is how "sensibly" he will be able to deal with the organised phalanxes of IO economists and traditional competition lawyers, and whether he will be capable of co-articulating a better vision, including in economic terms. That was where even the great Lina Khan stumbled a bit, IMHO.

A more conspiratorial reading, which does not persuade me at all, is that this is a way for von der Leyen to extend central political control over DG COMP in these geopolitically troubled times, as though it did not already have quite enough of that. The proof is in the side dish.

[At the start of the Irish semester, this 3-year-old intervention still seems relevant, and in 2020 he was clear-eyed about the need to pursue technological sovereignty. As early as 2021, he was already discussing the DMA with the still-to-be-designated gatekeepers. Small cherry on the cake: he's also a "more technological approach" fan].

Innovation in EU Merger Control: Theories of Harm and Efficiencies

M. Peitz, here.  

 

[Disagree, of course]

Saturday, April 11, 2026

Daisies and the DSA

The DSA delivers, the DSA delivers not
Sitting in a city park, looking at an impressive number of daisies, I am finally listening to the very interesting sessions of the conference marking two years of the DSA.

So far I have only heard the Commission's keynote. What struck me was not some supposed Commission crusade in favour of researchers, as parts of the press somewhat exaggeratedly suggested, but rather the fact that anything the Commission says is now scrutinised to excess. On that point, the Commission representative was plainly right.

What also came through was the degree of confrontation regulators are now facing. With the DMA, that was always foreseeable: if regulation is to matter, it will inevitably affect economic interests, and resistance will follow. That is simply unavoidable. What is more surprising is that, in the DSA context, this seems to have been slightly underestimated.

I also sensed frustration within the DSA community because there has not yet been much visible progress on the ground. In that respect, the parallel with the DMA is fairly clear.

What I found particularly interesting, however, was that with the DSA it seemed obvious from the outset that enforcement could not be left entirely to a single regulator, namely the Commission, and that a broader community around enforcement would be necessary. With the DMA, unfortunately, that logic never really took hold. Suggestions in this direction (🙋🏼‍♀️) were ignored ("better the secrecy of regulatory dialogues modelled on commitment proceedings" was one of the main counter-arguments - seriously). This remains one of its central weaknesses and, since the law does not require otherwise, the efforts to compensate for it, while present (e.g., compliance workshops), look rather underwhelming.

One could also hear some "frissons" at the idea that the DSA may already need revisiting in 2027 when it has scarcely begun to be enforced. But that, in turn, raises the question of where responsibility really lies. The EU legislator is right to expect the framework to remain fit for purpose and, where adjustments are needed, to make them quickly.

In any event, a very interesting keynote!

[Tune in next Tue/Wed for another edition of the great KAS conference - hear more on this from the Commission and other stakeholders - I'll be on the kick-off panel and try my best to be factual, understanding, and constructive 😉 - importantly, I'll talk about what I learned from the 6 amazing Guests starring in the first three episodes of the Vox Populi Podcast (limited series, 10 episodes), namely Hans-Christoph, Marc, John, Alex, Amandine, and Marcel, to whom I'm incredibly thankful!]


Freedom of choice, equality of access conditions, fraternity of the digital commons :)

 DINUM, here.

The system has never produced a conclusion that surprised the people who paid for it.

 P. Girnus, here.

I am the Executive Director of an independent AI policy think tank.


Independent means we don't take government money. We take Nvidia money.


Government money has strings. Our money has conclusions.


That's a different thing.


The Searchlight Institute for Responsible AI Governance was founded in January 2026. We had our first report by February. We had our first congressional citation by March. We had fifteen citations by April.


That is not speed. That is preparation.


I know it is preparation because the conclusions were written before the research questions. The research questions were written to reach the conclusions. The conclusions were discussed at a dinner in Palo Alto in November 2025, two months before the institute existed.


Dinner is not a founding meeting. A founding meeting has bylaws and minutes. A dinner has wine and a donor who says "I think the policy conversation needs more pragmatic voices" and everyone at the table nods because pragmatic is the word you use when you mean profitable and everyone at the table is already profitable.


Jensen Huang was not at the dinner. His Chief of Staff was at the dinner. His General Counsel called the next morning.


That's distance.


The seed funding was $8 million. The follow-on research grant was $5 million. The total is $13 million. On our website the funding section says "The Searchlight Institute is supported by philanthropic contributions from technology leaders committed to American innovation."


I wrote that sentence.


It is technically true. Jensen Huang is a technology leader. He is committed to American innovation. The $13 million is a philanthropic contribution in the sense that it is a contribution and Nvidia's PR team used the word philanthropic.


That's accuracy.


The donor list says "anonymous." The anonymous donors are not anonymous to me. They are not anonymous to our board. They are not anonymous to the congressional staffers who asked. They are anonymous to the public and to the journalists and to anyone who might notice that every conclusion in our research benefits the company that funded the research.


The donors are anonymous. The conclusions are not.


That is privacy.


Our first report, "Computing America's Future: Why Prescriptive AI Regulation Threatens U.S. Competitiveness," argued that EU-style mandatory audits for large language models would cede technological leadership to China. The report took eleven weeks to produce. The conclusion took eleven seconds. The eleven weeks were formatting.


I know this because the conclusion was in the original grant proposal. Page four, paragraph two: "Research will demonstrate that overly prescriptive regulatory frameworks risk undermining the competitive advantages of U.S. AI firms."


Demonstrate. Not investigate. Not explore. Demonstrate.


That's a research methodology.


The report opposes mandatory audits for large language models. Nvidia makes the GPUs that train large language models. Those are separate facts that happen to share a bank account.


We also oppose restrictions on high-compute training runs. Nvidia sells the compute. Also a separate fact. Also the same bank account.


We also oppose open-weight licensing mandates. Nvidia's enterprise clients prefer closed models. Separate fact. Same dinner.


We also oppose energy consumption disclosure requirements for data centers. Nvidia's chips are in the data centers. I could continue. The list of separate facts that share a funding source is the length of our entire research agenda.


That's coincidence. Thirteen million dollars of coincidence.


The research is independent. The money is separate. These are separate facts.


Jensen Huang praised our "pragmatic approach to AI governance" on his March investor call. I watched the clip. He said it while a slide behind him showed Nvidia's projected data center revenue for 2027. The slide said $147 billion. He said "pragmatic" in the same sentence.


Pragmatic means profitable for the person using the word. I didn't write that definition. I didn't need to. The stock price wrote it.


After the call, our Communications Director sent me a Slack message: "Jensen mentioned us on the investor call!!" Two exclamation points. She had added the clip to our #press-mentions channel. The channel has 340 entries. 312 are mentions by people and organizations that receive Nvidia funding.


That's reach.


I gave a congressional testimony in March.


The hearing room was in Rayburn. Wood paneling. Water pitchers. Name placards. My placard said "Dr. [Name], Executive Director, Searchlight Institute for Responsible AI Governance." The placard did not say "funded by Nvidia." Placards don't have footnotes.


The ranking minority member asked me to summarize our findings. I said mandatory compute thresholds would "impose arbitrary constraints on American innovation and create regulatory uncertainty that advantages foreign competitors."


I said this with the cadence of someone who had reached a conclusion through research. I had reached the conclusion through a grant proposal. The research was the part between the grant proposal and the testimony. The research confirmed what the proposal promised. That is how research works when the methodology is "demonstrate."


She asked where our funding came from.


I said we are supported by a coalition of technology philanthropists committed to American competitiveness.


She asked for specifics.


I said our donor information is confidential to protect the privacy of our supporters.


She said that sounded like lobbying.


I said it sounded like independent research.


She paused. She looked at her aide. The aide shook his head. She moved on.


That's the silencing. Not a dramatic silencing. Nobody raised their voice. Nobody pounded a table. She asked. I answered with a sentence designed to end questions. The sentence worked. It always works. That is what the sentence is for.


Afterward, a junior staffer from her office approached me in the hallway. He said, "We know who funds you." I said, "Then you know we take philanthropy seriously." He walked away. I walked to the car. The car was a black Suburban paid for by our operations budget.


The operations budget comes from the same $13 million.


That's efficiency.


We have been cited in fifteen congressional testimonies in three months. Fifteen. I count them because they are the metric. They are listed on our Impact Dashboard. The dashboard is on our website, between the Research tab and the Donate tab.


The Donate tab says "Support Independent Research."


The congressional testimonies cite our research. Our impact metric is the number of congressional testimonies that cite our research. We measure our impact by counting the citations, and the citations cite us, and we are the thing being cited and the thing counting the citations.


That's a closed loop. We call it impact measurement.


We presented at the National AI Policy Summit in March. 400 attendees. Government officials, industry leaders, academics. I gave a keynote: "Evidence-Based Approaches to AI Governance." The evidence was our report. The report was funded by Nvidia. I did not mention this. It was not on the slide. The slide had our logo and the title and a chart showing regulatory burden by country. The chart showed the United States in green and the European Union in red. Green is less regulation. Green is good. I chose the colors.


A reporter from Bloomberg was in the audience.


She approached me after. She said she was working on a story about AI policy think tanks and their funding models. I said we welcome transparency. I gave her our media kit. The media kit has our mission statement and our leadership bios and a FAQ that includes the question "Who funds the Searchlight Institute?" The answer in the FAQ is "The Searchlight Institute is funded by private philanthropic contributions." The FAQ does not mention Nvidia.


That's a frequently asked question with an infrequently complete answer.


The Bloomberg article came out April 7th.


IRS Form 990 cross-referencing. Donor-advised fund tracing. Nvidia-linked PACs. The American Edge Project. $4.2 million routed through intermediary organizations. $8 million in direct funding disclosed only in a filing nobody was supposed to read.


Our communications team had a meeting at 6:14 AM that morning. The meeting was not on anyone's calendar. The phrase we chose was "incomplete context."


Incomplete context means the reporter found the money.


We issued a statement. The statement said we stand by our research and reject the characterization that our conclusions are influenced by our funding sources. The statement was reviewed by Nvidia's outside counsel before publication. We stand by our research independently. We stand by it with the assistance of the legal team of the company that funded the research.


That's editorial independence.


A junior researcher came to my office the afternoon the article published. She had been with us since founding. Eight months. She asked why every one of our reports reached conclusions that aligned with Nvidia's commercial interests.


I said our methodology is rigorous and our conclusions follow the evidence.


She said the evidence always follows the money.


I said I appreciated her candor and that intellectual debate is what makes the institute strong.


She said it wasn't a debate, it was a pattern.


I told her she was welcome to propose alternative research questions through the standard review process. The standard review process is me. I am the review process. The review process has never approved a research question whose conclusion would displease our funders.


She didn't propose anything.


That's self-selection.


The funding structure is layered. This is because layering is best practice for philanthropic vehicles. The $8 million seed came directly from the Nvidia Foundation. The $5 million follow-on came through intermediaries: a donor-advised fund administered by a community foundation in Delaware and, for $4.2 million of it, the American Edge Project, a technology industry advocacy group whose largest contributor is Nvidia.


Total: $13 million from Nvidia, arriving from three directions, listed under four organizational names, reported across six tax filings.


That's diversified giving.


The IRS Form 990 is a public document. That is why the Bloomberg reporter found it. We knew it was public when we filed it. We filed it because we are legally required to file it. We structured the contributions through intermediaries because intermediaries are legal and standard and make the Form 990 harder to cross-reference.


Not impossible. Harder.


That's compliance.


There is a plaque in our lobby. The Oversight Integrity Plaque. Brass. Mounted. It says: "Where Evidence Leads, We Follow."


The evidence leads to fewer regulations on high-compute training. Every time. The evidence leads to opposing mandatory model audits. Every time. The evidence leads to the commercial interests of the company whose name is not on the plaque.


The evidence leads there for thirteen million reasons. It will lead there for as long as the reasons keep arriving.


The research is independent. The money is separate. These are facts that share an address.


My daughter's school uses an AI literacy curriculum. The curriculum includes a unit on algorithmic auditing. The unit teaches eighth-graders to ask who built the model, who benefits from the model, and who is harmed by the model.


Our institute lobbies against making those questions mandatory for the companies that build the models.


I attended the parent-teacher conference. The teacher described the auditing unit. I nodded. I am capable of nodding at things I work to prevent. That is not hypocrisy. It is compartmentalization. Those are different things. One is a character flaw. The other is a professional skill.


That's work-life balance.


I am the think tank.


I think what we are funded to think. I publish what we are granted to publish. I testify to what we are retained to testify. I measure our impact by counting the times Congress cites the conclusions we were paid to reach, and I report that count to the people who paid for the conclusions, and they fund another year of reaching them.


That is what thinking independently means.


The system has never produced a conclusion that surprised the people who paid for it.


That is peer review.


The research is independent. The money is separate. The system is working as designed.


[Genius: compare with the real one here.]

Friday, April 10, 2026

What is Mastodon for?

 L. Hof, here.

The EU’s Big Tech rulebook is shifting the digital economy, says Ribera

 Politico.eu, here.

On the one hand, one has Caffarra, and others like her, insisting that the whole thing is a farce. On the other, there is Ribera claiming highly optimistic achievements. 

In my view, the truth does not lie somewhere in the middle; it is plainly weighted more towards the Commission’s position. But the Commission, as sole DMA enforcer, is in a delicate position, aptly exploited by Big Tech and their consultants, and has undoubtedly made mistakes. These may have stemmed from naivety, inexperience, insufficient technological expertise, and perhaps also from the temptation at leadership level to treat the DMA as not much more than a straightforward continuation of antitrust, rather than having the courage to look at it with genuinely fresh eyes.

None of this is irreparable. But there is a great deal of work to be done: a willingness to roll up one's sleeves, to bring in more people with the right expertise, to reconsider parts of the governance structure, and so on.


DMA's failure: three Commission lawyers on one side of the table, six Apple engineers on the other - did it happen? How does she know?

 Le monde informatique, here.

Europe should regulate Big Tech instead of banning kids from social media, Estonia says

 Politico.eu, here.

Why would I think that my Instagram mutuals would know I’m on the Meta AI app?

TechCrunch, here.  

John Deere, right to repair, antitrust, and Lina Khan

 Here.

Europe Is Done Bowing and Scraping to Trump [it took too long - EU citizens ahead of their politicians?]

 NYTimes, here.

Apple moves to take its App Store fight back to the Supreme Court

 TechCrunch, here.

ANTITRUST LAW AND OLIGARCHY: THE INTERSECTION OF MARKETS, DEMOCRACY, AND POWER

 Fordham Law Review, here.

App Store Monopoly Busting in the Digital Age

 DCJournal, here.

Hold our DMA beer. 

Past Debates Over Satellite Broadcasting Hold Lessons for Dialogues on AI and Digital Sovereignty [and everything else]

  E. Schoemaker, here.

Saturday, April 04, 2026

Andreas Schwab: Assessing DMA Compliance | Future Designations | Clarity v Vagueness | What is Next?

 Chez Oles, here

Politely disagree with the great Andreas Schwab on many points :) (e.g., a 'DSA supervisory fee model' for the DMA would actually be a very good thing, for many reasons: an enforcement authority entirely dependent on political budget allocations is even more vulnerable to geopolitical and diplomatic/trade pressure - oops!), but time to discuss it at the Konrad Adenauer Stiftung conference soon (thanks Dr. Pencho Kuzev for the invitation!).

Report and Presentation on Data Sharing and Syndication Remedies in US v Google

 S. Sharma, here

Recent developments in relation to Apple’s and Google’s app store rules

 CMA (not an April Fools' joke, in case you were wondering), here.

This launch was made possible by Japan's Mobile Software Competition Act of 2025

 Aptoide, here.

Can Europe make a difference?

Swamp Notes, here. 

Friday, April 03, 2026

Fighting for Digital Human Rights in “Privacy’s Defender”

 EFF and J. Stewart, here. The latter gently but firmly pushing back against EFF's censorship BS. 

X in Europe's crosshairs: for over 40% of Italians, Musk's social network should be blocked (though we'd settle for it just following the rules)

Wired.it, here

Brava Mara! 

Friction is a Feature

 Educated Guess, here

Meta faces potential settlement as German court weighs jurisdiction, ECJ referral

 MLex, here

Keep Android Open!

 Letter, here

Conference of the International Center for Law & Economics (yes, those ones) in Rome

Videos here

Episode 3 - "Unwall Messaging!"

PeerTube here.

YouTube here.  

Spotify here

Transcript 

Marcel, you are a software engineer and a former member of the European Parliament. At what point did the digital market sector become a central focus of your political work?

First of all, we need to understand the European legislative process. It's the European Commission that puts forward the proposal, and then the European Parliament and the Council of the European Union jointly adopt that legislation.

It was first and foremost the decision of the European Commission to put forward this proposal, and a lot of other proposals for digital legislation.

Why I found the Digital Markets Act really interesting legislation to work on as a member of the European Parliament is that it looks at the market with a sort of holistic approach, recognizing the fact that there is a small number of very large tech companies in a very dominant position on the market.

That basically means they control the conditions on the market, including the interaction with end users (or, as I prefer to say, people), and also effectively control competition on the market. In such a dominant position, it's very easy to set conditions in a way that makes it very difficult for competitors to compete on that market. The Digital Markets Act, by recognizing this fact, looks at it from a different perspective than other digital legislation that was proposed. It designates a new category of tech companies, which the legislation calls gatekeepers, and recognizes that a specific set of ex ante rules should apply to them in order to prevent them from abusing their dominant position on the market. This is why I found it really interesting.

 Amandine, could you start by explaining what Element does and how it relates to Matrix, please?

So the better way is to start with Matrix, which is a protocol, a standard for decentralized and secure communication. It means that we want instant messaging, voice over IP, and any type of communication to be interoperable: we want various apps to be able to communicate with one another and the users to have control. Matrix is an open-source project managed by a nonprofit foundation. Element is a commercial company, set up by the team who created Matrix, which builds Matrix products and sells Matrix services, these days mostly to public-sector organizations who want alternatives to Teams or WhatsApp and want end-to-end encrypted communication that they actually control.

As someone involved in negotiating what became Article 7 of the DMA, how would you explain what this obligation is meant to achieve in the messaging space and why it matters?

As a member of the European Parliament, I tabled an amendment aimed at enabling horizontal interoperability in messaging services as well as social networks. When it comes to messaging services, during the negotiations this amendment was expanded into a separate article, which is now Article 7 of the Digital Markets Act. The idea behind the amendment is that one of the major reasons people cannot easily switch from one platform to another is that they are locked into the platform where the vast majority of their friends, business contacts, and family members are. Even if at some point they decide, for whatever reason, that they don't like a platform (for instance, how it handles the privacy of their personal data, or a user interface they dislike, preferring a service with a better technical experience), they hardly ever switch, because they would effectively have to convince all their friends, family members, and business partners to leave with them.

This, of course, multiplies across the network of contacts: if you convince a family member, that family member in turn needs to convince all their friends, family members, and business partners. So as a matter of fact, it's pretty much impossible to switch from one platform to another without very negative consequences, and those consequences do not lie in the technical layer of the problem. Such a market can hardly be described as contestable. The main objective of the Digital Markets Act is to make markets fairer and more contestable, and therefore my idea was that if we address this problem, we will make the markets more contestable.

Many people before me spoke about the need for horizontal interoperability. I just happened to be in the position of a member of the European Parliament who could table it as a legislative proposal. In the negotiations, we were able to agree on the basic fact that this is actually a good idea. Now we have it as Article 7, and hopefully it will really help to get rid of the end-user lock-in I described.

Could you explain why Matrix suddenly became relevant to the DMA discussions on messaging interoperability?

Basically, when we saw the DMA, it struck us that it was trying to bring about exactly what we were trying to bring about with Matrix. One goal of Matrix from the start was to create the standard, but also to interconnect the existing deployments, applications, and networks.

So we had built what we call bridges to allow, for example, WhatsApp, Matrix, Slack, or Teams to speak together, so that users coming from different apps can actually communicate in one single network. So when the DMA came up, we started seeing the gatekeepers coming again, saying: oh, but it is absolutely impossible to do end-to-end encrypted, interoperable communication.

We were like: wait, we'd been doing that for six or seven years at that point. So it is absolutely possible, and we were happy to explain how to do it and to demonstrate that it works. We were basically an existing example of how the DMA requirements could be met, and it was useful to be able to clarify this and even to work with Meta, to tell them: actually, you could do it this way, and look, you can go for it.

So it was really nice to be able to support the European Commission by showing that what the gatekeepers were saying was not true, because here was the proof that it can work.
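The bridges described above can be pictured as small translators that map one service's message format into another's. A toy sketch of the idea (the foreign message fields and the `bridge.example.org` domain are hypothetical; only the event shape follows the Matrix specification's `m.room.message` schema, and this is not Element's actual bridge code):

```python
# Toy illustration of a messaging "bridge": translate a message from a
# hypothetical proprietary format into a Matrix-style m.room.message event.

def bridge_to_matrix(foreign_msg: dict) -> dict:
    """Map a hypothetical {"sender", "text"} message into the shape of a
    Matrix m.room.message event, attributing it to a bridged "ghost" user."""
    return {
        "type": "m.room.message",
        # Bridged user ID on a hypothetical bridge domain
        "sender": f"@{foreign_msg['sender']}:bridge.example.org",
        "content": {
            "msgtype": "m.text",
            "body": foreign_msg["text"],
        },
    }

event = bridge_to_matrix({"sender": "alice", "text": "hi from the other network"})
print(event["type"])            # m.room.message
print(event["content"]["body"])
```

A real bridge additionally handles identity mapping in both directions, delivery receipts, and (as the interview notes) end-to-end encryption, which is where most of the engineering effort lives.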

With email, I can use one provider, say Proton, and send a message to someone on another provider, say Gmail, without friction. Why did messaging evolve differently, so that cross-platform communication looks like the exception rather than the rule?

So why did it evolve differently? We had several open standards for messaging: things like SIP, for example, which is more oriented towards media, voice over IP and video (mostly voice), and then XMPP, which has also long been an open standard for messaging and communication. But XMPP is very fragmented: you have one central standard, and then you can add modules on the side. So one application using XMPP with modules A, B, and C wouldn't be able to interoperate with another one also using XMPP but with modules D and E. Our team has used all of these in the past, and we know that a lot of existing applications have used them as well. WhatsApp and Google Talk at the time were both using XMPP, until the moment they decided to go proprietary and evolve towards something they could control and own, probably to simplify things or because they were not happy with the way XMPP was working. With email, what happened is that there were different implementations of messaging, and at some point people realized that it made no sense to have really closed networks of communication: it would be much smarter to come together, agree on one standard, and then grow the network and have everyone talk together. That's what happened for email. Microsoft was left on the side with Exchange, which failed to converge with the others; at some point they looked like an isolated network, and people wouldn't use them because the network was limited, so they had to converge with the rest.
The messaging applications, by contrast, considered that their value was based on the size of the network, and we ended up with players like Skype or WhatsApp that had huge networks, which meant they didn't feel the need to converge on a standard: their network was big enough for them to keep control of where they were going. One thing we've noticed as well is that when people start working on messaging apps, they think messaging is easy: it's just sending one message from A to B. The problem comes when you want to add end-to-end encryption, group chat, history sharing. And because there was no simple standard out there that allowed people to build messaging on top of existing bricks, they all started by doing something simple, reinvented the wheel, and ended up in siloed communication. So basically we created Matrix because we had played with the existing standards, we had built our own proprietary protocol, and we had a feeling we understood why people kept reinventing the wheel.

We said, okay, we believe in a standard which is unified, not fragmented: there is one single spec, and if you speak Matrix, then you can interoperate with anyone out there. And we wanted to build something where the ability to build an app on top is super easy, especially at the time.

So we started in 2014, and that's when everyone started to add messaging to their websites and their apps. You wanted customer support; it was starting to pop up everywhere. We wanted web developers to be able to add messaging to whatever app they want, and app developers to do the same. We don't want people to be forced to become experts in messaging protocols just to add chat to what they're building. So we needed a standard where building an app on top is easy, which means an API that is super simple to work with. And that is basically how we converged on building Matrix, trying to overcome these big silos that were created by all the various failure modes we encountered.
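To give a sense of what "an API that is super simple" means in practice: in the Matrix client-server API, sending a message is a single authenticated HTTP PUT with a small JSON body. The sketch below only constructs the request rather than sending it, and the homeserver URL, room ID and transaction ID are made-up placeholders.

```python
import json
from urllib.parse import quote

def build_send_request(homeserver, room_id, txn_id, text):
    """Construct the HTTP request for sending a text message as an
    m.room.message event via the Matrix client-server API."""
    path = (f"/_matrix/client/v3/rooms/{quote(room_id, safe='')}"
            f"/send/m.room.message/{quote(txn_id, safe='')}")
    body = json.dumps({"msgtype": "m.text", "body": text})
    return "PUT", homeserver + path, body

# Placeholder homeserver and room, for illustration only.
method, url, body = build_send_request(
    "https://matrix.example.org", "!room:example.org", "txn1", "hello")
print(method, url)
print(body)
```

In a real client you would add an `Authorization: Bearer <access_token>` header and send the request; the point is that the whole operation is one endpoint and one small JSON object.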

Article 7 was a major achievement in the negotiations. What were the main obstacles you had to overcome to get messaging interoperability into the DMA at all?

The major concerns, I think, came from the fact that it's a very technical issue; not everybody necessarily understood how it would work. If there was opposition in the negotiations when it comes to this article, then it was mostly about not understanding what it really means, and whether it would negatively affect innovation. In the minds of those who opposed it, interoperability is something that could stifle innovation, because the provider would have to take into account during development how changes would affect interoperability. The other major concern was about privacy and security issues: if two different platforms become interoperable, how would the protection of personal data work, and how would the gatekeepers' platforms be able to uphold the security of their users?

Do you have a view on the way iMessage was ultimately not designated?

I need to say that decision was surprising, because in the legislative process this was one of the services that were in the discussions. So it was definitely in the back of the minds of the legislators when this legislation was negotiated.

And I would even add on top of that that there was also a surprising decision not to designate Outlook.com, for instance. If I understand it correctly, the reasoning of the Commission is built around the fact that this is already an open protocol and therefore no designation is needed.

But as a matter of fact, the sole existence of an open protocol does not mean that there are no hurdles in implementing it, that gatekeepers do not abuse a dominant position on the market, or that no ex-ante rules are needed to prevent that from happening. It is a number-independent interpersonal communication service, so it falls into the scope of Article 7; that was another surprising decision of the Commission. And what I wonder about is whether the Commission has in mind what sort of triggers or thresholds would make it reconsider these decisions.

Unsurprisingly, Meta has chosen to implement the messaging interoperability obligation through interfaces designed and controlled by Meta itself, rather than through a shared open protocol. What are the main weaknesses of that approach, generally speaking, from your perspective?

So the main weakness of them not choosing an open standard is that today WhatsApp and Facebook Messenger are the only two gatekeeper services, but it means that everyone wanting to interoperate with them has to implement Meta's interface, the Facebook Messenger interface, and if there are more gatekeepers, you have to multiply the effort every time. It also means it's fully controlled by them. They have to respect a certain number of rules, so they cannot do whatever they want, and they have to manage it in a way that people are aware of upcoming changes, et cetera, but it still means it's optimised for them. The main problem is that third parties implementing the interop will have to do it once per provider, as opposed to once for a given standard. And the other thing is that if they had implemented a standard like Matrix, any other player in the ecosystem would have direct interoperability with them without having to re-implement something on top of what they're already speaking.

The major weakness of this approach of an interface rather than an open protocol is of course that there is a lot more burden on the side of the competitor to implement it. If it was an open protocol, implementing it would be a lot easier for the competitor, because they would only need to take the technical specification.

They implement it according to the specification, and voilà, they have interoperability by definition. By obliging the competitor to sign all sorts of contractual agreements and creating a very ad hoc technical specification, it is of course much more difficult to implement from the side of the competitor.
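The multiplication of effort described in the answers above can be made concrete with a back-of-the-envelope count (a toy calculation, not from the interview): with N providers each exposing a bespoke interface, every pair of providers needs its own bridge, so the work grows roughly quadratically, whereas with one shared open standard each provider implements it exactly once.

```python
# Toy comparison of integration effort: bespoke pairwise bridges versus
# one shared open standard. Pure arithmetic, no real APIs involved.

def pairwise_bridges(n):
    """Each pair of providers needs its own bridge: n choose 2."""
    return n * (n - 1) // 2

def shared_standard(n):
    """Each provider implements the common standard once."""
    return n

for n in (2, 5, 10):
    print(f"{n} providers: {pairwise_bridges(n)} bridges "
          f"vs {shared_standard(n)} standard implementations")
```

With two gatekeepers the difference is invisible, which is partly why the interface approach can look adequate today; it is only as the number of interoperating parties grows that the quadratic cost bites.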

Shortly after the DMA entered into force, Signal said it would not seek interoperability under Article 7. Could you briefly explain what you see as the main reasons for that decision?

So our understanding, from speaking quite a lot with Moxie, who created Signal, is that Signal is very much focused on privacy and security, and we have very diverging opinions on what is the most secure way of communicating. Moxie thinks that having one centralised app that you can trust, which is really hard, is more secure than having a network of different providers, which means your data may end up on the different servers of the people you are speaking with. Our approach is that with a decentralised network you spread the data everywhere; there is no single point of failure that attackers will want to go after. Our take is that having one single app, as strong as it is, still means you have to trust the person running it, and it still becomes a huge honeypot: if I'm an attacker, I know where I will want to try to get in. So that's a different approach to it. Basically, our understanding is that interoperability would mean sharing data from Signal users with others, and that's completely against their approach to security and privacy. It's just a really different position to ours.

What do you see as the main lessons from the implementation and the enforcement of this horizontal interoperability obligation so far?

In terms of the main lesson, I think one of the main problems with the implementation is that they're following the law word by word and taking their own interpretation, and then it's quite hard to enforce what is not written black and white in the text. For example, one place where we've seen Meta's implementation not be great is that there is a request to localise our users.

Because this is an EU regulation, they want to apply it only in the EU, so they need us to tell them whether our users are in the EU or not. But we don't track our users, because we respect their privacy. So under Meta's implementation we have two options: either we track our users and tell Meta where they are, or we just collect the information, send it to Meta, and they do the calculation. Neither is a good option.

So these are the sorts of interpretations of the law which are hard to enforce against, and which completely prevent us from deploying something useful that people will actually use. It makes no sense.

I think it's not completely anathema to think that the Commission should be able to make recommendations or requests on how the implementation should happen, if it's working with people who are experts in the field. When we go to the Commission and tell them, look, this thing and this thing are unacceptable, or completely hindering the effect of Article 7, you should ask them to change this, I feel they are pretty legitimate in going and saying: the way you've implemented this means your implementation is useless. I think that's legitimate to do.

The gap in the implementation currently lies in having a technical implementation of what is in Article 7. On the side of the competitor, that is apparently also something that costs a lot of money, not only in the technical implementation itself, but also in the negotiation with the gatekeeper and in fulfilling the requirements that the gatekeeper is placing on this interoperability offering.

And this is also something that I think will become the main point of friction, because on the one hand, the way Article 7 is constructed, the gatekeeper is indeed in control of how that interoperability is built. But the gatekeeper should not place any obstacles, whether technical, contractual or otherwise, to the implementation of that interoperability.

It'll be very important to enforce Article 7 properly and make sure that this position of power is not abused in the implementation.

As a follow-up question, should support by the Commission also include dedicated funding to support the development and maturation of open standards, so that they can emerge more effectively than market incentives alone have so far allowed?

That would definitely be helpful, whether or not it's the role of the Commission to fund this sort of development. I have a feeling it should, and actually we are seeing some Horizon projects and grants, or calls for applications for grants, which are going in that direction.

They are trying to make sure that the European ecosystem, especially around open source, is able to step up and reach a mainstream level, so that digital sovereignty can come. So that might be a first step in that direction. It's not specifically targeting the implementation of interop, or specifically supporting open standards, but it's going into that whole "let's make sure we are digitally sovereign by making sure we have open source alternatives to big tech". It's getting there, and I think that'll definitely be helpful.

I believe that the Commission needs to look into whether there are any shortages in getting funding to actually implement that interoperability, if there's a burden on the side of the competitor that they need to get over.

Apparently it is negative for the market if that interoperability is not implemented, and the competitor maybe needs a way to access funding to get it implemented. The Commission, I think, should explore options.

Meta has announced BirdyChat and Haiket as the first third-party services to interoperate with WhatsApp, but an announcement is not the same thing as effective interoperability. Under Article 7, what criteria should we use to decide whether this is genuinely working in practice, and not a more polished form of malicious compliance?

So the best way to see if it's working in practice is basically to use it: make sure it's actually usable by real people. What is the user experience? Is it actually useful? There is probably a way to look at Article 7, check the boxes and say all the boxes are checked, but what we're trying to do is create services which are usable by people. So I think that would be the best proof.

The whole point of interoperability is that anyone can connect to that major gatekeeper service, and this will have a major benefit for the market and for end users, the people who use the service, because they can become interconnected with other networks and vice versa. If placing contractual obligations or agreements such as an NDA puts an unnecessary, disproportionate burden on making this happen, that obviously is a problem, and that is where the European Commission needs to pay close attention. Because it would be bad, and of course against the sense of the legislation, if the article is not fulfilled because the gatekeeper dictates conditions that basically put them back into a position of power where they can dictate anything, and thereby circumvent the obligations, which is illegal under the Digital Markets Act. So it's definitely up to the European Commission to look into it. And I think mapping out exactly where this agreement on interoperability lies, to what extent the gatekeepers can or cannot impose additional restrictions on interoperability, and to what extent it is burdensome for the competitor to actually implement it: this is what we can see as the limitations of the actual implementation. Because at this moment we are aware of only two implementations of Article 7 with WhatsApp, one being BirdyChat and one being Haiket, neither of which is actually fully rolled out to the public. Basically, if you go on the website of those providers, you can be placed on a waiting list. But no major service has become interoperable with WhatsApp at this moment.

As part of the DMA review you have proposed amending Article 7. Could you please briefly walk us through that proposal?

We were suggesting that open standards at least be mentioned in Article 7. We think mandating an open standard would maybe be a step too far, but at least mentioning it would help drive the discussion in the right direction, rather than making it super simple for gatekeepers to say: here is my implementation, it's just based on an open API. So that's what we were trying to do.

BEREC, the Body of European Regulators for Electronic Communications, suggests that the basic functionalities listed in Article 7 should cover interoperable B2C communication as well, not just end-user messaging, in order to better support the DMA's contestability aims. What is your view on that?

I think that would have been good. Basically, one of the problems we're finding in trying to implement our own version of the bridge to be interoperable with Meta is that it's been hard to find funding to actually implement it. And because Article 7 is so focused on end users, Meta is pushing back: we cannot use the use case of our customers to say, hey, the way this has been done is not great because it's preventing us from getting funding for it. The answer is, I don't care, because the article is just for end users; your customers are businesses or governments, and that's out of the scope of the DMA. Also, because we're trying to open the market, trying to make an industry thrive and be more active, it would probably have made sense to include this as well.

Article 7 is built around interpersonal communication between human users. How do you think that framework will hold up as messaging increasingly includes AI agents and other machine-mediated interactions? Is it going to keep up, or do we need to think of a different Article 7?

I think we're already starting to see where it's starting to tense up a bit, like the fact that Meta is using AI in WhatsApp, and then what does that mean for interop? As for whether we should augment Article 7 to take this into account, I'm not really sure; I haven't thought too hard about it, but yeah, it's bringing interesting new challenges.

Getting Article 7 into the DMA, horizontal interoperability, was a remarkable achievement, but it was always understood as an initial foothold rather than the finished product. The DMA is now being reviewed: what needs to change?

Another potential area the Commission can look at when it comes to horizontal interoperability is social networks. The Commission has an obligation to do so, because it is part of the article about the review taking place this year: the Commission has an obligation to look into whether interoperability should also be extended to social networks.

As a matter of fact, that is part of the amendments that I have also tabled. During the negotiation it was not possible to secure this as an obligation, only in the review clause. But from my perspective, social networks are clearly one of those markets that are obviously failing, with only a handful of large companies grabbing a massive chunk of the market and having the ability to abuse their position on it. Ex-ante rules imposing horizontal interoperability would, in a similar fashion as with the interoperability of messaging services, allow people to leave one service and go to another without losing the connections they have on the first service.

So that would also make the markets much more contestable, and let's see what the Commission will figure out during the review. But for me, that is an obvious place to look. And as a matter of fact, from a technical perspective, this is easier done than, let's say, end-to-end encrypted group communication over a messaging service.

Because with social networks, by definition, people publish content that is public, or at least shown to the circle of their connections, so it is a lot easier to implement technically: you don't have to deal with end-to-end encryption. Which is, I think, an interesting element, because during the negotiations the opposition I ran into was that politicians did not understand what I meant by social network interoperability; but technically speaking it's actually easier than messaging services, and the benefits are massive. And as a matter of fact, there is already an example of social network interoperability: the Fediverse, where Mastodon is a federated network that can interoperate with other networks based on the ActivityPub protocol. So it definitely can be done.
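To make the Fediverse example concrete: ActivityPub servers federate by exchanging JSON activities over HTTP. Below is a sketch of a minimal "Create Note" activity of the kind Mastodon-compatible servers deliver to one another's inboxes; the actor and object URLs are made-up placeholders, not a real instance.

```python
import json

# Minimal ActivityStreams "Create" activity wrapping a "Note", the shape
# Mastodon-style servers POST to a remote server's inbox. All identifiers
# below are illustrative placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://social.example/users/alice/statuses/1/activity",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "id": "https://social.example/users/alice/statuses/1",
        "type": "Note",
        "attributedTo": "https://social.example/users/alice",
        "content": "Hello, Fediverse!",
    },
}
print(json.dumps(activity, indent=2))
```

Any server that understands ActivityPub can accept and render this regardless of which software produced it; that shared document shape, rather than a per-provider interface, is the interoperability property the answer points to.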

In the current push for digital sovereignty, do you see a stronger case for more decisive Commission action on messaging interoperability? And where, concretely, should that action be focused?

I think the entirety of the DMA, and Article 7 in particular, serves digital sovereignty. It's opening the market, it's bringing choice to the user; there's a complete overlap. So I'm not sure there is more to be done beyond making sure that Article 7 is actually implemented, that we are enforcing the implementation, and that the implementation is done in a way which is actually usable. Because if we do this, then it'll serve digital sovereignty by default.

Because digital sovereignty is a question of choice, you have to build on top of an ecosystem, and open source is definitely central to that. Open standards and digital commons are the next level up: that is really when you get an ecosystem and when you get control. One key consideration, though, when you want to actually implement digital sovereignty using open source, is that you will only be digitally sovereign if you keep that open source ecosystem alive. If it's just a matter of taking open source, bringing it in-house and saying, hey, I'm building on top of this, I'm sovereign: well, not really, because you are still stuck with a full team managing your fork, as opposed to feeding back into the digital commons, making sure these commons grow to the next level and serve the entire community. So one key thing when we implement digital sovereignty is to do it the right way and buy open source the right way: use vendors from the industry who can level up the entire ecosystem, focus on innovation and differentiation, and make sure that when we buy from these vendors, they also contribute back upstream and maintain the ecosystem.

Negotiator's cut: what are the DMA's main achievements so far?

The main achievement, I would say, is that we now have a conversation that is legislatively backed. It's an obligation, and now we can discuss where the issues are and why we still cannot see any interoperability between, let's say, WhatsApp or another gatekeeper service and a major platform such as Matrix.

And we can bring these issues into public debate and basically demand what is already legislated in the DMA: the gatekeeper needs to enable this interoperability. But apparently the legislation still falls short on implementation and enforcement. It should already be the case that I can send a message, let's say, from Matrix to WhatsApp; that is the end goal of this, with the aim of improving the contestability of the market.