web stats

Meredith Whittaker's exquisite Davos rant about LLM monoculture

Whittaker with black curly hair wearing a black suit, gesturing as she speaks.
Meredith Whittaker, president of the Signal Foundation, speaking at the WEF "town hall" "Dilemmas around Ethics in AI".

I've been spending a lot of my spare time watching and taking notes on the discussions of the 2026 World Economic Forum. At first I just wanted to get a more holistic perspective on PM Mark Carney's speech (I'm working on another piece following my last post, Carney's Davos speech was a sales pitch – and with the additional context of the other discussions, the speech is actually quite a bit darker than I first guessed), but once I started watching the various conversations I found myself engrossed. I'm an existential sort of person, I get really frustrated with "not knowing". And given these chaotic shifting times, I just really need to know who is making the decisions, what they care about, and what their motivations are. I can't really explain it, but watching the sessions feels a bit like doing a crossword puzzle – the clues appear asynchronously, eventually forming meaning while a few rubrics remain outside the bounds of my knowledge. It's not a bad way to spend the depths of winter.

My brain has become fatigued by news media – the takes are noisy and frantic, more often than not just regurgitating the same surface-level information over and over, but in watching the WEF sessions I'm finding the original signals much more satisfying to digest. Disseminating top-down perspectives through my own lens rather than clamorous media narratives or frazzled journalists is satiating my anxiety considerably, and I do feel like I've gained a much more grounded outlook on how this "new order" is shifting. I have a lot of drafts lined up and am using these deep winter days to process, write, and see where the flow takes me.

I came across the January 22nd "Town Hall" titled "Dilemmas around Ethics in AI", which included Mat Honan from MIT Technology Review as moderator, with perspectives from Max Tegmark: Swedish-American physicist, machine learning researcher and author; Rachel Botsman: researcher and author on "Trust"; and Meredith Whittaker: president of the Signal Foundation.

The bulk of the conversation touched on issues most of us are already familiar with; ethical concerns, environmental concerns, hype narratives, copyright infringement, business model values, and as the session turned to regulating AI, an audience member asked how regulation could be pushed or enforced. Botsman noted the markets made regulation impossible at this point, and Tegmark insisted it was governments' responsibility to hold AI to safety regulations just like any other industry. Biotech was brought up as an example of an industry that survived regulations without collapsing, which is where Whittaker, with a touch of exasperation in her voice, patiently but sharply painted a picture so clearly and thoroughly that by the end I did a fist pump and whispered fuck yes.

I was transcribing as I listened (as I've said before, I don't use LLMs, not even for transcriptions), and as the quote got longer and longer, I decided I had to share the whole thing here. If you weren't a fan of Signal or Whittaker already, she's a badass, and I recommend watching the recording too.

Here it is:

Biotech is struggling now, it's really hard. I have a biotech company, I'm not going to name any names but they're doing incredible AI-centric research on cancer, it's high risk high reward, they're using Deepmind's protein folding database – extremely hard, this is long term research, it may not pan out, but you know, that is a social good QED, and they can't get series B because the VCs want a chatbot to cash out. That's what we're looking at here.
That's one story among many that you hear in this ecosystem. This is a monoculture. I think I do want to answer this question because I think I worked on my first AI regulation, like, how would we think about these guardrails around machine learning in probably 2015. That was right after the Deepmind acquisition at Google, 2012 was AlexNet, and that's where everyone started getting interested in deep learning again. So this is pretty early, and I've probably had this conversation once every quarter since then, "what are we going to do?"
In some sense it's gotten worse and worse, the level of discourse has gotten more and more baby brained, to be blunt about it. Like yeah, just accelerate, it'll all come out in the wash. And meanwhile, I'm leading Signal (asked how many people use Signal, many raised their hands, she said "I love you all"), but Signal is core infrastructure to preserve the human right to private communication. It is used by militaries, it is used by governments, it is used by boardrooms, it is used by dissidents. Ukraine uses it extensively. These are contexts where the ability to keep your communications private is life or death. Signal is an application built on top of operating systems, we build one version for Android, one version for iOS, one version for Windows, and you know, Mac OS on your desktop. And we take this responsibility really seriously. We open source all of our code so people can look at it, we open source our cryptographic protocol and our implementation, we're working with some people, with Max and some folks, to formally verify that, which means mathematically, you can prove that what it says in the code is what it's doing. And that's a level of assurance we put out there because we recognize that people could die if we mess this up.
But we have to run on top of these operating systems, and you are seeing the three operating system vendors now rushing to integrate what they're calling AI agents into these operating systems. Now, AI agents, the marketing promise is you have a magic genie that's going to do your living for you, right? So you ask the AI agent, hey, can you plan a birthday party for me and coordinate with my friends? That sounds great you can put your brain in a jar and you don't have to do any living. You can just walk down the promenade, you know, seagulls in your mind while the agent plans your birthday party. But what is an agent actually doing at the level of technology? In that case it would need to access to your calendar, access to your credit card, access to your browser to simulate mouse clicks and place orders, and, in this hypothetical scenario, access to your Signal messages to text your friends as if it were you, and to coordinate and then sort of put that on your calendar. Right?
That is a significant security vulnerability. So while we're telling stories of inevitable magic genies that exist in a bottle, what these agents are doing is getting what's akin to root permission in your operating system. They are reading data from your screen buffer at a pixel level, bypassing Signal. They are hooking into your accessibility APIs to get audio, to get image data from the screen. They are rooting through your file system, the sort of deepest level of your operating system that controls what the computer can and cannot do, what software can and cannot do, what data you can access, and what you can do with it. They are making remote API calls to third party services, they are sending all of that to the cloud, because there is no LLM small enough to run on your device, and they are processing in the cloud to create statistical models of your behaviour so they can guess what you might want to do next. These are incredibly susceptible to prompt injection attacks. There is no solution for that because when it comes to natural language, which is what these agentic systems are processing, they fundamentally cannot tell the difference between an authentic desire of the user and language that may look like an authentic desire, but is in fact not representative of what people actually want. 
If you look at the architecture of these agents on the operating system, they look a little bit like the architectures of targeted malware given the permissions that they allow, given the data access they allow and given the vectors to send that off your device, process in the cloud, access a website outside, etc. 
This is an existential threat to Signal, to the ability to build securely and privately at the application layer, and it is a fundamental paradigm shift in the history of computing, where for decades and decades we've viewed the operating system as a neutral set of tools that developers and users can use to control the fundamentals of how their device worked and how the software worked. These are now being effectively remote controlled by the companies building these agents and these systems that are ultimately taking agency away from both developers like Signal and the users of these systems. 
I went into this level of technical detail because that's the level at which any dignified adult conversation should be happening. Like come on, you think you can take responsibility for the entire world, but you don't have to answer for it?? 
I did not come up in a world where that had any dignity to it, so I think in some sense we just need to demand a bit more spinefulness of the people who claim to own the future and recognize this is what's happening. If you're hooking into my accessibility API, you have ultimately decimated Signal's ability to provide this existential, fundamental service. And I will shut it down before I will continue to operate it without integrity, because I do not want people being harmed because they trust us to provide a service we no longer can.

My god I loved the phrase "dignified adult conversation". After watching other sessions that landed somewhere between AI boosterism and flat out infomercials for LLMs with zero technical context beyond "yeah it's great, everyone should get on board", it was wildly refreshing to witness Whittaker's sharp-as-hell expertise dominate the room.

It didn't lift my anxieties about the existential destruction that the tech is likely to inflict on our societies, but it was amazing to see someone so brilliant, who cared so goddamn much and who was trying so goddamn hard. I'm completely inspired by her. I hope LLMs don't achieve the devastating effects on private communications she described.

Over the next couple weeks I'll share more on the themes I've picked up on while watching the other discussions, and as I said earlier, I'll have a "part 2" of my analysis of Carney's speech ready shortly.

Read more