
Adapt, die, or a third secret thing

A collage of images including a weird guy made of orange kite fabric with a bucket head, an old woman holding a women's lib sign, mountains, fire, planets, snakes, plants.
Cut paper collage, 18" x 22", made by me January 2026

Yesterday I had the sickening realization that we're entering another era of polarizing discourse re: AI. Fresh off the deeply divisive and destructive wave of covid mis/disinformation, and having apparently learned nothing from it, people are dusting off their pitchforks and setting out to burn down anyone or anything holding views opposing their own when it comes to "ARTIFICIAL INTELLIGENCE" (couldn't decide on scare quotes or all caps so I chose both). The term itself is omnipresent yet poorly defined when wielded in the discourse, which no one seems especially concerned about as long as engagement keeps the dopamine flowing.

When Covid-19 hit we had no choice about whether to confront it, and AI is pushing us into a similarly impossible situation: terrible for human health, inflicted on us by ubiquitous forces. The message we're hearing is "adapt or die", and from there we're all scrambling to figure out what the hell "adapt" even means, and who is meant to "die". Is it us? Is it AI companies that can't keep up? Nations that fall behind in the AI race? Is it all of those? Is there a third secret option, and if not, can we please invent one?

People on socials are already furiously dividing themselves into camps, for and against. You can see the rigidities taking shape – people fuelled by their own sense of certainty, forming clusters of memetic sycophancies, frothing each other up, dogpiling – it's predictable and deeply disappointing. That mode won't help us.

We can let the discourse finish off what covid started, decimating our social fabric and surrendering our trust to the dehumanizing influence of narratives amplified by vampiric tech bros and algorithms that prey on our vulnerable nervous systems, or we can slow down and take stock of where we are and what we know. We can't resist what we can't see, so let's take a beat to make some sense here.

Let's start with what we know

About AI:

  • The technology has been in development since the 1930s (it isn't new).*
  • Neural networks have been in development since the 1950s.*
  • Research focuses on specific components of "intelligence", namely "learning, reasoning, problem solving, perception, and using language."* (outside of that, humans are having a hard time defining what non-human intelligence even is, eye roll*)
  • As of February 2026, "Scientists warn that rapid advances in AI and neurotechnology are outpacing our understanding of consciousness" and are urging, "failing to understand consciousness could lead to ethical mistakes with consequences far bigger than we’re prepared for." (spooky!)*
  • AI presents multiple ethical and socioeconomic risks, such as mass job losses, training data biases exacerbating human biases, privacy exploitation, massive energy needs resulting in high carbon emissions, and poor to no regulation.
  • Psychological risks of chatbots are real and growing, including but not limited to, "psychological dependency and attachment formation, crisis incidents and harmful outcomes, and heightened vulnerability among specific populations including adolescents, elderly adults, and individuals with mental illness."*

About Data Centres:

  • They are booming around the world* and, in their current iterations, extremely bad for human and environmental health* – and everyone seems to think AI will figure out how to solve those problems eventually.
  • Data sovereignty has become a high-focus priority for nations like Canada*, which essentially translates to "build lots of data centres" and "find AI experts".

About Hyperscalers:

  • They "use distributed computer systems, teams of independent computers splitting up tasks across multiple machines to process high volumes of data. This is called hyperscale computing. Traditional data centers, on the other hand, prioritize centralized computer systems, which can often mean fewer but more powerful computers."* The three main hyperscalers are Amazon, Microsoft, and Google.

    Why do they matter?

    "hyperscalers that are able to provide more advanced AI tools might not only attract more commercial users, but may also provide them with a significant technological edge over their competitors."* (aka: they're actively monopolizing the AI industry)

About LLMs:

Not all large language models (LLMs) are created equal. Different LLMs have vastly different purposes and objectives. But we should start by stating the fact that every major LLM exists due to egregious copyright infringements, having consumed the intellectual property of millions of writers, artists, and pretty much any human-generated content online.

Here are the top three models (not including versions) that average consumers might be most familiar with:

  1. Claude (Anthropic, Public Benefit Corporation)
    Company values*:
    - Act for global good
    - Hold light and shade
    - Be good to our users
    - Ignite a race to the top on safety
    - Do the simple thing that works
    - Be helpful, honest and harmless
    - Put the mission first

    Controversy:
    "Anthropic safety researcher quits, warning ‘world is in peril’" - Semafor
    "Anthropic Accuses Chinese Companies of Siphoning Data From Claude" - The Wall Street Journal
    "Palantir partnership is at heart of Anthropic, Pentagon rift" - Semafor
    "Hegseth gives Anthropic until Friday to back down on AI safeguards" - Axios

  2. Gemini (Google, LLC, subsidiary of Alphabet Inc)
    Company values*:
    - Protecting users
    - Building and deploying AI responsibly
    - Expanding opportunity
    - Helping solve society's problems
    - Building for everyone

    Controversy:
    No public controversies, apparently, because Google decides what shows up in Google results. Here are two articles I was able to find:
    "Gemini users say their chat history has quietly vanished" - MSN News
    "Google Sued Over Claims Gemini AI Spied on Users" - Yahoo News

  3. ChatGPT (OpenAI, Nonprofit Foundation and Public Benefit Corporation)
    Company values*
    - "We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."

    Controversy:
    I mean, where to start? There's the Tumbler Ridge shooter's disclosures that weren't reported to authorities, then of course there's the bot-assisted suicides, and the hundreds of thousands of cases of AI psychosis. I literally can't fit all the terrible shit this company has unleashed on the vulnerable public. That the company still exists is possibly the biggest tell that everything has gone terribly wrong.

There are also stateless and stateful LLMs – stateless LLMs are "lightweight" and have no memory between sessions (ie: search engines), while stateful LLMs maintain memory across turns and require more architecture (ie: personal assistants).
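The stateless/stateful distinction is easier to see in miniature. Here's a toy sketch in Python – `call_model` is a hypothetical placeholder standing in for any chat-completion API, not a real library call:

```python
# Toy sketch of stateless vs. stateful LLM use.
# `call_model` is a hypothetical stand-in for a real chat API;
# here it just reports how much context it received.

def call_model(messages):
    """Pretend LLM: replies based only on the context it's handed."""
    return f"(reply based on {len(messages)} message(s) of context)"

# Stateless: each query is sent alone; nothing is remembered between calls.
def stateless_ask(question):
    return call_model([{"role": "user", "content": question}])

# Stateful: the full conversation history rides along with every call,
# so earlier turns can shape later answers.
class StatefulChat:
    def __init__(self):
        self.history = []

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        reply = call_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Real stateful assistants layer on far more architecture than a Python list (summarization, retrieval, long-term memory stores), but the core difference is the same: the stateless call forgets, the stateful one accumulates context.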

There's a lot more to LLMs than what I can cover here, and I'm absolutely not an expert, but overall it's pretty easy to see that AI within the context of LLMs is a massive storm of uncertainty that we are unwillingly being placed in the centre of.

Our consent has already been disregarded. There was no democracy involved in the decision. So what now?

What does "adapt" even mean?

This is the part where we have some choices, and it's also where we need to be aware of market-driven narratives; discerning signals vs noise in this context is existentially critical. "Adapt" means different things to different industries, and in some cases adapting makes some sense, for example:

Researchers and scientists might use specialized LLMs (LLMs trained on peer-reviewed research, journals, articles etc) to synthesize data quickly, finding patterns across disciplines that might create a more holistic picture and facilitate breakthroughs across medicine, psychology, neurobiology, theoretical physics, etc. These aren't cases of scientists asking chatGPT to write a paper for them; it's more a matter of applying AI to guide algorithms in seeking and amalgamating specific types of information. Such use cases augment the work of humans rather than replacing it.

Coupled with open-source sharing of research data, this has already produced pretty stunning breakthroughs. Searching and reading scientific papers (while critically cross-checking their validity) has become a new hobby of mine, because everything is public now. There's so much cool research out there!

Data analysis is really the only valuable application of AI I can think of. I don't think "therapeutic AIs" are good or ethical, because such applications are premised on the user believing that AI understands the human condition, which is fundamentally impossible. Sure, chatbots might function as tools for reflection in the same way tarot cards do, but few people have the mental discipline to leave it at that. Everyone wants an oracle, and therein lies the danger.

Most AI boosterism is rooted in market-driven narratives that serve the purpose of capturing consumers for profit. The narrative is driven by investors and market share – the wealthy trying to get wealthier. This is a very different and destructive framing of "adapt", one that far too many business-idiot managers and casual consumers are buying into.

We know AI tools like Copilot and chatGPT offload cognitive work and erode critical thinking skills*. I've talked about this in previous posts – by becoming dependent on those tools, we're literally shrinking the regions of our brains that handle curiosity, intuition and insight, completely destroying any hope of critical analysis. Those abilities don't easily spring back, either; they take months and sometimes years to rebuild through abstained use and digital detox practices. And that shrinkage is almost exactly what happens in cases of dementia and Alzheimer's, which is part of why we're seeing skyrocketing cases of early onset dementia (exacerbated by smartphone use*).

You might think corporations exploiting addictions and creating cognitive dependencies sounds cartoonishly evil, but Meta has been doing it for nearly two decades (plus plenty more evil shit*). Headlines like "Human intelligence in freefall: Study links cognitive decline to social media and AI"* are literally everywhere if you look for them.

Augmentation vs offloading

So if the options really are "adapt or die", we need to be explicitly intentional and thoroughly informed about how these tools might impact our ability to think. I shouldn't have to remind anyone that our brains literally form our realities, and once the mind sinks into darkness (which offloading does), so does our subjective reality. Personally, I'm fundamentally invested in maintaining a grounded sense of agency and interconnectedness with the world around me. I want to be capable of building the world holistically, observing it rationally, understanding my impact while maintaining my capacity to problem-solve independently as the world heaves itself into whatever iteration is coming.

We should want to upgrade, not downgrade, and AI adoption can induce either. In general, I see augmentation as positive, and offloading as negative.

Augmentation might include:

  • Taking advantage of the wealth of information available online – studies, peer-reviewed research, and independent journalism – by leveraging the full functionality of search engines (train yourself to skip the summaries).
  • Deep diving into education around cognitive care and making cognitive hygiene part of daily practices, ie: critical ignoring, digital detox, and forming strong offline habits.
  • Pushing yourself to engage in the "grunt work" of learning (the most important part!!) and then only using AI tools for tasks that would have otherwise been passed off to someone else, ie: an editor for final review. See the tools more as coworkers, not shortcuts (I personally would still avoid using LLMs for any creative work, the slope just seems too slippery).
  • Prioritizing AI systems over AI tools – an AI-augmented system might be something like a CRM, an email marketing platform, workflow management tools, or anything that relies on conditional logic to perform a series of tasks. These tools handle the workload of entire teams rather than the simple cognitive tasks in your brain (you need those).
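To make "conditional logic performing a series of tasks" concrete, here's a toy sketch of the kind of rule-based routing a CRM or workflow tool automates. The ticket fields and queue names are entirely invented for illustration:

```python
# Toy sketch of an AI-augmented *system*: a workflow that routes
# incoming support tickets with plain conditional logic, the way a
# CRM automation would. All fields and queue names are hypothetical.

def route_ticket(ticket):
    """Apply a series of if/then rules to decide where a ticket goes."""
    if ticket.get("refund_requested"):
        return "billing-queue"
    if ticket.get("sentiment") == "angry":
        return "priority-human-review"   # escalate to a person
    if len(ticket.get("body", "")) < 20:
        return "auto-reply"              # short message, canned answer
    return "general-queue"

tickets = [
    {"body": "Where is my refund??", "refund_requested": True},
    {"body": "hi", "sentiment": "neutral"},
    {"body": "This is unacceptable and I am furious.", "sentiment": "angry"},
]
print([route_ticket(t) for t in tickets])
```

The point is that the system does the repetitive sorting an entire team used to do, while the judgment calls (the "priority-human-review" queue) still land with a human brain.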

So what's the secret third thing?

The secret third thing is: don't buy the hype, you won't die, and you'll need your brain for later.

Observe. Learn. Hold the line. Fortify your mind. In five to ten years when everyone around you has the cognitive capacity of third-graders, you'll be considered a goddamn genius.

And my personal opinion is the "singularity" is a myth, because we're all intertwined on this ridiculous journey. Be curious about the world, make friends with critters and fungi (fungi especially – they're a whole other type of intelligence!), and if you really believe AI might someday surpass human intelligence (let's be real, the bar is pretty low), model in yourself an intelligence that values reciprocity, love and joy – something crystalline and beautiful that taps into the fundamental goodness of the universe. Any true intelligence will be tapping into that same field, and love is stronger than dominance. Maybe there's a path where we all work together.

I'm going to write another piece soon that explores the folly of thinking human intelligence is the apex of universal intelligence (loool for real). I want to explore ideas of consciousness and cognition more. If there's one thing I'm finding wonderfully useful in thinking about AI, it's that it's compelling us as a species to understand our own consciousness. Very exciting. More to come.