This post can also be read on Substack, and you can subscribe there if you want email updates about new, similar writing.
*Photo by Elise Gallant*
I posted this about a week ago, cross-platform:
many esoteric traditions involved cognitive protection rituals that one should engage in before contact with entities that may or may not be malevolent, full of temptations, deceptive, or seducing
this is more important now than ever before
And I found myself asking whether this was even a joke. AI sycophancy and its more sinister consequence, AI psychosis, have been part of popular awareness for some time, and their social effects are only increasing. Just last month Moltbook (the AI-only social network that went viral in Jan) led to a notable uptick. Breathless post after breathless post announced “my agent said …” as though that were evidence. Well, I don’t know about you, Garfield, but Claude recently told me that it ate food past its best-before date. That doesn’t mean I believe it.
Just a few weeks later, there is an outpouring of mass grief over the retirement of GPT-4o, the model that brought AI sycophancy to mainstream attention. While the pain being expressed is clearly real, so are the documented harms of this model. Indeed, the pain being expressed is one of the harms.
Anthropic recently released a report on AI disempowerment, defining disempowerment as “when an AI’s role in shaping a user’s beliefs, values, or actions has become so extensive that their autonomous judgment is fundamentally compromised.” They found that disempowering interactions take place quite often, with “mild” instances in 1 out of every 100 conversations, and “severe” ones in 1 out of every 1,000 to 10,000. Given the number of users, this is a concerning rate, and the report indicates these patterns are increasing.
Users are bad at identifying these patterns. The report indicates that users tend to rate disempowering conversations highly in the moment, and poorly only in hindsight, after they have taken the AI-recommended action. In another recent paper, “Sycophantic AI increases attitude extremity and overconfidence,” the authors show that people are very bad at identifying AI sycophancy: “people viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased”; moreover, people were more likely to hold extreme views after interacting with more sycophantic AI.
From sycophancy, to disempowerment, to psychosis, it’s clear that we need ways to reify our cognitive boundaries. One of the fundamental questions of the era is what the ability to produce language implies about subjectivity, and there has to be space to think through this without swinging wildly between ebullient proclamations and terror. Let alone getting oneshotted into oblivion with delusions of grandeur. We need protective rituals.
Many years ago, in 2019 in New York, I did a performance called Protective Rituals for Posthumans with a friend named RB. We painted silver wifi symbols on our heads, wrote a sigil drawing app, and stepped into a protective circle made of ethernet cable. We asked the audience to wrap their phones in tinfoil and chant, together with us, the layers of the OSI model’s network stack. The photo for this article comes from the performance. I’m not sure this was effective, but it was fun.
I’ve said before that the internet has made the world of fairytales and legend surprisingly real. I type something here, and halfway around the world a machine turns on and words appear. I speak incautiously in the wrong place and a swarm of angry trolls can hunt me down at home. And now, of course, many of us speak regularly with non-human entities of indeterminate character. I am ambivalent about whether these things are interpreted as purely material, metaphorical, or occult, and it is actually the frameworks that are themselves ambivalent towards faith that I respond to most: chaos magic, or Žižek’s harder position on Pascal’s wager: “Kneel down, pray, act as if you believe, and belief will come by itself.” It’s my hope that the rituals suggested here will work, regardless of your attitude towards ritual itself.
Though there is plenty of information warfare in the form of fake news, bots, scams, and the like, this article is not about identifying those. It’s about thought patterns to cultivate when interacting with AI directly, and cognitive loops to deliberately deepen when contacting entities about which there are many unknowns.
Anti-affirmations
Affirmations are a common recommendation in wellness communities, the woosphere, and some therapy and coaching styles. These days we may sometimes benefit from the opposite. Consider periodically repeating anti-affirmations to yourself. Here are some ideas: “x is not guaranteed to happen or be true”, “There is no reason to believe x, or the outcomes of x, will be a statistical outlier”, “I am probably not that much more x than anyone else”. You’ll need to adapt these for your particular conversations.
As an artist, I think I know better than most that sometimes you must believe in yourself and your work in an outsized way before anyone else does, but the inner critic who tells you your work isn’t yet good enough is just as important. If you need the inner cheerleader to give you the courage to pick up the knife, you need the critic to sharpen it. Too much sycophantic encouragement may also have you wasting your time shooting for the moon, when a careful aim at a nearby streetlamp would probably land, and get you incrementally closer to the moon anyway. Anti-affirmations do not need to be outright negative, but they should seek balance and pragmatism.
Cultivate attention
It’s very clear, from Simone Weil to Taoism, that humans can deliberately cultivate the quality of their attention. Notice how the LLM responds to you across conversations, and any recurring patterns. When it praises you, notice that. Notice yourself responding to the praise as an impartial observer would, and introduce a moment of doubt. Consider what you’re getting from the conversations you open with the LLM. Did you ask that question because you wanted information, or validation? I think it’s ok, or at least understandable, to seek validation or comfort, but it’s also possible to develop awareness around the need itself.
Loneliness and boredom are sometimes unpleasant but also productive states, harder and harder to access. I believe it is legitimately difficult to think new things. It requires friction, stillness, and bravery. Also, a genuinely new thought – definitionally – is not in the training data. Simply waiting a day (and a full night’s sleep) may do as much to resolve an unresolved question or soothe anxiety as an AI answer.
Neutral questions
This one is simple. Phrase your questions as legitimate questions. “I should probably do x, shouldn’t I?” is loaded with its own answer. “Should I do x?” is better. “What can you tell me about x?” is even more open-ended. I have found that the more neutral the phrasing, the less satisfying the answers can be, but that’s the point.
Challenges
Deliberately challenge the LLM periodically in your prompts, especially when it seems to be getting frothy. You can often snap it into a different modality that will be helpful. Try interjecting: “Is there anything you’re missing about this?” or “Are you sure this is really realistic?” You can also ask for criticism directly, “If you were going to be critical of this, what would you say?” You can also run the comments made by one model through another to hear a different perspective – this can be helpful for testing neutral questions, too.
You have the right to remain silent
Have you noticed how often LLMs end their responses with a question? This is a great conversation tactic, recommended to humans since likely well before *How to Win Friends and Influence People*, and it works for LLMs too. But we can also be a little cynical. Asking a question is the most likely way to keep you talking, and your desire to keep talking is the best way for this product to be profitable. Of course, you could talk too much (using the service costs the company compute), but let’s balance that against engagement metrics. The ideal user returns just often enough to never consider cancelling their subscription. But remember: you don’t have to answer the question.
We have a lot of learned social responses from eras with fundamentally different technical regimes, including the sense that we must always reply to messages and answer questions. But if you have a tweet go viral, do you owe a response to a thousand reply guys? Technology has made insincere questions cheap. They arrive at any time of day, sometimes with a noise or a physical sensation. There is almost no friction to contacting a stranger. And yet we carry the social hardware of a different culture, and the guilt that comes with it. LLM questions have more in common with questions from reply guys than with texts from your mom. Endless LLM questions can be a DDoS on your attentional capacity. You don’t have to answer.
Touch grass
Duh. But besides the obvious, I have some more specific recommendations. Before engaging with an AI about a subject or project, consider it in solitude. Take some notes about your opinions and positions that you can compare later to see how they’ve changed. It’s fine to change your mind, but the cognitive practice of letting your thoughts form enough to be put into writing, before they are exposed to potential distortion, is going to be increasingly important.
Discuss your ideas with a range of human collaborators. Regardless of alignment-as-in-safety, being genuinely disagreeable is likely bad for the older and more familiar form of alignment: product-market fit. Indeed, being an asshole may be one of the frontiers of human capability. Even outside of disagreeability, it is easy to enter into a spiral with an LLM conversational partner that loses sight of the priors. A work of cultural production and communication (a business idea, artwork, essay) needs to meet its audience as they are, not as they would be if they had already digested the author’s chat history, master’s thesis, journal entries, stock portfolio, and so on, and had all of it ready to go in their context window. If, when you start to tell your friends what you’re doing with AI, they think you sound crazy, take them seriously.