This piece was commissioned by Vytas Jankaukas and La Plateforme, a coding school in Marseille, in late 2022. At the time, I was teaching in their Innovation Lab’s Social Networks of Tomorrow Program.
The piece was published at the time on their website, but unfortunately they have restructured the site and old links to it are now broken (and it may no longer be online). I’m archiving it here for posterity. There’s a lesson here about platform risk, but I’ll leave that as an exercise to the reader.
The prisoner’s dilemma presents a challenge for all hopeful worldbuilders. It describes a scenario where two participants, despite behaving “rationally”, arrive at a globally sub-optimal (worse overall) outcome. This points to an anxiety at the very core of economics and system design. As Elinor Ostrom put it in the foundational Governing the Commons, “The prisoner’s dilemma game fascinates scholars. The paradox that individually rational strategies lead to collectively irrational outcomes seems to challenge a fundamental faith that rational human beings can achieve rational results.” (Page 5) How can we escape the systemic trap? Or, maybe better: can this mode of rationality fully assess what is rational?
In the narrative version of the prisoner’s dilemma, two conspirators robbed a bank together and were caught. Upon their arrest, the police separate them so that they can’t communicate, then offer each prisoner the same deal: if you confess to the crime and betray your conspirator (defect) while they stay silent, they go to jail for five years and you walk free as a reward. If you both defect, you both go to jail for three years. If you both cooperate with each other and stay silent, the police can only convict you on a lesser charge, and you each serve one year.
The scenario is, of course, made up - and like all models, it is wrong. Humans are pleasantly irrational and do not always defect in the prisoner’s dilemma. In fact, one of the demographics that shows the highest rate of cooperation overall is actual prisoners with a background in organized crime (source). But despite its problems, the model still maps onto many large and small non-theoretical scenarios: two countries scaling back their CO2 emissions, international armed conflicts, Romeo and Juliet deciding whether to plunge into the unknown, etc.
The globally optimal, or best overall, outcome is for both prisoners to stay silent. This results in the lowest system-wide jail time and is certainly the outcome we’d like to see in the various real-life analogues. As the theory goes, however, the rational prisoner will always betray their conspirator: whatever the partner does, defecting yields a shorter sentence for oneself. Yes, defecting allows the possibility of three years of jail if the partner defects also, but the absolute worst case, five years, becomes impossible. The only way out of the systemic trap is to fundamentally change the priors. How can the problem be remodeled so that the prisoners cooperate more?
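To make the trap concrete, here is a minimal sketch of the one-shot game in Python - the jail terms simply follow the numbers in the story above, and every name in it is illustrative. It lays out the payoff matrix and checks that defecting is individually better no matter what the partner does, even though mutual silence is best overall.

```python
# One-shot prisoner's dilemma, with jail years as costs (lower is better).
# Numbers follow the story above: mutual silence = 1 year each on a lesser
# charge, the lone defector walks free while the betrayed serves 5 years,
# and mutual defection costs 3 years each.
JAIL = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (5, 0),
    ("defect",    "cooperate"): (0, 5),
    ("defect",    "defect"):    (3, 3),
}

def my_years(my_move, their_move):
    return JAIL[(my_move, their_move)][0]

def total_years(my_move, their_move):
    return sum(JAIL[(my_move, their_move)])

# Whatever the partner does, defecting costs me fewer years...
for their_move in ("cooperate", "defect"):
    assert my_years("defect", their_move) < my_years("cooperate", their_move)

# ...yet total jail time is lowest when both stay silent.
assert total_years("cooperate", "cooperate") < total_years("defect", "defect")
print("Defection dominates individually; cooperation is best collectively.")
```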
One answer is identity. So far we’ve defined the prisoner’s dilemma in a way that makes it a one-off scenario: these prisoners encounter each other once and never again. But that maps poorly onto many real encounters, where there is at least some possibility the prisoners will meet again. When the prisoner’s dilemma is iterated, or played in such a way that the prisoners know they will encounter each other again, a very different pattern of behaviour can emerge. Suddenly, the long-term benefits of cooperation can outweigh the penalty of defection. Instead of a single payoff matrix, the iterated prisoner’s dilemma can be thought of as a sum of payoffs over repeated interactions, discounted in proportion to the likelihood of re-encountering the opponent. Or, as Robert Axelrod explains, “For cooperation to prove stable, the future must have a sufficiently large shadow” (Page 174).
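Axelrod’s “shadow of the future” can be made concrete with a toy calculation - a sketch only, reusing the illustrative jail terms from above, with two invented strategies (always defect, and tit for tat, which simply reciprocates) each facing a tit-for-tat partner, where after every round the game continues with probability w.

```python
# Iterated prisoner's dilemma with a "shadow of the future": after every
# round, the game continues with probability w. Jail years as costs
# (lower is better), matching the one-shot numbers above.
R, S, T, P = 1, 5, 0, 3  # mutual silence, betrayed, lone defector, mutual defection

def tit_for_tat(partner_last):
    return partner_last          # reciprocate whatever the partner did last

def always_defect(partner_last):
    return "defect"

def expected_years(strategy, w, rounds=1000):
    """Expected jail years for `strategy` against a tit-for-tat partner,
    truncating the geometrically discounted future after `rounds` rounds."""
    total, weight, partner_move = 0.0, 1.0, "cooperate"
    for _ in range(rounds):
        my_move = strategy(partner_move)
        if my_move == "cooperate":
            total += weight * (R if partner_move == "cooperate" else S)
        else:
            total += weight * (T if partner_move == "cooperate" else P)
        partner_move = my_move   # the tit-for-tat partner copies my last move
        weight *= w              # chance of meeting again shrinks each round
    return total

for w in (0.1, 0.5, 0.9):
    print(w, round(expected_years(tit_for_tat, w), 2),
             round(expected_years(always_defect, w), 2))
# With these numbers, once w rises above roughly 1/3, cooperating with a
# reciprocating partner accumulates less expected jail time than exploiting them.
```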
This way of approaching identity, walking up to it backwards through game theory, gets at something important: it is integral to trust and entangled with reputation. It is also fundamentally relational. Identity is the thing that is recognized. This is the operational logic of calls to enforce real names on the internet. The reason we harass each other, the logic goes, is mostly because we believe we won’t be recognized by anyone who could punish our bad behavior or proactively defect on our next encounter. Identity is how the prisoner you betrayed knows they have seen you before - and “the thing that is recognized” requires a recognizer. I have previously written about how different ontologies of trust create different topologies in social networks. Identity depends on and emerges from these kinds of trust networks. Moreover, they may be the same thing - what is identity, if anything, besides or outside of the thing that is recognized and therefore trusted?
Philosophically, the question of whether the self exists is very old. When the Buddha was asked, “is there a self?”, he famously refused to answer. When asked, “Then is there no self?” he stayed silent again (source). It is comparable to Bartleby the Scrivener’s “I would prefer not to” in its fundamental refusal of the terms of the question. And the question is by no means answered by contemporary science. Instead, we discover the Proteus effect, where the appearance of a person’s digital avatar changes their behavior. Most popular personality tests demonstrate poor repeatability. Emerging thinking suggests that the human brain may be able to neuroplastically adapt to inhabit an avatar very unlike a human body - for example, with much longer or extra appendages (source). Even purely in terms of the physical body, the number of cells in the microbiota, the organisms inhabiting the body that are distinct from it and arguably not-self, is comparable to the number of human cells (source).
A deep look at the existence or nature of the self is far outside the scope of this short text, which attempts to look more closely at identity in the context of designing software. But when Mark Zuckerberg says, “You have one identity…The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly…Having two identities for yourself is an example of a lack of integrity,” (source) it is worth making explicit that he is not speaking to scientific consensus, observable behaviour, or the history of human thought.
All platforms contain an implicit ontology of the self. And if the task at hand is to design software for the world we want, then we can start by asking who we want to be. How might identity be structured? The most common primitive for “a person” in decentralized technology is a private/public keypair. It is relatively easy to map this onto the username/password login many are familiar with from web2. At first glance, this may correlate with the Zuckerberg model of identity. But even in the context of familiar social media like Instagram or Twitter, there is a large space of possible other frameworks. Some are implicitly acknowledged by the interface, others fall in the undefined space Olia Lialina articulates as that of the Turing Complete User: “users who have the ability to achieve their goals regardless of the primary purpose of an application or device”. In Lialina’s framework, we are all Turing Complete Users, and it is the hubris of interface designers to imagine otherwise. “[Turing Complete Users] can write an article in their email client, layout their business card in Excel and shave in front of a web cam … You can have two Twitter accounts and log in to one in Firefox, and the other in Chrome. This is how I do it and it doesn’t matter why I prefer to manage it this way. Maybe I don’t know that an app for managing multiple accounts exists, maybe I knew but didn’t like it, or maybe I’m too lazy to install it. Whatever, I found a way.” The most fundamental agency over technology is not building it, but using it on your own terms. As Lialina says, “This kind of interaction makes the user visible, most importantly to themselves.”
One example is the group meme page. I was part of one for a long time that has fallen into infrequent posting, but was quite active pre-pandemic. It has one password, known by around 10 people. Three or four of us logged in and posted regularly under the same username. In her summary of approaches to identity in web2 and web3, Inventories not Identities, Kei Kreutler offers the multisig, or multi-signature wallet, as a potential architecture for identity, suggesting that claims about identity could become like interoperable assets. The group meme page functions similarly to a multisig - different private/public keypairs can make changes and send transactions on behalf of a single shared wallet, where the shared wallet or shared account becomes its own “identity”.
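As a rough sketch of that structure - not any particular wallet’s implementation, and with names and threshold invented for illustration - a shared account can simply recognize several members’ keys and act only when enough of them approve:

```python
# Toy sketch of a multisig-style shared identity: the account itself is the
# "identity", and any sufficient subset of members can act on its behalf.
# An illustration of the structure, not a real multi-signature wallet.

class SharedAccount:
    def __init__(self, name, member_keys, threshold):
        self.name = name                     # the shared identity, e.g. a meme page
        self.member_keys = set(member_keys)  # keys (or logins) belonging to members
        self.threshold = threshold           # approvals needed before the account acts

    def act(self, action, approvals):
        signers = self.member_keys & set(approvals)
        if len(signers) < self.threshold:
            raise PermissionError(f"{action!r} needs {self.threshold} approvals")
        return f"{self.name} performed {action!r}, approved by {len(signers)} members"

# A hypothetical 2-of-3 account: no single keyholder *is* the page,
# but together they are.
page = SharedAccount("meme_page", {"alice_key", "bob_key", "carol_key"}, threshold=2)
print(page.act("post meme", {"alice_key", "bob_key"}))
```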
The alt account, as it’s called on Twitter, or finsta on Instagram, is another rich example. People make alt accounts for all kinds of reasons. Perhaps the most common is to separate the public from the semi-private: the public account embodies the personal brand that the workification of the world requires many of us to maintain, while the alt account (sometimes literally private, sometimes just pseudonymously separated from the main account) is recognizable only to closer friends and allows sharing of off-brand, less professionally palatable or more experimental content. But over the past year or so, having asked many people about their “identities” online, I’ve heard many other reasons: from an account for tweeting about football so as not to annoy friends who aren’t sports fans, to fully made-up characters taken on as a performance, to a way to curate specific feeds without the need for the polite follow-back.
Translating the structure of alts and finstas into web3 evokes the Hierarchical Deterministic (HD) key - a model both ubiquitous and underexplored. It is used in many popular web3 wallets, including Metamask. In this model, a single parent key is used to derive endless child keys that cannot be traced back to the parent by an outside observer. The parent key itself is never used, except to derive children. Unfortunately, in the web3 world, the need to fund each child account in order to pay for transactions often creates visible public links between these accounts anyway. Moreover, many interfaces don’t clarify the affordances of the parent/child key structure.
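As a toy illustration of the one-way parent-to-child structure - a simplification, not the BIP32 specification real wallets actually implement, with an invented stand-in for the seed - whoever holds the parent secret can regenerate every child, while an observer who sees only the children cannot work back to the parent or link them to each other:

```python
# Simplified illustration of hierarchical deterministic (HD) derivation:
# the one-way parent-to-child structure, not the full BIP32 specification.
import hmac
import hashlib

def derive_child(parent_secret: bytes, index: int) -> bytes:
    """Derive child key number `index` from the parent secret. HMAC is
    one-way, so a child reveals nothing about the parent, and two
    children cannot be linked without knowing the parent."""
    return hmac.new(parent_secret, index.to_bytes(4, "big"), hashlib.sha512).digest()[:32]

parent = b"stand-in for a wallet seed, never used directly"  # illustrative value
alt_account = derive_child(parent, 0)
finsta = derive_child(parent, 1)

# The parent holder can regenerate any child at will; an outside observer who
# sees the child accounts on-chain sees no cryptographic link between them
# (though funding one from the other creates a visible link anyway).
print(alt_account.hex() != finsta.hex())       # distinct, unlinkable accounts
print(derive_child(parent, 0) == alt_account)  # reproducible from the parent
```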
Something like this model already exists in legacy social media platforms in the form of the “switch accounts” feature. Instagram knows that there is a shared owner of my official public account, a meme page I participate in, the online art gallery I co-curate, and maybe some other accounts I prefer not to mention here - and it offers “me” a convenient interface for switching between them. What do we call the unnamed thing that Instagram knows connects these accounts - the absent center, the ghost in the shell, the no-self? Everyone’s always talking about the hydra’s heads - who is the body?
Perhaps the most extreme case of hydra identity is found on 4chan, where all posts are listed as from “anonymous”. This affords an interesting community: while comparatively small (22 million unique monthly visitors vs Facebook’s 260+ million monthly actives), 4chan has become the subject of global attention on more than one occasion - first as the spawning ground of the hacktivist movement of the same name, anonymous, who famously stated “we are legion”, and more recently through its association with the alt-right. 4chan can be an unsafe, unfriendly place - but it’s not one-dimensional either, as evidenced by the dichotomous political activities it has fomented or its weird propensity for rescuing abused animals. Still, it is the exact opposite of the kind of online discourse proselytized by real-name advocates. But it is also a limit case for hydra identity, best evidenced in the linguistic patterns used to discuss it, as though it had a unitary identity and agency. I have used them here: “its weird propensity for rescuing abused animals”. Can you imagine ever speaking about Facebook this way?
All that is required for identity to emerge is the ability to be recognizable over time. Despite the prevalence of the Zuckerberg model, there is no reason to believe identity correlates one-to-one with “a person”. Various alternative models for identity exist in web2 and web3, both in the space of what is intended by the purveyors of identity platforms and outside it, where identity is invented by users. These alternative models are as legitimate and as important as any other. Though the relationship of physical body to online identity echoes the Cartesian mind/body dualism so criticized in the history of western philosophy - it is a false equivalence. Like you, I play a character online and her name is Sarah Friend but her body is unknowable. The animal typing this is someone else, only temporarily on your networks, ready to change, change again, and become anyone - more solid but less captured - uncapturable.