This entry, like all entries before it, is 100% original, human thought - no AI has been used or hurt in the construction of the words herein.
When was the first time you interacted with artificial intelligence?
If we dismiss our first reaction based on the recency of a “vibe coding” session or some similar prompt-based activity, the real answer almost certainly depends on what you consider AI to be. On this note, I am certain of two things: AI is the current red-hot topic in the technology world, and this frothy flurry has almost certainly annoyed a great many erudite scientists who have been working on “Artificial Intelligence” for years (in the same way “Web3” annoyed d-net technologists, “Metaverse” annoyed a great many folks in the video game industry, etc.). This potential for annoyance is rooted in the fact that hype cycles in the VC- or Wall Street-fueled technology industry tend to fixate on a very specific part of a much larger and more sophisticated system of inquiry, and to treat that very specific part as a solution deserving nothing less than near-religious levels of adulation (“pilled” levels of buy-in among technologists is a topic in itself, but suffice it to say the reason amounts to “because money”).
So, to be more precise, when we say “AI” in technology circles today we are almost certainly discussing LLM-based generative AI. However, if we consider “AI” beyond the bounds of the more recent generative boom, it is near-certain that your first interaction with AI happened years, if not decades, ago. By this same logic, I’d argue that millions of modern consumers have been interacting in a fairly substantial way with AI since at least the early ’70s. In case it is not obvious where I am going with this: My argument is that for the vast majority of modern consumers, their first (and perhaps even current) substantial interaction with AI was through a video game.
One of the primary innovations of video games was the creation of an opponent that was not a human being. While many video games are in actuality a digital arena for you to challenge a human opponent (PvP, or “Player vs. Player” in gaming parlance), just as many evolved what was almost universally a social pastime requiring two or more people (i.e. games in the more general sense) into a solo activity against a digital opponent (PvE, or “Player vs. Environment” in gaming parlance). I am in no way equating the sophistication of generative AI models as they exist today to (say) the algorithms controlling the ghosts in Pac-Man or the procedural AI that creates dungeon levels in Diablo and entire planets in No Man’s Sky, but it nonetheless remains true that the modern consumer’s frame of reference for the concept of AI was very likely based on interactions with more or less sophisticated AI within video games.
Just a year before Terminator entered the cultural zeitgeist as the standard for “very bad things that can happen with AI,” the movie WarGames similarly addressed the very real possibility of the eradication of the human race at the hands of artificial intelligence, one that was only thwarted by a very high-stakes PvE contest of tic-tac-toe. Today, the potential for “artificial general intelligence” to pose a Skynet- or WOPR-level threat is well entrenched in the broader AI dialogue around the underpinnings of the technology (i.e. more powerful models, increasing compute, etc.). And yet, what is less discussed is perhaps the biggest challenge facing generative AI today: Driving consumer interaction beyond the novelty factor. This challenge has tremendous bearing on the business side of the technology, which must generate revenue to offset the considerable costs of powering generative AI in both the literal and figurative sense.
Like any technology, the potential for consumer adoption of AI hinges on users deriving disproportionate value from it. Among the leading paradigms for creating value for consumers is the concept of “agentic AI” - a more or less autonomous artificial intelligence “agent” that performs tasks on behalf of the user. However, the value of these agents will come from a deep connection with and understanding of the user - how else can something acting on your behalf be successful without knowing how to properly act through the lens of your tastes, preferences, and motivations? Here we find a classic “chicken and egg” problem: How do you get consumers to use and value a technology that may not become useful or valuable until they use it?
This is a different but equally challenging problem relative to solving the technological or logistical impediments to AI, as it is deeply concerned with human behavior. More specifically, for agentic AI (among most of the other use cases for generative AI) to be successful, we expect consumers to have a deeper and more affective relationship with these technologies. While thousands of individuals and billions of dollars are being poured into this challenge (though as noted above, more likely through a technology-based “build it and they will come” approach rather than a human-based “consumer product-market fit” approach), one of the most viable paths may be going back to where consumers first interacted with an AI, where they were challenged by an AI, and where in many cases they may even have grown fond of AI.
In short, one of the most viable paths for acculturating modern consumers to AI, like innumerable technologies before it, may be video games.
Throughout their history, video games have often been among the primary means by which we have learned about new technologies, ranging from televisions to PCs to smartphones. This has been the case time and time again because video games are a very human way to orient ourselves to technologies: they fulfill very human needs. Aside from the fact that these games are a fun if occasionally frivolous form of media, gaming fans have formed deeply affective relationships with these technologies not just through the power of cultural fandom, but through direct affinities for the artificial entities therein.
One of the primary appeals of these games is that they allow us to do things we might not otherwise do - slay a dragon, for instance, or exercise a high degree of agency to be near-ruthless and villainous in our pursuits without any real repercussions. And yet, when given that degree of agency, players disproportionately choose to be kind to non-player characters (NPCs). This is in part because we imbue a great deal of ourselves into the avatars we occupy in games, inextricably linking our own sense of self and morality to our actions in the game, which forms the basis of a kind of parasocial relationship with NPCs. Aside from a desire to avoid a Skynet-style extermination, this may also be why OpenAI is burning millions of dollars on users practicing common courtesy with its AI.
To be clear, my point is not that the consumer adoption of agentic generative AI requires getting all these consumers to play a massive, Space Opera-style role-playing video game (though it probably wouldn’t hurt!), applying rote “gamification” to AI, or even just adding it as functionality in games (which also might not hurt!). Rather, I have argued in two previous pieces that the potential death of social media, one of the most popular and fastest-growing digital consumer technology products of modern times, may be rooted in a revenue-based race to create increasingly similar content engines that we “like” rather than facilitate social networks we “love.” The meteoric rise of social media was based on solving for the latter, and its demise may be the result of the former. Technologies that are not just utilities we use, but that allow us to fulfill deep human needs and thereby become essential and deeply connected to our lives, are the ones closer to affective technologies built with a much more human-connection-oriented design.
To be successful, agentic AI must become one of these highly affective, deeply human-connected technologies. Understanding how humans connect to already established, highly affective technologies such as video games has the potential to be profoundly illustrative of the principles involved in designing agentic AI.
Off the cuff, the practical considerations this framework raises are manifold:
To what extent do we interact with AI as ourselves vs. as an avatar of ourselves in a digital interaction? How does this impact our orientation toward the experience?
What does it entail to make an artificial entity likeable or personable à la an “NPC” and not just another “clanker”?
How do we monetize AI in a way that does not break the flow or immersion of the experience, understanding how disruptive such an interjection would be given the deep connection between the user and the technology?
As noted above, these are a very different set of quandaries relative to the technology-oriented challenges with AI, but ones that I’d argue are just as, if not more, important. While I cannot promise that such a framework is the sole, miraculous unlock for mass adoption of AI, aligning with principles that solve for deeply affective, connective human needs is nonetheless a healthier and more fruitful route for technological innovation than blaming the users, screaming about “FUD,” or otherwise wringing hands about why the average consumer isn’t as “AI-pilled” as you are.
We should endeavor to build affective technologies that solve essential human needs in order to yield products that humans love. Assuming, of course, the point is that we want humans to use these things.