AI is gaslighting you.
Maybe you’ve come across an image like this on Instagram: a picturesque interior, walls lined with packed bookshelves, midcentury modern furniture, wall-to-wall windows, greenery all around; or maybe it’s more of a visual pun: a beautifully browned loaf of bread braided seamlessly into the shape of a Labrador, a chanterelle mushroom Lego set.
Your first instinct is to slow your scroll to its siren song. It is an image in the shape of a worthwhile image. Maybe you will tag a friend in the comments or silently DM it to them (it is an image that does not ask for commentary).
But maybe, just maybe, you’ll consider the image long enough for your eyes to come into focus. The stairway in that interior doesn’t lead anywhere. The letter-like forms on the Lego box are, upon further inspection, just abstract shapes, nothing more than letter-like. The pup loaf feels…familiar. Not its content, but its form. The lighting, the angle, the focus. Could this be the work of AI?
Maybe you open up the comments to investigate further. Among a sea of bland mentions, someone inevitably accuses the poster of using AI, and the response is dismissive: “I never said I didn’t.”
When I screen candidates’ job applications, or read my students’ homework, I’m struck with the same questions. Is this cover letter / reading reflection the output of a large language model, or is it just generic? I know that any confrontation would only yield one of two responses: a defensive “of course I didn’t” or a flippant “of course I did.” The conversations I’m having in my head — about what constitutes authorship, and the social contract between creator and audience — make me feel existentially dizzy.
Sports Illustrated recently came under fire for publishing [allegedly] AI-generated posts with [irrefutably] AI-generated author bylines and bios. In response, SI issued a statement that somehow both denied that AI was used and blamed a third-party contractor. CNET, confronted with similar accusations about error-ridden, poorly disclosed AI-generated articles, took the other approach: “AI engines, like humans, make mistakes” (in other words, what’s the big deal?).
Is the future we’re meant to cozy up to one in which humans who complain about being deceived are told both that they are wrong and that they are right but shouldn’t care?
The Turing Test, proposed by Alan Turing in 1950, sought to determine a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the years since, AI has advanced to the point where it can, in certain contexts, convincingly imitate human interactions.
But I’ve come to realize that when we obsess over whether something was made by AI, we’re often asking the wrong question. Sure, there is newsworthy content for which veracity is paramount. But for everything else, asking how much AI is in something may be less important than a simpler question: how much humanity is in it?
In her book The Situation and the Story, the writer Vivian Gornick unpacks a particular funeral speech that has stuck with her:
The next morning I awakened to find myself sitting bolt upright in bed, the eulogy standing in the air before me like a composition. That was it, I realized. It had been composed. That is what had made the difference.
The speaker never lost sight of why she was speaking — or, perhaps more important, of who was speaking. Of the various selves at her disposal (she was, after all, many people — a daughter, a lover, a bird-watcher, a New Yorker), she knew and didn’t forget that the only proper self to invoke was the one that had been apprenticed. That was the self in whom this story resided. A self — now here was a curiosity — that never lost interest in its own animated existence at the same time that it lived only to eulogize the dead doctor. This last, I thought, was crucial: the element most responsible for the striking clarity of intent the eulogy had demonstrated. Because the narrator knew who was speaking, she always knew why she was speaking.
A chatbot is a statistical calculator. It cannot know who it is. It is in fact the polar opposite of self: a regression to the mean of human expression. Think of all the expensive, far-reaching machine learning algorithms that try to learn about you today in order to better serve you content or ads, and how profoundly they fall short of anything resembling “knowing” you.
I have a nontraditional set of standards through which I encourage my students to evaluate their work (curiosity, criticality, communication, conscientiousness). This year, perhaps in order to get at Gornick’s why, I added a new Turing-esque test to my list of rubrics: expressiveness. It has three simple criteria:
It feels like it came from someone. It contains evidence of complex, emotive human detritus. Feeling human-like isn’t enough: it couldn’t have been made by “just anyone,” and instead leans into the unique perspective of the specific person/people who made it.
It feels like it was meant for someone. It is a work concerned with and designed for a particular audience, and the audience can feel that intention when they consume it.
It feels like it belongs in a particular context. It is aware of the place, time, culture, and artistic medium in which it will be consumed. Its form and content are in conversation with each other. It is not afraid to converse with the past, elevating, rather than concealing, its inspiration.
With this rubric, I never need to accuse my students of using AI. What matters is that the work is expressive, and contains evidence of the human that created it. If something feels robotic or generic, it is those very qualities that make the work problematic, not the tools used. I can simply say “I want to see more of you in this” or “who is this for?” or “seek out inspiration.”
From someone, for someone, in a particular context.
In What Is Art?, Tolstoy discusses what divides true art from its “counterfeits”:
Every work of art causes the receiver to enter into a certain kind of relationship both with him who produced, or is producing, the art, and with all those who, simultaneously, previously, or subsequently, receive the same artistic impression… Art is not, as the metaphysicians say, the manifestation of some mysterious idea of beauty or God; it is not, as the æsthetical physiologists say, a game in which man lets off his excess of stored-up energy; it is not the expression of man’s emotions by external signs; it is not the production of pleasing objects; and, above all, it is not pleasure; but it is a means of union among men, joining them together in the same feelings, and indispensable for the life and progress toward well-being of individuals and of humanity.
I want my students to go beyond simply producing pleasing objects and constructing sentences, and instead, as the newsletter writer Henrik Karlsson says, begin “extracting a latent possibility in the relationship with the audience.”
It’s worked. Rather than worrying about formalism and typos in their writing, I see students indulging their curiosities, allowing themselves to feel complicated, and sharing their personal experience and perspective. Their visual designs aren’t obsessed with looking fashionable, but with finding emotional resonance.
I apply the same expressiveness test as I browse the internet. If I find myself tempted to investigate the comment section to determine whether something was algorithmically generated, I instead quietly ask myself if it’s expressive. If it isn’t (it usually isn’t), maybe it’s not worth engaging with at all. The presence or absence of artificial intelligence becomes beside the point. Something created without AI can still be inexpressive. We find ourselves drowning daily in content that feels completely unmoored from, well, anything: it could have come from anyone, is meant to be consumed by anyone, and might find us on any platform. It ultimately communicates nothing, and leaves us unchanged.
I will also allow that something created with AI can be expressive (but more often than not, AI might make achieving expressiveness more difficult, not less). It is not that computers can never induce feelings in their audience, but that in so doing they raise the bar for what will eventually be perceived as unexceptional, thoughtless, predetermined.
Trying to confront a gaslighter on their own terms almost never gets you anywhere. So I wonder if, in changing the conversation to one of expressiveness, we might liberate ourselves from AI’s exploitation. If the original Turing test evaluated what computers are capable of, this new Turing test evaluates what we are capable of. And that re-centering of humans, if done in a supportive environment, can turn AI from something to be feared into a challenge: how beautifully, imperfectly, perceptibly human can we be? As the AI gets exponentially better at pretending to be us, that only moves the expressive goalpost; will we rise to the challenge of actually being us, but more?