ChatGPT still knows what you did last summer
The history and future of selling you, the product.
Recently, one of my colleagues said he wasn’t interested in joining our university’s institutional adoption of Google’s Gemini. Why not? He said he likes where he’s at in his relationship with ChatGPT. I get that. I didn’t expect to say something like that a year ago.
A lot of friends and colleagues ask me which AI chatbot to use. I’ve always said it doesn’t matter. I still don’t think it does. They are largely indistinguishable in capabilities and features. You might have preferences, but you have to discover those by using them.
All of that is still true. But using one consistently became a lot more powerful when they began to remember across sessions. Breaking up with one platform entirely would mean losing a lot of painstakingly established context about who you are, what you do, and how you do it. Admittedly, you have to curate its memory a bit, which deserves a future post. But if you do, the system becomes more powerful the longer you use it.
A good memory system might be able to inspire lasting brand loyalty between otherwise indistinguishable products. I imagine that will make some people nervous. What will these companies be able to do when they have the access and technology to know so much about us and our work, including making powerful inferences from the wide variety of ways we use them?
A week ago, I would have dismissed that worry, primarily because I didn’t expect these chatbots to be able to make sense of the many different ways we use them all the time. The social media era taught me to feel violated and disappointed at the same time.
ChatGPT and the History of Me
I began thinking about this when ChatGPT greeted me with a message announcing that it had upgraded its long-term memory. Clicking the message began a new conversation with the automated prompt: “Describe me based on all our chats — make it catchy!” I rolled my eyes hard, hard enough to sprain something. But its response surprised me.
Drawing on all of our previous conversations, here’s how ChatGPT depicted me:
Travis Ross is where the humanities meet the command line.
A historian with the mind of an engineer, he turns 19th-century archives into structured data and Docker containers into digital scriptoriums. Whether he’s building an AI-powered research assistant to uncover new historical insights, or reimagining higher education for the co-intelligence era, Travis moves fluidly between Nietzsche and Nginx, Tocqueville and TLS. He’s a faculty member, systems thinker, and maker of workflows—with a vision: to turn universities into agile, AI-literate ecosystems, where reasoning trumps regurgitation and students build the future by thinking alongside machines. If it can be automated, he’s scripting it. If it can be rethought, he’s already two iterations deep.
Half historian, half hacker. All synthetic reasoning.
I came in expecting it to fail to integrate all the information it has accumulated about me, so I scoffed at the haphazard and lazy examples it pulled. I can identify the one or two conversations that involved Nietzsche or Nginx, Tocqueville or TLS.
If I were to grade this effort as a history professor, I think I’d have to say it’s doing pretty well. I select details from other people’s lives to represent them in my stories. It is really hard to do well. As I considered what it did in that context, I was both surprised and impressed. It portrayed me by focusing on my range, accurately capturing the core commitments that connect the seemingly unrelated things I do, but also why I do them. In a trick I often employ, it selected otherwise arbitrary examples that advanced that thesis, structuring those examples and their relationships with alliteration and parallelism. I have spent my career trying to teach those skills, first to myself and then to students. I’m a tough critic, but I’m also impressed.
If I have just one objection to its substantive depiction of me, it is that I do not have the mind of an engineer. I can see the confusion, though, because I oversample on language engineers use when I work with ChatGPT to write software on my behalf. I draw from their lexicon because I want ChatGPT to think like an engineer on my behalf precisely because I do not. In that sense, you can see how dependent it remains on the specific sources it has, a prodigious but patchy archive that represents not who I actually am, but what I use ChatGPT to do and how I strategically misrepresent myself when I do it.
Again, as a historian, I appreciate the challenge and I’m impressed.

The Future of Selling Me
All of that surprised me. Had I thought about it for a minute, I might have anticipated this result. But I implicitly expected it to take the crazy range of things I have used it to do and turn it into a muddled misrepresentation of me. Why? Because for two decades, Google and Meta have trained sophisticated algorithms on a similarly diverse sample of my many facets. Even though knowing everything about me is their entire business model, those companies never seem to get me at all. They offer up a muddled mess of promoted posts, paid search results, and product recommendations that—at best—seem targeted at flat caricatures of me.
Given our history with these predictive algorithms, I suspect many might find ChatGPT’s memory concerning. For fifty years, people have repeated a truism, adapting a single core idea to every era: “If the product is free, then you are the product.” Conditioned by the fear and frustration built up over those decades, many people worry about what will happen when these companies—with their insatiable desire for insights into every aspect of our lives—have the technology and the access to know everything about us. How will they exploit this depth and breadth of knowledge to improve their product?
I would suggest they already do. They have for a while. They just don’t care to use it. We should question whether the demand for us as a product actually is insatiable. What if no one actually wants to buy a nuanced understanding of any of us? The more I questioned this, the more it felt like an interpretive key that could unlock a much more satisfying explanation of our current situation, including the frustrating brokenness of the ad-supported internet that made ChatGPT feel so powerful.
ChatGPT and Google both know enough about me to know that I am actually a terrible product. To take just one example, they know that I am a technophile but that I almost never want to buy software. I refuse to subscribe to services. ChatGPT latched onto Nginx and TLS because those two technologies came up routinely as it assisted me in my protracted campaign to replace nearly all paid or ad-supported services in my life with free, open-source, self-hosted applications that I completely control. Google probably knows that about me, too, but it makes me less valuable.
I have always found it laughable and annoying that these companies—equipped with so much information about me and such powerful technologies—just can’t make sense of me. Instead, I would suggest capitalism can’t make sense—or should I say cents?1—out of a full understanding of me.
If we want to think strategically about how this technology will be applied, we should question whether a nuanced, holistic view of any of us can provide real value to advertisers. There is no product to sell to a person with the strange profile ChatGPT has inferred from our engagements, at least not yet. Maybe generative AI will provide new ways to get away from mass production. Perhaps some kind of micro-production will emerge that can anticipate hyper-specific needs, making money by creating that killer product or service that would make precisely one person scream “take my money!”
We should revisit the truism “you are the product.” Dubious as it might seem, the algorithms that learned to predict our interests were meant to create value for us, not from us. Even when they reshaped our interests in unhealthy or dangerous ways, when they provided ample doom to scroll or the absolute worst comments to hate-read, the algorithms that curated our feeds successfully offered all of us a kind of value that purchased our time and attention. The algorithms weren’t learning on behalf of their customers, the advertisers. They were learning about us for us.
As my previous example demonstrates, the companies to whom these platforms sold our attention didn’t need all that much information about us. If at some point LLM-powered technology manages to serve me ads that actually respond to a holistic view of me, I’m not even sure I’ll be mad about it. I’ll probably be mad about something I can’t even imagine right now.
Predictive algorithms did far more damage over the past decade by giving users what they wanted than the algorithms that tried to sell those same people printer toner they didn’t want. Whether or not chatbots will survive as viable products on their own into the future (I don’t think they will), they are the hottest product of the moment. As such, they need to provide value to users by giving us what we want, customized to us. If there is a threat to us in that, it isn’t the models themselves; the threat vector will be users’ ability to curate their own downward spiral with ease and speed.
ChatGPT cannot be faulted for forgetting to mention my liberal use of dad jokes; I write all my own material.