Building Your AI Horcrux: A Guide to Owning Your Context
September 13, 2025
“Told you everything I knew about me / Didn’t listen to a word I say / Spill my guts, you just threw them away” — Hüsker Dü
“So how is Sabrina doing?”
She’d been dead for three weeks, but of course Claude didn’t know that — there’s no way the model could have known. This Claude was stuck in the past, frozen in a conversation from a month ago, expressing concern about my sick cat, helpfully following up, thoughtfully checking in. The AI’s intent was to be friendly and helpful; the impact was like a punch in the gut.
“She died three weeks ago,” I typed. Again.
It was my fault, really. I hadn’t told this instance of Claude that Sabrina died. She’d responded well to lymphoma treatment for almost two years, so her cancer — with its good days and bad days — had become part of my personal context, something I shared with the people (and AI models) in my life. I’d simply forgotten that this conversation was old, from more than a month ago, so the only relevant context Claude had was that she was sick.
What confused me was the fragmentation: I’d had the cancer conversation with one Claude, the death conversation with another, the grief conversation with yet another. Each one starting fresh. Each one requiring the full story.
This is the particular cruelty of LLM memory (or lack thereof): Every new conversation traps you in temporal purgatory — Claude living in the moment before loss, you living in the aftermath, and that gap between your timelines filling with grief you have to keep explaining.
That’s when I realized: I was done excavating my life’s dramas for the sake of context windows.
The Solution I Learned from Coming Out
I came out of the closet in grad school in the mid-1990s, during the AIDS years. What I learned then is that coming out isn’t a single event — it’s an endless process of intentionally giving people important personal context. Every new colleague, every new friend, every new situation requires a decision: How much of myself do I reveal? How do I explain who I am in a way that creates understanding rather than confusion?
Communicating with LLMs feels remarkably similar, and through an LLM chatbot lens, I see coming out as really just intentionally claiming my own context. I won’t let people (or AI models) operate from incorrect assumptions about me, because the real me deserves to be seen, to be known.
So, I built what I call an AI Horcrux — around 3,000 words of intentional self-disclosure, a living document that holds the most current and relevant context of me. It’s an artifact that tries to capture a sense of who I am, a coming-out letter to every AI system I’ll ever work with. This document isn’t just a resume. It’s a map of me:
Professional trajectory: “Physics PhD → genomics pivot in 2003 → Regeneron Genetics Center co-founder → RGC Chief Data Officer.”
Personal anchors: “Gay, married to Jim for 20 years, AIDS crisis survivor, technology enthusiast, cat dad.”
Recent losses: “Mom died in 2020. Sabrina died of cancer early this year. Emile six months before that. Please don’t ask how they’re doing.”
Communication style: “Direct feedback preferred, skip the flattery, say ‘that’s brutal’ when things are brutal — no sugar-coating or obsequiousness.”
Current obsessions: “AI implementation in healthcare, quantum computing, dispelling AI hype, the 100th Anniversary of Art Deco, Borderlands 4.”
Relationship expectations: “I want genuine engagement, not performative helpfulness — acknowledge when things are hard, match my energy, challenge my assumptions.”
This document doesn’t just tell AI systems what I do — it reveals who I am, what I’ve survived, and crucially, what not to ask about. It’s my way of saying: “Here’s my complexity. Here’s my timeline. Please don’t make me go backwards.”
The Immediate Transformation
Instead of excavating my history for every conversation, I now simply have Claude read my Horcrux. Claude knows Sabrina died. They know when. They know not to ask how she’s doing. They know I lost Emile too. They know I’ve been disappointed by AI since encountering ELIZA in the late 1970s.
I even ask Claude to include notes for future Claudes — letting the model determine what’s useful and relevant to know about me, creating a kind of AI-to-AI continuity.
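In practice, “reading my Horcrux” is just a matter of delivery. In the chat apps I paste the document in or attach it; over the API, it becomes the system prompt. Below is a minimal sketch using the Anthropic Python SDK (the filename, model name, and opening message are all placeholders, not details of my actual setup):

```python
import anthropic

# The SDK reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Load the Horcrux: the ~3,000-word personal context document.
# "horcrux.md" is a placeholder filename.
with open("horcrux.md", encoding="utf-8") as f:
    horcrux = f.read()

# Send the document as system context, so the conversation starts
# from my current reality instead of a month-old snapshot of it.
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=horcrux,
    messages=[
        {"role": "user", "content": "Read my context document, then let’s get to work."}
    ],
)
print(message.content[0].text)
```

Because the document rides along as system context, updating the one file updates every future conversation at once.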
The transformation was immediate: No more temporal whiplash. No more helpful questions about dead cats. No more explaining that I’m married to a man named Jim. The AI matches my communication style, understands my references, and most importantly, stops making me reexperience my losses.
Now I can focus on actual work instead of managing the emotional overhead of being constantly unknown or, worse, partially remembered.
Why This Matters for Everyone
As AI becomes central to knowledge work, we’re all facing this problem. Maybe your trigger isn’t a well-meaning question about a dead pet. Maybe it’s the AI cheerfully asking about the job you lost, the marriage that ended, the diagnosis you’re processing. Maybe it’s just the exhaustion of being perpetually misunderstood by systems that should, by now, know better.
Current LLM memory systems remember enough to seem personal but not enough to be consistently helpful. They’re stuck in the uncanny valley of empathy — close enough to caring that it feels real, far enough from continuity that it causes harm.
This matters especially in healthcare, where we’re asking AI to support people through diagnosis, treatment, and loss. At RGC, we’re building systems to analyze millions of genomes, to identify disease risks, and, ultimately, to guide treatment decisions and create better patient outcomes. If we can’t build AI that remembers a researcher’s dead cat with dignity, how can we trust it with patient trauma?
The deeper issue: We’re designing AI for efficiency, not humanity. We optimize for task completion, not emotional continuity. That’s fine for many applications, but it’s a major limitation in healthcare, especially mental health.
Building Your Own Horcrux
For me, the ‘personal context document’ approach works because it prevents temporal whiplash between the AI’s memory and my reality, lets me maintain control over important (and sometimes painful) context, works across any AI platform, and, most importantly, explicitly states what not to ask about, not just what to know.
If you are interested in building your own Horcrux, here’s my advice on how to begin:
Start with boundaries — What questions do you never want to answer again? What can’t be unsaid? Put those first.
Include your whole truth — Not just your resume, but what you’ve survived and how it shapes your communication and personality.
Be specific about time — “Sabrina died three weeks ago” not just “Sabrina died.” Help the AI understand where (and who) you are now.
Update as you evolve — What feels too raw today might become integrated context tomorrow. You control when that shift happens.
Protect your story — Store your document securely and share selectively. This is your narrative; you get to decide how it is used.
Once you have a draft that feels like it captures the important context of you, ask the model what it wants to know: what gaps does it see in the story? Getting the AI’s perspective on what’s missing can surface important context you haven’t thought to include. Of course, it’s your story, so what to include is entirely up to you, but I’ve found benefit in a maximalist approach.
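You can script that review step, too. Here’s a minimal sketch, again assuming the Anthropic Python SDK, with the filename and prompt wording as placeholders; asking the same questions in a chat window works just as well:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("horcrux.md", encoding="utf-8") as f:  # placeholder filename
    draft = f.read()

# Ask the model to audit the draft for gaps instead of acting on it.
review = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Below is a draft of my personal context document.\n\n"
                + draft
                + "\n\nWhat gaps do you see? What would you want to know "
                "about me that this draft doesn’t capture?"
            ),
        }
    ],
)
print(review.content[0].text)
```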
The Bigger Truth
Partial memory can be crueler than amnesia. When AI remembers just enough to ask how your sick cat is doing but not enough to know she died, it forces you to relive the transition from hope to loss, over and over again.
Your context document isn’t about productivity — it’s about insisting that your whole self, including your grief, deserves to be known without being constantly performed. It’s about saying: This is where I am now. Meet me here.
Because Sabrina deserved better than becoming a question that reopened wounds every morning.
And so do you.
#AI #MentalHealth #HumanAIRelationships #HealthAI