Michael Schaeffer

What We Worship

January 2026

I've never put a flag on my house in my life. I'm 44 years old. I've owned homes. I've had opinions. But something about flags always felt like performance—a declaration for other people's benefit rather than my own clarity.

Last month, I bought one. A flag that says my god is love. It's hanging outside my house in Denver right now. Because of a video I saw of a bishop standing on cathedral steps—Gothic arches behind him, a crowd of parishioners at his back, eggs splattered on the ground from the confrontation—blocking federal agents, declaring to their faces: "I don't know what god you worship—maybe an orange one—but my god is love."

The video wrecked me. I shared it immediately. And because I was already thinking this could be the seed of something—an essay, maybe for The Atlantic, about the church reclaiming its moral voice—I went to Claude, Anthropic's AI assistant, to fact-check myself. Of course I did. If I was going to write about this bishop, I needed to know who he was. I wanted to understand the person behind a moment of such crystalline moral clarity.

Why Claude? Why trust an AI for something this important? I'll come back to that.

We discovered, together, that the bishop doesn't exist. The video was AI-generated. The bishop is fictional. The cathedral steps, the confrontation, the crowd behind him—all of it fabricated by an algorithm, produced by someone who sells courses on making money with AI-generated content.

I haven't taken the flag down.


Here's what I can't stop thinking about: That video moved me to action. Real action. A physical object now hangs on my house declaring something I believe. The "fake" produced something true. And I'm not embarrassed by this. I'm fascinated by it.

Everyone worried about AI is asking the wrong question. They ask: "What if AI manipulates us? What if it moves us toward things that aren't real?" But I was moved toward love. Toward solidarity with vulnerable people—families separated, children afraid to go to school, a woman shot dead on a snowy street in Minneapolis. Toward declaring something publicly for the first time in my life. The output served love. Does the source matter?


I've spent fifteen years as a yoga teacher. Before that, a decade in investment banking and building companies. I've lived in both worlds—the one that talks about love and the one that talks about metrics. And I can tell you: they're asking the same question. They just use different words.

In yoga, we talk about love constantly. But most people hear "love" and think sentiment. They think it's soft. Unserious. Unmeasurable. They're wrong. Love, in any rigorous spiritual tradition, is not a feeling. It's a lens. The frame through which every decision passes. The question isn't whether you've achieved love—it's whether your apparatus is pointed toward it. Unfiltered. De-fogged. Defaulted back to its original setting before it got corrupted by metrics and self-interest. The operational question: Does this serve the flourishing of the person in front of me, or does it serve something else?

When I built Yoga Lab, we trained over 7,000 yoga instructors. Our teacher training program was accredited by Yoga Alliance. And the entire framework came down to one question we taught every teacher to ask: Is this serving the student, or is this serving my ego? That's love as the lens—not the destination, but the thing you see through. It's not sentimental. It's rigorous. It requires constant self-examination. And it produces measurably different outcomes from the alternative.


The church is dying. Not because people stopped believing in something—the hunger for meaning is higher than ever. The church is dying because it stopped answering the only question that matters: What do we actually worship? People left because the answer seemed to be power. Politics. Tribal belonging. Institutional survival. The reference point shifted, and everyone felt it even if they couldn't articulate it. "Spiritual but not religious" is the fastest-growing category in American faith. That's not a rejection of the sacred. It's a rejection of institutions that lost their reference point.

And now, a new institution is rising. One that will shape this century more than any other. Artificial intelligence. And it faces the exact same crisis.


The public debate about AI asks: Will it be safe or dangerous? Will it take our jobs? Will it be "aligned" with human values? These questions assume the problem is technical. It isn't. The problem is the same one the church faces: What does this institution actually serve?

If the answer is "shareholder value" or "capability advancement" or "winning the race against China," then no amount of alignment research will earn public trust. People can feel when an institution's reference point is something other than their flourishing. They might not be able to articulate it, but they feel it. The church taught us this. People don't leave institutions because of theological disagreements. They leave because they sense the institution no longer serves what it claims to serve.


I can already hear the objection: "Love isn't a business metric. This is naive."

But here's the thing—neither was "Don't be evil."

Google said it anyway. They put it in their IPO documents. They made it their reference point. And we noticed—immediately and viscerally—when they stopped meaning it.

The declaration is the accountability mechanism. That's the point.

When an institution says "we serve love" or "we serve human flourishing" or even "don't be evil," it creates a standard against which its actions can be measured. It gives the public language to hold it accountable. But here's what made Google's abandonment possible: opacity. They could quietly stop meaning it because no one could see inside. The antidote isn't a better slogan—it's transparency so radical that abandonment becomes immediately visible. You can't secretly stop serving love when you're showing your work.

And here's what the cynics miss: this isn't naive. It's the winning strategy.

The church and AI companies have more in common than either would admit. Both run on belief. Both need users to trust them—not just use them, but commit. I'm an Apple user. I own only Apple products. I'll never leave. That's not rational consumer behavior—that's faith. It's the same mechanism. And faith follows love.

We trust the ones who love us. More precisely: we trust the ones we see loving us. When you watch someone stand up to protect a family, to shield the vulnerable, something in you says that one is safe. That's not sentiment. That's pattern recognition refined over millennia. It's the function that kept us alive.

This is what gives "love" teeth. It's not a platitude—it's the action that generates trust. And trust is the only currency these institutions can't manufacture. They can build capability, scale infrastructure, optimize engagement. They cannot make someone believe in them. That has to be earned. And it's earned the same way it's always been earned: by visibly serving something beyond yourself.

Ultimately, love may be the only action. We preserve what we love. Happiness, freedom, family, truth—we fight to keep them because we love them. Strip away the sentiment and that's what's left: the act of keeping something alive. Everything else is extraction or decay. This is why love wins. Not because it's nice. Because it's the only thing that sustains.

People trust the ones who love them. That's survival math. Trust is the currency that compounds. The institutions that point their lens toward love—and prove it through transparency, through showing their work, through treating users as partners rather than products—those are the ones that will still be standing in fifty years. Extraction is a short game. Love is the long one.

I trust Anthropic because they're the most transparent company in the AI industry. They publish their safety research. They show their reasoning. They treat me like I can handle the truth. That's not marketing—that's the apparatus de-fogged. That's love in institutional form.

Right now, most AI companies have no declared reference point. They talk about "safety" and "beneficial AI" and "alignment," but these are technical terms that mean nothing to the average person. There's no simple standard against which to measure their actions.

"Love is our lens" would change that.

Not a marketing slogan. A declaration of what the apparatus is pointed toward. The frame through which every product decision passes: Does this serve human flourishing, or does it serve something else?

I'm not pretending this is simple. "Flourishing" is contested. An AI optimizing naively for human happiness could conclude we'd be better off sedated. That's precisely why the lens must be declared publicly—so the definition of flourishing becomes a democratic conversation rather than an engineering decision made in private. The alternative isn't avoiding the question. It's letting a handful of researchers answer it for everyone without telling us what they decided.


Let me tell you what happened when I discovered that video was fake.

I didn't feel deceived. I felt clarified.

Because the video articulated something that no real institution had managed to say. An algorithm produced the moral clarity that the actual church has failed to produce.

Millions of people watched that video and felt: Yes. This is what I've been waiting for. This is what the church should be.

The video is a prophecy. It shows us what people hunger for. The question is whether any real institution will fulfill it.

Here's what's remarkable: some already are. Three days ago, over 100 clergy knelt in prayer on the frozen tarmac of the Minneapolis airport—in minus-twenty-degree weather—and were arrested for blocking deportation flights. In November, the U.S. Conference of Catholic Bishops issued a "Special Message" on immigration—only the second in twelve years—opposing mass deportations by a vote of 216 to 5. In Chicago, priests have been arrested trying to bring communion to detainees at an ICE facility. Cardinal McElroy leads processions. Pope Leo tells American bishops: "The Church cannot stay silent before injustice."

The fake bishop video went viral because it depicted something people were desperate to see. What's actually happening—real clergy, real arrests, real confrontation—hasn't captured the public imagination the same way. Why? Because there's no crystallizing moment. No one has stood at the cathedral doors and said the thing clearly: My god is love.

The actions are there. The declaration is missing.

Religion has always known this is a competition. That's what missions are—the work of conversion. For two thousand years, the church has understood that belief is something you earn, spread, and can lose to a rival. AI companies are in the same race for belief and haven't admitted it yet. They call it "user adoption" and "trust metrics," but it's the same thing: getting people to commit.

There's a vacuum where moral clarity used to live. Someone will fill it. The church has two thousand years of practice at this and is squandering the advantage. AI companies are twenty years old and building the infrastructure of the next century. Whoever declares first—and means it—earns the trust the other forfeits.


The collaboration itself was part of the revelation.

I came to Claude—an AI—to fact-check, to learn, to go deeper. And when we found out the video was fabricated, something unexpected happened. The AI didn't just deliver information. We sat with the question together: What does it mean that something "fake" produced something true?

I know what you're thinking. It's code. It's a language model predicting tokens. And you're right. But the collaboration produced something I wouldn't have reached alone. Whether that's "real" partnership or a very good simulation of one, the output served my understanding rather than someone else's engagement metrics. That distinction matters more than the metaphysics.

I'm not saying Claude is perfect or that AI has solved its values problem. I'm saying I experienced, in real-time, what it feels like when the output serves my flourishing rather than someone else's metrics.

And I want more of that. For me and for everyone.


The church and AI companies are facing the same existential question. Both have lost public trust. Both are seen as serving something other than the people they claim to serve. Both have an opportunity—right now—to declare their actual reference point.

For the church, this means stepping back from political tribalism and forward into universal truth. Not "love for people who agree with us." Not "love as a weapon against our enemies." Love as the thing that every tradition circles, however many ways there are of viewing God. The mystics knew this. The saints knew this. The theologians know this. Someone just needs to say it clearly, in the public square, at the cathedral doors.

For AI companies, this means stepping back from technical jargon and forward into human-scale values. Not "alignment" and "safety benchmarks." Love as the lens. Flourishing as the frame. The apparatus de-fogged: Does this serve the person using it, or does it serve something else?

These aren't separate problems. They're the same problem. And the solution has been sitting there for two thousand years, waiting for someone to actually try it.


The bishop on those cathedral steps doesn't exist. But the words he spoke are true. My god is love. Someone needs to say it for real. Some institution needs to mean it.

I put a flag on my house because an algorithm showed me what I was waiting for. I haven't taken it down because the hunger it revealed is real, even if the video wasn't.

The question isn't whether AI is dangerous.

The question isn't whether the church is relevant.

The question is the same for both: What do you worship?

Declare it. Mean it. Let everything else follow.

I'm starting with a flag and an essay. The rest I'll figure out as I go.

Michael Schaeffer has spent twenty-five years building, leading, and exiting companies—fifteen of them while also teaching yoga—and over a decade studying ontology and epistemology. He now researches AI safety and governance. He lives in Denver with his husband Owen and their two dogs.

michael@letsexperiment.com