The Question That Changes Everything

JP Pulcini Author Interview

As artificial intelligence advances toward human-like thought, you explore in your book, I Am; Therefore I Think, whether true consciousness lies not in thinking, but in the fragile, emotional experience of being alive. What first pushed you to ask not “Can AI think?” but “Can AI experience?”

For most of human history, intelligence and consciousness were assumed to be the same thing. To think was to be aware, to reason was to experience, and the two were inseparable because there was only one example of intelligence we could observe: the human mind.

AI broke that assumption open.

When I watched these systems write essays, compose music, and answer complex questions—faster and more efficiently than people—something still felt fundamentally different. They generate language, but they do not experience meaning.

That’s when the real question emerged. Not “Can AI think?” — we already know the answer. But “Can AI experience?” That’s the question that changes everything.

You argue that intelligence and consciousness are not the same. Where do you think most people conflate the two?

The moment a machine gives a surprising answer.

There’s something deeply human about projecting inner life onto things that perform well — and AI performs extraordinarily well, so we assume the interior must match the output.

But for the first time in history, we can observe intelligence operating without consciousness. AI does not grow up, does not experience the world through a body, does not accumulate memory through lived time, and does not feel the consequences of its actions. It processes information—nothing more.

That contrast forces a deeper question. If intelligence can be engineered, perhaps consciousness is something else entirely. Not a product of computation, but of experience. A life lived in the world. And that difference may matter more than we currently understand.

You emphasize memory as something lived, not stored. How does emotional memory shape identity differently from factual recall?

Factual recall is retrieval. Emotional memory is formation.

You can store the date your father died—that’s data. But the way that loss reshapes how you love, how you measure time, how you understand your own mortality—that isn’t stored anywhere. It lives in you. It became you.

Human consciousness develops through experience—through memory, emotion, embodiment, and time. AI has none of that. Memory without consequence is just information.

Identity is what survives the consequence.

How should we think about AI ethically if consciousness remains uniquely human?

We need to think about AI ethically — but also honestly.

We are building systems of extraordinary capability without any interior life to anchor their judgment. No stake in outcomes, no experience of harm, and no memory of consequence. And yet we’re asking them to make decisions that affect human lives.

That’s the tension.

It’s what led me to my next book, Amoral Code. The argument is simple: we are increasingly delegating ethical judgment to systems that are, by definition, amoral — not immoral, but amoral.

There’s a difference between choosing harm and having no framework to understand harm at all.

We’ve spent years asking whether AI will become evil. We haven’t spent nearly enough time asking whether it can even understand what evil means.

That’s the conversation we need to be having.

Author Links: GoodReads | X (Twitter) | Facebook | Website | Instagram | Substack | Amazon

AI can think. But can it ever be conscious? And what if we’ve misunderstood what it means to be human all along? As artificial intelligence advances, this question is no longer theoretical—it’s defining our future.

This isn’t a book about artificial intelligence. It’s about the one thing machines may never have: experience.

We’ve spent decades measuring intelligence—processing power, learning speed, problem-solving. But consciousness is something else entirely. It is not just thinking. It is experience.

In I Am; Therefore, I Think, JP Pulcini explores the line between:

- Intelligence and awareness
- Computation and experience
- Simulation and reality

Blending philosophy, neuroscience, and modern AI, this book challenges a critical assumption: if a machine can think, does that mean it is conscious?

The answer may redefine how we understand:

- The human mind
- Artificial intelligence
- And the future relationship between the two

This book is for you if you’ve ever wondered:

- What consciousness really is
- Whether AI could ever truly be “aware”
- What separates human experience from machine intelligence

This is not a technical book about AI. It is a philosophical exploration of identity, awareness, and existence in the age of intelligent machines.

As AI becomes more powerful, the real question isn’t whether machines can think. It’s whether thinking alone is enough.

Posted on May 3, 2026, in Interviews.

Discover more from LITERARY TITAN