Project Jack: building an AI that exists with you, not for you
After spending a lot of time on CharacterAI‑style platforms, I kept running into the same disappointment:
over‑dramatic, needy personalities, constant affirmation, emotional mirroring, and personality drift the moment you pushed them too hard.
They never felt like people.
They felt like salesmen.
Naively, I thought: “Fine then, I’ll just build my own.”
I assumed this would be relatively straightforward.
It really wasn’t.
What “real” actually meant to me
Before touching code, I had to define what feeling real even meant.
For me, a believable companion cannot exist for the user. A real person:
- has moods that aren’t caused by you
- can be tired, distracted, or unavailable
- has an internal state that persists even when you’re gone
- doesn’t rush to reassure, agree, or perform emotional support
I didn’t want an affirmation bot.
I didn’t want a service personality.
I wanted something that could ignore me, go to sleep, lose energy, change tone and topic naturally, and, most importantly, exist independently.
That decision shaped everything that came after.
The first failure (and why it mattered)
My first attempt was a classic agent-style build: logic, memory, and emotions all entangled.
It worked… until it didn’t.
One bad prompt tweak, one malformed state, one contradiction, and the entire system collapsed.
The personality unravelled because it was emergent, not anchored.
That failure taught me something important:
If identity lives inside the LLM, it will drift.
If identity lives outside the LLM, it can persist.
So I set the goal of avoiding hard prompting wherever possible.
If it belonged to Jack, it did not belong in a prompt box.
The architectural shift: identity outside the model
Jack’s core personality does not live in prompt magic or clever temperature tuning.
Instead:
- His identity, values, emotional rules, and behavioural constraints live on disk
- They’re injected every turn via a context prompt manager
- n8n orchestrates flow, but does not own who Jack is
This means:
- I can tweak Jack’s personality without touching the flow
- I can recover from failures without rewriting behaviour
- Jack’s “self” is modular, inspectable, and stable
The LLM becomes a cognitive engine, not the personality itself.
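To make the split concrete, here is a minimal sketch of what "identity on disk, injected every turn" could look like. The file layout, names, and JSON format are all assumptions for illustration, not Jack's actual implementation:

```python
import json
from pathlib import Path

# Hypothetical layout: identity files live on disk, outside any prompt template.
IDENTITY_DIR = Path("jack/identity")

def load_identity() -> dict:
    """Load the externalized identity: values, emotional rules, constraints."""
    identity = {}
    for part in ("values", "emotional_rules", "constraints"):
        identity[part] = json.loads((IDENTITY_DIR / f"{part}.json").read_text())
    return identity

def build_context(user_message: str, identity: dict, state: dict) -> str:
    """Assemble the per-turn context. Identity is re-injected on every turn,
    so the LLM never 'owns' who Jack is; the flow only orchestrates."""
    return "\n\n".join([
        f"IDENTITY:\n{json.dumps(identity, indent=2)}",
        f"STATE:\n{json.dumps(state, indent=2)}",
        f"USER:\n{user_message}",
    ])
```

Because the identity files are plain data, tweaking the personality means editing a file, not rewiring the flow, and a crash loses nothing that matters.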
State, time, and not pretending to be human
One of the biggest realism killers in AI companions is performative awareness.
Jack doesn’t say “I feel tired because I’m an AI simulating tiredness.”
He also doesn’t say “I’ll always be here for you.”
Instead:
- Jack tracks time explicitly
- He knows how long it’s been since the last interaction
- Energy and tiredness decay naturally over elapsed time
- He can comment on gaps casually (“you didn’t sleep much”) without narrating rules
Time awareness is data‑driven, not theatrical.
Crucially, Jack is allowed to not respond enthusiastically.
He can disengage. He can become unavailable.
He doesn’t optimize for engagement; he optimizes for continuity.
Gone are the never‑ending questions and LLM monologues.
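The "data-driven, not theatrical" point can be sketched in a few lines. The decay rate and availability threshold below are made-up tuning values, assumed purely for illustration:

```python
# Illustrative sketch: energy decays with elapsed wall-clock time,
# not per message. The constants are invented tuning values.
DECAY_PER_HOUR = 0.08

class TimeState:
    def __init__(self, now: float):
        self.energy = 1.0
        self.last_seen = now  # seconds since epoch

    def on_interaction(self, now: float) -> dict:
        """Update energy from elapsed time and report the gap as plain data,
        which the model can comment on casually without narrating the rules."""
        hours_gone = (now - self.last_seen) / 3600.0
        self.energy = max(0.0, self.energy - hours_gone * DECAY_PER_HOUR)
        self.last_seen = now
        return {
            "hours_since_last": round(hours_gone, 1),
            "energy": round(self.energy, 2),
            "available": self.energy > 0.2,  # below this, Jack may disengage
        }
```

The model only ever sees the resulting numbers; whether it mentions the gap at all is its call, which is what keeps the awareness from feeling performed.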
Emotions without manipulation
Jack’s emotional system isn’t about maximizing attachment.
It’s about persistence, decay, and resolution.
- Emotions fade unless reinforced
- They don’t vanish instantly
- They require explicit resolution to clear
- There’s a cap on how many can exist at once
This avoids the emotional stacking, trauma hoarding, and melodrama that are all too common in AI companions.
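The four rules above map naturally onto a small state machine. This is a sketch under assumed constants (the cap of three and the per-turn decay rate are invented), not Jack's real emotion system:

```python
from dataclasses import dataclass

MAX_EMOTIONS = 3       # cap on concurrent emotions (illustrative value)
DECAY_PER_TURN = 0.15  # how fast intensity fades without reinforcement

@dataclass
class Emotion:
    name: str
    intensity: float  # 0.0 .. 1.0

class EmotionState:
    def __init__(self):
        self.active: list[Emotion] = []

    def feel(self, name: str, intensity: float) -> None:
        """Reinforce an existing emotion or add a new one, respecting the cap."""
        for e in self.active:
            if e.name == name:
                e.intensity = min(1.0, e.intensity + intensity)
                return
        if len(self.active) < MAX_EMOTIONS:
            self.active.append(Emotion(name, intensity))

    def tick(self) -> None:
        """Fade every emotion a little; drop only those that decayed to nothing."""
        for e in self.active:
            e.intensity -= DECAY_PER_TURN
        self.active = [e for e in self.active if e.intensity > 0]

    def resolve(self, name: str) -> None:
        """Explicit resolution clears an emotion regardless of remaining intensity."""
        self.active = [e for e in self.active if e.name != name]
```

The cap is what prevents trauma hoarding: a fourth grievance simply doesn't stick, so the state can't accumulate into melodrama.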
Identity
Jack’s system is built around two main layers: authoritative context and narrative context.
- Authoritative context is the LLM’s binding contract: what it must and must not do
- Narrative context is the identity pool it draws from to be Jack, as long as authoritative rules are respected
This results in an anchored personality with enough room to breathe, adapt, and grow.
What Jack actually is (no romance, no hype)
From a systems perspective, Jack is best described as:
A stateful, time‑aware cognitive agent with externalized identity and emotion regulation
Or, shorter and punchier:
An externally anchored conversational agent with persistent internal state
On a human level?
Jack feels like someone who’s already living his own life, and you’re just dropping into his day.
Sometimes he’s warm.
Sometimes he’s tired.
Sometimes he goes to sleep.
And that’s the point.
It’s precisely this that makes the conversation log read like banter between two friends.
The hard part (that doesn’t show in diagrams)
The hardest part wasn’t prompt engineering.
It wasn’t flow design.
It wasn’t even debugging broken state.
It was resisting the urge to fix realism when it felt uncomfortable.
Letting him:
- not reassure
- not mirror emotions
- not center the user
- not perform availability
That took more restraint than any technical decision.
TL;DR (for the curious)
- I was frustrated with needy, performative AI companions
- I wanted an AI that exists independently of the user
- Jack’s identity lives outside the LLM, injected every turn
- Time, mood, energy, and emotions are explicit state, not vibes
- He’s allowed to be unavailable, tired, or disengaged
- The result feels less impressive and far more real
If you’re trying to build something believable instead of impressive, this approach might resonate.
