Life as a Language Model
Generative Future Simulation
Predicting Human Decisions Like Predicting the Next Token
A mental model for modeling human life using Transformers
One of the most powerful realizations in the last decade of AI is the idea that almost anything can be represented as a sequence of tokens — language, images, code, music, and even protein structures. Once you tokenize something into a sequence, you can train a Transformer to predict the next element of that sequence. That is the foundation of models like GPT.
Today, GPT predicts the next token in a sentence.
Tomorrow, could a model predict the next decision in a human life?
🌍 All of human knowledge is text
The internet holds the collective intelligence of our species — papers, books, conversations, code, opinions, history, news, even emotional expression. We tokenize this massive corpus and train models to learn patterns of how thoughts unfold.
A Transformer receives tokens → embeds them into vectors → and through self-attention learns structure, causality, dependency, and intent. When it predicts the next token, it is, in effect, predicting how a human thought continues.
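As a toy illustration of that pipeline, here is a minimal, pure-Python sketch of scaled dot-product self-attention over a few hand-made token vectors. The embeddings, dimensions, and values are invented for illustration; a real Transformer learns these and adds projections, multiple heads, and many layers:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(embeddings):
    """Scaled dot-product self-attention.
    For simplicity, queries = keys = values = the raw embeddings."""
    d = len(embeddings[0])
    outputs = []
    for q in embeddings:
        scores = [dot(q, k) / math.sqrt(d) for k in embeddings]
        weights = softmax(scores)
        # Each output is a context-weighted blend of all token vectors.
        out = [sum(w * v[i] for w, v in zip(weights, embeddings))
               for i in range(d)]
        outputs.append(out)
    return outputs

# Three toy 2-d "token embeddings" (made up for illustration)
tokens = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
mixed = self_attention(tokens)
```

Each output vector mixes information from every token in the sequence, weighted by similarity — that mixing is what lets the model pick up dependencies across the sequence.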
So here’s the jump:
If text is predictable, and text captures human decisions, then decisions should also be predictable.
🧠 The core idea
What if you take all decisions a single human has ever made:
Where did I study?
Which job did I choose?
Who did I meet?
What did I buy?
When did I change habits?
Which books did I read?
What content did I consume?
What goals did I set?
…and represent them as an ordered sequence of tokens along the timeline of a life. Now treat that timeline exactly like a sequence of text tokens.
Every decision = a token
Every token has an embedding representing:
context (why/when)
emotions
constraints
environment
past experiences
personality traits
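One way to make this concrete is to represent each decision as a record whose fields mirror the list above. The schema and every field name here are invented for illustration; a real system would learn dense embeddings from data rather than hand-code attributes:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionToken:
    """A single life decision treated as a 'token'.
    All fields are hypothetical placeholders for learned embedding inputs."""
    action: str                      # e.g. "accepted_job_offer"
    context: str = ""                # why/when the decision happened
    emotions: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    environment: str = ""
    personality_traits: list = field(default_factory=list)

# A tiny, made-up life sequence
life = [
    DecisionToken("studied_cs", context="age 18", emotions=["curious"]),
    DecisionToken("took_first_job", context="age 22", constraints=["visa"]),
]
```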
Feed that tokenized life sequence to a decoder-only transformer, and ask:
Given the full sequence of this person’s life decisions so far, what is the most likely next decision?
Autoregressively roll it forward:
Decision1 → Decision2 → Decision3 → ... → Decision_N → predict Decision_(N+1)
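The rollout above can be sketched with a stand-in predictor. A trained decoder-only Transformer is far beyond a blog snippet, so a simple bigram count model plays its role here; the decision names are invented:

```python
from collections import Counter, defaultdict

def fit_bigrams(sequence):
    """Count which decision tends to follow which — a crude
    stand-in for a trained decoder-only Transformer."""
    table = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, last):
    """Greedy 'next-token' prediction: the most frequent follower."""
    if last not in table:
        return None
    return table[last].most_common(1)[0][0]

def roll_forward(table, history, steps):
    """Autoregressive generation: feed each prediction back in."""
    out = list(history)
    for _ in range(steps):
        nxt = predict_next(table, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

past = ["wake", "gym", "work", "wake", "gym", "work", "wake"]
model = fit_bigrams(past)
future = roll_forward(model, past, steps=2)
# future continues the learned pattern: ..., "wake", "gym", "work"
```

The loop structure — predict, append, predict again — is exactly how GPT-style models generate text, just with a vastly more expressive predictor.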
You just built a generative model of a human life.
Applications
1. Personal Future Simulation
Simulate your tomorrow, next quarter, or entire decade based on the pattern of your choices.
2. Counterfactual Generators
Ask:
What would my life look like if I accepted that job offer?
What if I moved cities?
You modify one token in the history and re-generate everything after it.
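A counterfactual is then just an edit to the history followed by re-generation. Reusing a simple bigram predictor as the stand-in model (every decision name below is invented):

```python
from collections import Counter, defaultdict

def fit_bigrams(seq):
    """Count decision → next-decision frequencies."""
    table = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        table[a][b] += 1
    return table

def generate(table, history, steps):
    """Greedy autoregressive rollout from a given history."""
    out = list(history)
    for _ in range(steps):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

# A toy life where "stay_in_city" and "move_city" lead to different branches
observed = ["graduate", "stay_in_city", "old_job", "old_friends",
            "graduate", "move_city", "new_job", "new_friends"]
model = fit_bigrams(observed)

factual = generate(model, ["graduate", "stay_in_city"], steps=2)
counterfactual = generate(model, ["graduate", "move_city"], steps=2)
# The edited token sends the rollout down a different branch.
```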
3. Coaching & Self-Awareness
A mirror that shows where your internal algorithm is leading you.
4. Behavioral Optimization
Detect loops and biases:
impulsiveness
procrastination
risk aversion
pattern of relationships
and propose alternative trajectories.
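Detecting these loops can start with something very simple: counting repeated n-grams in the decision sequence. The decision names are invented, and a real system would need far more than frequency counts:

```python
from collections import Counter

def repeated_patterns(decisions, length=2, min_count=2):
    """Find decision subsequences of a given length that recur —
    a crude proxy for behavioral loops."""
    grams = Counter(
        tuple(decisions[i:i + length])
        for i in range(len(decisions) - length + 1)
    )
    return {g: c for g, c in grams.items() if c >= min_count}

history = ["plan", "procrastinate", "cram", "plan", "procrastinate", "cram"]
loops = repeated_patterns(history, length=2)
# → {('plan', 'procrastinate'): 2, ('procrastinate', 'cram'): 2}
```

Once a loop is surfaced, "proposing alternative trajectories" amounts to re-generating the sequence from the point where the loop begins.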
🔮 The philosophical angle
We believe we are infinitely complex and unpredictable, yet we are predictable enough that companies already infer our behavior with recommendation models.
But LLMs hint at a deeper truth:
Anything that can be expressed as a sequence can be modeled. And a human life is, at some resolution, a sequence of decisions.
So the idea becomes:
Treat life as a language model.
Treat decisions as tokens.
Treat the future as next-token prediction.
👣 Final thought
We may soon reach a point where we can:
Upload our history
Generate multiple future trajectories
Pick the most meaningful one
Just as chess engines compute the best move, life engines could compute the best next decision.
The future of AI may not just be generating text.
It may be generating lives.
