## Claude as a Personal OS
I've been experimenting with using Claude Code as a kind of personal operating system. The idea is simple: give it enough context about your life — your priorities, your todos, your calendar, your ongoing projects — and let it act as a chief of staff. You talk to it throughout the day. It knows what you're working on, makes suggestions, takes actions, and updates files as things change.
The implementation lives in a `.claude` folder inside my main working directory. There's a separate context directory with files about my priorities, how I like to work, and what I'm thinking about. There's a set of skills — slash commands that do specific things. `/today` gives me a morning briefing. `/done` marks something complete and suggests what's next. `/reflect` helps me capture how the day went. `/interview` lets Claude ask questions to fill the gaps in the context it's building up. Each skill reads the relevant files, does something useful, and updates state.
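As a rough sketch, the layout looks something like this (the file and directory names here are illustrative, not my exact setup):

```
~/work/
├── .claude/
│   └── skills/            # one markdown file per slash command
│       ├── today.md
│       ├── done.md
│       ├── reflect.md
│       └── interview.md
└── context/
    ├── priorities.md      # what matters right now
    ├── working-style.md   # how I like to work
    └── thinking.md        # what I'm currently chewing on
```

The skill files hold instructions; the actual state lives in the context files they read and update.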
What makes this different from just chatting with an AI is the persistence. Claude knows what I told it yesterday. When I say "what's next?" it has actual context to draw from. It's less like talking to a tool and more like talking to someone who knows enough to give real feedback.
The technical setup is straightforward. Claude Code already has access to the current filesystem and can run shell commands. I added Gmail and Calendar through MCP integrations. The skills are just markdown files that describe what each command should do, and Claude interprets them at runtime. The context files are plain text, which means they're easy to read, edit, and version control. Everything stays local. No database, no server, no account to manage.
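To make "skills are just markdown files" concrete, here's a simplified sketch of what the `/today` skill might contain (the wording is an example, not the actual file):

```markdown
# /today — morning briefing

Read context/priorities.md and today's calendar events.

Summarize:
- The top three priorities for the day
- Any meetings that need prep
- Anything I said I'd do that is now overdue

End by asking one clarifying question if anything is ambiguous.
```

There's no parsing or schema involved: Claude reads the file as plain instructions and interprets it at runtime, which is why adding a new skill is just writing another markdown file.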
I don't know if this is the future of personal computing or just a useful hack. But there's something compelling about having an AI that accumulates context over time: not just in the ChatGPT Memory sense, but in the "now I can hand this off because I know you know the context" sense. It feels like the beginning of something — a way of working where the computer actually knows what you're trying to do, asks you questions when it doesn't, and then helps you do it.
What strikes me most is how much the value compounds over multiple sessions. The first conversation is just a conversation. But by the twentieth, by the hundredth, the AI has enough context to become opinionated. When I run the `/interview` command after the tenth (or fifteenth) session, the questions I get are remarkably nuanced and genuinely worth thinking about. The system begins to push back in its own way. It notices patterns I didn't. It remembers that I said I'd do something three weeks ago and asks why I haven't yet. This feels qualitatively different from automation.
I'm still figuring this out. The gap between "AI assistant" and "AI that actually assists" might just be context, and this system is a small experiment in that direction.