
I Had to Write My Brain Down

Between side projects, AI at work, and learning new platforms, things were accelerating fast. Obsidian became less of a tool and more of a pressure valve.

  • ai
  • kanora
  • architecture
  • indie dev
  • obsidian
  • workflow

The last few months have been a bit of a whirlwind.

Kanora on its own is not a small project. It’s a self-hosted music system with streaming, metadata editing, device integration and a lot of moving parts. That’s something I build in evenings and weekends, alongside Murmr and a couple of other experiments. None of it is my day job — but none of it is small either.

At work, I’ve been just as deep in AI, not casually using it but helping the company think through how to use it safely and productively in environments where “move fast and break things” is not an option. That tends to mean internal tooling, guardrails, prompt discipline, and a lot of architecture discussion about where AI should sit and where it absolutely shouldn’t.

And somewhere in the middle of that, I decided it was a good idea to start learning more backend development and C# as well, which has made the whole thing feel fairly relentless in the best and worst ways.

Once AI becomes part of your workflow, the cost of starting things drops dramatically. Exploring ideas is easier, prototyping is quicker, learning a new platform doesn’t feel like a multi-week commitment, and projects that would have felt too heavy before suddenly seem doable. That’s obviously useful, but it’s also destabilising because it becomes easier to run too many threads at once.

Every idea feels buildable, every feature feels achievable, and every “what if” can turn into code in minutes. The upside is momentum. The downside is that you can accidentally replace deliberate progress with a kind of perpetual motion, and Kanora definitely hit that phase.

I had features working, including some interesting ones like streaming integrations, audio capture paths, and metadata flows. On the surface it looked like momentum. Underneath, the fundamentals weren’t as solid as they needed to be. Tests weren’t reliably green, some behaviours weren’t robust enough to trust long-term, and I’d let myself stay in exploration mode for too long.

AI didn’t cause that. It just made it easier to get there.


The Reset

The first correction was structural: audit the codebase, introduce verify.sh, force a clean green baseline before touching anything else.

Instead of asking “what should I build next?”, I asked a duller question:

“If this were v1, would I trust it?”

That reframing changed everything.

The model was no longer exploring possibilities; it was judging stability, and it didn’t sugarcoat the result.

Here’s roughly what that shift looked like:

[Mermaid diagram]

The important change wasn’t technical. It was structural. AI stopped being a feature generator and became a constrained executor.

verify.sh became the gatekeeper. If it passes, the build is good. If it fails, nothing else matters. That script runs tests, linting, sanity checks — and eventually UI tests. It’s the same rule whether it’s me, Claude, Codex, CI, or some future phone-triggered workflow kicking it off.
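To make the "if it fails, nothing else matters" rule concrete, here’s a minimal sketch of what a gatekeeper script like verify.sh can look like. The actual checks in Kanora aren’t shown in this post, so the commands below are placeholders; a Swift project might run `swift test` and `swiftlint --strict` where the `true` stubs are.

```shell
#!/usr/bin/env sh
# verify.sh — minimal sketch of a "green gate" script.
# Placeholder checks only: substitute your real test, lint,
# and sanity-check commands.
set -e                 # fail fast: the first red check stops everything

run() {                # announce each check, then execute it
  echo "==> $*"
  "$@"
}

run true               # placeholder for the test suite
run true               # placeholder for linting
run true               # placeholder for sanity checks (and, later, UI tests)

STATUS=GREEN           # only reached if every check above passed
echo "VERIFY: $STATUS"
```

Because of `set -e`, the script exits non-zero at the first failing check, which is what makes the gate binary: any caller (a human, an AI agent, or CI) only has to look at the exit code.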

That stabilised Kanora, but it didn’t stabilise me.


The Real Problem: Cognitive Load

The deeper issue wasn’t failing tests; it was context overload. I was holding architectural standards, design rules, preview patterns, AI workflow guardrails, multiple roadmaps, and work-related AI strategy in my head, while also trying to learn new backend patterns. None of that is individually unreasonable, but the combined parallel load is not sustainable.

It’s not that work was suffering. If anything, the AI exploration at home was feeding directly into how I think about it professionally. The problem was that the volume of parallel thinking was getting ridiculous. AI accelerates everything, including how quickly your brain fills up, and at some point you don’t need another feature, you need a structure that can carry the context for you.


Writing It Down

So after hearing lots about it from a friend and colleague, I finally installed Obsidian.

Not because I wanted a prettier notes app, and not because I needed a productivity system. I needed to get the architecture, the rules, the patterns, and the constraints out of my head and into something explicit, and Apple Notes and GitHub wikis weren’t right for it.

Apple Notes is too simple for interlinked standards, and GitHub wikis are too project-specific (and, if I’m honest, a faff to set up in a way that works well across multiple repos).

I’d never really used Obsidian before. I didn’t have a grand plan. So I did what has become standard operating procedure these days: opened ChatGPT and started talking it through.

Not “write this for me,” but “help me untangle this.”

  • How do I separate philosophy from implementation?
  • What lives at a cross-platform level versus an iOS-specific level?
  • What is a hard rule and what is just habit?
  • How do I make this useful not just for me, but for AI-assisted work?

What followed wasn’t one prompt; it was a full evening. It was surprisingly cathartic to get all of this out of my head. I’d brain-dump a concept, the model would structure it, I’d push back, we’d refine it, and something that started vague would become a usable standard. Anything that was just preference either got framed as such or dropped.

It looked more like this:

[Mermaid diagram]

That loop mattered.

Once something was written down, it fed back into how I prompted AI. Which exposed gaps. Which forced more clarification. Which went back into the vault.

Slowly, the structure stopped feeling improvised and started feeling deliberate.


What Emerged

The vault ended up containing:

  • iOS architecture standards.
  • Service container rules.
  • DesignKit — token-based design formalised.
  • Static-first web constraints.
  • AI execution discipline.
  • A lightweight daily workflow.

None of that was new thinking. But it was finally visible.

The structure that worked best was splitting “what I believe” from “how I implement”. Philosophy pages are short and opinionated. Standards pages are explicit and testable. Project pages link to both, so the context travels with the work rather than living in my head.

The other thing I started doing was treating prompts like reusable assets. If a prompt reliably produces a good result, it goes in the vault alongside the rules that make it work. Over time, that turns into a small toolkit: prompts for adding a new screen using DesignKit, prompts for refactors that must keep tests green, prompts for “audit this module and propose issues rather than code”.

And once it was visible, my AI coding improved again.

Instead of restating context in every prompt, I could reference it. Instead of loosely describing how I structure services, I had documentation. Instead of hoping the model remembered how previews work or how themes are enforced, I had standards.

The overall flow now looks less chaotic:

[Mermaid diagram]

This isn’t about squeezing more output into the same hours. It’s about making sure the hours I do have — evenings, weekends, spare moments — aren’t wasted on mental thrashing.


Exploration vs Execution

I still explore aggressively. AI is how I prototype and how I learn new platforms faster. It’s how I experiment with ideas that would otherwise stay ideas.

The difference now is that exploration and execution are clearly separate modes. Exploration is allowed to be messy. Shipping is not.

Between side projects, work AI initiatives, learning new platforms, and normal life, I don’t have the luxury of vague systems anymore.

I needed to write my brain down, and if I’m honest, I probably should have done it sooner.