From Second Brain to Mind Mesh
I have been using Notion for many years, and I love it. Great design, great UX. It has been my second brain, offloading the burden of memorization, allowing me to manage my daily tasks, providing a wiki system for me to systematically capture knowledge, and empowering me to write down my thoughts and ideas. Notion is truly an amazing product that has changed my life and is a tool I can't live without.
That changed early this year, not because Notion changed, but because my workflow did.
The shift that changed everything
With the rapid advancement of LLM capabilities and the tooling layer that lets them act as agents on behalf of users, the paradigm has shifted again. Coding agents have revolutionized how developers code, and AI labs are actively pushing these capabilities beyond coding, with all knowledge work on the horizon. Until early this year, I had been using AI as a tool across my life and work. Recently, though, I changed my mindset and workflow: I'm more willing to let AI make the decisions while I focus on providing context and guidance. This started with coding but has gradually transformed other use cases. To support this AI-first system, the tooling layer needs to be AI-native as well. The easier it is for AI to access the data, context, and tools it needs, the better the experience will be for both the user and the AI.
What agent-native means
- AI doesn't need a GUI; GUIs are built for humans.
- md files work pretty well for AI as a text format. Fortunately, they also render as documents with pretty good readability for humans. There may be better file formats for AI now or in the future, but so far md works well.
- Context management is critical. Data should be sufficient but not excessive: irrelevant data doesn't just increase token count and cost; it can also degrade performance. And because agentic workflows are iterative and our interaction with agents is typically conversational, the ability to manage context across a long conversation is also vital.
- The tooling layer should be as simple as possible. If AI can just use bash commands to find relevant context and content, putting an API or MCP layer in between only adds complexity, which hurts context management too.
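To make the last point concrete, here is a minimal sketch of what agent-friendly access looks like: plain files that standard bash tools can search directly. The vault path and note below are hypothetical, for illustration only.

```shell
# Create a tiny throwaway vault (hypothetical path, for illustration only)
mkdir -p /tmp/mind-mesh-demo/raw
cat > /tmp/mind-mesh-demo/raw/podcast-notes.md <<'EOF'
---
tags: [ai, agents]
---
Notes on agent-native tooling.
EOF

# An agent can locate relevant context with plain bash -- no API or MCP layer needed
grep -rl "agent-native" /tmp/mind-mesh-demo
```

The same pattern scales: `grep`, `find`, and `ls` give the agent precise retrieval without pulling the whole vault into context.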
Why Notion didn't fit this new workflow
To be clear, this isn't a criticism of Notion as a product. It's exceptional at what it does. The friction I ran into is really about fit-for-purpose as my workflow shifted toward AI agents.
- GUIs are for humans, not for agents. The nice UI I love adds an extra layer of complexity for agents trying to access data.
- Data lives in the cloud rather than locally. Cloud storage offers synchronization across devices, which is great for viewing and updating content whether I'm on desktop or phone. Unfortunately, it adds another layer of complexity for AI agents too, because they then need to fetch data from the cloud.
- Notion does have a public API and MCP integrations exist, but from what I explored, they added more complexity than the simple workflow I was aiming for. It's possible I haven't fully discovered what's available.
The new stack: Obsidian + Claude Code
For my use case, the answer was simple: local md files + the Claude Code CLI. This is exactly how Claude Code + CLAUDE.md are already used for coding tasks. The pattern is simple, effective, and validated by developers, which makes it transferable to knowledge work. For now I still need to view files myself, so I need a GUI even though agents don't. VS Code can display md files, but it's not built for that. Local md files make Obsidian a natural fit.
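The CLAUDE.md convention from coding projects carries over directly: a file at the vault root that tells the agent how the vault is organized. A hypothetical vault-level CLAUDE.md might look like this (the folder names are assumptions from my own layout, not a prescribed structure):

```markdown
# CLAUDE.md

This vault is a personal knowledge base ("mind mesh").

- `raw/` holds unprocessed clips from Obsidian Web Clipper.
- `processed/` holds summaries and takeaways you generate.
- When summarizing, preserve the source URL from the note's frontmatter.
- Prefer plain bash tools for search; do not invent new top-level folders.
```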
I'm new to Obsidian and have genuinely enjoyed using it. It's not purpose-built for AI workflows, but it hits a sweet spot that's hard to find elsewhere. A few personal perspectives from my first two months:
Pros
- easy UI to view md files, my most fundamental need
- data lives on my local disk, which makes access very easy
- rich plugin ecosystem and community, which allows me to add important features missing from the plain vanilla version
Cons
- a terminal isn't natively supported. I installed the Terminal plugin to work around this, but it's not as smooth as the terminal in VS Code
- the file linking feature is not very user-friendly
Things I haven't explored
- sync across devices. I currently use git for backup, and I'm only using Obsidian on my desktop since it's paired with Claude Code. I will likely need it on my phone soon if I fully migrate from Notion to Obsidian. It would be a game-changer if Obsidian were integrated with Claude Code on mobile.
- the graph view for connecting dots. I have heard this is one of the features users love most. Because I migrated my notes from Notion, most of them are not yet linked. I found my graph to be a bit messy rather than informative.
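The git-based backup mentioned above is nothing fancy. A minimal sketch, using a throwaway path and commit message that are illustrative only (in the real setup a `git push` to a private remote follows the commit):

```shell
# Illustrative vault backup; path, identity, and message are assumptions
VAULT=/tmp/vault-demo
mkdir -p "$VAULT"
cd "$VAULT"
git init -q
echo "daily note" > daily.md
git add -A
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "vault backup $(date +%F)"
# In the real setup: git push origin main
```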
The workflow in practice
In practice, I have been using Obsidian to capture articles I read, podcasts I listen to, and videos I watch. There is a Chrome extension called Obsidian Web Clipper that's very useful for capturing content in a structured format with metadata.
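For reference, a clipped note lands as markdown with YAML frontmatter; the exact fields depend on the clipper template you configure, so treat this as an illustrative shape rather than the default output:

```markdown
---
title: "Example article"
source: "https://example.com/article"
author: "Jane Doe"
clipped: 2025-01-15
tags: [raw, article]
---

Article body captured as markdown...
```

That frontmatter is what makes the raw folder easy for an agent to filter and route later.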
Not every podcast or video has a pre-written transcript, though, and I have been exploring audio-to-text solutions. It's still a work-in-progress.
The data is then stored in a raw folder and I built a Claude skill to process them into summaries and takeaways together with notes I take manually. Recently I came across Andrej Karpathy's post on this, and I was glad to find that we'd arrived at very similar solutions: Obsidian Web Clipper for capturing content, a local raw folder as the ingestion layer, and Obsidian for the UI.
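The raw-to-processed step can be sketched as a loop over unprocessed files. The folder names come from my setup, and the `claude -p` call is an assumption about how the skill gets invoked; it's shown as a comment and replaced with a copy so the sketch runs standalone:

```shell
# Sketch of the raw -> processed ingestion step (paths are illustrative)
VAULT=/tmp/mesh-demo
mkdir -p "$VAULT/raw" "$VAULT/processed"
echo "clipped article text" > "$VAULT/raw/article.md"

for f in "$VAULT"/raw/*.md; do
  out="$VAULT/processed/$(basename "$f")"
  # Real workflow (hypothetical prompt):
  #   claude -p "Summarize with key takeaways: $(cat "$f")" > "$out"
  cp "$f" "$out"   # placeholder so this sketch runs standalone
done
ls "$VAULT/processed"
```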
Second brain vs. mind mesh
Karpathy's wiki system is very cool, and I have seen a couple of demos that replicate the logic. However, for my use case, a wiki for Q&A or search is useful, but it's not exactly the output I want from this system.
I call my Obsidian vault "mind-mesh" because I can now apply numerous brains, not just a second brain, on top of the data to make intelligent decisions. A wiki system is mainly for searching and finding information within an existing knowledge base. What I aim to do is build an intelligent layer on top of the knowledge base, essentially a group of researchers producing research reports. The architecture is probably similar; it's really the output and how it's presented that differs.
I have migrated my data ingestion process from Notion to Obsidian, and now have Claude Code on top to process raw input. Obsidian is a good intermediate solution for now. I still use Notion for lightweight work like task management and calendar activities, and I'd like to eventually consolidate so more dots can be connected. It also crossed my mind to build a fully AI-native note-taking app, but even with coding agents that still feels like a huge lift. I suspect one will emerge sooner or later.
The second brain was about offloading memory. The mind mesh is about something bigger: an intelligent layer that doesn't just store what I've read and thought, but actively works with it. That's what I'm building toward.