Hi, there!
Welcome to the 16th edition of Work in Beta.
In this edition, we break down your Personal AI Operating System - a 3-level, file-based setup that gives AI the context it needs across every tool, so you stop starting from scratch every session.
Also, if you are looking to build your individual or organisational system with AI, scroll down to the bottom of the newsletter to learn more and connect with us.
We're planning a Claude Code workshop soon. Hands-on, practical, built for people who want to actually work differently with AI, not just watch someone else do it. Drop your details here to get on the waitlist.
Let's dive in!
IF YOU ONLY HAVE 2 MINUTES

Image Credits: Nano Banana Pro / Work in Beta
THE ‘HOW TO’ PLAYBOOK
Your Personal AI Operating System: The Context Files That Make AI Actually Work
Open a new ChatGPT or Claude conversation. To get decent output, you paste in your role. Your company. Your audience. What you're working on. How you want it written. What to avoid. 400 words of context before you've even asked the question.
Next session? Same thing. Different project? Start over. Different tool? Start over again.
That's not a system. That's a tax you pay every time you open AI.
And it doesn't scale. The longer the prompt, the more AI ignores half of it. The more you re-type, the less consistent the output gets. You end up in the same loop: type a wall of context, edit the output, come back tomorrow, re-type the wall.
Consistent output from AI isn't a prompting problem. It's a context problem. The fix is structured, layered context that lives outside any one conversation - a set of files that carry everything AI needs to know, across every tool and every session.
We call this your Personal AI Operating System.
What a Personal AI Operating System Actually Is
Not an app. Not a product feature. Not a setting inside ChatGPT or Claude.
It's a collection of markdown (.md) files that travel with you across every tool.
The critical thing: the same content can plug in as:
A Claude Skill
A knowledge file attached to a Custom GPT
A Gemini Gem knowledge source
An uploaded file in a Claude Project or a Cowork folder
Two small, tool-specific wrinkles worth knowing upfront:
Claude Skills wrap your content inside a file named SKILL.md with a short name and description at the top. Claude.ai's built-in skill-creator handles that wrapping for you; you don't format it by hand.
Custom GPTs retrieve more reliably when you rename the file extension from .md to .txt before uploading. Same content, better retrieval. A known practitioner workaround.
Otherwise: write the content once. Deploy it everywhere.
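For reference, the SKILL.md wrapper is thin. Here's a rough sketch of one (the skill name and description are hypothetical, and the skill-creator generates this structure for you - you shouldn't need to write it by hand):

```markdown
---
name: meeting-synthesizer
description: Turns a raw meeting transcript into structured notes with decisions, action items, and open questions.
---

# Meeting Synthesizer

(The body is your procedure file, pasted in unchanged:
when to use it, the steps, the output format, the rules.)
```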
We've been running our own version of this setup for the past 6 months across Claude, ChatGPT, and Gemini. The core set of files hasn't changed - we've just kept adding and refining. First drafts now come out 60-70% closer to finished before we even start editing.
Here's how the full setup is structured.
The 3 Levels of Your Personal OS
Every file in your OS sits at one of three levels. The levels move from universal at the top (same across everything you do) to task-specific at the bottom (for one kind of work).
Level 1 - Foundation: 3-5 files, stable, universal
Level 2 - Projects: one folder per active work stream
Level 3 - Procedures: multiplies - one file per recurring task
Foundation stays small and stable. Projects are a handful at any time. Procedures multiply as you discover new repeatable tasks. Most people build Level 3 and skip Level 1 - which is why the fancy tooling still produces generic output.
Level 1 - Foundation: "AI needs to know you"
Persistent across every tool, every conversation, every project. Start here.
Minimum viable set - 3 files:
about-me.md - who you are: role, company, audience, current priorities
style-guide.md - how you write: tone, structure, formatting rules, words you avoid
voice-profile.md - what you sound like, ideally with a few real writing samples
Add as you mature:
standards.md - what "good" looks like to you, non-negotiables, banned patterns
working-style.md - how you like AI to collaborate (ask-first vs. draft-first, how much pushback you want)
The principle: every AI tool starts from zero about you. Foundation files eliminate the re-explaining loop.
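To make Level 1 concrete, here's a sketch of what an about-me.md can look like. Every detail below is a placeholder - swap in your own specifics:

```markdown
# about-me.md (v1, last updated 2026-04-15)

## Role
Head of Product at a B2B SaaS company selling to CX ops
leaders at mid-market companies.

## Responsibilities
Quarterly roadmap, customer interviews, product strategy.

## Current priorities
- Shipping the Q3 analytics release
- Tightening the customer-interview pipeline

## Audience I write for
Internal execs, customers' CX leads, and my product team.
```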
Level 2 - Project Context: "This particular work"
One folder per active work stream. Lives as long as the project lives. Archived when done.
Real examples from our own setup:
Newsletter folder: past editions, audience profile, anti-AI-writing file, topic backlog
Client folder: onboarding brief, stakeholder map, past deliverables, brand voice
Sales folder: pricing sheet, winning proposals, case studies, objection-handling playbook
The principle: project-level context is what makes AI output specific to this work - not generic.
Level 3 - Task Procedures: "How I do this kind of work"
Reusable procedure files. This level multiplies over time.
Examples:
competitive-analysis.md - research steps + output format
meeting-synthesizer.md - turn a raw transcript into structured notes
proposal-first-draft.md - first-pass proposal from discovery-call inputs
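A procedure file is just the instructions you'd otherwise retype, written down once. A hypothetical competitive-analysis.md might look like this (structure and wording are illustrative, not a template you must follow):

```markdown
# competitive-analysis.md (v1)

## When to use
I name a competitor and want a structured teardown.

## Research steps
1. Summarize their positioning from their homepage and pricing page.
2. List their top 3 claimed differentiators.
3. Compare each against our offering; mark where we're weaker.
4. Flag anything you couldn't verify rather than guessing.

## Output format
- **One-line positioning**
- **Differentiators** (table: claim | evidence | our counter)
- **Gaps and risks** (bulleted)
- **Sources** (links used)
```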
Where the same file lives:
Saved as a Claude Skill (wrapped as SKILL.md) - auto-loads when the task matches
Uploaded as Custom GPT knowledge (as .txt) - drives that GPT's behavior
Attached to a Gemini Gem as a knowledge source - grounds its answers
Added to a Claude Project - referenced across chats
A quick plan note before you build: Custom GPTs require ChatGPT Plus. Attaching knowledge files to Gemini Gems requires Gemini Advanced. Claude Skills work on any Claude plan (including Free) but need code execution enabled in your settings. Start with whichever tool you already pay for - the file is portable across them all.
The principle: you don't re-explain the procedure every time. The file is the procedure.
The Operations Layer - Keeping It Usable
Here's the problem nobody warns you about: this gets messy fast.
20+ files. 3 tools. Multiple versions. You update style-guide.md - but the old version still lives in your Custom GPT, your Gemini Gem, and two Claude Projects. Three weeks later, you can't find the procedure file you wrote last month.
The fix: one master file. Call it my-ai-os.md or index.md. This isn't version control. It's a map.
## Foundation (Level 1)
- about-me.md - v2, last updated 2026-04-15
- style-guide.md - v3, last updated 2026-03-28
- voice-profile.md - v2
## Projects (Level 2)
- /newsletter — active content stream (brief.md, past-posts.md, audience.md)
- /client-alpha — Q2 engagement (onboarding.md, deliverables/)
## Procedures (Level 3)
- competitive-analysis.md → deployed as: Claude Skill, Custom GPT "CompAnalyzer"
- meeting-synthesizer.md → deployed as: Claude Skill, Gemini Gem "MeetingNotes"
Plus a simple folder structure on your laptop or Drive:
/ai-context
/foundation
/projects
/procedures
my-ai-os.md
Not over-engineered. Just structured enough that version updates and finding your own files don't become a second job.
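If you want to set that up in one go, a throwaway sketch (POSIX shell; folder names taken from the structure above - rename to taste):

```shell
# Scaffold the three-level structure plus the index file.
# Safe to re-run: mkdir -p and touch are idempotent.
mkdir -p ai-context/foundation ai-context/projects ai-context/procedures
touch ai-context/my-ai-os.md
```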
The Audit: Where's Your Weakest Level?
Level by level. Honest answers.
Level 1 check:
Do you have about-me.md? If not → stop everything else. Build this tomorrow.
Do you have a style guide and voice profile? If not → these are next.
Level 2 check:
Name your top 3 active work streams. Does each have a folder with at least a brief + 1-2 reference files?
If not → pick the most active one. Build that folder this week.
Level 3 check:
Count your procedure files. Zero → pick ONE repeated task and write a file for it. 10+ with no index → build the operations file before writing another procedure.
The rule: don't skip levels. Level 3 without Level 1 produces generic-feeling output even with fancy tooling.
The Mistakes We See People Make
Mistake 1: Building Level 3 before Level 1. The most common trap. A Custom GPT for proposals. A Claude Skill for research. No about-me.md. Output still feels generic because AI has no idea who you are.
Mistake 2: Vague foundation files. "I'm a knowledge worker who values clarity" tells AI nothing. Specific wins: "Head of Product at a B2B SaaS selling to CX ops leaders at mid-market companies. Responsible for quarterly roadmap, customer interviews, and product strategy."
Mistake 3: Uploading everything, organizing nothing. 50-page PDFs dumped into projects. AI can't read cover-to-cover. Retrieval gets hit-or-miss. A 400-word about-me.md beats a 50-page "About the company" slide deck every time.
Mistake 4: No version tracking across deployments. You update style-guide.md. The old version still lives in 3 Custom GPTs, 2 Projects, and a Gem. You don't know which version is current. Fix: put the version and last-updated date inside every file, and track deployments in your my-ai-os.md index.
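One lightweight way to do that is a single comment line at the top of every file (the format is ours to illustrate, not a standard, and the deployment names are made up):

```markdown
<!-- style-guide.md | v3 | updated 2026-03-28 | deployed to: Custom GPT "WriterBot", Gemini Gem "HouseStyle", Claude Project "Newsletter" -->
```

HTML comments like this don't render in markdown previews, so they stay out of AI's way while keeping the version visible wherever the file travels.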
Mistake 5: Writing once, never revisiting. These are living documents. Your role evolves. Your projects change. Your voice shifts. Quarterly review is the minimum - our own foundation files go through roughly 4 iterations every 3 months.
Mistake 6: Procedure sprawl with no index. 20+ procedure files, half duplicates, named like untitled-3.md. You end up rebuilding things you've already built. If you have more than 10 procedure files and no index, stop. Build the index first.
Final Thought
The people getting consistent output from AI aren't prompting better. They're building a library.
Your Personal AI OS isn't a product feature you buy. It's a set of files you write once and use everywhere.
The prompt-typing loop is a tax. The file-based system is an asset. Pick one.
WORK WITH US
The Other 95%
Knowing how to prompt well is roughly 5% of what it means to actually work with AI. The other 95% - context architecture, workflow compression, thinking behaviors, tool orchestration - is where your workday actually changes. Not "I got a better first draft." More like "I built a full client proposal in one sitting that used to take my team three days."
That's what we work on with professionals and teams through Work in Beta.
For individuals: We teach you how to work on your actual workflows and rebuild them around what's possible now.
For organizations: If your AI strategy is "let people figure it out," it's not a strategy. We help teams redesign how they actually work together with AI.
If you're curious what the other 95% looks like, reach out to us here.


