Join a small cohort making personal progress runnable together. Work directly with me and alongside others to figure out what's actually in the way and what to do next.
Do you talk to yourself?
Then we should talk.
Do you need a personal consultant?
Then we should talk.
Are you a little weird?
Ew, gross. 🙃
We should probably talk.
Bring your starting point. Don't do it alone. We'll figure out what helps you move forward from here.
Not sad. Not broken. Just not getting traction on what actually matters in your life.
You've got a hunch that machines can help. You just don't have a workflow that makes it real.
You'd rather test and iterate than listen to advice.
Hands-on screen sharing on your real setup. This is not a call where I talk at you. We look at your actual tools, your actual work, and the thing that has not been working.
You can use this time for whatever creates the most leverage for you: hands-on technical work, reasoning through stuck points in personal or professional situations, or staying with a problem until it becomes clear.
60 to 90 minutes each week. I set the structure and share tools, techniques, and things I'm actively using. You bring stuck points and wins. You hear how other people are approaching similar problems. First session is January 18th. Recorded if you cannot make it live.
You are not on your own between sessions. Questions, updates, screenshots, quick unblocks. You have access to me and the cohort while you are actually testing things in your life.
You leave with something concrete that fits your situation. Not advice. Not a reading list. Something you worked on that makes progress easier to sustain.
Sometimes that looks like one of the scenarios below. What it is will depend on what actually serves you. Every situation is different.
You show up with dozens of notes, ideas, chats, half-projects, and a strong sense that you're "doing a lot," but nothing is landing in the world.
This isn't treated as a motivation or productivity problem. The first move is to determine why output is failing in your specific case.
We may not build anything. We clarify whether sleep, stress load, or sustained overextension is the limiting factor, and define what recovery actually means for you. If it's useful, we define a "bad-day floor": 1 to 3 actions that still count when capacity is low.
We identify what the avoidance is protecting against (rejection, being seen, discovering you're wrong). If appropriate, we design the smallest external artifact that would resolve the uncertainty and close the loop, such as a message, request, or test.
We isolate the single unknown that actually blocks progress and design a short experiment to answer it, with a clear stop condition so uncertainty does not sprawl indefinitely.
We install a forcing constraint: a timebox, a "counts as done" threshold, or a ship-something rule that breaks the loop where thinking never converts into action.
We build only the missing structural piece at the point of failure (capture, task floor, review loop, or automation), not a full system and not a new workflow religion.
Most people's AI use falls into one of three patterns: not using it at all, using it inconsistently, or using it constantly without dependable impact. The results vary, the time cost is real, and it is often unclear what can be trusted.
This isn't treated as a "better prompts" problem. The first move is to determine where AI actually fits in your life (work + personal), and what level of verification your situations require.
We identify what you actually want leverage on (thinking, writing, decisions, planning, hard conversations, learning, execution) and run a few narrow, real tasks to see where AI reduces friction and where it adds noise.
We build practical AI literacy: what it's doing, what it cannot do, where it predictably fails, and how to recognize "sounds right but isn't" in your domain.
We set up the environment and verification loops so outputs become runnable/actionable (e.g., Python environment, dependencies, repo/notebook structure, basic tests/checks). There's a small sketch of what that can look like just after this list.
We reset its role away from authority/decision-maker/therapist-replacement and toward what it can reliably do for you (generate options, draft, translate, compress, rehearse), with explicit rules for when you must verify or rely on your own judgment.
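If you're curious what a verification loop means in practice, here's a minimal sketch in plain Python. Every name in it is hypothetical (ai_drafted_slugify stands in for any function you asked an AI to draft); the point is the shape: the AI drafts, you write small checks you actually understand, and nothing gets kept until the checks pass.

```python
# Minimal sketch of a verification loop for AI-assisted code.
# `ai_drafted_slugify` is a hypothetical stand-in for a function an AI drafted.
import re

def ai_drafted_slugify(title: str) -> str:
    # Pretend this body came straight from an AI assistant.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Hand-written checks against cases you understand. A pass means something
# because you chose the cases, not the model.
CASES = {
    "Hello, World!": "hello-world",
    "  spaced  out  ": "spaced-out",
    "already-a-slug": "already-a-slug",
}

for raw, expected in CASES.items():
    got = ai_drafted_slugify(raw)
    assert got == expected, f"{raw!r}: expected {expected!r}, got {got!r}"

print("All checks passed; the draft is safe to keep.")
```

If an assertion fails, the draft goes back for another pass instead of into your project. That's the whole loop.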
You keep reading, comparing, and refining your understanding, but the decision never closes and nothing changes in the world. You can explain the situation clearly and you're still stuck.
This isn't treated as an information problem. The first move is to determine what the analysis is protecting, and what kind of signal you actually need in order to move.
We surface what choosing would force you to face (exposure, rejection, loss, identity cost) and design a small, safe external move that breaks the safety loop without requiring certainty.
We isolate the single unknown that would change the decision and set up a direct way to get that signal (one conversation, one test, one constraint check), then stop.
We move the decision out of abstraction by testing a small slice of one or two options so the choice starts coming from lived feedback, not comparison.
We use a simple scaffold (for example, a short decision worksheet or AI-assisted prompt) to turn the question into a concrete experiment with a clear next action and review point.
We convert it into a bounded commitment with an explicit end date, so you can act without pretending this choice defines you forever.
You want to put something into the world, but doing so feels wrong. Not "not ready," not "needs polish," but like it would misrepresent what you actually think, know, or believe.
This isn't treated as a confidence or perfectionism problem. The first move is to determine what breaks internally at the moment the work is about to touch reality.
We slow down long enough to clarify what you genuinely believe now, so you're not trying to ship something that represents an earlier or aspirational version of you.
We find what's being implicitly claimed (certainty, capability, promise, identity) and downgrade the claim to what you can honestly stand behind today (observation / hypothesis / question / draft / experiment).
We identify what the ship requires that you can't yet execute reliably (skill, reps, support, time), then shrink the commitment so the next move matches your actual capacity.
We don't "push through." We change the exposure surface: ship to the smallest real audience that still creates reality contact (one person, a small group, a constrained channel), so you get signal without detonating your nervous system.
We define "good enough" in plain terms (what must be true / what can be rough / what can be wrong), then ship the version that clears that bar on schedule, not when it feels good.
We ship, then run a tight post-ship integration pass: what happened in your body, what assumptions were wrong, what you now believe, and what the next smallest honest move is.
I'm not a therapist, a life coach, or an AI guru.
I've spent years figuring out what actually helps people get unstuck, myself included. I read neuroscience, I build tools, I run experiments. The lab is where that work meets real humans.
My role is facilitator, technical translator, and fellow experimenter. Not someone with all the answers. Someone who can usually see what is actually in the way and help you build around it.