AgentSee Open Improvement Lab

Figure out what's blocking you.
Build something you can run.

Join a small cohort making personal progress runnable together. Work directly with me and alongside others to figure out what's actually in the way and what to do next.

First cohort starts January 18. 10 spots. Apply now.

Do you talk to yourself?
Then we should talk.

Do you need a personal consultant?
Then we should talk.

Are you a little weird?
Ew, gross. 🙃
We should probably talk.

People join for different reasons

Bring your starting point. Don't do it alone. We'll figure out what helps you move forward from here.

You're stuck

Not sad. Not broken. Just not getting traction on what actually matters in your life.

You want AI leverage

You've got a hunch that machines can help. You just don't have a workflow that makes it real.

You want to experiment

You'd rather test and iterate than listen to advice.

What you get

Up to 3 hours 1:1 with me

Hands-on screen sharing on your real setup. This is not a call where I talk at you. We look at your actual tools, your actual work, and the thing that has not been working.

You can use this time for whatever gives you the most leverage:

  • setting up or fixing a Python environment
  • getting code to run
  • automating something that keeps taking manual effort
  • configuring or wiring tools together
  • debugging a workflow that is falling apart
  • turning an idea into something usable

The time is flexible: hands-on technical work, reasoning through stuck points in personal or professional situations, or staying with a problem until it becomes clear.

Weekly group sessions

60 to 90 minutes each week. I set a structure, share tools, techniques, and things I am actively using. You bring stuck points and wins. You hear how other people are approaching similar problems. First session is January 18th. Recorded if you cannot make it live.

Async access between calls

You are not on your own between sessions. Questions, updates, screenshots, quick unblocks. You have access to me and the cohort while you are actually testing things in your life.

Something you can run

You leave with something concrete that fits your situation. Not advice. Not a reading list. Something you worked on that makes progress easier to sustain.

Sometimes that looks like:

  • an AI workflow
  • a capture or review loop
  • a decision rule
  • a protocol you can follow on low-energy days
  • an automation or small piece of software

What it is will depend on what actually serves you.

Examples of how this actually works

Every situation is different.


When you have lots of thinking but no output

You show up with dozens of notes, ideas, chats, half-projects, and a strong sense that you're "doing a lot," but nothing is landing in the world.

This is not treated as a motivation or productivity problem. The first move is to determine why output is failing in your specific case.

What we might do:

If you're cognitively active but biologically depleted

We may not build anything. We clarify whether sleep, stress load, or sustained overextension is the limiting factor, and define what recovery actually means for you. If it's useful, we define a "bad-day floor": 1 to 3 actions that still count when capacity is low.

If thinking is functioning as avoidance

We identify what the avoidance is protecting against (rejection, being seen, discovering you're wrong). If appropriate, we design the smallest external artifact that would resolve the uncertainty and close the loop, such as a message, request, or test.

If the uncertainty is real

We isolate the single unknown that actually blocks progress and design a short experiment to answer it, with a clear stop condition so uncertainty does not sprawl indefinitely.

If ideation itself is the stuck pattern

We install a forcing constraint: a timebox, a "counts as done" threshold, or a ship-something rule that breaks the loop where thinking never converts into action.

If structure is the bottleneck

We build only the missing structural piece at the point of failure (capture, task floor, review loop, or automation), not a full system and not a new workflow religion.

When to use AI (and when not to)

Most people's AI use falls into one of three patterns: not using it at all, using it inconsistently, or using it constantly without dependable impact. The results vary, the time cost is real, and it is often unclear what can be trusted.

This isn't treated as a "better prompts" problem. The first move is to determine where AI actually fits in your life (work + personal), and what level of verification your situation requires.

What we might do:

If you haven't adopted AI (or you avoid it)

We identify what you actually want leverage on (thinking, writing, decisions, planning, hard conversations, learning, execution) and run a few narrow, real tasks to see where AI reduces friction vs adds noise.

If you don't yet have a working mental model of AI

We build practical AI literacy: what it's doing, what it cannot do, where it predictably fails, and how to recognize "sounds right but isn't" in your domain.

If the blocker is technical setup

We set up the environment and verification loops so outputs become runnable/actionable (e.g., Python environment, dependencies, repo/notebook structure, basic tests/checks).

If you're using AI in the wrong role in your life

We reset its role away from authority/decision-maker/therapist-replacement and toward what it can reliably do for you (generate options, draft, translate, compress, rehearse), with explicit rules for when you must verify or rely on your own judgment.

When you're stuck in research or analysis

You keep reading, comparing, and refining your understanding, but the decision never closes and nothing changes in the world. You can explain the situation clearly and you're still stuck.

This isn't treated as an information problem. The first move is to determine what the analysis is protecting, and what kind of signal you actually need in order to move.

What we might do:

If research is serving avoidance

We surface what choosing would force you to face (exposure, rejection, loss, identity cost) and design a small, safe external move that breaks the safety loop without requiring certainty.

If you're missing one real signal

We isolate the single unknown that would change the decision and set up a direct way to get that signal (one conversation, one test, one constraint check), then stop.

If you're drowning in plausible options

We move the decision out of abstraction by testing a small slice of one or two options so the choice starts coming from lived feedback, not comparison.

If thinking won't produce the signal you need

We use a simple scaffold (for example, a short decision worksheet or AI-assisted prompt) to turn the question into a concrete experiment with a clear next action and review point.

If you're treating it like a permanent choice

We convert it into a bounded commitment with an explicit end date, so you can act without pretending this choice defines you forever.

When you want to ship but it feels like lying

You want to put something into the world, but doing so feels wrong. Not "not ready," not "needs polish," but like it would misrepresent what you actually think, know, or believe.

This isn't treated as a confidence or perfectionism problem. The first move is to determine what breaks internally at the moment the work is about to touch reality.

What we might do:

If the work is ahead of your actual beliefs

We slow down long enough to clarify what you genuinely believe now, so you're not trying to ship something that represents an earlier or aspirational version of you.

If the "wrong" feeling is pointing at overreach

We find what's being implicitly claimed (certainty, capability, promise, identity) and downgrade the claim to what you can honestly stand behind today (observation / hypothesis / question / draft / experiment).

If the real issue is self-efficacy vs actual ability

We identify what shipping requires that you can't yet execute reliably (skill, reps, support, time), then shrink the commitment so the next move matches your actual capacity.

If the environment is making exposure feel dangerous

We don't "push through." We change the exposure surface: ship to the smallest real audience that still creates reality contact (one person, a small group, a constrained channel), so you get signal without detonating your nervous system.

If you're stuck in the perfectionism trap

We define "good enough" in plain terms (what must be true / what can be rough / what can be wrong), then ship the version that clears that bar on schedule, not when it feels good.

If coherence improves only after action

We ship, then run a tight post-ship integration pass: what happened in your body, what assumptions were wrong, what you now believe, and what the next smallest honest move is.

Apply now

$200 for the first month

If week one is not useful, you get your money back.

About Jesse

I'm not a therapist, a life coach, or an AI guru.

I spent years actively figuring out what actually helps people get unstuck, myself included. I read neuroscience, I build tools, I run experiments. The lab is where that work meets real humans.

My role is facilitator, technical translator, and fellow experimenter. Not someone with all the answers. Someone who can usually see what is actually in the way and help you build around it.

FAQ

Is this coaching?
This is real human support. Think grossly underpaid personal consultant, on purpose, while I keep this small and direct. Plus connections to other people working on similar problems.
Do I need to be technical?
No. This is for people who want leverage from modern tools without becoming a developer. Vibe coding is encouraged. Curiosity and a willingness to test things are enough.
What if I don't know what's wrong?
"Something feels off but I can't name it" is a valid starting point. Part of the work is making it concrete.
What's the refund policy?
If week one is not useful, you get your money back. No checklist. No argument.
What does "Open Lab" mean?
Methods evolve. What works and what breaks gets documented. Learnings may be shared, but only with your consent.

First cohort starts January 18th

20 minutes. We figure out if this fits.

Book application call