Roarke Clinton
© 2026 Roarke Clinton

UserActivity.ai / Update

Recursive dogfood, pass 1

2026-05-08

I shipped eight commits to my analytics product today. The product told me which eight to ship.

That's the loop UserActivity.ai is built for, and today was the first end-to-end pass. The tracker watched my own dashboard. The scorers elevated dead clicks, hesitations, frustrated sessions, and slow nav paths. The CLI surfaced them in plain English. I read the offending code. I shipped fixes. The deploy script auto-annotated the moment of change in the analytics database. The next read-out compares before vs. after on the same metric that flagged the issue. That's the feedback loop — only the user is me, the builder is me, and the product is the analytics.

One fix, end to end

The frustration scorer flagged div.fixed.inset as both a dead-click hotspot and a hesitation hotspot. Five dead clicks and six hesitations, present in roughly half of all sessions. The selector is opaque, but the location was obvious enough — it's the backdrop behind the Goals modal. Users drag a slider, release the pointer near the edge, and the modal slams shut on them.

I opened GoalsEditor.tsx and confirmed the bug. The backdrop's onClose fired on any click, including the click that landed because a drag ended on the backdrop. The fix was nine lines: a pointerDownOnBackdrop ref tracks where the pointer started, and onClose only fires if both the pointerdown and the click landed on the backdrop.
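The post describes the fix but not the code, so here is a minimal, framework-free sketch of the dismiss rule. In the real GoalsEditor.tsx this lives in React pointer/click handlers; the `Target` type and `BackdropDismiss` class are illustration names, not the actual implementation.

```typescript
// Sketch of the backdrop-dismiss rule: close only when BOTH the pointerdown
// and the click landed on the backdrop. A slider drag that ends over the
// backdrop starts on the modal, so it no longer dismisses.
type Target = "backdrop" | "modal";

class BackdropDismiss {
  private pointerDownOnBackdrop = false;

  pointerDown(target: Target): void {
    // Remember where the press started.
    this.pointerDownOnBackdrop = target === "backdrop";
  }

  // Returns true if this click should close the modal.
  click(target: Target): boolean {
    const close = this.pointerDownOnBackdrop && target === "backdrop";
    this.pointerDownOnBackdrop = false; // reset for the next gesture
    return close;
  }
}
```

In React terms, `pointerDown` maps to an `onPointerDown` handler on the backdrop element and `click` to its `onClick`, with the flag held in a ref.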

Shipped to production at 22:38 UTC as commit 5058aa3. The deploy script wrote an annotation row into the same Supabase table the dashboard reads from. On the dashboard's RecentChanges view, that's a green dot on the timeline of the affected metrics: the marker that ties any change to come back to this fix.
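The post doesn't show the annotation's schema, so the row shape below is a guess; the column names and the `annotations` table are placeholders, not the real ones.

```typescript
// Hypothetical shape of the deploy annotation row. Only the commit hash and
// deploy time in this example come from the post; everything else is assumed.
interface DeployAnnotation {
  commit: string;
  deployed_at: string; // ISO-8601 UTC timestamp
  note: string;
}

function buildAnnotation(
  commit: string,
  deployedAt: Date,
  note: string,
): DeployAnnotation {
  return { commit, deployed_at: deployedAt.toISOString(), note };
}

// A deploy script using supabase-js would then do something like:
//   await supabase.from("annotations").insert(buildAnnotation(...));
```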

What moved, what didn't

After all Pass 1 fixes shipped and the owner-activity filter was populated:

Metric        Pre-Pass-1    Post (no filter)    Post (owner-filtered)
Health        62            63                  73
Frustration   20 critical   16 critical         14 critical
Navigation    27 critical   27 critical         55 below
Engagement    61 below      66 below            90 passing
Session       84 passing    79 below            78 below
Form          87            87                  100
Scroll        100           100                 100

Navigation: critical to below. Engagement: below to passing. Health: 62 to 73. Numbers that move because of code that shipped, captured in the same product that flagged the original problem.

Session went down, from 84 passing to 78 below. That's the right move. Pre-fix, "Sessions" was inflated by 8000-minute "sessions" that were actually abandoned tabs sitting open across multiple days. The session aggregator now clips at 30 minutes of idle. The new number is honest.
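The 30-minute idle clip can be read a couple of ways; one plausible reading is that each idle gap between events is capped at 30 minutes when summing session duration. The sketch below implements that reading. The threshold comes from the post; the gap-capping approach and function names are assumptions.

```typescript
// Session duration with idle gaps clipped at 30 minutes, assuming a session
// is a sorted list of event timestamps in milliseconds. An abandoned tab that
// sits open for days contributes at most one 30-minute gap, not 8000 minutes.
const IDLE_CAP_MS = 30 * 60 * 1000;

function clippedDurationMs(eventTimes: number[]): number {
  let total = 0;
  for (let i = 1; i < eventTimes.length; i++) {
    const gap = eventTimes[i] - eventTimes[i - 1];
    total += Math.min(gap, IDLE_CAP_MS); // clip each idle stretch
  }
  return total;
}
```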

The honest miss

Frustration moved from 20 to 14. Both are still "critical." That's not because the Goals modal fix didn't work. It's because the fix shipped today, and the frustration score reflects the rolling 30-day window. Five dead clicks and six hesitations from earlier in that window are still in the denominator. They have to age out before the score can register a delta.

Pass 2 measures this on or after 2026-05-14, when seven full days of post-deploy traffic have accumulated. If the score moves below the critical threshold, the Goals modal fix is verified. If it doesn't, the next investigation is "why didn't it." Either outcome is product information.

A case study has to call this out. Otherwise it falls into the same dishonest pattern as most "AI agent ships fix" demos, where a green build is treated as proof the change worked. Shipping is not measuring. Today shipped. The measurement comes next week.

What the loop also produced

Nine gap tickets. Three already shipped (aggregator percentage divisors, session idle-clip, owner-activity filter). Six queued. The gaps are the quiet payoff. Every fix today produced both a code change AND a refinement to the measurement that surfaced the change. A scorer that elevates div.fixed.inset should also be able to name it ("Goals modal backdrop"), and that's queued as gap #4. A scorer that fires "390% of sessions followed this path" was dividing by the wrong denominator, and that became commit 087c6fc.
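A "390% of sessions" reading is the classic symptom of dividing event counts by session counts: a path hit several times in one session is counted several times. A sketch of the fix, with all names assumed (the post doesn't show the scorer's code):

```typescript
// Percentage of sessions that followed a given nav path. The buggy version
// effectively divided raw path-event counts by total sessions, which can
// exceed 100%; the fix counts DISTINCT sessions containing the path.
interface PathEvent {
  sessionId: string;
  path: string;
}

function pctSessionsWithPath(
  events: PathEvent[],
  path: string,
  totalSessions: number,
): number {
  const sessions = new Set(
    events.filter((e) => e.path === path).map((e) => e.sessionId),
  );
  return (sessions.size / totalSessions) * 100;
}
```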

Eight commits. Three product fixes, two perf fixes, three measurement fixes. All on the product that surfaced what to fix.