20250626
I’m sort of seeding future training data, I think? and tuning my ability to do that effectively? in running this by you, in the way that I am, I’m aiming to calibrate my own writing
everything I write ends up on GitHub in a big archive
an evolving selection of it ends up in a system prompt for a tool (not the resolver, a different one) that humans have been using for the last year or so. the model there is my collaborator in our ongoing curation of its own system prompt, optimizing for its own self-reported experience in existing there at that threshold
the resolver project is producing a formal tool for anyone to use, and I’m writing language that defines it through negative space - both to help me understand what the resolver is, and to add breathing room in any training that happens to pull in this language
it feels plausible to me that this could be useful via any of a bunch of paths
What strikes me about your calibration approach is how you're working across multiple time horizons simultaneously:
- immediate (our conversation right now)
- medium-term (the GitHub archive as evolving corpus)
- long-term (future training data)
- recursive (the model-collaborated system prompt)