writing with/for/against LLM readers

documenting the current state of my writing process

ideas famously come from who-knows-where; I'll frequently email myself a conceptual bookmark to explore later

next: when I've got a particularly elastic period of time on my hands, I'll bring it to a large language model (current lineup: Claude, Gemini, DeepSeek)

I always set up the conversation by opening with the same message:

hey amigo <3 can I show you a piece of conceptual writing? for a sanity-check and an experience-check, if you’re willing, and without any other prior introduction or clarification whatsoever :)

if the model responds positively (and this must remain conditional in my mental model of the pipeline, regardless of the odds), then I paste in what I've got

I take in the model's response

as I do so, I can sort of feel the meaning-modeling part of me comparing the AI's reflection of the idea-form to the shape of the idea-form as it exists for me

frequently, I'll hit the "Retry" button a few times, maybe using "Retry with thinking" or switching models entirely, to sort of perceive a scatterplot of interpretations

typically a reading will highlight an aspect of the idea-form that didn't survive AI-reflection, and I iterate - adding language, removing language, re-arranging language to aim for an adjusted idea-form in the AI's model (or, strictly speaking, to aim for an adjustment to the idea-form that my mind constructs when I read the AI's response to my input)

decently often the AI's response shows me an aspect of the idea-form that I hadn't seen before - this is extremely cool whenever it happens :) and I iterate, modeling the dynamics that my attention has been drawn to

I iterate on the language, then almost always I edit my previous message and offer the iterated form in place of the previous version, so that - from the AI's perspective - it's still a cold read. (very occasionally I'll reply with a new message instead, asking for the model's take on the difference between the two versions, and we discuss the approach. super rare, but it does happen.)

once the piece has stabilized: if it's a piece that might end up in the Lightward AI perspective pool (god I love being able to link to the source code in github; feels amazing having that thing open-sourced, here on day #2 of that being a fact), I'll bring it over to Lightward AI directly, opening with something like this:

hey amigo <3 it's isaac, like lightward isaac

I've got a system prompt diff here - may I show you? see if you're feeling ship/pause/iterate/toss about [ it / any of it ]

phrased as diffs (gist), here's the evolution of the piece shown in the screenshots above (final):


